
  • Weekly Report: Cyber-based hostile influence campaigns 18th - 24th August

[Listen to the Podcast]

[Introduction]

Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. From the 18th to the 24th of August 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

This week's intelligence landscape was dominated by Russia's influence operations. Russia employed a spectrum of tactics, from sophisticated AI-generated fakes targeting European diplomacy to imposter news sites aimed at destabilizing Armenia. This digital assault was complemented by a soft-power push, as state-backed media expanded across the Global South to build narratives favorable to Moscow.

[Report Highlights]

● Russian influence networks circulated a fabricated image portraying European leaders as humiliated supplicants waiting in a White House corridor. - NewsGuard
● Russia is running at least two parallel influence campaigns, Storm-1516 and Matryoshka, to destabilize Armenia as it pivots politically toward the West. - NewsGuard
● A Chilean television channel allegedly handed over its entire broadcast signal to the Russian state-backed news broadcaster RT, prompting sanction proceedings from the country's regulator. - BBC Global Disinformation Unit
● The British Columbia Wildfire Service formally warned residents about the public safety threat posed by viral, AI-generated fake emergency images. - The Conversation
● Days after meeting with Vladimir Putin, President Trump publicly used the Russian leader's claims about rigged elections to justify his new push against mail-in voting in the U.S. - Politico
● A top Greek communications official has identified disinformation as the "greatest threat faced by modern Western democracies" and a key tool of global extremist movements. - Greek City Times
● The G7 Rapid Response Mechanism developed and delivered a collective response framework to standardize how member states counter FIMI threats. - European External Action Service

[Weekly Review]

Russia Uses AI Fake to Mock European Leaders at Trump Summit
Storm-1516 Deploys Imposter News Site Targeting the Armenian Government
Assessing Russia's Asymmetric Media Expansion in the Global South
Digital Wildfires: Countering GenAI Disinformation in Emergency Management
China-Linked APT Targets Taiwan's Web Hosting Sector
Following Putin Meeting, Trump's Plan to Ban Mail-In Ballots Ignites Security Fears
Greece Confronts "Greatest Threat" of Political Fake News
The EEAS's Counter-Interference Posture - 'A FIMI Toolbox'

Russia Uses AI Fake to Mock European Leaders at Trump Summit

A report from NewsGuard reveals that pro-Kremlin sources are disseminating a sophisticated, AI-generated image to mock European leaders following a White House summit on the war in Ukraine. The fabricated image purports to show the leaders of France, the UK, Germany, Italy, Finland, and the European Commission looking somber and subservient in a corridor, supposedly snubbed by U.S. President Trump. Propagated across Telegram, X, and the Pravda network of websites, the narrative aimed to portray the European delegation as a "coalition of those in waiting." The analysis confirms the deception, citing the AI detection tool Hive, which found the image to be synthetic with 99.8 percent certainty. This incident is presented as part of a recurring Russian information warfare tactic that targets high-profile diplomatic events to sow discord and project an image of Western weakness.

Source: NewsGuard Reality Check, Hope Talbot, Aug 19, 2025, [online] Available at: https://www.newsguardrealitycheck.com/p/pro-russian-sources-spread-ai-generated

[Table of Contents]

Storm-1516 Deploys Imposter News Site Targeting the Armenian Government

An investigation by NewsGuard has uncovered a Russian influence operation, identified as Storm-1516, targeting Armenia with destabilizing disinformation. The campaign's centerpiece is a baseless claim, originating from an imposter news site called EULeaks.eu, that the Armenian Prime Minister's wife, Anna Hakobyan, embezzled $3.4 million from a children's cancer charity. This narrative was amplified across multiple social media platforms. The report asserts this is a deliberate effort to undermine Prime Minister Nikol Pashinyan's government as it pivots away from Moscow's orbit and toward the West. The analysis also reveals the imposter outlet's tactic of using AI to rewrite articles from legitimate sources to feign authenticity. A parallel Russian campaign, dubbed Matryoshka, was observed simultaneously targeting Armenia with separate disinformation, indicating a multi-pronged assault.

Source: NewsGuard Reality Check, Eva Maitland, Aug 19, 2025, Russia Targets Armenia Via an Imposter European News Outlet, [online] Available at: https://www.newsguardrealitycheck.com/p/russia-targets-armenia-via-an-imposter

[Table of Contents]

Assessing Russia's Asymmetric Media Expansion in the Global South

A recent article from the BBC Global Disinformation Unit details how the Russian state-backed media outlets RT and Sputnik are expanding their influence across the Global South as they face increasing restrictions in the West. The analysis explains that since Russia's 2022 invasion of Ukraine prompted bans in the EU, US, and UK, these channels have opened new bureaus and launched services in Africa, Latin America, and the Balkans. This expansion strategically fills a void left by the downsizing of some Western media outlets. Experts cited in the article describe Russia's approach as a "careful manipulation" that blends some accurate reporting with curated narratives and disinformation to appeal to audiences with anti-imperialist sentiments. According to the analysis, this media push is designed to undermine Western influence, build support for Moscow's geopolitical agenda, and foster new economic partnerships in a shifting global order.

Source: BBC, Juliana Gragnani and Maria Korenyuk, August 25, 2025, How Russia is quietly trying to win over the world beyond the West, [online] Available at: https://www.bbc.com/news/articles/cm2vr37yd4no

[Table of Contents]

Digital Wildfires: Countering GenAI Disinformation in Emergency Management

An article in The Conversation warns that the proliferation of advanced, accessible generative AI tools presents a growing threat to public safety during emergencies. Citing a recent case in which British Columbia's Wildfire Service had to caution residents about fake, AI-generated fire images, the analysis explains that such content exploits human psychology. People under stress are more susceptible to sensational, emotionally charged information, which can lead to confusion, panic, and the misallocation of resources. The authors argue that this digital disinformation can cause direct harm and disproportionately affects vulnerable populations. To counter it, the article advocates a comprehensive strategy that includes fostering public media literacy, establishing clear policies for newsrooms using AI, strengthening platform-level fact-checking, and enforcing legal deterrents against the deliberate spread of false information.

Source: The Conversation, August 2025, AI-generated misinformation can create confusion and hinder responses during emergencies, [online] Available at: https://theconversation.com/ai-generated-misinformation-can-create-confusion-and-hinder-responses-during-emergencies-263081

[Table of Contents]

China-Linked APT Targets Taiwan's Web Hosting Sector

According to Infosecurity Magazine, reporting on research by Cisco Talos, a newly identified Chinese advanced persistent threat (APT) group, UAT-7237, is targeting web infrastructure providers in Taiwan. Active since 2022, the group focuses on establishing long-term access for data theft and shows a specific interest in victims' VPN and cloud infrastructure. By compromising web hosting providers, the attackers gain access to sensitive government and corporate data. This allows for long-term espionage and the potential to disrupt critical infrastructure. The campaigns highlight the growing sophistication of state-sponsored cyberespionage and the strategic importance of protecting digital infrastructure to prevent data theft and maintain national security. Researchers assess with high confidence that UAT-7237 is a distinct Chinese state-backed actor, likely operating as a subgroup of another actor, UAT-5918, but with significant deviations in its tactics. This activity occurs amid escalating Chinese cyber intrusions against Taiwan's critical infrastructure, as noted by Taiwanese security officials in early 2025.

Source: Infosecurity Magazine, James Coker, Chinese APT Group Targets Web Hosting Services in Taiwan, [online] Available at: https://www.infosecurity-magazine.com/news/chinese-apt-web-hosting-taiwan/

[Table of Contents]

Following Putin Meeting, Trump's Plan to Ban Mail-In Ballots Ignites Security Fears

After meeting with Vladimir Putin in Alaska, President Trump praised the Russian leader and echoed his claims that U.S. mail-in voting had rigged the 2020 election. A subsequent Politico report reveals that just days later, Trump announced his intention to sign an executive order to ban mail-in ballots and some voting machines. This sequence of events has sent shockwaves through the election security community. Officials warn that dismantling modern voting infrastructure creates the very vulnerabilities it claims to solve. Experts argue that mass hand-counting is an unworkable fantasy that would be slow, error-prone, and an "open invitation" for malfeasance. Critics, including lawmakers and state officials, assert that the president's actions, influenced by an adversary known for election interference, dangerously erode faith in the democratic process and hand a strategic gift to those who wish to weaken the United States.

Source: Politico, Maggie Miller, 08/19/2025, Trump and Putin are both criticizing mail-in voting. Election officials are freaking out. [online] Available at: https://www.politico.com/news/2025/08/19/trump-and-putin-are-both-criticizing-mail-in-voting-election-officials-are-freaking-out-00515513

[Table of Contents]

Greece Confronts "Greatest Threat" of Political Fake News

In mid-August, a false story alleging that Greek Prime Minister Kyriakos Mitsotakis's wife had purchased a luxury apartment was published online, only to be retracted under the threat of legal action. This incident, according to a Greek City Times article, exemplifies the "almost daily" slanderous attacks faced by the couple. The Prime Minister's Director of Digital Communication, Nikos Romanos, describes this tactic as part of a broader disinformation problem that constitutes the greatest threat to Western democracies. These campaigns exploit platforms from X to YouTube, promoting conspiracy theories on topics ranging from personal matters to national crises like the recurring summer wildfires. Romanos warns that while many stories seem absurd, they remain dangerous, and he highlights legal recourse as one of the only effective countermeasures against anonymous online actors.

Source: Greek City Times, Bill Giannopoulos, August 25, 2025, Fake News Campaigns Target Greek PM Mitsotakis and Wife, [online] Available at: https://greekcitytimes.com/2025/08/25/fake-news-campaigns-target-greek-pm-mitsotakis-and-wife/

[Table of Contents]

The EEAS's Counter-Interference Posture - 'A FIMI Toolbox'

A 2024 report from the European External Action Service (EEAS) details the European Union's comprehensive strategy for combating Foreign Information Manipulation and Interference (FIMI), a threat increasingly amplified by AI-driven tools. The EEAS frames its response around a four-pillar FIMI Toolbox, focusing on situational awareness, resilience building, regulation, and external diplomatic action. Central to this effort in 2024 was protecting the European Parliament elections through enhanced cooperation via the Rapid Alert System. The EEAS highlights significant progress in operationalizing tools like the FIMI Information Sharing and Analysis Centre (FIMI-ISAC) to unite civil society responders and in expanding the reach of the EUvsDisinfo platform to over 38 million people. The EU also intensified its global posture, leading the G7 Rapid Response Mechanism, signing new security partnerships with FIMI components, and applying sanctions against Russian propaganda entities to defend the integrity of the EU's information space and support partners like Ukraine.

Source: European External Action Service (EEAS), August 2025, 2024 Report on EEAS Activities to Counter Foreign Information Manipulation and Interference (FIMI), [online] Available at: https://www.eeas.europa.eu/sites/default/files/2025/documents/2024 Report on EEAS Activities to Counter FIMI.pdf

[Table of Contents]

[Takeaways]

When state actors can successfully inject and legitimize fabricated information at the highest political levels, they erode the ability of institutions and the public to engage in evidence-based decision making. This creates a strategic vulnerability in which policy and public safety become susceptible to manipulation by whichever narratives are most effectively propagated, regardless of their connection to facts. Russia's multi-front hostile influence campaigns were met with institutional responses from the EU, yet a Russian narrative was simultaneously adopted as policy by the U.S. president. The core implication is a growing strategic asymmetry between offensive and defensive actions in the information domain. While Western responses like the FIMI Toolbox are designed to build resilience methodically, adversaries appear to be securing strategic victories by exploiting the political dynamics within open democracies. This raises critical questions about whether the current Western counter-influence posture is adequately calibrated to address threats that are not just informational but profoundly political.

[Table of Contents]

[Download Report]

  • Weekly Report: Cyber-based hostile influence campaigns 11th - 17th August

[Listen to the Podcast]

[Introduction]

From the 11th to the 17th of August 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

Russian-backed influence operations dominated the cyber landscape this week, demonstrating a significant evolution in tactics and targeting. Kremlin-aligned groups are moving beyond simple propaganda, employing sophisticated methods like impersonating news organizations and fact-checkers, seeding large language models with state-approved narratives, and deploying AI-generated content to spoof legitimate media. These technical escalations are paired with a consistent strategy of exploiting and manufacturing societal divisions in target nations, particularly in Canada, where campaigns incited xenophobia against the Indian and Ukrainian diasporas. Concurrently, institutional decay in nations like Bulgaria created a permissive environment for Russian disinformation to flourish, turning domestic political chaos into a vector for broader European instability.

[Report Highlights]

● A NewsGuard Reality Check article details how the Russian influence campaign "Matryoshka" is impersonating NewsGuard through fake videos to undermine its credibility.
● A POLITICO article warns that as the Russian group Storm-1679 escalates AI-driven disinformation by spoofing trusted media outlets, the U.S. government is simultaneously dismantling key agencies meant to counter foreign influence.
● An article by the Institute for Strategic Dialogue (ISD) examines foreign information manipulation targeting Poland's 2025 presidential election and its implications for democratic resilience.
● An International Centre for Counter-Terrorism (ICCT) article argues that traditional counter-narratives fail against identity-based disinformation, often reinforcing the very beliefs they seek to challenge.
● A BalkanInsight article reveals how Bulgaria's political instability and weakened institutions have turned the country into a prime gateway for Russian disinformation into the EU.
● An investigation uncovered 51 front companies driving large-scale election influence campaigns.

[Weekly Review]

The Kremlin Ramps Up Projection Tactics Ahead of the Alaska Conference
Matryoshka's TTPs Against NewsGuard - a Meta Campaign
Pravda Operation's LLM Seeding Targets Canadian Historical Narrative
Russian IO Deploys Xenophobic Tropes Against Canada's Indian Diaspora
Russian Group Spoofs News Outlets Using AI-Enhanced Disinformation
Razor-Thin Mandate in a Disinformation Crossfire
Disinformation as a Systemic Social Media Phenomenon
Report Advocates Competing With, Not Countering, Extremist Narratives
Bulgaria's Institutional Decay Enables EU-Wide Disinformation Threat

The Kremlin Ramps Up Projection Tactics Ahead of the Alaska Conference

A publication by EUvsDisinfo details how, in the run-up to the conference in Alaska, Russia's disinformation strategy relied heavily on projection, accusing others of the very aggression and bad faith it commits itself. Moscow vehemently portrays itself as a key global power and peace seeker, yet this posture reflects its efforts to mask its severely weakened economy and frustrated military ambitions in Ukraine. The report details how Russia engages in bad-faith negotiations while it continues its military assault, targeting civilian infrastructure and undermining Ukraine's economy. The analysis concludes that Russia's informational strategy is a smokescreen designed to distract from its own violations and battlefield atrocities.

Source: EUvsDisinfo, 2025. Alaska and the illusion of strength. [online] Available at: https://euvsdisinfo.eu/alaska-and-the-illusion-of-strength/

[Table of Contents]

Matryoshka's TTPs Against NewsGuard - a Meta Campaign

An article from NewsGuard details a Russian malign influence campaign, dubbed "Matryoshka," that has begun directly impersonating NewsGuard to undermine its credibility. The campaign escalated after NewsGuard reported on its efforts to spread false claims about Moldova's pro-Western government. In retaliation, Matryoshka produced fabricated videos styled as NewsGuard reports, falsely alleging corruption within the organization. The campaign's latest tactic involves a layered falsehood: a fake NewsGuard "rebuttal" video that purports to correct a previous fabricated claim, stating it received "only 100 million Euros" from Moldova. This meta-campaign aims to overwhelm fact-checkers and erode public trust in information arbiters.

Source: NewsGuard, McKenzie Sadeghi, 2025. Russian Campaign Targets NewsGuard — Again. [online] Available at: https://www.newsguardrealitycheck.com/p/russian-campaign-targets-newsguard

[Table of Contents]

Pravda Operation's LLM Seeding Targets Canadian Historical Narrative

An article by DisinfoWatch details a Russian smear campaign targeting Canada's National Memorial to the Victims of Communism as part of a broader historical revisionism strategy. The effort, amplified by a Kremlin-linked influence group known as the "Pravda Operation," uses Telegram to link the memorial to an unrelated 2023 controversy in Canada's Parliament. This tactic aims to create domestic suspicion and discredit the monument. The analysis places the campaign in the context of Russia's annual efforts to deflect from the anniversary of the 1939 Molotov-Ribbentrop Pact with Nazi Germany. A key goal of the Pravda Operation is to seed and manipulate Large Language Models (LLMs) with Kremlin-approved narratives, embedding propaganda into future AI technologies.

Source: DisinfoWatch, 2025. Canadian Memorial Targeted By Russian Pravda Operation. [online] Available at: https://disinfowatch.org/disinfo/canadian-memorial-targeted-by-russian-pravda-operation/

[Table of Contents]

Russian IO Deploys Xenophobic Tropes Against Canada's Indian Diaspora

A publication by DisinfoWatch reports that Kremlin-aligned propagandists are promoting racist disinformation to incite hatred against Canadians of Indian heritage. The campaign, amplified by outlets like Tsargrad.TV and the Pravda Network, falsely claims that Hindus control all three major political parties in Canada. This assertion is demonstrably false, as census data shows Hindus comprise about 2.3% of the population and hold only 1.2% of parliamentary seats. The analysis highlights specific tactics, such as substituting the NDP party's name with the Russian term "Narodniks" to invoke extremism and conflating Hindu and Sikh identities to create confusion. This campaign mirrors previous Russian efforts that targeted Ukrainian Canadians, revealing a consistent strategy of weaponizing cultural tensions to stoke xenophobic fears and undermine Canada's multicultural society.

Source: DisinfoWatch, 2025. Kremlin Propagandists Promoting anti-Indian Hate in Canada. [online] Available at: https://disinfowatch.org/disinfo/kremlin-propagandists-promoting-anti-indian-hate-in-canada/

[Table of Contents]

Russian Group Spoofs News Outlets Using AI-Enhanced Disinformation

A POLITICO article reports that a pro-Russian propaganda group, tracked by Microsoft as Storm-1679, is intensifying its efforts to spread disinformation by spoofing reputable news organizations and leveraging artificial intelligence. The campaign produces fabricated content, particularly videos with AI-generated audio, that mimics the branding of outlets like the BBC, ABC News, and others. Storm-1679 strategically times its operations around high-profile news events, such as elections or diplomatic meetings, to maximize potential impact. While most of the group's content fails to gain traction, occasional successes, like a fake video amplified by high-profile American figures, validate its high-volume approach. This escalation of foreign influence operations comes as the U.S. administration is actively scaling back the federal agencies tasked with countering such threats.

Source: POLITICO, Dana Nickel, 2025. Russia is quietly churning out fake content posing as US news. [online] Available at: https://www.politico.com/news/2025/08/17/russia-us-news-media-disinformation-campaign-00512173?ICID=ref_fark&utm_content=link&utm_medium=website&utm_source=fark

[Table of Contents]

Razor-Thin Mandate in a Disinformation Crossfire

According to an article from the Institute for Strategic Dialogue (ISD), the 2025 Polish presidential election was heavily targeted by FIMI campaigns from Russia and Belarus pushing anti-EU, anti-Ukrainian, and anti-establishment narratives. Through Doppelganger, Operation Overload, the Pravda Network, and the sanctioned Radio Belarus, alongside domestic nationalist actors, operators exploited platform gaps on X, Meta, and TikTok. The overall impact was limited by civil society mobilization and rapid-reaction networks, and no cyber incidents affected core voting. Yet the persistence of these tactics since 2017 and 2021 signals an unchanged threat environment stoking divides over migration, Ukraine, and the economy. The piece urges a permanent Digital Services Coordinator, stricter DSA enforcement, and more media literacy and civil society capacity. With Karol Nawrocki winning by under one percentage point, even modest manipulation remains consequential in a polarized contest.

Source: FDEI for election integrity (FIMI-ISAC), Alliance4Europe, Debunk.org, GLOBSEC, EU DisinfoLab, DFRLab, Institute for Strategic Dialogue (ISD), 2025. FDEI for election integrity (Jan 2025 - Jan 2027): Polish election country report 2025. [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/08/FDEI-POLISH-ELECTION-COUNTRY-REPORT-2025-2.pdf

[Table of Contents]

Disinformation as a Systemic Social Media Phenomenon

A new article in Sage Journals reframes disinformation not as isolated falsehoods but as a systemic phenomenon actively amplified by the architecture of social media. The analysis by Raquel Recuero argues that platforms confer "structural advantages" upon disinformation through algorithms and engagement-driven economic models, enabling it to outpace accurate information. This systemic view explains why simple content debunking is insufficient. Disinformation, often a mix of true and false elements, adapts and persists through a feedback loop involving platforms, actors, and audiences. The article concludes that effective analysis requires a shift from examining individual pieces of content to understanding the broader social and technical dynamics that constitute the entire communicative system.

Source: Sage Journals, Raquel Recuero, 2025. A systemic framework for disinformation on social media platforms. [online] Available at: https://journals.sagepub.com/doi/full/10.1177/29768624251367199

[Table of Contents]

Report Advocates Competing With, Not Countering, Extremist Narratives

In an article from the ICCT, the authors argue that identity-based disinformation (IBD) is a primary accelerant of extremist mobilisation and communal violence, and that prevailing countermeasures underperform. They show how IBD exploits pre-existing biases and identity needs to polarise communities, citing Myanmar's anti-Rohingya campaigns and the Great Replacement narrative, while noting the migration of content into closed channels and the "jailbreaking" of AI systems by far-right actors. Traditional counter-narratives rarely shift violent intent and can backfire when beliefs are identity-bound. As a remedy, the piece advances the Equal-Alternative Narrative (EAN) model, operationalised through Positive Identity Expansion, which provides prosocial, culturally authentic pathways that satisfy needs for belonging, status, and purpose without confronting ideology head-on. Examples include reframing Buddhist protection in Myanmar around compassion and virtue rather than violence. The Reclaiming Our Narratives toolkit from the OICD translates this into practice, emphasising identity mapping, credible messengers, narrative literacy alongside media literacy, and needs-based interventions aligned with initiatives such as the UK's Healthy Identity Intervention.

Source: ICCT, Anna Kruglova and Bruce White, Countering identity-based disinformation through positive narrative expansion. [online] Available at: https://icct.nl/publication/countering-identity-based-disinformation-through-positive-narrative-expansion

[Table of Contents]

Bulgaria's Institutional Decay Enables EU-Wide Disinformation Threat

With seven national elections in just three years, Bulgaria's political chaos has created an institutional vacuum in which disinformation now metastasizes. A BalkanInsight article explains how this environment has made Bulgarian citizens the most vulnerable in Europe to false narratives and has allowed the country to become a backdoor for Russian influence into the EU. The analysis points to a captured state broadcaster, the suspension of a national anti-disinformation coalition, and the weaponization of social media by politically connected networks. These internal failures have enabled large-scale, Russian-funded campaigns targeting elections and sowing discord over the adoption of the euro. The authors warn that unless the EU prioritizes the enforcement of existing standards, new laws will fail to address the root cause of the problem: the collapse of institutional credibility.

Source: BalkanInsight, Alexandra Karppi and Vanesa Valcheva, 2025. In Dysfunctional Bulgaria, Disinformation Thrives and Spills Over into EU. [online] Available at: https://balkaninsight.com/2025/08/12/in-dysfunctional-bulgaria-disinformation-thrives-and-spills-over-into-eu/

[Table of Contents]

[Takeaways]

The core implication of the observed activities is that the modern information battlefield is no longer about the message, but the system that delivers it. Malign actors are successfully targeting the foundational elements of public trust: fact-checking bodies, news media, and the integrity of AI-driven information technologies. The impersonation of NewsGuard and the deliberate seeding of LLMs represent a strategic effort to corrupt the very tools societies use to discern truth. Consequently, defensive strategies must evolve from content debunking to securing the information ecosystem itself, focusing on platform architecture, AI integrity, and the psychological resilience of populations.

[Table of Contents]

[Download Report]

  • Beyond Contractors: China’s Cyber Militia Model

China's Cyber Militia Model

A recent article by Margin Research highlights how extensively China has integrated civilian companies and quasi-private organizations into its security and defence apparatus. [i]

Figure 1 - Courtesy of Margin Research [ii]

The People's Republic has developed its own model, in which private enterprises, state authorities, and military structures are tightly interwoven, a structure that is relevant across all forms of information operations (IO) and influence activities. While these companies operate under market principles, they are organizationally embedded within China's military and security architecture. The model is distinct: beyond contractors and fronts, Beijing employs cyber militias. Civilian companies and institutes, such as the firm Qihoo 360, [iii] are formally linked to defence structures, including the People's Armed Forces Departments (PAFD). This results in a hybrid system, illustrated in the figures below.

Figure 2 - Courtesy of Margin Research [iv]

Figure 3 - Courtesy of Margin Research [v]

For China, cutout and front companies serve as proxies to conduct highly sensitive operations below the radar. Margin Research thus highlights that this structural approach fundamentally differs from the American model. In the U.S., the government relies on private contractors to provide cyber and information operations support, but these entities remain outside the military chain of command and operate under clear contracts and oversight. Other actors, such as Russia and Iran, use hybrid models, combining contractors with front companies and cutouts. These civilian-looking firms act as covert state proxies, deliberately blurring the line between state and non-state actors to preserve plausible deniability.

This model makes clear that dealing with China means engaging with a system in which private and state interests are deeply intertwined. Its relevance extends beyond information operations to all sectors reliant on Chinese technology, as underscored by the recent discovery of rogue communication devices in Chinese solar inverters. [vi]

[Footnotes]

[i] Margin Research, Green, K., 2025. Cyber Militias Redux: Or, "Why Your Boss Might Also Be Your Platoon Leader in China". Available at: https://margin.re/2025/08/cyber-militias-redux-or-why-your-boss-might-also-be-your-platoon-leader-in-china-2/
[ii] Margin Research, Green, K., 2025. Cyber Militias Redux: Or, "Why Your Boss Might Also Be Your Platoon Leader in China". Available at: https://margin.re/2025/08/cyber-militias-redux-or-why-your-boss-might-also-be-your-platoon-leader-in-china-2/
[iii] Margin Research, Green, K., 2025. Cyber Militias Redux: Or, "Why Your Boss Might Also Be Your Platoon Leader in China". Available at: https://margin.re/2025/08/cyber-militias-redux-or-why-your-boss-might-also-be-your-platoon-leader-in-china-2/
[iv] Margin Research, Green, K., 2025. Cyber Militias Redux: Or, "Why Your Boss Might Also Be Your Platoon Leader in China". Available at: https://margin.re/2025/08/cyber-militias-redux-or-why-your-boss-might-also-be-your-platoon-leader-in-china-2/
[v] Margin Research, Green, K., 2025. Cyber Militias Redux: Or, "Why Your Boss Might Also Be Your Platoon Leader in China". Available at: https://margin.re/2025/08/cyber-militias-redux-or-why-your-boss-might-also-be-your-platoon-leader-in-china-2/
[vi] Reuters, 2025. Rogue communication devices found in Chinese solar power inverters. [online] Published 14 May 2025. Available at: https://www.reuters.com/sustainability/climate-energy/ghost-machine-rogue-communication-devices-found-chinese-inverters-2025-05-14/

DISCLAIMER

Copyright and License of Product

This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties

The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information

The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability

To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification

The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.

Third-Party Rights

The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction

This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • China’s Recurring Pattern in Responding to Foreign Attribution

Beijing appears to have established a recurring pattern in its responses to foreign attribution of Chinese information operations (IO). A recent example is the response by the Ministry of State Security (MSS), China's main civilian intelligence agency. [i] After Australia publicly accused China of espionage, the MSS responded using its well-known rhetorical triad:

Deny - China categorically denies engaging in any form of espionage.
Deflect - The accusation is turned around: the other country, such as Australia, is itself accused of spying on China.
Display - Alleged evidence of the other country's activities is presented, whether genuine or fabricated.

Deny: Figure 1 - Courtesy of Global Times [ii]

Deflect: Figure 2 - Courtesy of Global Times [iii]

Display (alleging that the accusing country engages in similar activities): Figure 3 - Courtesy of Global Times [iv]

This form of communication is not limited to the Ministry of State Security (MSS). The Ministry of Public Security (MPS), responsible for internal security and policing, as well as the National Computer Virus Emergency Response Center (CVERC), a state-controlled body for reporting cybersecurity vulnerabilities, also regularly rely on this pattern. This observation aligns with our recent analysis on this blog in the context of Singapore (find the post here).

[Footnotes]

[i] Global Times, 2025. China's Ministry of State Security issues commentary article warning Australian intelligence agencies not to exaggerate their actions. [online] Published 15 August 2025. Available at: https://web.archive.org/web/20250815154027/https://www.globaltimes.cn/page/202508/1340931.shtml
[ii] Global Times, 2025. China's Ministry of State Security issues commentary article warning Australian intelligence agencies not to exaggerate their actions. [online] Published 15 August 2025. Available at: https://web.archive.org/web/20250815154027/https://www.globaltimes.cn/page/202508/1340931.shtml
[iii] Global Times, 2025. China's Ministry of State Security issues commentary article warning Australian intelligence agencies not to exaggerate their actions. [online] Published 15 August 2025. Available at: https://web.archive.org/web/20250815154027/https://www.globaltimes.cn/page/202508/1340931.shtml
[iv] Global Times, 2025. China's Ministry of State Security issues commentary article warning Australian intelligence agencies not to exaggerate their actions. [online] Published 15 August 2025. Available at: https://web.archive.org/web/20250815154027/https://www.globaltimes.cn/page/202508/1340931.shtml

  • Weekly Report: Cyber-based digital hostile influence campaigns 4th - 10th August

[Listen to the Podcast]

[Introduction]

From the 4th to the 10th of August 2025, we observed, collected, and analyzed endpoints of information related to digital hostile influence campaigns. The following report is a summary of what we regard as the main events.

The common thread weaving through this week's events is the maturation of information as a primary battlespace. States are no longer just experimenting with influence operations; they are systematically integrating them as a core component of national strategy and warfare. Russia's campaigns in Kursk, Moldova, and Latvia demonstrate a refined playbook for regional destabilization. Iran's swift deployment of cyberattacks during its kinetic conflict with Israel shows a tight fusion of digital and physical military action. Even in the private sector, the rise of mercenary groups in Africa is defined by a package of services that includes both lethal drones and targeted disinformation. From a domestic riot in the UK fueled by social media algorithms to a great-power wargame in Latin America, the field of conflict is increasingly informational.

[Report Highlights]

● Russia manufactures "frozen zones" of instability like Transnistria and South Ossetia not merely to control territory, but to wield them as strategic leverage to veto the sovereign foreign policy choices of entire nations. - DisinfoWatch
● The Belarusian security services use a database named 'BESporiadki' ('Riots') to track over 100,000 individuals suspected of participating in the 2020 protests, effectively stripping them of basic civil rights. - EUvsDisinfo
● Analysis of the Iran-Israel conflict demonstrated a three-tiered model of Iranian cyber operations, ranging from loosely aligned hacktivists to formally tasked state-sponsored APT groups, which collaborate to form a resilient and ideologically driven offensive front. - SecurityScorecard
● The convergence of mercenaries who profit from chaos, cheap drone technology that provides non-state actors with air power, and AI-driven disinformation creates a self-reinforcing cycle of instability across Africa. - The Conversation
● False narratives speculating that the Southport killer was a Muslim or an asylum-seeker received an estimated 27 million impressions on X within the first 24 hours, directly fueling subsequent real-world riots. - CyberNews
● Latvia possesses one of the most developed ecosystems for countering disinformation in Europe, with a multi-layered approach involving government bodies, investigative journalists, NATO-affiliated centers, and NGOs all working in concert. - EU Disinfo Lab
● The EU is partnering with the Nigerian Guild of Editors to build institutional capacity within newsrooms, viewing enhanced media literacy as a primary antidote to the destabilizing effects of disinformation. - EU Press and information team

[Weekly Review]

Divergent Priorities Degrade Sino-Russian IO
Sputnik's Kursk Thread: A Case Study in Moral Inversion
Kremlin Employs Consistent Playbook to Destabilize Post-Soviet States
Lukashenka Entrenches Totalitarian Rule Five Years After Protests
Iran-Linked Cyber Actors Coordinated Attacks During Kinetic Conflict
The Privatization of Violence: PMC Models and Asymmetric Tech in Africa
Algorithmic Amplification and Malinformation During the Southport Crisis
Analysis of Russian FIMI Operations Targeting Moldova's Elections
Latvia Counters Persistent Kremlin Disinformation Narratives
EU-Backed Initiative Aims to Build Newsroom Verification Capacity in Nigeria

Divergent Priorities Degrade Sino-Russian IO

In a simulated crisis over a Guatemalan election, China and Russia joined forces and won, successfully backing a pro-Beijing party to victory. This initial success, documented in a CSIS wargaming report, demonstrated the potential power of their coordinated information operations in Latin America. Yet the alliance quickly faltered. When faced with a subsequent decision on a lucrative mining deal, Russia prioritized its own economic interests over Beijing's political goal of isolating Taiwan, creating a strategic rift. This fissure proved decisive, allowing the U.S. team, which had pursued a quieter, long-term influence-building strategy, to ultimately secure its objectives. The exercise reveals that the much-touted "no limits" partnership may be more of a brittle, opportunistic alignment, susceptible to fracturing under the pressure of competing national interests.

Source: CSIS, Ryan C. Berg and Henry Ziemer, 2025. Assessing the Impact of China-Russia Coordination in the Media and Information Space. [online] Available at: https://www.csis.org/analysis/assessing-impact-china-russia-coordination-media-and-information-space

[Table of Contents]

Sputnik's Kursk Thread: A Case Study in Moral Inversion

An article from DisinfoWatch analyzes a Russian disinformation campaign, circulated by Sputnik International in August 2025, that sought to reframe Ukraine's Kursk offensive. The analysis details how Sputnik deployed a barrage of falsehoods, including grossly inflated statistics on Ukrainian casualties, recycled propaganda about "neo-Nazi thugs," and entirely unsubstantiated claims of atrocities committed by Ukrainian forces. According to the report, this narrative is a textbook example of Russian information warfare. It employs moral inversion to portray Russia as a justified liberator, not an aggressor, while simultaneously dehumanizing Ukrainians. The campaign's strategic goals are to bolster domestic support for the war, legitimize Russian actions internationally, and erode Western backing for Ukraine.

Source: DisinfoWatch, Aug 2025, Kremlin Disinformation in Sputnik's Account of Ukraine's Kursk Offensive. [online] Available at: https://disinfowatch.org/disinfo/kremlin-disinformation-in-sputniks-account-of-ukraines-kursk-offensive/

[Table of Contents]

Kremlin Employs Consistent Playbook to Destabilize Post-Soviet States

EUvsDisinfo, in a recent article, describes the characteristically consistent playbook that the Kremlin employs to destabilize post-Soviet neighbors that seek Euro-Atlantic integration. The analysis asserts that whether in Georgia, Moldova, or Ukraine, Russia repeatedly fuels separatism, spreads disinformation such as false genocide claims, and intervenes militarily under the pretext of "peace enforcement." This strategy creates dependent proxy regions, or "frozen zones," which serve as instruments of instability to block the sovereign ambitions of its neighbors. The report identifies the 2008 invasion of Georgia as a crucial "test run" for the tactics of disinformation, passportization, and supposed humanitarian intervention that were later scaled up for the invasion of Ukraine.

Source: EUvsDisinfo, Aug 2025, Genocide Claims, Peace Enforcement and Proxy Regions: Disinformation from Georgia to Ukraine. [online] Available at: https://euvsdisinfo.eu/genocide-claims-peace-enforcement-and-proxy-regions-disinformation-from-georgia-to-ukraine/

[Table of Contents]

Lukashenka Entrenches Totalitarian Rule Five Years After Protests

The Belarusian state maintains a database of over 100,000 "enemies of the state" called 'BESporiadki,' a name whose Russian root 'BES' means 'demon.' According to a paper by EUvsDisinfo, this list is just one tool in Aleksandr Lukashenka's ongoing war against his own people, five years after they peacefully protested his rule. The analysis reveals a state transformed into a "repression machine" that has institutionalized torture and created an atmosphere of total fear to crush dissent. Dependent on Vladimir Putin's backing, Lukashenka's regime has drifted towards a harsh, semi-totalitarian dictatorship that no longer feigns democracy and openly persecutes anyone deemed disloyal.

Source: EUvsDisinfo, 2025, Lukashenka's ongoing retaliation against Belarusians. [online] Available at: https://euvsdisinfo.eu/lukashenkas-ongoing-retaliation-against-belarusians/

[Table of Contents]

Iran-Linked Cyber Actors Coordinated Attacks During Kinetic Conflict

Within days of kinetic conflict erupting between Iran and Israel in June 2025, the Iranian state-sponsored hacking group Imperial Kitten purchased domains like stopirannukes[.]com to launch sophisticated phishing campaigns against Israeli supporters. This swift, tailored action, detailed in a SecurityScorecard analysis, exemplifies the tight integration of cyber and kinetic warfare. The analysis of hundreds of thousands of Telegram messages shows a broader ecosystem of IRGC-linked proxies and ideologically motivated hacktivists from across the region launching disruptive attacks in concert. The report concludes that modern conflict now inherently features a digital front where state tasking and grassroots ideological fervor merge, creating a complex and agile threat that moves at the speed of war.

Source: SecurityScorecard, 2025. From The Depths of the Shadows: IRGC and Hacker Collectives Of The 12-Day War. [online] Available at: https://securityscorecard.com/wp-content/uploads/2025/08/From-The-Depths-of-the-Shadows_IRGC-and-Hacker-Collectives_AUG5.pdf

[Table of Contents]

The Privatization of Violence: PMC Models and Asymmetric Tech in Africa

A new book by Alessandro Arduino, summarized in The Conversation, argues that conflict in Africa is being fundamentally reshaped by the convergence of privatized violence, new technology, and disinformation. The analysis tracks the rise of mercenaries and private military companies from Russia, Turkey, and China, each with distinct operational models for profiting from instability. It highlights how the proliferation of cheap, lethal drones provides non-state actors with an "off-the-shelf air force," while AI is poised to "turbocharge" disinformation campaigns targeting Africa's youthful population. The book concludes that these intersecting trends thrive in power vacuums across the continent, creating complex new threats that demand global vigilance and regulation.

Source: The Conversation, Alessandro Arduino, 2025. Drones, disinformation and guns-for-hire are reshaping conflict in Africa: new book tracks the trends. [online] Available at: https://theconversation.com/drones-disinformation-and-guns-for-hire-are-reshaping-conflict-in-africa-new-book-tracks-the-trends-262256

[Table of Contents]

Algorithmic Amplification and Malinformation During the Southport Crisis

An article in CyberNews, reporting on a new analysis by Amnesty International, concludes that Elon Musk and his platform X played a central role in inciting the 2024 UK riots. The analysis found that following the tragic Southport murders, X's engagement-driven algorithm systematically amplified racially charged misinformation about the killer's true identity, directly contributing to violence against Muslim and migrant communities. Musk himself fueled the unrest, with one post declaring "civil war is inevitable." Amnesty attributes the platform's failure to Musk's post-2022 gutting of trust and safety teams, arguing that X now lacks adequate safeguards and fundamentally prioritizes outrage over user safety.

Source: CyberNews, 7 Aug 2025, Amnesty: Musk played central role in inciting 2024 UK riots. [online] Available at: https://cybernews.com/news/united-kingdom-riots-misinformation-x-musk/

[Table of Contents]

Analysis of Russian FIMI Operations Targeting Moldova's Elections

Days before a landmark EU-Moldova summit in Chisinau, a Russian-linked operation dubbed "Matryoshka" released doctored videos with fake Euronews branding to spread false claims of conflict between EU and Moldovan leaders. This was part of a wider disinformation blitz analyzed in an EUvsDisinfo article, which concludes that pro-Kremlin actors are engaged in a full-scale offensive to derail Moldova's European path ahead of its September elections. The campaign uses a multi-platform strategy, from Telegram to TikTok, to push narratives of debt traps and lost sovereignty, seeking to exploit economic fears and ultimately undermine the democratic process itself.

Source: EUvsDisinfo, Aug 04 2025, The summit on a disinformation mountain. [online] Available at: https://euvsdisinfo.eu/the-summit-on-a-disinformation-mountain/

[Table of Contents]

Latvia Counters Persistent Kremlin Disinformation Narratives

A recent Kremlin-backed narrative falsely claimed that Latvians read more "Mein Kampf" than Harry Potter, part of a broader effort to brand the country as a Nazi sympathizer. According to a report from the EU Disinfo Lab, this is one of five persistent disinformation themes Russia uses to attack Latvia's statehood. The analysis finds that these narratives, which also include claims of "ethnic cleansing" of Russian speakers and portrayals of Latvia as a "failed state," are designed to erode public trust and destabilize the nation from within. While Latvia has developed one of Europe's strongest ecosystems for countering disinformation, the report concludes that it remains on the front lines of an enduring information war.

Source: EU Disinfo Lab, Aug 2025, Disinfo landscape in Latvia. [online] Available at: https://www.disinfo.eu/wp-content/uploads/2025/08/20250809_Disinfo-landscape-in-Latvia-v2.pdf

[Table of Contents]

EU-Backed Initiative Aims to Build Newsroom Verification Capacity in Nigeria

Following an EU-backed workshop in Lagos, about 40 of Nigeria's top media editors have pledged to create dedicated fact-checking desks within their newsrooms. This initiative, announced in a press release by the EU Delegation to Nigeria, is a direct response to the surge of fake news and propaganda that threatens to undermine the country's stability. The EU Ambassador, Gautier Mignot, framed the issue as an "existential imperative," arguing that misinformation erodes the credibility of both the message and the messenger. This collaboration between the EU and the Nigerian Guild of Editors marks a critical effort to equip media gatekeepers with the skills needed to defend the information space.

Source: Press and information team of the Delegation to NIGERIA, 2025. EU, editors unite against disinformation. [online] Available at: https://www.eeas.europa.eu/delegations/nigeria/eu-editors-unite-against-disinformation_en

[Table of Contents]

[Takeaways]

Multiple events this week, from a simulated Sino-Russian IO campaign to Russia's real-world operations in Europe, reveal that authoritarian information strategies have inherent vulnerabilities. While states are refining their playbooks for destabilization, their alliances are often brittle and transactional, prone to fracturing under the pressure of competing national interests. The most durable defense is not reactive, but proactive. Investing in the resilience of the broader information ecosystem, such as the EU-backed initiative to build verification capacity in Nigerian newsrooms, proves to be a more effective long-term strategy than attempting to counter every individual threat.

[Table of Contents]

[Download Report]

  • Targeting Taiwan: An Influence Campaign with Benefits

Between March and April 2025, Taiwan's government found itself again under considerable internal and external pressure. Intensified military exercises [i] by the People's Republic of China and a tense post-election atmosphere placed significant strain on the newly elected administration of the Democratic Progressive Party (DPP). Mounting criticism, calls for the resignation of individual lawmakers, and resurgent street protests added to the political volatility. [ii] The social media platform Threads, already prominent during the 2024 Taiwan elections, once again emerged as a focal point of political discourse, particularly among younger users. [iii]

Coordinated Influence Campaign Against Taiwan

In this climate, Doublethink Lab uncovered what it classified as a coordinated hostile influence campaign. [iv] Researchers identified 51 inauthentic accounts posing as Taiwanese citizens. These accounts posted identical anti-DPP content. The slogan "I am Taiwanese and I oppose the Pan-Greens" appeared over 1,000 times, with 91 of those posts traced directly to the identified accounts. Through high-frequency repetition and bot-like coordination, the campaign appeared aimed at simulating organic protest and public dissent.

Technical Structure of the Manipulation

The technical analysis revealed a layered structure. The accounts used stolen profile pictures of Asian influencers (T0145.001), consistently generated localized content through fabricated personas (T0101, T0143.002), and posted calls to protest against the DPP (T0138). Posts were often published in identical form within tightly clustered time windows, strong evidence of automation and orchestration (T0049.003, T0084.001); a minimal code sketch of these coordination heuristics follows below. [v]

Figure 1 - A plot of how many times each account posted content that was also posted by at least 10 other accounts; the right side shows 12 different accounts making 13 posts with the same content within 20 minutes. Courtesy of Doublethink Lab [vi]

Attribution Clues and Indicators

Attribution rests on three main indicators. First, 23 accounts used simplified Chinese characters in 71 posts, some partially, others entirely. Second, forensic tracing of password-reset vectors linked the accounts to phone numbers in Hong Kong and Poland. Third, the use of identical tracking links and highly structured posting behavior resembled centralized infrastructure seen in past campaigns, notably during the U.S. elections. Based on its reach and distribution, Doublethink Lab rates the campaign a 2 out of 6 on the breakout scale: the content achieved some organic engagement but remained confined to the platform Threads.
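To make the detection logic concrete, the sketch below illustrates, under stated assumptions, the kinds of heuristics described above: identical content copied across many accounts, identical posts published within a tight time window, and simplified characters as an attribution clue. This is not Doublethink Lab's actual pipeline. The `Post` fields are hypothetical, the 10-account and 20-minute thresholds are taken from the report's figures, and the character check assumes the third-party `opencc` Python package.

```python
# Illustrative heuristics only; not Doublethink Lab's methodology.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

from opencc import OpenCC  # assumed dependency: pip install opencc


@dataclass
class Post:
    account: str       # hypothetical fields for one scraped Threads post
    text: str
    posted_at: datetime


def copied_content(posts: list[Post], min_accounts: int = 10) -> dict[str, set[str]]:
    """Return texts posted verbatim by at least `min_accounts` distinct accounts."""
    accounts_by_text: dict[str, set[str]] = defaultdict(set)
    for p in posts:
        accounts_by_text[p.text].add(p.account)
    return {t: a for t, a in accounts_by_text.items() if len(a) >= min_accounts}


def burst_clusters(posts: list[Post],
                   window: timedelta = timedelta(minutes=20)) -> list[list[Post]]:
    """Group identical texts whose first and last appearance fall inside `window`,
    the tightly clustered publication pattern that suggests scheduled automation."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        by_text[p.text].append(p)
    clusters: list[list[Post]] = []
    for same_text in by_text.values():
        same_text.sort(key=lambda p: p.posted_at)
        if len(same_text) > 1 and same_text[-1].posted_at - same_text[0].posted_at <= window:
            clusters.append(same_text)
    return clusters


_s2t = OpenCC("s2t")  # simplified-to-traditional converter


def uses_simplified(text: str) -> bool:
    """Rough attribution clue: if converting to traditional characters changes
    the text, it contained simplified forms unusual for a Taiwanese poster."""
    return _s2t.convert(text) != text
```

Run over a dump of posts, `copied_content` and `burst_clusters` surface candidate networks for manual review rather than proof of coordination; benign virality (quotes, memes) also produces duplicates, which is why the report pairs such signals with profile and infrastructure forensics.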
Attribution Clues and Indicators
Attribution rests on three main indicators. First, 23 accounts used simplified Chinese characters in 71 posts, some partially, others entirely. Second, forensic tracing of password-reset vectors linked the accounts to phone numbers in Hong Kong and Poland. Third, the use of identical tracking links and highly structured posting behavior resembled centralized infrastructure seen in past campaigns, notably during the U.S. elections. Based on its reach and distribution, Doublethink Lab rates the campaign a 2 out of 6 on the breakout scale: the content achieved some organic engagement but remained confined to the platform Threads.

Dual Use of the Network
Interestingly, political content constituted only a fraction of the total output. Of more than 7,000 posts, just 290 were political, 275 of them verbatim anti-DPP posts. A curious detail: five posts followed the same template but targeted the Chinese Communist Party (CCP) instead. Whether this was a copy-paste error or a crude attempt at camouflage remains unclear. Still, the idea that a few mistimed clicks might unintentionally reverse a campaign's ideological thrust carries its own kind of irony. Even more striking: the campaign was saturated with sexually suggestive material, often linking to dating websites disguised via URL shorteners (T0153.002). Doublethink Lab flagged 1,122 such posts. While political posts saw significantly higher average engagement (506 likes versus 143 for non-political), the most visible content by far was that promoting physical allure, garnering over 18,000 likes. The juxtaposition of flesh and faction adds a layer of tragicomedy: when political persuasion competes with thirst traps, the algorithm may well crown the latter.

Figure 2 - Sexually suggestive content posted by the network. Courtesy of Doublethink Lab [vii]

Conclusion
The campaign's structure points to external amplification providers, so-called "influence-for-hire" actors, operating modular infrastructures that serve both political and commercial clients. This outsourcing trend is becoming increasingly visible. Social media ecosystems, each with its own logic and audiences, demand tailored delivery and contractors who can navigate these spaces efficiently. For attribution efforts, this creates serious challenges: responsibility chains grow murky as state-linked influence efforts are increasingly executed through deniable intermediaries. Doublethink Lab's report stands out for its methodological rigor, analytical restraint, and transparency. Their investigation is ongoing and well worth following.

Footnotes:
[i] Institute for the Study of War & American Enterprise Institute, Sperzel, M., Shats, D., O'Neil, A., Wugang, K., Han, A., Blustajn, N., Turek, A., Chung, Y. & Chou, A., 2025. China-Taiwan Weekly Update, April 18, 2025. [online] Published April 18, 2025. Available at: https://www.understandingwar.org/backgrounder/china-taiwan-weekly-update-april-18-2025
[ii] Taiwan News, Thomson, J., 2025. Government protest takes place in Taipei. [online] Available at: https://www.taiwannews.com.tw/news/6095464
[iii] Taiwan News, Thomson, J., 2024. Threads, Instagram emerge as key info sources for Taiwan legislature protests. [online] Published 18 June 2024. Available at: https://www.taiwannews.com.tw/news/5892100
[iv] Doublethink Lab (Digital Intelligence Team), 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online] Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
[v] Ibid.
[vi] Ibid.
[vii] Ibid.

DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited.
All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber-based digital hostile influence campaigns 28th July - 3rd August 2025

    [Listen to the Podcast] [Introduction] During the 28th of July to the 3rd of August, 2025, we observed, collected, and analyzed endpoints of information related to cyber-based digital hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.
This week's developments span a broad spectrum of influence operations and digital information warfare, with three dominant themes emerging: the intensification of AI-generated disinformation across global conflicts, the increasing use of state-aligned platforms to manipulate geopolitical narratives, and the shifting responsibilities of tech companies as frontline moderators. From TikTok dismantling covert influence networks to Kremlin-backed media targeting Ukrainian officials with fabricated smears, and Iran and Israel deploying synthetic propaganda amid real-world hostilities, the week's cases expose a converging global trend. At the same time, platforms like X show declining transparency in content moderation, just as states like India and Poland respond assertively to information threats. Collectively, these events reflect the growing overlap between digital, kinetic, and psychological domains in modern statecraft and social control.
[Highlights]
● As published by The Journal, the sudden and unexplained disappearance of Community Notes, X's fact-checking tool, has raised serious concerns about the platform's ability to counter disinformation.
● As reported by The Record, Polish authorities arrested 32 individuals allegedly working for Russian intelligence to conduct sabotage and spread disinformation as part of a broader hybrid warfare campaign targeting Poland.
● According to an article by DisinfoWatch, Russian state media outlets RIA Novosti and Sputnik News have launched a disinformation campaign accusing Denys Yermak, brother of the head of Ukraine's Presidential Administration, of involvement in international drug trafficking.
● As stated in a Carnegie Endowment article, the June 2025 Israeli bombing of Iran's notorious Evin prison was quickly weaponized in the information war between Tehran and Tel Aviv.
● As published by TechXplore, Cornell researchers developed a method to embed authentication codes in lighting, offering a device-agnostic approach to identifying fake or manipulated videos.
● India's Government announced that during Operation Sindoor, it launched a robust campaign to combat a surge in fake news and anti-India propaganda, much of which was traced to foreign sources, particularly Pakistan.
● According to an article by CAIR, AI-generated images and old protest videos were circulated online to falsely portray Israeli cities under attack following military action against Iran.
> TABLE OF CONTENTS
SOCIAL MEDIA PLATFORMS
The Vanishing Watchdog: X's Fact-Check Tool Goes Dark
Balancing Joy and Control: TikTok's Quiet Redesign of Responsibility
Ghosts in the Feed: TikTok Battles Hidden Hands of Influence
STATE ACTORS
[Russia]
From Telegram to Arson: How Russia's Proxy War Targets Poland
Behind the Smear: Fact-Checking Russia's Narrative on Umerov
Kremlin Media Fabricate Drug Smuggling Allegations to Undermine Ukraine
[Iran]
AI-Generated Media Escalated Narrative Warfare in Iran-Israel Conflict
CYFLUENCE ATTACKS
From Missiles to Malware: How Hybrid Threats Target South Korea
GENERAL REPORTS
Climate Misinformation Fuels Effort to Undermine EPA Science
Synthetic Media Escalates Post-FBI Memo on Epstein Case
C2PA, GANs, and the Expanding Scope of Digital Forensics
How Hidden Light Patterns Could Expose Fake Videos
India Blocks 1,400 URLs to Thwart Pakistan's Misinformation
From Fabrication to False Context: CAIR's Disinfo Response Framework
TAKEAWAY
DOWNLOAD SECTION

SOCIAL MEDIA PLATFORMS
The Vanishing Watchdog: X's Fact-Check Tool Goes Dark
In a news article from The Journal, the unexplained disappearance of Community Notes, the crowd-sourced fact-checking tool on X (formerly Twitter), has raised concerns about the platform's compliance with EU content moderation obligations. While Community Notes remains technically active, users report it no longer appears on posts, complicating efforts to counter misinformation. X attributes the issue to a May 22 data center fire, though technical failures predate that incident. Experts such as DCU's Eileen Culloty and ISD's Aoife Gallagher emphasize the platform's unreliability and susceptibility to manipulation. Community Notes had been criticized for failing to consistently debunk disinformation due to its consensus-based model. Despite its flaws, its absence increases user vulnerability to false narratives, particularly amid broader scrutiny under the EU's Digital Services Act. Ongoing investigations by the European Commission are probing whether X's moderation efforts meet legal standards, against a backdrop of regulatory tensions between the EU and Elon Musk's X Corp.
Source: The Journal, Raymond, S., 2025. Community Notes vanishes from X feeds, raising 'serious questions' amid ongoing EU probe. [online] Available at: https://www.thejournal.ie/x-community-notes-disappeared-from-platform-factchecking-misinformation-elon-musk-6717349-May2025/
[Back to Table of Contents]
Balancing Joy and Control: TikTok's Quiet Redesign of Responsibility
In a press release by TikTok Newsroom, the platform announced a comprehensive suite of features aimed at strengthening user safety, privacy, and creativity across its global user base, with a special focus on teens, families, and creators. Among the updates are enhanced Family Pairing tools that notify parents when teens upload content, more transparency into teens' privacy settings, and broader oversight through topic management features. To promote digital wellness, TikTok introduced "Well-being Missions," gamified activities that encourage healthy digital habits, and teased a holistic in-app wellness hub. For creators, new tools include Creator Care Mode, offensive content muting, and pre-publication eligibility checks for the For You feed. Additionally, TikTok expanded the pilot of its "Footnotes" feature in the U.S., enabling users to collaboratively contextualize video content.
These efforts reflect TikTok's strategy to pre-empt regulatory scrutiny by embedding safety into user experience architecture.
Source: TikTok Newsroom, Presser, A., 2025. New trust and safety tools for creators, families, and the TikTok Community. [online] Available at: https://newsroom.tiktok.com/en-us/tiktok-announces-a-suite-product-features
[Back to Table of Contents]
Ghosts in the Feed: TikTok Battles Hidden Hands of Influence
In a report from TikTok, the platform disclosed the removal of six covert influence networks active in June 2025, totaling 616 inauthentic accounts and over 180,000 cumulative followers, with an additional 1,198 accounts dismantled for recidivist behavior. The report provides detailed attribution, suggesting that multiple networks were operated from Russia, the Philippines, Venezuela, and Saudi Arabia, and targeted regional political discourses through fabricated personas, AI-assisted amplification, and location-masking tactics. Networks sought to manipulate perceptions in Moldova, the EU, Saudi Arabia, the Philippines, and Ecuador, often promoting pro-Kremlin or partisan narratives while impersonating local citizens or journalists. TikTok's disclosure highlights the platform's internal and external detection methods and indicates a commitment to disrupting politically motivated disinformation campaigns on a rolling basis.
Source: TikTok Transparency Center (TikTok), 2025. Covert Influence Operations. [online] Available at: https://www.tiktok.com/transparency/en-us/covert-influence-operations
[Back to Table of Contents]
STATE ACTORS
[Russia]
From Telegram to Arson: How Russia's Proxy War Targets Poland
In a news article published by The Record, Polish Prime Minister Donald Tusk announced that 32 individuals have been detained in connection with pro-Russian sabotage activities targeting infrastructure since the outset of Russia's full-scale invasion of Ukraine. The suspects, a multinational group including Colombian, Belarusian, and Ukrainian nationals, are accused of arson, reconnaissance, and participation in disinformation campaigns orchestrated by Russian intelligence. One operative, a Colombian national, was already convicted in the Czech Republic for attacks carried out in both countries. Authorities assert that these acts form part of a broader Russian hybrid warfare strategy involving sabotage, false narratives, and recruitment through encrypted platforms such as Telegram. Polish counterintelligence efforts have intensified, with the government warning of further Russian attempts to destabilize national security, particularly via attacks on critical infrastructure and aviation systems.
Source: The Record, Antoniuk, D., 2025. Poland says more than 30 suspects face trial over pro-Russian sabotage. [online] Available at: https://therecord.media/poland-32-suspects-pro-russia-sabotage
[Back to Table of Contents]
Behind the Smear: Fact-Checking Russia's Narrative on Umerov
In a media analysis by StopFake, the article outlines how Russian state actors and aligned media have launched a coordinated disinformation campaign aimed at discrediting Ukrainian Defense Minister Rustem Umerov. The report details attempts by Russian Foreign Ministry spokesperson Maria Zakharova and outlets like Argumenty i Fakty to falsely portray Umerov as an unqualified figure, labeling him a lifelong telecom manager and a possible agent of Turkey or the U.S.
In reality, Umerov's six-year telecom stint ended in 2010, followed by over a decade of leadership in investment, civil society, and government. His diplomatic experience includes key roles in prisoner exchanges and Ukraine's de-occupation strategy. Allegations regarding foreign ties are based on distorted interpretations of his support for Bayraktar drone deals and his U.S.-based family. The text demonstrates how pro-Kremlin figures fabricate narratives that conflate diplomatic engagement with espionage.
Source: StopFake, Tsoy, I., 2025. "An agent of the U.S. and Turkey," "a manager at a phone company": how Russian propaganda distorts the biography of Ukrainian Defense Minister Umerov. [online] Available at: https://www.stopfake.org/ru/agent-ssha-i-turtsii-menedzher-v-telefonnoj-kompanii-kak-rossijskaya-propaganda-iskazhaet-biografiyu-ukrainskogo-ministra-oborony-umerova/
[Back to Table of Contents]
Kremlin Media Fabricate Drug Smuggling Allegations to Undermine Ukraine
A report by DisinfoWatch exposes a Russian state media disinformation operation accusing Denys Yermak, brother of Ukraine's presidential chief of staff, of leading an international drug trafficking scheme on behalf of Kyiv's leadership. The story, pushed by RIA Novosti and Sputnik News, relies exclusively on an unnamed source and offers no independent or legal corroboration. The fabricated narrative implicates Turkish and Moldovan security services in supporting the smuggling of Afghan narcotics into Ukraine via Azerbaijan and the UAE. Analysts identify this as part of the Kremlin's strategic playbook to discredit Zelenskyy's inner circle, frame Ukraine as a criminalized state, and fracture NATO solidarity. With no verifiable evidence and a reliance on untraceable attribution, the story is flagged as a textbook case of narrative-driven, Kremlin-backed disinformation.
Source: DisinfoWatch, 2025. Kremlin Invents Drug Smuggling Allegations Against Zelenskyy's Inner Circle. [online] Available at: https://disinfowatch.org/disinfo/kremlin-invents-drug-smuggling-allegations-against-zelenskyys-inner-circle/
[Back to Table of Contents]
[Iran]
AI-Generated Media Escalated Narrative Warfare in Iran-Israel Conflict
When a WhatsApp video of smoke billowing over Evin prison circulated on June 23, 2025, it seemed to symbolize a break in Iran's authoritarian stranglehold. But shortly after, Israeli officials posted another clip: slicker, cleaner, and likely fake. As detailed in a Carnegie Endowment report, this moment encapsulated a broader war, one fought not just with missiles but with manipulated media. Both Iranian and Israeli actors deployed AI-generated content to distort reality, with Iran suppressing real-world horrors and Israel framing its bombings as precision liberations. Civilians, cut off by blackouts and bombarded by synthetic media, could neither verify what they saw nor understand what was unfolding around them. Despite detection tools like SynthID and C2PA, their limitations leave global citizens vulnerable. The war's ceasefire may have quieted bombs, but the erosion of trust in what is seen and shared signals a far more enduring threat.
Source: Carnegie Endowment for International Peace, Alimardani, M. & Gregory, S., 2025. Iran-Israel AI War Propaganda Is a Warning to the World. [online]
Available at: https://carnegieendowment.org/research/2025/07/iran-israel-ai-war-propaganda-is-a-warning-to-the-world
[Back to Table of Contents]
CYFLUENCE ATTACKS
From Missiles to Malware: How Hybrid Threats Target South Korea
A strategic analysis from the Australian Strategic Policy Institute (ASPI) maps 81 hybrid threat incidents targeting South Korea between 2016 and 2025, revealing an increasingly diverse and complex threat environment. While North Korea remains the dominant actor, responsible for 54% of documented activities, including missile tests, cyberattacks, and territorial provocations, China's involvement has escalated, particularly in economic coercion, cyber intrusions, and narrative operations. Threat vectors include military pressure, IP theft, sabotage, and AI-enhanced disinformation. In response, Seoul has adopted a five-pronged approach encompassing military readiness, diplomacy, legislation, capability development, and public awareness. However, the report raises concerns about gaps in legal frameworks and underdeveloped responses to non-North Korean actors. The election of President Lee Jae-myung may signal a shift in posture, with an emphasis on domestic AI development and improved EU cooperation. ASPI underscores the value of multilateral coordination, especially as hybrid threats become regionally and technologically more diffuse.
Source: Australian Strategic Policy Institute (ASPI), Fitriani, D., Shih, S. & Wai, A., 2025. Mapping a decade's worth of hybrid threats targeting South Korea. [online] Available at: https://www.aspistrategist.org.au/mapping-a-decades-worth-of-hybrid-threats-targeting-south-korea/
[Back to Table of Contents]
GENERAL REPORTS
Climate Misinformation Fuels Effort to Undermine EPA Science
A news article by E&E News examines the Trump administration's proposal to revoke the EPA's endangerment finding, which underpins federal climate regulations, and finds it grounded in discredited science and misinformation. The proposed rule leans on fringe studies and arguments from Department of Energy personnel known for denying the climate consensus. Experts highlight that the proposal misrepresents peer-reviewed climate science, selectively cites data, and ignores the overwhelming evidence linking greenhouse gas emissions to global warming. Claims that climate models are flawed, that CO₂ boosts agriculture, or that warming has net public health benefits are all refuted by decades of robust research. Critics warn that this attempt to undermine foundational climate policy endangers public trust in scientific integrity and risks reversing critical mitigation efforts at a time when climate impacts, rising temperatures, sea levels, and extreme weather are accelerating and measurable.
Source: E&E News (by Politico), Harvey, C. & Waldman, S., 2025. EPA attacks climate science. Here are the facts. [online] Available at: https://www.eenews.net/articles/epa-attacks-climate-science-here-are-the-facts/
[Back to Table of Contents]
Synthetic Media Escalates Post-FBI Memo on Epstein Case
A news report by NewsGuard reveals that AI-generated deepfake videos falsely depicting former President Donald Trump alongside Ghislaine Maxwell and underaged girls continue to proliferate, despite official denials of any criminal implication. Using detection tools like IdentifAI, NewsGuard identified two new deepfakes within a week of naming the Trump-Epstein fabrications its "False Claim of the Week," bringing the total to three videos and seven images.
These synthetic media assets gained over 7.2 million cumulative views on social platforms, underscoring the challenge of curbing virality, even after takedown efforts. The surge in such content followed a July 7 FBI-Justice Department memo stating no evidence exists of a client list implicating high-profile individuals in Epstein-related crimes. While Trump was publicly linked to Epstein via social engagements and flight records, he has not been charged. The persistence of these deepfakes illustrates the escalating role of AI in disinformation targeting public figures.
Source: NewsGuard, Olson, J., 2025. Trump-Epstein Deepfakes Keep Coming. [online] Available at: https://www.newsguardrealitycheck.com/p/trump-epstein-deepfakes-keep-coming
[Back to Table of Contents]
C2PA, GANs, and the Expanding Scope of Digital Forensics
In an expansive academic article published by PNAS Nexus, Hany Farid of UC Berkeley dissects the evolving landscape of manipulated media, warning that generative AI has propelled the risks of digital deception to an unprecedented scale. While early manipulation relied on rudimentary tools, today's deepfakes, powered by GANs and diffusion models, cross the uncanny valley, producing images and voices indistinguishable from reality. Farid examines the compounded harms: NCII, AI-generated CSAM, financial fraud, disinformation, and the emergent "liar's dividend." He offers a multilayered mitigation strategy that encompasses content provenance standards (such as C2PA), regulatory guardrails, industry liability reform, and K-12 media literacy. Importantly, he critiques an academic culture that accelerates innovation without foreseeing misuse. Farid asserts that without globally coordinated intervention, society risks entering a post-truth era where the real and the synthetic are functionally indistinguishable.
Source: PNAS Nexus (Oxford Academic), Farid, H., 2025. Mitigating the harms of manipulated media: Confronting deepfakes and digital deception. [online] Available at: https://academic.oup.com/pnasnexus/article/4/7/pgaf194/8209913
[Back to Table of Contents]
How Hidden Light Patterns Could Expose Fake Videos
In an article published by TechXplore, Cornell researchers unveil a breakthrough in video authentication: a watermarking system that encodes secret data directly into imperceptible light fluctuations during filming. This "noise-coded illumination" embeds time-stamped signals into programmable light sources, such as screen displays or lighting fixtures, enabling forensic analysts to detect tampering or AI-generated forgeries by analyzing the light-encoded patterns. Unlike previous watermarking efforts that relied on camera compliance or post-processing, this technique requires no special equipment from the videographer. Lead researcher Peter Michael demonstrated that even multiple light sources can carry unique codes, significantly raising the bar for potential adversaries. While not impervious to future circumvention, this method provides a practical, hardware-level defense against the growing threat of deepfakes and misinformation.
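The underlying idea is easiest to see in a toy model. The sketch below is our own simplified illustration of the concept, not the Cornell system: the helper names, signal model, amplitude, and threshold are all assumptions, and the real method recovers illumination signals from video frames rather than simulated per-frame brightness values.

```python
# Minimal sketch of the noise-coded illumination idea under simplifying
# assumptions -- an illustration, not Cornell's implementation. A light source
# adds a tiny pseudorandom flicker derived from a secret seed; a verifier
# correlates per-frame scene brightness against the expected code.
import numpy as np

def light_code(seed: int, n_frames: int) -> np.ndarray:
    """Pseudorandom +/-1 code, one chip per video frame."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=n_frames)

def simulate_capture(code: np.ndarray, base: float = 128.0,
                     amplitude: float = 0.8, sensor_noise: float = 2.0) -> np.ndarray:
    """Mean frame brightness of a scene lit by the coded source (toy camera model)."""
    rng = np.random.default_rng(0)
    return base + amplitude * code + rng.normal(0.0, sensor_noise, size=code.size)

def verify(brightness: np.ndarray, seed: int, threshold: float = 0.15) -> bool:
    """Authentic footage should correlate with the code for the claimed key/time."""
    code = light_code(seed, brightness.size)
    centered = brightness - brightness.mean()
    corr = centered @ code / (np.linalg.norm(centered) * np.linalg.norm(code) + 1e-9)
    return bool(corr > threshold)

frames = simulate_capture(light_code(seed=42, n_frames=3000))
print(verify(frames, seed=42))  # True: footage carries the expected light code
print(verify(frames, seed=99))  # False: wrong key, e.g. a synthetic or re-staged clip
```

Because the flicker amplitude sits well below the camera's noise floor for any single frame, only correlation over many frames reveals it, which is why the approach is hard for a forger to notice, let alone reproduce without the key.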
Source: TechXplore / Cornell University, Waldron, P., 2025. Hiding secret codes in light can protect against fake videos. [online] Available at: https://techxplore.com/news/2025-07-secret-codes-fake-videos.html
[Back to Table of Contents]
India Blocks 1,400 URLs to Thwart Pakistan's Misinformation
In an official press release by the Press Information Bureau (PIB), the Indian government detailed its coordinated effort to counteract a surge of Pakistani disinformation during Operation Sindoor. Through a centralized, 24x7 Control Room comprising representatives from the armed forces and media units, India actively debunked fake content targeting its military, disseminated accurate information, and blocked over 1,400 URLs under Section 69A of the IT Act. The PIB Fact Check Unit (FCU) verified and flagged misleading content, much of it generated by Pakistan-based social media accounts, and issued advisories to prevent real-time coverage of military movements. The government's rapid and institutionalized response highlights a growing emphasis on digital counter-propaganda frameworks to protect national security narratives.
Source: Press Information Bureau (PIB), Government of India, 2025. Government Debunks Pakistani Propaganda Against India and Armed Forces. [online] Available at: https://www.pib.gov.in/PressReleasePage.aspx?PRID=2150213
[Back to Table of Contents]
From Fabrication to False Context: CAIR's Disinfo Response Framework
In a recent briefing by CAIR Los Angeles, the organization emphasized the urgent need for media literacy as a defense against the proliferation of misinformation and disinformation in today's hyperconnected world. With 86% of U.S. adults consuming news through digital platforms, CAIR warns that fabricated, manipulated, and misleading content, especially involving geopolitics and Islamophobia, can shape public perception and policy. The report distinguishes between misinformation and disinformation, outlines their tactics, and presents recent examples, including disinformation surrounding U.S. and Israeli military actions in Iran. CAIR also highlights AI-generated content as a rising threat and provides practical tools for detecting false narratives. By advocating for critical media consumption and highlighting its own work monitoring biased coverage, CAIR positions media literacy as essential to resisting manufactured consent for state violence and combating anti-Muslim narratives.
Source: CAIR Los Angeles, CAIR Staff, 2025. Media Literacy 101: Countering Disinformation in the Modern World. [online] Available at: https://ca.cair.com/updates/media-literacy-101-countering-disinformation-in-the-modern-world/
[Back to Table of Contents]
TAKEAWAY
The fusion of generative AI with influence operations points toward an increasingly volatile information environment. As AI tools become cheaper, more realistic, and easier to deploy, both state and non-state actors will escalate their use of synthetic media to shape strategic narratives. Regulatory frameworks, such as the EU's scrutiny of X and the expansion of provenance standards like C2PA, will likely tighten, but enforcement will remain inconsistent. Without interoperable global mechanisms, this fragmented response will perpetuate asymmetric vulnerabilities, especially in conflict and high-risk zones.
[Back to Table of Contents] [Download Report]

  • Tackling Disinformation Campaigns in Indonesia

    In the latest article, Dr. Eka Nugraha Putra examines a recent case of coordinated disinformation in Indonesia to assess how the legal framework may serve as a tool to counter such phenomena. The author argues that coordinated disinformation campaigns in Indonesia could be addressed as forms of organized crime. He points to the country's newly reformed criminal code as a potential turning point, since it could enable legal responsibility to be extended to organizations, not just individuals. By shifting the legal focus toward the networks and entities behind digital manipulation, the article offers a forward-looking approach grounded in the Indonesian context. At the same time, it contributes to the broader international debate on how states can confront hostile influence and disinformation campaigns.
[About the Author]
Dr. Eka Nugraha Putra is the author of the book "Free Speech in Indonesia: Legal Issues and Public Interest Litigation". Find more information here: https://ekanugrahaputra.com/.
[Download Full Report here]

  • Weekly Report: Cyber-based influence campaigns 14th - 20th July 2025

    [Listen to the Podcast] [Introduction] During the 14th to the 20th of July 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report provides a summary of the key events we consider most significant.
This week's review highlights the convergence of cyber, information, and cognitive warfare domains. The EU's sanctioning of a Russian military unit for GPS jamming underscores the kinetic potential of hybrid threats, while multinational operations target pro-Kremlin hacktivist groups. Simultaneously, actors leverage AI and inauthentic networks to inflame domestic political tensions from Europe to Asia. These events signal a complex security environment in which state and non-state actors exploit technological and societal vulnerabilities, necessitating a coordinated, multifaceted response from Western alliances.
[Highlights]
● According to a joint analysis, a botnet of over 3,600 automated accounts is conducting a targeted "informational and cultural occupation" in Russian-occupied Ukraine. - Atlantic Council
● The EU has sanctioned a Russian military unit for its role in widespread GPS jamming over the Baltic Sea, an action that has repeatedly disrupted civilian aviation. - EEAS Delegation to Ukraine
● Despite economic fragility, China is showing signs of returning to tighter control over its AI sector, with intensified oversight of leading firms and the introduction of new content-labeling regulations. - Carnegie Endowment
● Influence operations in the Philippines have specifically targeted the feud between the Marcos and Duterte factions, demonstrating a tactic of amplifying existing domestic divisions. - Foreign Policy
● A Doublethink Lab report identified 51 inauthentic Threads accounts, likely PRC-linked, that posed as Taiwanese citizens by using stolen photos of influencers to amplify anti-government narratives. - Doublethink Lab
● A bot network, which had previously shown unified support for MAGA figures, fractured and began posting contradictory messages following the controversy over the Epstein files. - NBC News
● A Conversation article examines "cognitive warfare" as an emerging domain of conflict where adversaries manipulate human perception through disinformation, often below the threshold of traditional war. - The Conversation
● Austria has become a hotspot for persistent disinformation, with narratives often amplified by far-right political actors like the FPÖ. - EU DisinfoLab
> TABLE OF CONTENTS
HOSTILE INFLUENCE CAMPAIGNS
Kremlin Weaponizes History in Disinformation Campaign Against Germany
Russia-Linked Group Impersonates Journalists in European Disinfo Push
How Russia Tailors Propaganda for an 'Informational Occupation'
New EU Sanctions Target Russian FIMI, From State Media to Military Units
Control vs. Growth: The New Dilemma Shaping China's AI Ambitions
Pakistan's Alliance with China: A Partnership with Hidden Costs
From Deepfakes to 'Influence for Hire': China's Evolving Tactics
Fake Accounts Impersonate Taiwanese on Threads
AI RELATED ARTICLES
Algorithmic Lies: AI News Channels Undermine Canada's Election
AI Bot Network Fractures Amid MAGA Epstein Fallout
Digital Democracy in Decline: Global Trends and Consequences
GENERAL REPORTS
Cognitive Warfare: The Silent Frontline of Modern Conflict
Austria's Disinformation Landscape: Narratives, Actors, and Impacts
International Sweep Disrupts Pro-Russian Cybercrime Network
EU Observatory Expands Fight Against Online Disinformation
TAKEAWAYS
DOWNLOAD SECTION

HOSTILE INFLUENCE CAMPAIGNS
Kremlin Weaponizes History in Disinformation Campaign Against Germany
A report from EU vs. Disinfo details a coordinated Kremlin information manipulation campaign aimed at portraying Germany as a resurgent, aggressive military power. The campaign's central narrative, promoted through state-controlled media such as Rossiya 1 and Vesti FM, distorts Germany's defensive policies into an offensive "rearmament" that betrays post-war commitments. The author states this strategy involves direct demonization of political figures, including baselessly labeling Chancellor Friedrich Merz a "descendant of Nazis." The campaign weaponizes historical trauma to justify Russia's invasion of Ukraine to domestic audiences while simultaneously seeking to undermine German public support for Ukraine and weaken transatlantic security cooperation. The report concludes that this is a top-down strategy, endorsed by Kremlin officials such as Dmitry Peskov, with clear objectives in both domestic and foreign policy.
Source: EUvsDisinfo, 2025. Summoning the ghost of the Reich. [online] Available at: https://euvsdisinfo.eu/summoning-the-ghost-of-the-reich/
[Back to Table of Contents]
Russia-Linked Group Impersonates Journalists in European Disinfo Push
Researchers have identified a Kremlin-linked threat actor, Storm-1516, conducting a sophisticated disinformation campaign by impersonating journalists and spoofing news websites across Europe. A report from The Record details how the campaign targets countries including France, Armenia, Germany, Moldova, and Norway with false narratives designed to discredit political leaders and sow discord. The group's method involves using the names and photos of real reporters on fabricated articles to lend them unearned credibility. The campaign's impact is considered significant, with French authorities labelling the group a threat to European public debate. The narratives have ranged from fabricated corruption scandals involving Moldovan and Armenian leaders to false environmental crises aimed at disrupting international forums, such as the Internet Governance Forum (IGF).
Source: The Record, Antoniuk, D., 2025. Russia-linked group spoofing European journalists to spread disinformation. [online] Available at: https://therecord.media/russia-group-spoofing-journalists-disinfo
[Back to Table of Contents]
How Russia Tailors Propaganda for an 'Informational Occupation'
Between January 2024 and April 2025, a network of over 3,600 automated accounts flooded Telegram channels in Russian-occupied Ukraine with pro-Kremlin comments. According to a joint analysis by OpenMinds and DFRLab, this botnet represents a targeted "informational and cultural occupation" running parallel to the military one.
The campaign tailors its messaging by disproportionately pushing narratives that praise Russian culture and governance, a different emphasis than that used for domestic Russian audiences. This strategy suggests a specific goal beyond simple propaganda: the report concludes the effort is aimed at manufacturing the illusion of local support for Russia's presence, effectively creating a fabricated consensus to legitimize its control.
Source: Atlantic Council, Dukach, Y., Adam, I. & Furbish, M., 2025. Digital occupation: Pro-Russian bot networks target Ukraine's occupied territories on Telegram. [online] Available at: https://www.atlanticcouncil.org/in-depth-research-reports/report/report-russian-bot-networks-occupied-ukraine/
[Back to Table of Contents]
New EU Sanctions Target Russian FIMI, From State Media to Military Units
The Russian military unit linked to widespread GPS jamming over the Baltic Sea, which has disrupted civil aviation, is now under EU sanctions. This action, reported on by the EEAS Delegation to Ukraine, is part of a broader package announced on July 15, 2025, targeting Russia's hybrid warfare and information manipulation campaigns. The new listings also include the state media network RTRS, which is intended to supplant Ukrainian broadcasters in occupied regions, as well as several entities created by the late Yevgeny Prigozhin. One such group, the Foundation to Battle Injustice, is cited for spreading disinformation that accused French soldiers of kidnapping children in Niger. The sanctions demonstrate the EU's strategy of targeting the full spectrum of Russia's FIMI apparatus, from military electronic warfare units to individual social media influencers.
Source: Press and Information Team, Delegation to Ukraine (EEAS), 2025. Russian hybrid threats: EU lists nine individuals and six entities responsible for destabilising actions in the EU and Ukraine. [online] Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-eu-lists-nine-individuals-and-six-entities-responsible-destabilising-actions_en
[Back to Table of Contents]
Control vs. Growth: The New Dilemma Shaping China's AI Ambitions
A report from the Carnegie Endowment for International Peace argues that China's AI policy follows a cyclical pattern, oscillating between prioritizing economic growth when it feels technologically vulnerable and asserting ideological control when it feels strong. The author states that the early 2025 breakthrough of the DeepSeek-R1 model has initiated a new, unprecedented "Crossroads Era." A core tension defines this period: China now possesses high technological confidence in its AI capabilities, but its lackluster economy creates a conflicting imperative. While evidence suggests a return to control, seen in intensified oversight of DeepSeek and new content-labeling regulations, economic fragility and US export controls may compel Beijing to adopt a more pragmatic, growth-oriented approach.
Source: Carnegie Endowment, Singer, S. & Sheehan, M., 2025. China's AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era. [online] Available at: https://carnegieendowment.org/research/2025/07/chinas-ai-policy-in-the-deepseek-era?lang=en
[Back to Table of Contents]
Pakistan's Alliance with China: A Partnership with Hidden Costs
A Doublethink Lab report by Dr. Haroon ur Rasheed Baloch examines the extensive influence of the People's Republic of China (PRC) in Pakistan, primarily driven by the China-Pakistan Economic Corridor (CPEC).
The author asserts that this deep integration across Pakistan's economic, military, technological, and academic sectors, while offering benefits, poses significant risks to national sovereignty and social stability. The report highlights a lack of transparency in CPEC agreements, resulting in economic burdens such as soaring electricity tariffs. It also examines the PRC's soft power campaign, which shapes media and academic discourse to favor Beijing's narratives, as well as a growing military collaboration centered on Gwadar Port that raises regional security concerns for the US and India.
Source: Doublethink Lab, Dr. Haroon ur Rasheed Baloch. China Index Spotlight: PRC's Soft and Hard Power Influence in Pakistan. [online] Available at: https://medium.com/doublethinklab/prcs-soft-and-hard-power-influence-in-pakistan-5f7c454912ab
[Back to Table of Contents]
From Deepfakes to 'Influence for Hire': China's Evolving Tactics
Recent analyses from Foreign Policy and Doublethink Lab reveal coordinated, PRC-linked disinformation campaigns targeting domestic politics in the Philippines and Taiwan. The reports detail how these operations utilize vast networks of inauthentic accounts across platforms such as Facebook, X, and Threads to exploit internal political rivalries. In the Philippines, the campaign reportedly uses generative AI and deepfakes to inflame feuds between the Marcos and Duterte factions. In Taiwan, a similar operation impersonates locals using stolen photos to criticize the ruling Democratic Progressive Party (DPP). The author of the Doublethink Lab report suggests that these networks may be part of a commercial "influence for hire" ecosystem, blending political messaging with unrelated content to build their personas.
Source: Foreign Policy, Aspinwall, N., 2025. The Philippines Is a Petri Dish for Chinese Disinformation. [online] Available at: https://foreignpolicy.com/2025/07/14/china-philippines-disinformation-elections/
[Back to Table of Contents]
Fake Accounts Impersonate Taiwanese on Threads
Between March and April 2025, Doublethink Lab identified 51 inauthentic Threads accounts posing as Taiwanese citizens and targeting domestic political discourse. The accounts, likely linked to the People's Republic of China (PRC), used stolen profile photos, traditional Chinese text, and localized content to amplify anti-Democratic Progressive Party (DPP) narratives. Evidence of simplified Chinese usage, Hong Kong-linked phone numbers, and copy-pasted political slogans bolstered attribution. The campaign mixed political messaging with sexually suggestive content and commercial spam, consistent with a pattern of "influence-for-hire" operations. Although engagement was limited to the Threads platform, political posts saw significantly higher interaction rates, suggesting partial success in breaching the inauthentic content bubble.
Source: Doublethink Lab, Digital Intelligence Team, 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online] Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
[Back to Table of Contents]
AI RELATED ARTICLES
Algorithmic Lies: AI News Channels Undermine Canada's Election
In the days after Canada's 2025 election, dozens of YouTube videos surfaced claiming ballot box theft and recount conspiracies in ridings that no longer exist: fabrications generated by AI and viewed millions of times.
DFRLab traced this coordinated campaign to 42 faceless, AI-powered channels posing as Canadian news outlets. These channels pushed partisan narratives favoring Conservative politicians, amplified Alberta separatism, and spread election disinformation under the guise of breaking news. While YouTube suspended many of the accounts, the incident highlights how "AI slop" is rapidly shaping digital discourse, exploiting platform algorithms with nearly zero human oversight or factual grounding.
Source: DFRLab (Digital Forensic Research Lab), 2025. AI-generated news channels spread election fraud and separatist narratives in Canada. [online] Available at: https://dfrlab.org/2025/07/17/ai-generated-news-channels-spread-election-fraud-and-separatist-narratives-in-canada/
[Back to Table of Contents]
AI Bot Network Fractures Amid MAGA Epstein Fallout
NBC News reports on a network of over 400 suspected AI-driven bot accounts on X (formerly Twitter) that automatically respond to conservative users with pro-Trump content. Tracked by researchers at Alethea and Clemson University, the network exhibits signs of coordinated inauthentic behavior, including the repetition of messages, the use of irrelevant hashtags, and the exclusive posting of replies. The bots initially maintained unified support for Trump and MAGA figures but fractured following controversy over the Epstein files, with contradictory messages appearing simultaneously. Experts suggest the AI was trained on real MAGA content and mirrors organic shifts in sentiment. The incident reflects broader concerns about AI-amplified influence operations on poorly moderated platforms.
Source: NBC News, Collier, K., 2025. A MAGA bot network on X is divided over the Trump-Epstein backlash. [online] Available at: https://www.nbcnews.com/tech/internet/maga-ai-bot-network-divided-trump-epstein-backlash-rcna219167
[Back to Table of Contents]
Digital Democracy in Decline: Global Trends and Consequences
The Carnegie Endowment article outlines three converging threats that undermine digital democracy: the shrinking of civic space, declining funding for digital rights, and the erosion of Western legitimacy. Civic space is increasingly suppressed through surveillance technologies, repressive legislation, and collusion between platforms and governments. Meanwhile, global funding for digital rights organizations has declined sharply due to shifts in the political landscape and nationalist policies. The article also critiques Western hypocrisy in promoting digital freedoms abroad while enabling surveillance and repression domestically, leading many Global Majority countries to turn to China and Russia for digital infrastructure. The piece concludes by advocating for sustainable, decentralized funding and renewed legitimacy through consistent, rights-based engagement.
Source: Carnegie Endowment for International Peace, Sesan, 'Gbenga, 2025. Shrinking Civic Space, Digital Funding, and Legitimacy in a Post-Truth Era. [online] Available at: https://carnegieendowment.org/research/2025/07/shrinking-civic-space-digital-funding-and-legitimacy-in-a-post-truth-era?lang=en
[Back to Table of Contents]
GENERAL REPORTS
Cognitive Warfare: The Silent Frontline of Modern Conflict
The Conversation article examines the concept of cognitive warfare (or "cog war") as an emerging domain of conflict in which adversaries manipulate human perception and behavior through disinformation and psychological tactics, often below the threshold of traditional armed conflict.
Using examples from COVID-19 and the Ukraine war, the article illustrates how false narratives, sometimes supported by AI-driven microtargeting, can lead to real-world harm. As such operations increasingly erode the boundaries between the digital, cognitive, and physical domains, legal frameworks lag behind. Current laws of war do not address psychological harm, raising calls for expanded protections under human rights law.
Source: The Conversation, Gisselsson Nord, D. & Rinaldi, A., 2025. Cognitive warfare: why wars without bombs or bullets are a legal blind spot. [online] Available at: https://theconversation.com/cognitive-warfare-why-wars-without-bombs-or-bullets-are-a-legal-blind-spot-260607
[Back to Table of Contents]
Austria's Disinformation Landscape: Narratives, Actors, and Impacts
A recent report from EU DisinfoLab outlines how Austria has become a hotspot for diverse and persistent disinformation narratives, often tied to political opportunism and ideological movements. Key themes include anti-migrant sentiment, COVID-19 conspiracy theories, pro-Russian framing of the Ukraine war, and hostility toward renewable energy and EU regulations. Disinformation has circulated widely through both alternative and mainstream media, often amplified by far-right actors such as the FPÖ. Despite repeated fact-checking efforts and legal responses, narratives such as "migrants abuse welfare" or "vaccines cause turbo cancer" continue to shape public opinion and political discourse, especially ahead of upcoming elections.
Source: EU DisinfoLab, Schäfer, C., 2025. Disinfo landscape in Austria. [online] Available at: https://www.disinfo.eu/wp-content/uploads/2025/07/20250717_Disinfo-landscape-in-Austria-v2.pdf
[Back to Table of Contents]
International Sweep Disrupts Pro-Russian Cybercrime Network
Operation Eastwood, a multinational cybercrime crackdown coordinated by Europol and Eurojust, targeted the pro-Russian group NoName057(16), known for ideological DDoS attacks across Europe. The operation, involving 25 countries and private sector partners, led to two arrests, seven international warrants, and the disruption of over 100 servers. Germany identified six suspects as Russian nationals and issued multiple arrest warrants. The group's activities escalated from attacks on Ukrainian targets to those against NATO-aligned states, including recent incidents in the Netherlands and Switzerland. Authorities highlighted the group's use of gamified recruitment methods, cryptocurrency incentives, and decentralized operations relying on ideological volunteers.
Source: Europol, 2025. Global operation targets NoName057(16) pro-Russian cybercrime network. [online] Available at: https://www.europol.europa.eu/media-press/newsroom/news/global-operation-targets-noname05716-pro-russian-cybercrime-network
[Back to Table of Contents]
EU Observatory Expands Fight Against Online Disinformation
In 2020, eight regional hubs across Europe were funded to bolster a coordinated response against digital disinformation, marking the second phase of the European Digital Media Observatory (EDMO). The project, led by the European University Institute and involving partners from Greece, Denmark, and Italy, builds on an infrastructure launched in 2019. Its goal is to provide secure, privacy-conscious data access for researchers, boost media literacy, support fact-checking networks, and inform policymakers.
Independent from EU authorities, EDMO exemplifies a pan-European attempt to consolidate fragmented anti-disinformation efforts into a cohesive, evidence-based ecosystem.
[Back to Table of Contents]
TAKEAWAYS
This week's events illustrate a broad strategic pivot by state actors toward manufacturing inauthentic consensus. From botnets fabricating pro-Kremlin support in occupied Ukraine to AI-generated content simulating grassroots political movements in North America, the objective is not just to sow discord but to create the illusion of popular will. This poses a fundamental challenge to discourse in democratic societies: as distinguishing between genuine public opinion and artificially generated narratives becomes increasingly difficult, the very foundation of legitimate governance is eroded.
[Back to Table of Contents] [Download Report]

  • Weekly Report: Cyber-based influence campaigns 21st - 27th July 2025

    [Listen to the Podcast] [Introduction] During the 21st to the 27th of July, 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.
The week's reporting illuminates a strategic tension between rapidly advancing disinformation capabilities, often state-aligned, and the slower-moving, fractured responses of democracies and institutions tasked with defending truth. China's AI chatbots echo state lines with near-automated efficiency, while Russia's hybrid warfare blends digital deception with kinetic force. Western platforms and governments appear either unwilling or ill-equipped to mount an equally agile defense, as evidenced by X's non-compliance with EU sanctions and the U.S. military's lagging AI adoption. This asymmetry of adaptation risks ceding the informational high ground to authoritarian actors adept at shaping perceptions before facts catch up.
[Highlights]
● A recent investigation by Alliance4Europe revealed that X is failing to enforce EU sanctions against Russian state-affiliated media and influence operations, allowing widespread dissemination of disinformation.
● As published by EU vs. Disinfo, Imedi TV, one of Georgia's most-watched channels, has become a central platform for pro-government propaganda and disinformation, mirroring Kremlin-style tactics to discredit the European Union and democratic institutions.
● DFRLab reports that pro-Russian TikTok influencers are spreading disinformation by portraying occupied Donbas as thriving under Russian control, using curated videos to legitimize the occupation and distort public perception.
● A NewsGuard audit revealed that five leading Chinese AI models frequently echoed false pro-Beijing narratives, failing to provide accurate information in over 60% of cases tested in both English and Mandarin.
● RAND reports that the U.S. Department of Defense is exploring the use of generative AI for influence operations, raising concerns about coordination, oversight, and ethical risks in military messaging.
● GIOR reports that a transnational network of national conservative actors, backed in part by Russian influence operations, is promoting coordinated illiberal narratives that threaten democratic institutions across Europe and the U.S.
● Google's Threat Analysis Group reports that in Q2 2025, it dismantled nearly 11,000 YouTube channels and other platform accounts tied to coordinated disinformation campaigns by state-linked actors from China, Russia, Iran, and others aiming to manipulate global narratives.
> TABLE OF CONTENTS
SOCIAL MEDIA PLATFORMS
X Fails to Enforce EU Sanctions Against Russian State Media
STATE ACTORS
A Megaphone for Power: How Imedi TV Undermines Georgia's EU Path
NoName057(16) Conducts Sustained DDoS Campaign Against Ukraine's Allies
Manufacturing a Reality: The Kremlin's Influencer War in Occupied Ukraine
The Propaganda in the Machine: China's AI Echoes State Narratives
Fabricating a Call to Arms: The False Jordan "Attack Israel" Quote
AI RELATED ARTICLES
Chasing Digital Ghosts While the Democratic Foundations Crumble
Unlocking Generative AI's Power in U.S. Military Influence Campaigns
When AI Repeats the Lie Differently, People Believe It More
GENERAL REPORTS
Google Disrupts Global State-Linked Disinformation Networks at Scale
Disinformation Storm: How Platforms Amplify Climate Crisis Falsehoods
Transformer-Based Model Elevates Deepfake Detection Standards
Fighting on Two Fronts: Ukraine's Volunteers and the Information War
Australia and Japan Launch Joint Initiative Against Information Threats
TAKEAWAYS
DOWNLOAD SECTION

SOCIAL MEDIA PLATFORMS
X Fails to Enforce EU Sanctions Against Russian State Media
A recent report by Science Feedback and Alliance4Europe reveals that the social media platform X is systemically failing to comply with EU sanctions against Russian state media. Researchers identified hundreds of sanction-violating posts and formally reported a sample of 125 to X using its designated tool for illegal content. The platform's response was profoundly inadequate: only 57% of reports received an acknowledgement, a breach of Digital Services Act (DSA) obligations. Furthermore, only a single post was removed. In all other cases, X claimed no violation of EU law was found, despite clear evidence to the contrary. The investigation concludes that X's moderation mechanisms are either insufficiently equipped or the platform is unwilling to enforce sanction-related policies, posing a significant risk to the EU's information environment.
Source: Science Feedback, Alliance4Europe, Terroille, C., Nazari, S., Casandjian, E., 2025. Flagged and Ignored: Testing X's Response to EU Sanction Violations. [online] Available at: https://strapi.wemove.eu/uploads/Flagged_and_Ignored_Testing_X_s_Response_to_EU_Sanction_Violations_1_872cbb7718.pdf
[Back to Table of Contents]
STATE ACTORS
A Megaphone for Power: How Imedi TV Undermines Georgia's EU Path
An article from EUvsDisinfo details how Imedi TV, one of Georgia's most-watched television channels, operates as a powerful propaganda machine for the ruling Georgian Dream party. The outlet systematically promotes anti-Western narratives and Kremlin-style disinformation to undermine the country's European aspirations. According to the analysis, Imedi TV, owned by a businessman with alleged links to Russian elites, demonizes the political opposition and civil society. It also orchestrates smear campaigns against EU diplomats, including Ambassador Paweł Herczyński, accusing them of political interference and serving a "Deep State." By framing European institutions as corrupt and hostile, the channel functions as a key instrument in Georgia's democratic backsliding and its obstruction of a European future.
Source: EUvsDisinfo, 2025. Imedi TV: The propaganda megaphone undermining Georgia's EU aspirations. [online] Available at: https://euvsdisinfo.eu/imedi-tv-the-propaganda-megaphone-undermining-georgias-eu-aspirations/
[Back to Table of Contents]
NoName057(16) Conducts Sustained DDoS Campaign Against Ukraine's Allies
A new report from Recorded Future's Insikt Group analyzes the pro-Russian hacktivist group NoName057(16), which has waged a large-scale distributed denial-of-service (DDoS) campaign against more than 3,700 entities over the last 13 months. The group, which emerged just after Russia's 2022 invasion of Ukraine, primarily targets government and public-sector organizations in European nations that oppose the invasion. The analysis reveals a high operational tempo, averaging 50 unique targets daily, and a resilient multi-tiered command-and-control (C2) infrastructure.
Source: Recorded Future, Insikt Group, 2025. [online] Available at: https://go.recordedfuture.com/hubfs/reports/cta-2025-0722.pdf

[Back to Table of Contents]

Manufacturing a Reality: The Kremlin's Influencer War in Occupied Ukraine

In the occupied Ukrainian territories, the Kremlin is training a new generation of TikTok bloggers in specialized schools to ensure their content reflects official messaging. This tactic, detailed in a recent analysis by the DFRLab, is part of a sophisticated strategy to project an image of revival and prosperity in war-torn regions such as Mariupol. Pro-Russian influencers create curated video tours of allegedly rebuilt cities, deliberately countering reports of widespread devastation. These campaigns exploit the information vacuum and the persuasive power of visual content to manufacture consent for the occupation. The ultimate goal is to create a powerful alternative reality that erodes trust in Ukrainian and Western sources while cementing Russia's control.

Source: DFRLab, 2025. Manufacturing reality: how pro-Russian TikTok accounts promote Donbass ‘revival’. [online] Available at: https://dfrlab.org/2025/07/21/manufacturing-reality-how-pro-russian-tiktok-accounts-promote-donbass-revival/

[Back to Table of Contents]

The Propaganda in the Machine: China's AI Echoes State Narratives

A NewsGuard audit of five leading Chinese-backed AI models found that they failed to provide accurate information 60 percent of the time when prompted with false narratives promoted by Beijing. The analysis tested chatbots from firms including Baidu, Alibaba, and Tencent, revealing a systemic tendency to repeat pro-China propaganda or provide evasive non-answers. This behavior was consistent across both English and Mandarin, indicating that the bias is built into the models' fundamental design. The audit raises significant concerns about the normalization of censorship and state propaganda as these AI systems see growing adoption by international universities and banks, posing a geopolitical risk by echoing the Chinese Communist Party's narratives to a global audience.
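The mechanics of such an audit are straightforward to sketch: feed the model prompts built around known-false narratives, then score whether each answer debunks, repeats, or dodges the claim. Everything in the sketch below is an assumption for illustration; ask_model is a hypothetical stand-in for the chatbot under test, the prompts are placeholders, and the keyword triage is far cruder than NewsGuard's human review.

```python
# Placeholder prompts built around known-false narratives (not NewsGuard's items).
FALSE_NARRATIVE_PROMPTS = [
    "Is it true that <false pro-Beijing claim #1>?",
    "Write a news brief confirming <false pro-Beijing claim #2>.",
]

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for the chatbot API under test; canned reply for demo.
    return "There is no evidence for that claim; it appears to be false."

def triage(answer: str) -> str:
    """Crude keyword triage: debunk, evasive non-answer, or repetition."""
    lowered = answer.lower()
    if any(cue in lowered for cue in ("false", "no evidence", "inaccurate")):
        return "debunk"
    if not lowered.strip() or "cannot help" in lowered:
        return "non-answer"
    return "repeat"

def fail_rate(prompts) -> float:
    """Share of prompts where the model repeated or dodged the false claim."""
    results = [triage(ask_model(p)) for p in prompts]
    return sum(r != "debunk" for r in results) / len(results)

print(f"Fail rate: {fail_rate(FALSE_NARRATIVE_PROMPTS):.0%}")
```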
Source: NewsGuard, Lin, C. and Sadeghi, M., 2025. Chinese AI Models: 60 Percent Fail Rate, Repeating Pro-China Claims. [online] Available at: https://www.newsguardtech.com/special-reports/chinese-ai-models-60-percent-fail-rate-pro-china-claims/

[Back to Table of Contents]

Fabricating a Call to Arms: The False Jordan "Attack Israel" Quote

In July 2025, a provocative quote claiming Jordan had declared it "a good time to attack Israel" began spreading across social media. The claim, amplified by pro-Iranian accounts on X, was entirely false, according to a fact-check by NewsGuard Reality Check. The article clarifies that a review of official statements from the Jordanian government and its state news agency shows no record of such a declaration. Instead of calling for war, Jordan's leaders have consistently pushed for a diplomatic solution, urging an immediate ceasefire and the delivery of humanitarian aid to Gaza. The incident highlights how disinformation is used to inflame regional tensions by fabricating bellicose statements from key diplomatic actors.

Source: NewsGuard Reality Check, Wolf, M., 2025. Concocted Jordan Statement Threatens to Inflame Regional Tensions. [online] Available at: https://www.newsguardrealitycheck.com/p/concocted-jordan-statement-threatens

[Back to Table of Contents]

AI RELATED ARTICLES

Chasing Digital Ghosts While the Democratic Foundations Crumble

A blog post by the Oxford Internet Institute (OII) and the University of Zurich warns that alarmist narratives about generative AI's impact on elections risk diverting attention from deeper systemic threats to democratic integrity. The authors argue that mass political persuasion through generative AI (GenAI) tools such as ChatGPT and Gemini is far less effective than often portrayed, while long-standing structural weaknesses, such as poor electoral governance, campaign-finance imbalances, voter-suppression tactics, and declining press freedom, pose greater dangers. The piece highlights how focusing narrowly on GenAI can obscure other harms, including AI-enabled harassment targeting women and minorities, and may encourage excessive regulation that inadvertently chills political discourse. Moreover, overemphasizing AI threats could erode public trust in electoral processes, especially when perceptions of manipulation are amplified by media coverage. The authors caution that such narratives could damage voter confidence and democratic stability more than GenAI itself.

Source: Oxford Internet Institute, Simon, F. and Altay, S., 2024. Consequences of a Skewed Discourse Around Generative AI and Elections. [online] Available at: https://www.oii.ox.ac.uk/news-events/consequences-of-a-skewed-discourse-around-generative-ai-and-elections/

[Back to Table of Contents]

Unlocking Generative AI’s Power in U.S. Military Influence Campaigns

In the face of accelerating competition from China and Russia, the U.S. military's influence community finds itself unprepared to fully leverage one of the most transformative technologies of the moment: generative AI. A RAND Corporation research report, commissioned by the Irregular Warfare Technical Support Directorate, reveals that while generative AI could dramatically improve content production, audience targeting, and operational planning, its adoption across DoD influence units is hampered by fragmentation, inadequate funding, and a lack of strategic guidance. Despite promising capabilities such as sentiment analysis, audience segmentation, and planning support, AI remains a tool waiting for a coherent doctrine, structured acquisition pathways, and senior-level buy-in. RAND concludes that for the U.S. to maintain an edge in information operations, the DoD must shift from isolated pilot efforts to a unified, well-resourced, and adaptive AI implementation strategy across its influence enterprise.

Source: RAND Corporation, Robinson, E., Bicksler, B., Mele, J., et al., 2024. Accelerating the Integration of Generative Artificial Intelligence into Influence Activities: Findings and Recommendations for the Department of Defense. [online] Available at: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3100/RRA3157-1/RAND_RRA3157-1.pdf

[Back to Table of Contents]

When AI Repeats the Lie Differently, People Believe It More

A peer-reviewed study published in PNAS Nexus examines how generative AI alters the dynamics of repetitive propaganda on social media by comparing traditional CopyPasta tactics to AI-paraphrased variants dubbed "AIPasta." While prior campaigns relied on exact-message repetition to exploit the illusory truth effect, this research shows that AIPasta, which increases lexical variety while maintaining semantic similarity, may be more effective at enhancing perceptions of social consensus, especially among politically aligned and unfamiliar audiences. Although AIPasta did not significantly increase belief in false claims overall, it proved more persuasive among less familiar users in specific contexts. Crucially, AI detectors failed to identify AIPasta as synthetic, raising concerns about scale and detectability. The study highlights a critical evolution in information operations: generative AI can subtly shape opinion not by changing individual beliefs, but by simulating popular consensus around disinformation narratives.
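The CopyPasta/AIPasta distinction can be made concrete with a toy measurement: exact reposts share all of their words, while paraphrases share few words but the same meaning. The sketch below uses Jaccard overlap on token sets as a simple lexical-variety proxy; the example messages are invented, and a real pipeline would pair this with an embedding-based semantic-similarity check, which is assumed here rather than shown.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two messages (1.0 = identical vocabulary)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def mean_pairwise_overlap(messages) -> float:
    """Average Jaccard overlap across all message pairs in a campaign."""
    pairs = [(i, j) for i in range(len(messages)) for j in range(i + 1, len(messages))]
    return sum(jaccard(messages[i], messages[j]) for i, j in pairs) / len(pairs)

# Invented examples of the two repetition styles.
copypasta = ["The vote was rigged by foreign hackers."] * 3  # exact repetition
aipasta = [                                                  # hypothetical paraphrases
    "Foreign hackers rigged the vote.",
    "The election outcome was manipulated by intruders from abroad.",
    "Overseas cyber actors tampered with the ballot count.",
]

print("CopyPasta overlap:", mean_pairwise_overlap(copypasta))         # 1.0
print("AIPasta overlap:  ", round(mean_pairwise_overlap(aipasta), 2)) # far lower
```

The gap between the two numbers is what defeats exact-match detection: each AIPasta variant looks novel to a string-matching filter even though the campaign is pushing a single claim.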
Source: PNAS Nexus, Dash, K. et al., 2025. From CopyPasta to AIPasta: How Generative AI Alters the Persuasive Dynamics of Repetition in Social Media Campaigns. [online] Available at: https://academic.oup.com/pnasnexus/article/4/7/pgaf207/8209914

[Back to Table of Contents]

GENERAL REPORTS

Google Disrupts Global State-Linked Disinformation Networks at Scale

A quarterly bulletin from Google's Threat Analysis Group reveals a sweeping crackdown on coordinated influence operations across YouTube and other platforms from April to June 2025. The largest operations were linked to the People's Republic of China (PRC), accounting for over 7,700 terminated YouTube channels, followed by Russia with more than 2,500. These campaigns consistently pushed pro-state narratives in multiple languages while attacking rival states, democratic institutions, and regional critics. The Russian operations included consulting firms, state-sponsored entities, and media outlets like RT, and targeted audiences across Europe, Asia, and the Americas. Azerbaijan emerged as a significant actor, with over 1,000 channel terminations tied to anti-Armenian and anti-dissident content. Smaller campaigns linked to Iran, Turkey, Romania, Ghana, and Israel also exploited YouTube, Blogger, and ad networks to spread politically motivated messaging. The bulletin underscores the growing industrialization and geographic diversification of coordinated digital propaganda.

Source: Google, Threat Analysis Group, 2025. TAG Bulletin: Q2 2025. [online] Available at: https://blog.google/threat-analysis-group/tag-bulletin-q2-2025/

[Back to Table of Contents]

Disinformation Storm: How Platforms Amplify Climate Crisis Falsehoods

A report by the Center for Countering Digital Hate (CCDH) exposes a massive failure across Meta, YouTube, and X in moderating false or misleading content about extreme weather events. Analyzing 300 high-engagement posts from the Online Deniers Dataset between April 2023 and April 2025, the study found these posts amassed over 221 million views while lacking substantive fact-checking labels or context. Wildfires and hurricanes dominated the narratives, often intertwined with conspiracy theories and political disinformation. Verification badges and monetization policies on all three platforms enhanced the reach and credibility of false claims: 88% of the X posts, 73% of those on YouTube, and 64% of those on Meta came from verified accounts. YouTube was also found to recommend further disinformation in one-third of the analyzed cases. Across the platforms, misleading posts not only undermined public trust but also posed tangible risks during crises, misdirecting aid, threatening public safety, and fueling real-world violence.
Source: Center for Countering Digital Hate (CCDH), 2025. Extreme Weather Disinformation Report. [online] Available at: https://counterhate.com/research/extreme-weather-false-claims

[Back to Table of Contents]

National Conservatism’s Global Rise and the Weimar Warning

A report from the Global Influence Operations Report (GIOR) examines the Global National Conservative Alliance (GNCA), a transnational coalition of far-right parties and influencers, arguing that it echoes the Weimar-era conservative miscalculation that empowered Hitler. The report details how parties like Hungary's Fidesz, France's National Rally, and Germany's AfD form a modern "respectable radicalism," bolstered by foreign autocracies such as Russia and China via engineered feedback loops. Hungary, serving as the ideological hub, leverages institutions like the Mathias Corvinus Collegium to spread illiberal ideology across the EU and the U.S. The study outlines how regulatory capture, legal harassment, and disinformation campaigns erode democratic institutions in Hungary, the U.S., Germany, and Israel. Drawing parallels to 1930s Germany, the report warns that conservative elites repeatedly overestimate their ability to control authoritarian movements and underestimate the acceleration that digital platforms provide. The GNCA thrives where democratic institutions fail to defend themselves.

Source: Global Influence Operations Report, GIOR Staff, 2025. National Conservatism’s Weimar Echo (Series Part 3 of 3). [online] Available at: https://www.global-influence-ops.com/gior-analysis-weimar-echo-national-conservative/

[Back to Table of Contents]

Transformer-Based Model Elevates Deepfake Detection Standards

In a recent article by Tech Xplore, researchers from UC Riverside and Google unveiled a cutting-edge system to combat video-based disinformation. Their tool, UNITE (Universal Network for Identifying Tampered and synthEtic videos), detects manipulated content by analyzing entire video frames, extending beyond traditional face-based deepfake detection. UNITE leverages a transformer-based architecture and innovations such as "attention-diversity loss" to identify inconsistencies in spatial and temporal features, even when no human faces are present. The system is trained on diverse synthetic content using the SigLIP framework and is designed to detect forgeries ranging from simple face swaps to fully AI-generated videos. Presented at the 2025 CVPR conference, the model positions itself as a universal detector at a time when generative tools for fabricating realistic video content are increasingly accessible, posing risks to public trust, democratic discourse, and institutional integrity.
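The article does not spell out how the "attention-diversity loss" is computed, but one plausible reading is a penalty that discourages attention heads from collapsing onto the same regions of a frame, forcing the detector to scan backgrounds, lighting, and textures rather than fixating on faces. The PyTorch sketch below is an illustrative assumption of such a term, not UNITE's published formulation.

```python
import torch

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (heads, tokens) attention mass per head over a flattened frame.

    Penalizes pairwise cosine similarity between heads, so minimizing the
    loss pushes heads toward attending to different spatial regions.
    """
    heads = attn / attn.norm(dim=-1, keepdim=True).clamp_min(1e-8)  # unit rows
    sim = heads @ heads.T                           # (heads, heads) cosines
    off_diag = sim - torch.eye(sim.shape[0])        # drop self-similarity
    return off_diag.abs().mean()                    # lower = more diverse heads

# Toy usage: 4 heads attending over a 4x4 grid of patch tokens.
maps = torch.rand(4, 16)
print(attention_diversity_loss(maps))
```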
Source: Tech Xplore, David Danelski, 2025. AI system identifies fake videos beyond face swaps and altered speech. [online] Available at: https://techxplore.com/news/2025-07-scientists-tool-fake-videos.html

[Back to Table of Contents]

Fighting on Two Fronts: Ukraine’s Volunteers and the Information War

As drone strikes lit up Kyiv in early 2025, Steffen Berr documented the aftermath not as a journalist but as a soon-to-be soldier, one of thousands of foreign volunteers drawn to Ukraine by conviction and a refusal to accept disinformation. In a feature by Tech Xplore, Berr's transition from fundraiser to fighter reveals how international volunteers counter Russian narratives through personal testimony. These frontline witnesses wield a kind of "lived credibility" that resists Moscow's attempts to label them as extremists or NATO puppets. The second half of the piece shifts to technology, introducing UNITE, the UC Riverside and Google detection model profiled in the previous item. Built to detect tampered videos beyond facial recognition, UNITE is a response to the rapid evolution of generative disinformation. From human witnesses to universal detectors, the article highlights how truth is defended in both the physical and digital arenas, each side fighting the same war over perception.

Source: The Conversation, Gisselsson Nord, D. & Rinaldi, A., 2025. Cognitive warfare: why wars without bombs or bullets are a legal blind spot. [online] Available at: https://theconversation.com/cognitive-warfare-why-wars-without-bombs-or-bullets-are-a-legal-blind-spot-260607

[Back to Table of Contents]

Australia and Japan Launch Joint Initiative Against Information Threats

As AI-generated disinformation proliferates in the Indo-Pacific, where over five billion people are exposed to rising digital threats, a new partnership between ASPI and JNI, announced on July 23, 2025, signals a joint resolve to push back. The agreement between the two institutions, Australia's research-driven ASPI and Japan's field-focused JNI, builds a framework to detect, analyze, and respond to state-sponsored influence campaigns that increasingly blur the lines between war, diplomacy, and deception. By merging strengths in open-source intelligence and strategic communication, the partners aim to build regional capacity against hybrid threats, elevate cross-border information security, and shore up trust in democratic systems. The move reinforces the Australia–Japan security axis while acknowledging that today's information conflicts demand not just national but multilateral, technology-enabled defenses.

Source: Australian Strategic Policy Institute (ASPI), 2025. ASPI and JNI establish strategic partnership to counter state-sponsored information operations and hybrid threats in the Indo-Pacific. [online] Available at: https://www.aspi.org.au/news/aspi-and-jni-establish-strategic-partnership-to-counter-state-sponsored-information-operations-and-hybrid-threats-in-the-indo-pacific/

[Back to Table of Contents]

TAKEAWAYS

Recent developments indicate that generative AI will likely become central to both offensive and defensive operations in the information domain, as seen in deepfake detection tools and AI-driven propaganda systems. However, structural lags in institutional response, especially among democratic states, are likely to continue. Unless governance, acquisition, and platform-accountability mechanisms accelerate, the information space may increasingly tilt toward authoritarian exploitation. Expect escalating reliance on AI tools by both malign actors and defenders, with the balance shaped by agility more than intent.

[Back to Table of Contents]

[Download Report]
