- Singapore Drops the Name, Shifting the Game
Introduction

Public attribution in the context of influence can be likened to an open letter, backed by technical reasoning and signed with a clear political signature. A specific incident is not only investigated but officially linked to a particular actor. Such attribution is more than a forensic statement: it is a form of strategic messaging. Public attribution always addresses a broader audience: allies, undecided third states, domestic publics, and potential future adversaries.

Attribution to UNC3886 [i]

Until now, Singapore has avoided taking such a step regarding China. The previous tactic was to publish technical details and share indicators while leaving the actual originator in the dark. This time, however, that cover was dropped [ii]: intrusions into critical infrastructure were attributed to UNC3886, a group described by Mandiant [iii] as a Chinese APT, meaning a state or state-aligned hacking group. While the statement did not explicitly mention "China," the message was clear enough to be registered.

The response followed promptly. The Chinese embassy in Singapore rejected the accusations and "courteously" offered to cooperate in the investigation.

Figure 1 – Courtesy of Global Times [iv]

A short time later, the Global Times published a report by the Cybersecurity Association of China claiming that the United States had exploited zero-day vulnerabilities in Microsoft Exchange to attack a major Chinese defence company. Singapore appeared in the report only in passing, as the location of "proxy IPs." [v]

Figure 2 – Courtesy of Global Times [vi]

Singapore also faced a degree of backlash, albeit not a particularly strong one: on August 5, 2025, the Chinese embassy responded in the Straits Times "Opinion Forum" [vii] to the article published on July 25, 2025 [viii], rejecting the allegations and criticizing the newspaper itself.

Figure 3 – Courtesy of Straits Times [ix]

A Facebook post on the topic followed, drawing comments and shares that made it clear the matter hit home.

Figure 4 – Courtesy of Facebook [x]

China's Attribution Response Pattern

Instead of placing Singapore at the center of a counter-campaign, Beijing shifted the focus to its strategic rival, the United States. Chinese reactions to attribution now appear to follow a well-rehearsed pattern: Country X names Chinese actors; within a short time, a state-directed entity such as the CVERC (China National Computer Virus Emergency Response Center) [xi] issues a report accusing the United States of having done the same. This maneuver flips the script, redirects focus, and channels the discussion into another narrative. Noteworthy in this context is the rapid response to the SharePoint incident: the counter-report was released in about a week. In previous cases, such as Volt Typhoon (also addressed by the CVERC), reports from Chinese front organizations have taken considerably longer to surface. It will be interesting to see how this story develops and whether we soon see a dedicated CVERC report on Singapore.

Conclusion

The public attribution of the alleged Chinese group UNC3886 by Singapore marks a notable departure from the country's previously discreet communication style in matters of cyber threats. Instead of issuing technical reports or relying on diplomatic channels, Singapore deliberately opted for a visible public stance. This can be understood as a strategic signal, both to potential attackers and to international partners.
The effectiveness of such attributions, however, remains contested. While publicly naming an actor increases the costs for attackers, forcing them to adapt tactics or infrastructure, authoritarian states such as China often respond not by retreating but by issuing counteraccusations. This was evident in the immediate reaction of the Chinese embassy in Singapore. Such behaviour is particularly relevant in the context of hostile digital influence, as counteraccusations are not merely a defensive move but a deliberate attempt to reshape the narrative, cast doubt on the original claims, and undermine trust in the accuser. In these situations, attribution becomes part of a broader information conflict, where the struggle is not only over establishing technical culpability but also over influencing public perception, framing international discourse, and eroding the credibility of opposing voices.

For Singapore, this new visibility is not without risks. Attribution creates expectations, narrows the room for maneuver, and carries the potential to exacerbate tensions with China, particularly if the naming is interpreted not in isolation but as part of a broader political shift.

[Footnotes]

[i] UNC refers to "uncategorised" or "unclassified".
[ii] The Straits Times, Jaipragas, B., 2025. No idle punt: Why Singapore called out cyber saboteur UNC3886 by name. [online] Published 25 July 2025. Available at: https://www.straitstimes.com/opinion/no-idle-punt-why-singapore-called-out-cyber-saboteur-unc3886-by-name
[iii] Google Cloud Blog (Mandiant), 2024. Cloaked and Covert: Uncovering UNC3886 Espionage Operations. [online] Published 18 June 2024. Available at: https://cloud.google.com/blog/topics/threat-intelligence/uncovering-unc3886-espionage-operations?hl=en
[iv] Global Times, 2025. US intelligence uses Microsoft Exchange zero-day vulnerabilities to attack Chinese military-industrial enterprise's email servers: report. [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml
[v] Global Times, 2025. US intelligence uses Microsoft Exchange zero-day vulnerabilities to attack Chinese military-industrial enterprise's email servers: report. [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml
[vi] Global Times, 2025. US intelligence uses Microsoft Exchange zero-day vulnerabilities to attack Chinese military-industrial enterprise's email servers: report. [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml
[vii] The Straits Times, 2025. Forum: China firmly against and cracks down on all forms of cyber attacks. [online] Published 5 August 2025. Available at: https://www.straitstimes.com/opinion/forum/forum-china-firmly-against-and-cracks-down-on-all-forms-of-cyber-attacks
[viii] The Straits Times, Jaipragas, B., 2025. No idle punt: Why Singapore called out cyber saboteur UNC3886 by name. [online] Published 25 July 2025. Available at: https://www.straitstimes.com/opinion/no-idle-punt-why-singapore-called-out-cyber-saboteur-unc3886-by-name
[ix] The Straits Times, 2025. Forum: China firmly against and cracks down on all forms of cyber attacks. [online] Published 5 August 2025. Available at: https://www.straitstimes.com/opinion/forum/forum-china-firmly-against-and-cracks-down-on-all-forms-of-cyber-attacks
[x] Facebook, China Embassy in Singapore, 2025. Facebook post. [online] Available at: https://www.facebook.com/Chinaemb.SG/posts/1043167338010724/
[xi] The China National Computer Virus Emergency Response Center (CVERC) is a state-run institution responsible for analyzing cyber threats and coordinating national responses to viruses and software vulnerabilities. It also plays a role in China's geopolitical cyber strategy.

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.
- Weekly Report: Cyber-based digital hostile influence campaigns, 4th - 10th August 2025
[Listen to the Podcast]

[Introduction]

From the 4th to the 10th of August 2025, we observed, collected, and analyzed endpoints of information related to digital hostile influence campaigns. The following report is a summary of what we regard as the main events.

The common thread weaving through this week's events is the maturation of information as a primary battlespace. States are no longer just experimenting with influence operations; they are systematically integrating them as a core component of national strategy and warfare. Russia's campaigns in Kursk, Moldova, and Latvia demonstrate a refined playbook for regional destabilization. Iran's swift deployment of cyberattacks during kinetic conflict with Israel shows a tight fusion of digital and physical military action. Even in the private sector, the rise of mercenary groups in Africa is defined by a package of services that includes both lethal drones and targeted disinformation. From a domestic riot in the UK fueled by social media algorithms to a great-power wargame in Latin America, the field of conflict is increasingly informational.

[Report Highlights]

- Russia manufactures "frozen zones" of instability like Transnistria and South Ossetia not merely to control territory, but to wield them as strategic leverage to veto the sovereign foreign policy choices of entire nations. – DisinfoWatch
- The Belarusian security services use a database named 'BESporiadki' ('Riots') to track over 100,000 individuals suspected of participating in the 2020 protests, effectively stripping them of basic civil rights. – EUvsDisinfo
- Analysis of the Iran-Israel conflict demonstrated a three-tiered model of Iranian cyber operations, ranging from loosely aligned hacktivists to formally tasked state-sponsored APT groups, which collaborate to form a resilient and ideologically driven offensive front. – SecurityScorecard
- The convergence of mercenaries who profit from chaos, cheap drone technology that provides non-state actors with air power, and AI-driven disinformation creates a self-reinforcing cycle of instability across Africa. – The Conversation
- False narratives speculating that the Southport killer was a Muslim or an asylum-seeker received an estimated 27 million impressions on X within the first 24 hours, directly fueling subsequent real-world riots. – CyberNews
- Latvia possesses one of the most developed ecosystems for countering disinformation in Europe, with a multi-layered approach involving government bodies, investigative journalists, NATO-affiliated centers, and NGOs all working in concert. – EU Disinfo Lab
- The EU is partnering with the Nigerian Guild of Editors to build institutional capacity within newsrooms, viewing enhanced media literacy as a primary antidote to the destabilizing effects of disinformation. – EU Press and information team
[Weekly Review]

- Divergent Priorities Degrade Sino-Russian IO
- Sputnik's Kursk Thread: A Case Study in Moral Inversion
- Kremlin Employs Consistent Playbook to Destabilize Post-Soviet States
- Lukashenka Entrenches Totalitarian Rule Five Years After Protests
- Iran-Linked Cyber Actors Coordinated Attacks During Kinetic Conflict
- The Privatization of Violence: PMC Models and Asymmetric Tech in Africa
- Algorithmic Amplification and Malinformation During the Southport Crisis
- Analysis of Russian FIMI Operations Targeting Moldova's Elections
- Latvia Counters Persistent Kremlin Disinformation Narratives
- EU-Backed Initiative Aims to Build Newsroom Verification Capacity in Nigeria

Divergent Priorities Degrade Sino-Russian IO

In a simulated crisis over a Guatemalan election, China and Russia joined forces and won, successfully backing a pro-Beijing party to victory. This initial success, documented in a CSIS wargaming report, demonstrated the potential power of their coordinated information operations in Latin America. Yet the alliance quickly faltered. When faced with a subsequent decision on a lucrative mining deal, Russia prioritized its own economic interests over Beijing's political goal of isolating Taiwan, creating a strategic rift. This fissure proved decisive, allowing the U.S. team, which had pursued a quieter, long-term influence-building strategy, to ultimately secure its objectives. The exercise reveals that the much-touted "no limits" partnership may be more of a brittle, opportunistic alignment, susceptible to fracturing under the pressure of competing national interests.

Source: CSIS, Ryan C. Berg and Henry Ziemer, 2025. Assessing the Impact of China-Russia Coordination in the Media and Information Space. [online] Available at: https://www.csis.org/analysis/assessing-impact-china-russia-coordination-media-and-information-space

[Table of Contents]

Sputnik's Kursk Thread: A Case Study in Moral Inversion

An article from DisinfoWatch analyzes a Russian disinformation campaign, circulated by Sputnik International in August 2025, that sought to reframe Ukraine's Kursk offensive. The analysis details how Sputnik deployed a barrage of falsehoods, including grossly inflated statistics on Ukrainian casualties, recycled propaganda about "neo-Nazi thugs," and entirely unsubstantiated claims of atrocities committed by Ukrainian forces. According to the report, this narrative is a textbook example of Russian information warfare. It employs moral inversion to portray Russia as a justified liberator, not an aggressor, while simultaneously dehumanizing Ukrainians. The campaign's strategic goals are to bolster domestic support for the war, legitimize Russian actions internationally, and erode Western backing for Ukraine.

Source: DisinfoWatch, Aug 2025. Kremlin Disinformation in Sputnik's Account of Ukraine's Kursk Offensive. [online] Available at: https://disinfowatch.org/disinfo/kremlin-disinformation-in-sputniks-account-of-ukraines-kursk-offensive/

[Table of Contents]

Kremlin Employs Consistent Playbook to Destabilize Post-Soviet States

EUvsDisinfo, in a recent article, describes the characteristically consistent playbook that the Kremlin employs to destabilize post-Soviet neighbors that seek Euro-Atlantic integration.
The analysis asserts that whether in Georgia, Moldova, or Ukraine, Russia repeatedly fuels separatism, spreads disinformation such as false genocide claims, and intervenes militarily under the pretext of "peace enforcement." This strategy creates dependent proxy regions, or "frozen zones," which serve as instruments of instability to block the sovereign ambitions of its neighbors. The report identifies the 2008 invasion of Georgia as a crucial "test run" for the tactics of disinformation, passportization, and supposed humanitarian intervention that were later scaled up for the invasion of Ukraine.

Source: EUvsDisinfo, Aug 2025. Genocide Claims, Peace Enforcement and Proxy Regions: Disinformation from Georgia to Ukraine. [online] Available at: https://euvsdisinfo.eu/genocide-claims-peace-enforcement-and-proxy-regions-disinformation-from-georgia-to-ukraine/

[Table of Contents]

Lukashenka Entrenches Totalitarian Rule Five Years After Protests

The Belarusian state maintains a database of over 100,000 "enemies of the state" called 'BESporiadki,' a name whose Russian root 'BES' means 'demon.' According to a paper by EUvsDisinfo, this list is just one tool in Aleksandr Lukashenka's ongoing war against his own people, five years after they peacefully protested his rule. The analysis reveals a state transformed into a "repression machine" that has institutionalized torture and created an atmosphere of total fear to crush dissent. Dependent on Vladimir Putin's backing, Lukashenka's regime has drifted towards a harsh, semi-totalitarian dictatorship that no longer feigns democracy and openly persecutes anyone deemed disloyal.

Source: EUvsDisinfo, 2025. Lukashenka's ongoing retaliation against Belarusians. [online] Available at: https://euvsdisinfo.eu/lukashenkas-ongoing-retaliation-against-belarusians/

[Table of Contents]

Iran-Linked Cyber Actors Coordinated Attacks During Kinetic Conflict

Within days of kinetic conflict erupting between Iran and Israel in June 2025, the Iranian state-sponsored hacking group Imperial Kitten purchased domains like stopirannukes[.]com to launch sophisticated phishing campaigns against Israeli supporters. This swift, tailored action, detailed in a SecurityScorecard analysis, exemplifies the tight integration of cyber and kinetic warfare. The analysis of hundreds of thousands of Telegram messages shows a broader ecosystem of IRGC-linked proxies and ideologically motivated hacktivists from across the region launching disruptive attacks in concert. The report concludes that modern conflict now inherently features a digital front where state tasking and grassroots ideological fervor merge, creating a complex and agile threat that moves at the speed of war.

Source: SecurityScorecard, 2025. From The Depths of the Shadows: IRGC and Hacker Collectives Of The 12-Day War. [online] Available at: https://securityscorecard.com/wp-content/uploads/2025/08/From-The-Depths-of-the-Shadows_IRGC-and-Hacker-Collectives_AUG5.pdf

[Table of Contents]

The Privatization of Violence: PMC Models and Asymmetric Tech in Africa

A new book by Alessandro Arduino, summarized in The Conversation, argues that conflict in Africa is being fundamentally reshaped by the convergence of privatized violence, new technology, and disinformation. The analysis tracks the rise of mercenaries and private military companies from Russia, Turkey, and China, each with distinct operational models for profiting from instability.
It highlights how the proliferation of cheap, lethal drones provides non-state actors with an "off-the-shelf air force," while AI is poised to "turbocharge" disinformation campaigns targeting Africa's youthful population. The book concludes that these intersecting trends thrive in power vacuums across the continent, creating complex new threats that demand global vigilance and regulation.

Source: The Conversation, Alessandro Arduino, 2025. Drones, disinformation and guns-for-hire are reshaping conflict in Africa: new book tracks the trends. [online] Available at: https://theconversation.com/drones-disinformation-and-guns-for-hire-are-reshaping-conflict-in-africa-new-book-tracks-the-trends-262256

[Table of Contents]

Algorithmic Amplification and Malinformation During the Southport Crisis

An article in CyberNews, reporting on a new analysis by Amnesty International, concludes that Elon Musk and his platform X played a central role in inciting the 2024 UK riots. The analysis found that following the tragic Southport murders, X's engagement-driven algorithm systematically amplified racially charged misinformation about the killer's true identity, directly contributing to violence against Muslim and migrant communities. Musk himself fueled the unrest, with one post declaring "civil war is inevitable." Amnesty attributes the platform's failure to Musk's post-2022 gutting of trust and safety teams, arguing that X now lacks adequate safeguards and fundamentally prioritizes outrage over user safety.

Source: CyberNews, 7 Aug 2025. Amnesty: Musk played central role in inciting 2024 UK riots. [online] Available at: https://cybernews.com/news/united-kingdom-riots-misinformation-x-musk/

[Table of Contents]

Analysis of Russian FIMI Operations Targeting Moldova's Elections

Days before a landmark EU-Moldova summit in Chisinau, a Russian-linked operation dubbed "Matryoshka" released doctored videos with fake Euronews branding to spread false claims of conflict between EU and Moldovan leaders. This was part of a wider disinformation blitz analyzed in an EUvsDisinfo article, which concludes that pro-Kremlin actors are in a full-scale offensive to derail Moldova's European path ahead of its September elections. The campaign uses a multi-platform strategy, from Telegram to TikTok, to push narratives of debt traps and lost sovereignty, seeking to exploit economic fears and ultimately undermine the democratic process itself.

Source: EUvsDisinfo, Aug 04 2025. The summit on a disinformation mountain. [online] Available at: https://euvsdisinfo.eu/the-summit-on-a-disinformation-mountain/

[Table of Contents]

Latvia Counters Persistent Kremlin Disinformation Narratives

A recent Kremlin-backed narrative falsely claimed that Latvians read more "Mein Kampf" than Harry Potter, part of a broader effort to brand the country as a Nazi sympathizer. According to a report from the EU Disinfo Lab, this is one of five persistent disinformation themes Russia uses to attack Latvia's statehood. The analysis finds that these narratives, which also include claims of "ethnic cleansing" of Russian speakers and portray Latvia as a "failed state," are designed to erode public trust and destabilize the nation from within. While Latvia has developed one of Europe's strongest ecosystems for countering disinformation, the report concludes that it remains on the front lines of an enduring information war.
Source: EU Disinfo Lab, Aug 2025. Disinfo landscape in Latvia. [online] Available at: https://www.disinfo.eu/wp-content/uploads/2025/08/20250809_Disinfo-landscape-in-Latvia-v2.pdf

[Table of Contents]

EU-Backed Initiative Aims to Build Newsroom Verification Capacity in Nigeria

Following an EU-backed workshop in Lagos, about 40 of Nigeria's top media editors have pledged to create dedicated fact-checking desks within their newsrooms. This initiative, announced in a press release by the EU Delegation to Nigeria, is a direct response to the surge of fake news and propaganda that threatens to undermine the country's stability. The EU Ambassador, Gautier Mignot, framed the issue as an "existential imperative," arguing that misinformation erodes the credibility of both the message and the messenger. This collaboration between the EU and the Nigerian Guild of Editors marks a critical effort to equip media gatekeepers with the skills needed to defend the information space.

Source: Press and information team of the Delegation to NIGERIA, 2025. EU, editors unite against disinformation. [online] Available at: https://www.eeas.europa.eu/delegations/nigeria/eu-editors-unite-against-disinformation_en

[Table of Contents]

[Takeaways]

Multiple events this week, from a simulated Sino-Russian IO campaign to Russia's real-world operations in Europe, reveal that authoritarian information strategies have inherent vulnerabilities. While states are refining their playbooks for destabilization, their alliances are often brittle and transactional, prone to fracturing under the pressure of competing national interests. The most durable defense is not reactive but proactive. Investing in the resilience of the broader information ecosystem, such as the EU-backed initiative to build verification capacity in Nigerian newsrooms, proves to be a more effective long-term strategy than attempting to counter every individual threat.

[Table of Contents]

[Download Report]
- Targeting Taiwan: An Influence Campaign with Benefits
Between March and April 2025, Taiwan's government found itself again under considerable internal and external pressure. Intensified military exercises [i] by the People's Republic of China and a tense post-election atmosphere placed significant strain on the newly elected administration of the Democratic Progressive Party (DPP). Mounting criticism, calls for the resignation of individual lawmakers, and resurgent street protests added to the political volatility. [ii] The social media platform Threads, already prominent during the 2024 Taiwan elections, once again emerged as a focal point of political discourse, particularly among younger users. [iii]

Coordinated Influence Campaign Against Taiwan

In this climate, Doublethink Lab uncovered what it classified as a coordinated hostile influence campaign. [iv] Researchers identified 51 inauthentic accounts posing as Taiwanese citizens. These accounts posted identical anti-DPP content. The slogan "I am Taiwanese and I oppose the Pan-Greens" appeared over 1,000 times, with 91 of those posts traced directly to the identified accounts. Through high-frequency repetition and bot-like coordination, the campaign appeared aimed at simulating organic protest and public dissent.

Technical Structure of the Manipulation

The technical analysis revealed a layered structure. The accounts used stolen profile pictures of Asian influencers (T0145.001), consistently generated localized content through fabricated personas (T0101, T0143.002), and posted calls to protest against the DPP (T0138). Posts were often published in identical form within tightly clustered time windows, strong evidence of automation and orchestration (T0049.003, T0084.001). [v]

Figure 1 – A plot showing how many times each account posted content that was also posted by at least 10 other accounts; the right side shows 12 different accounts making 13 posts with the same content within 20 minutes (partial translation: "I was gonna say, let…"). Courtesy of Doublethink Lab [vi]
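Doublethink Lab's tooling is not public, but the kind of exact-text, time-window clustering this analysis implies is easy to sketch. The Python below is a minimal illustration rather than the lab's method; the records, field names, and thresholds (mirroring the 10-account and 20-minute figures cited above) are assumed for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative post records: (account_id, text, timestamp). A real dataset
# would hold thousands of scraped posts.
posts = [
    ("acct_01", "I am Taiwanese and I oppose the Pan-Greens", datetime(2025, 4, 2, 9, 0)),
    ("acct_02", "I am Taiwanese and I oppose the Pan-Greens", datetime(2025, 4, 2, 9, 7)),
    ("acct_03", "I am Taiwanese and I oppose the Pan-Greens", datetime(2025, 4, 2, 9, 15)),
    ("acct_04", "Ordinary protest post", datetime(2025, 4, 2, 11, 30)),
]

def find_coordinated_content(posts, min_accounts=10, window=timedelta(minutes=20)):
    """Flag texts that at least `min_accounts` distinct accounts posted
    verbatim inside a single sliding time window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))

    flagged = {}
    for text, entries in by_text.items():
        entries.sort()  # chronological order
        for i, (start, _) in enumerate(entries):
            # Distinct accounts posting this exact text within the window.
            hits = {acct for ts, acct in entries[i:] if ts <= start + window}
            if len(hits) >= min_accounts:
                flagged[text] = sorted(hits)
                break
    return flagged

# Threshold lowered to 3 so the toy data triggers a hit.
for text, accounts in find_coordinated_content(posts, min_accounts=3).items():
    print(f"{len(accounts)} accounts posted identical text: {text!r}")
```

Real analyses layer further signals on top (shared tracking links, posting-time periodicity, reused profile images), but exact-text clustering in tight windows is usually the first cut.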
Attribution Clues and Indicators

Attribution rests on three main indicators. First, 23 accounts used simplified Chinese characters in 71 posts, some partially, others entirely. Second, forensic tracing of password-reset vectors linked the accounts to phone numbers in Hong Kong and Poland. Third, the use of identical tracking links and highly structured posting behavior resembled centralized infrastructure seen in past campaigns, notably during the U.S. elections. Based on its reach and distribution, Doublethink Lab rates the campaign a 2 out of 6 on the breakout scale: the content managed to achieve some organic engagement but remained confined to the platform Threads.

Dual Use of the Network

Interestingly, political content constituted only a fraction of the total output. Of more than 7,000 posts, just 290 were political, 275 of them verbatim anti-DPP posts. A curious detail: five posts followed the same template but targeted the Chinese Communist Party (CCP) instead. Whether this was a copy-paste error or a crude attempt at camouflage remains unclear. Still, the idea that a few mistimed clicks might unintentionally reverse a campaign's ideological thrust carries its own kind of irony.

Even more striking: the campaign was saturated with sexually suggestive material, often linking to dating websites disguised via URL shorteners (T0153.002). Doublethink Lab flagged 1,122 such posts. While political posts saw significantly higher average engagement (506 likes versus 143 for non-political), the most visible content by far was that promoting physical allure, garnering over 18,000 likes. The juxtaposition of flesh and faction adds a layer of tragicomedy: when political persuasion competes with thirst traps, the algorithm may well crown the latter.

Figure 2 – Sexually suggestive content posted by the network. Courtesy of Doublethink Lab [vii]

Conclusion

The campaign's structure points to external amplification providers, so-called "influence-for-hire" actors, operating modular infrastructures that serve both political and commercial clients. This outsourcing trend is becoming increasingly visible. Social media ecosystems, each with its own logic and audiences, demand tailored delivery and contractors who can navigate these spaces efficiently. For attribution efforts, this creates serious challenges. Responsibility chains grow murky as state-linked influence efforts are increasingly executed through deniable intermediaries. Doublethink Lab's report stands out for its methodological rigor, analytical restraint, and transparency. Their investigation is ongoing and deserves close attention; it is well worth reading.

Footnotes:

[i] Institute for the Study of War & American Enterprise Institute, Sperzel, M., Shats, D., O'Neil, A., Wugang, K., Han, A., Blustajn, N., Turek, A., Chung, Y. & Chou, A., 2025. China–Taiwan Weekly Update, April 18, 2025. [online] Published April 18, 2025. Available at: https://www.understandingwar.org/backgrounder/china-taiwan-weekly-update-april-18-2025
[ii] Taiwan News, Thomson, J., 2025. Government protest takes place in Taipei. [online] Available at: https://www.taiwannews.com.tw/news/6095464
[iii] Taiwan News, Thomson, J., 2024. Threads, Instagram emerge as key info sources for Taiwan legislature protests. [online] Published 18 June 2024. Available at: https://www.taiwannews.com.tw/news/5892100
[iv] Doublethink Lab (Digital Intelligence Team), 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online] Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
[v] Doublethink Lab (Digital Intelligence Team), 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online] Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
[vi] Doublethink Lab (Digital Intelligence Team), 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online] Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
[vii] Doublethink Lab (Digital Intelligence Team), 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online] Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
- Weekly Report: Cyber-based digital hostile influence campaigns, 28th July - 3rd August 2025
[Listen to the Podcast]

[Introduction]

From the 28th of July to the 3rd of August 2025, we observed, collected, and analyzed endpoints of information related to cyber-based digital hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

This week's developments span a broad spectrum of influence operations and digital information warfare, with three dominant themes emerging: the intensification of AI-generated disinformation across global conflicts, the increasing use of state-aligned platforms to manipulate geopolitical narratives, and the shifting responsibilities of tech companies as frontline moderators. From TikTok dismantling covert influence networks to Kremlin-backed media targeting Ukrainian officials with fabricated smears, and Iran and Israel deploying synthetic propaganda amid real-world hostilities, the week's cases expose a converging global trend. At the same time, platforms like X show declining transparency in content moderation, just as states like India and Poland respond assertively to information threats. Collectively, these events reflect the growing overlap between digital, kinetic, and psychological domains in modern statecraft and social control.

[Highlights]

- As published by The Journal, the sudden and unexplained disappearance of Community Notes, X's fact-checking tool, has raised serious concerns about the platform's ability to counter disinformation.
- As reported by The Record, Polish authorities arrested 32 individuals allegedly working for Russian intelligence to conduct sabotage and spread disinformation as part of a broader hybrid warfare campaign targeting Poland.
- According to an article by DisinfoWatch, Russian state media outlets RIA Novosti and Sputnik News have launched a disinformation campaign accusing Denys Yermak, brother of the head of Ukraine's Presidential Administration, of involvement in international drug trafficking.
- As stated in a Carnegie Endowment article, the June 2025 Israeli bombing of Iran's notorious Evin prison was quickly weaponized in the information war between Tehran and Tel Aviv.
- As published by TechXplore, Cornell researchers developed a method to embed authentication codes in lighting, offering a device-agnostic approach to identifying fake or manipulated videos.
- According to India's government, during Operation Sindoor it launched a robust campaign to combat a surge in fake news and anti-India propaganda, much of which was traced to foreign sources, particularly Pakistan.
- According to an article by CAIR, AI-generated images and old protest videos were circulated online to falsely portray Israeli cities under attack following military action against Iran.
> TABLE OF CONTENTS

SOCIAL MEDIA PLATFORMS
- The Vanishing Watchdog: X's Fact-Check Tool Goes Dark
- Balancing Joy and Control: TikTok's Quiet Redesign of Responsibility
- Ghosts in the Feed: TikTok Battles Hidden Hands of Influence

STATE ACTORS
[Russia]
- From Telegram to Arson: How Russia's Proxy War Targets Poland
- Behind the Smear: Fact-Checking Russia's Narrative on Umerov
- Kremlin Media Fabricate Drug Smuggling Allegations to Undermine Ukraine
[Iran]
- AI-Generated Media Escalated Narrative Warfare in Iran-Israel Conflict

CYFLUENCE ATTACKS
- From Missiles to Malware: How Hybrid Threats Target South Korea

GENERAL REPORTS
- Climate Misinformation Fuels Effort to Undermine EPA Science
- Synthetic Media Escalates Post-FBI Memo on Epstein Case
- C2PA, GANs, and the Expanding Scope of Digital Forensics
- How Hidden Light Patterns Could Expose Fake Videos
- India Blocks 1,400 URLs to Thwart Pakistan's Misinformation
- From Fabrication to False Context: CAIR's Disinfo Response Framework

TAKEAWAY
DOWNLOAD SECTION

SOCIAL MEDIA PLATFORMS

The Vanishing Watchdog: X's Fact-Check Tool Goes Dark

In a news article from The Journal, the unexplained disappearance of Community Notes, the crowd-sourced fact-checking tool on X (formerly Twitter), has raised concerns about the platform's compliance with EU content moderation obligations. While Community Notes remains technically active, users report it no longer appears on posts, complicating efforts to counter misinformation. X attributes the issue to a May 22 data center fire, though technical failures predate that incident. Experts such as DCU's Eileen Culloty and ISD's Aoife Gallagher emphasize the platform's unreliability and susceptibility to manipulation. Community Notes had been criticized for failing to consistently debunk disinformation due to its consensus-based model. Despite its flaws, its absence increases user vulnerability to false narratives, particularly amid broader scrutiny under the EU's Digital Services Act. Ongoing investigations by the European Commission are probing whether X's moderation efforts meet legal standards, against a backdrop of regulatory tensions between the EU and Elon Musk's X Corp.

Source: The Journal, Raymond, S., 2025. Community Notes vanishes from X feeds, raising 'serious questions' amid ongoing EU probe. [online] Available at: https://www.thejournal.ie/x-community-notes-disappeared-from-platform-factchecking-misinformation-elon-musk-6717349-May2025/

[Back to Table of Contents]
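The "consensus-based model" the article references is Community Notes' bridging-based scoring. As publicly documented by X, a note is displayed only when raters who normally disagree both find it helpful; the open-sourced algorithm operationalizes this as a matrix factorization in which a note's intercept, its rating signal net of rater and viewpoint effects, serves as the helpfulness score. The toy sketch below illustrates only that core idea; the data, learning rate, regularization, and iteration count are all invented for the example, and the production system is substantially more elaborate.

```python
import numpy as np

# Toy ratings: rows = raters, cols = notes; 1 = "helpful", 0 = "not helpful".
# Raters 0-2 and 3-5 form two opposed camps. Notes 0-1 please only camp A,
# note 2 only camp B, and note 3 is rated helpful by both camps.
R = np.array([
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

n_raters, n_notes = R.shape
rng = np.random.default_rng(0)
mu = 0.0
b_rater = np.zeros(n_raters)
b_note = np.zeros(n_notes)              # note intercepts = helpfulness scores
f_rater = rng.normal(0, 0.1, n_raters)  # rater viewpoint factor
f_note = rng.normal(0, 0.1, n_notes)    # note viewpoint factor

lr, reg = 0.05, 0.15
for _ in range(3000):  # SGD on r ≈ mu + b_rater + b_note + f_rater * f_note
    for u in range(n_raters):
        for n in range(n_notes):
            err = R[u, n] - (mu + b_rater[u] + b_note[n] + f_rater[u] * f_note[n])
            mu += lr * err
            b_rater[u] += lr * (err - reg * b_rater[u])
            b_note[n] += lr * (err - reg * b_note[n])
            f_rater[u], f_note[n] = (
                f_rater[u] + lr * (err * f_note[n] - reg * f_rater[u]),
                f_note[n] + lr * (err * f_rater[u] - reg * f_note[n]),
            )

# Partisan approval is absorbed by the viewpoint factor term, so the
# cross-camp note (index 3) should earn the clearly highest intercept.
print(np.round(b_note, 2))
```

The failure mode critics describe falls naturally out of this structure: on a polarized topic, even an accurate note that only one camp rates helpful accumulates little intercept, so it never crosses the display threshold.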
Balancing Joy and Control: TikTok's Quiet Redesign of Responsibility

In a press release by TikTok Newsroom, the platform announced a comprehensive suite of features aimed at strengthening user safety, privacy, and creativity across its global user base, with a special focus on teens, families, and creators. Among the updates are enhanced Family Pairing tools that notify parents when teens upload content, more transparency into teens' privacy settings, and broader oversight through topic management features. To promote digital wellness, TikTok introduced "Well-being Missions," gamified activities that encourage healthy digital habits, and teased a holistic in-app wellness hub. For creators, new tools include Creator Care Mode, offensive content muting, and pre-publication eligibility checks for the For You feed. Additionally, TikTok expanded the pilot of its "Footnotes" feature in the U.S., enabling users to collaboratively contextualize video content. These efforts reflect TikTok's strategy to pre-empt regulatory scrutiny by embedding safety into user experience architecture.

Source: TikTok Newsroom, Presser, A., 2025. New trust and safety tools for creators, families, and the TikTok Community. [online] Available at: https://newsroom.tiktok.com/en-us/tiktok-announces-a-suite-product-features

[Back to Table of Contents]

Ghosts in the Feed: TikTok Battles Hidden Hands of Influence

In a report from TikTok, the platform disclosed the removal of six covert influence networks active in June 2025, totaling 616 inauthentic accounts and over 180,000 cumulative followers, with an additional 1,198 accounts dismantled for recidivist behavior. The report provides detailed attribution, suggesting that multiple networks were operated from Russia, the Philippines, Venezuela, and Saudi Arabia, and targeted regional political discourses through fabricated personas, AI-assisted amplification, and location-masking tactics. Networks sought to manipulate perceptions in Moldova, the EU, Saudi Arabia, the Philippines, and Ecuador, often promoting pro-Kremlin or partisan narratives while impersonating local citizens or journalists. TikTok's disclosure highlights the platform's internal and external detection methods and indicates a commitment to disrupting politically motivated disinformation campaigns on a rolling basis.

Source: TikTok Transparency Center (TikTok), 2025. Covert Influence Operations. [online] Available at: https://www.tiktok.com/transparency/en-us/covert-influence-operations

[Back to Table of Contents]

STATE ACTORS

[Russia]

From Telegram to Arson: How Russia's Proxy War Targets Poland

In a news article published by The Record, Polish Prime Minister Donald Tusk announced that 32 individuals have been detained in connection with pro-Russian sabotage activities targeting infrastructure since the outset of Russia's full-scale invasion of Ukraine. The suspects, a multinational group including Colombian, Belarusian, and Ukrainian nationals, are accused of arson, reconnaissance, and participation in disinformation campaigns orchestrated by Russian intelligence. One operative, a Colombian national, was already convicted in the Czech Republic for attacks carried out in both countries. Authorities assert that these acts form part of a broader Russian hybrid warfare strategy involving sabotage, false narratives, and recruitment through encrypted platforms such as Telegram. Polish counterintelligence efforts have intensified, with the government warning of further Russian attempts to destabilize national security, particularly via attacks on critical infrastructure and aviation systems.

Source: The Record, Antoniuk, D., 2025. Poland says more than 30 suspects face trial over pro-Russian sabotage. [online] Available at: https://therecord.media/poland-32-suspects-pro-russia-sabotage

[Back to Table of Contents]

Behind the Smear: Fact-Checking Russia's Narrative on Umerov

In a media analysis by StopFake, the article outlines how Russian state actors and aligned media have launched a coordinated disinformation campaign aimed at discrediting Ukrainian Defense Minister Rustem Umerov. The report details attempts by Russian Foreign Ministry spokesperson Maria Zakharova and outlets like Argumenty i Fakty to falsely portray Umerov as an unqualified figure, labeling him a lifelong telecom manager and a possible agent of Turkey or the U.S.
In reality, Umerov's six-year telecom stint ended in 2010, followed by over a decade of leadership in investment, civil society, and government. His diplomatic experience includes key roles in prisoner exchanges and Ukraine's de-occupation strategy. Allegations regarding foreign ties are based on distorted interpretations of his support for Bayraktar drone deals and his U.S.-based family. The text demonstrates how pro-Kremlin figures fabricate narratives that conflate diplomatic engagement with espionage.

Source: StopFake, Tsoy, I., 2025. «Агент США и Турции», «менеджер в телефонной компании» — как российская пропаганда искажает биографию украинского министра обороны Умерова ["An agent of the US and Turkey," "a manager at a telephone company": how Russian propaganda distorts the biography of Ukrainian Defense Minister Umerov]. [online] Available at: https://www.stopfake.org/ru/agent-ssha-i-turtsii-menedzher-v-telefonnoj-kompanii-kak-rossijskaya-propaganda-iskazhaet-biografiyu-ukrainskogo-ministra-oborony-umerova/

[Back to Table of Contents]

Kremlin Media Fabricate Drug Smuggling Allegations to Undermine Ukraine

A report by DisinfoWatch exposes a Russian state media disinformation operation accusing Denys Yermak, brother of Ukraine's presidential chief of staff, of leading an international drug trafficking scheme on behalf of Kyiv's leadership. The story, pushed by RIA Novosti and Sputnik News, relies exclusively on an unnamed source and offers no independent or legal corroboration. The fabricated narrative implicates Turkish and Moldovan security services in supporting the smuggling of Afghan narcotics into Ukraine via Azerbaijan and the UAE. Analysts identify this as part of the Kremlin's strategic playbook to discredit Zelenskyy's inner circle, frame Ukraine as a criminalized state, and fracture NATO solidarity. With no verifiable evidence and a reliance on untraceable attribution, the story is flagged as a textbook case of narrative-driven, Kremlin-backed disinformation.

Source: DisinfoWatch, 2025. Kremlin Invents Drug Smuggling Allegations Against Zelenskyy's Inner Circle. [online] Available at: https://disinfowatch.org/disinfo/kremlin-invents-drug-smuggling-allegations-against-zelenskyys-inner-circle/

[Back to Table of Contents]

[Iran]

AI-Generated Media Escalated Narrative Warfare in Iran-Israel Conflict

When a WhatsApp video of smoke billowing over Evin prison circulated on June 23, 2025, it seemed to symbolize a break in Iran's authoritarian stranglehold. But shortly after, Israeli officials posted another clip: slicker, cleaner, and likely fake. As detailed in a Carnegie Endowment report, this moment encapsulated a broader war, one waged not just with missiles but with manipulated media. Both Iranian and Israeli actors deployed AI-generated content to distort reality, with Iran suppressing real-world horrors and Israel framing its bombings as precision liberations. Civilians, cut off by blackouts and bombarded by synthetic media, could neither verify what they saw nor understand what was unfolding around them. Despite detection tools like SynthID and C2PA, their limitations leave global citizens vulnerable. The war's ceasefire may have quieted bombs, but the erosion of trust in what is seen and shared signals a far more enduring threat.
Source: Carnegie Endowment for International Peace, Alimardani, M. & Gregory, S., 2025. Iran-Israel AI War Propaganda Is a Warning to the World. [online] Available at: https://carnegieendowment.org/research/2025/07/iran-israel-ai-war-propaganda-is-a-warning-to-the-world

[Back to Table of Contents]

CYFLUENCE ATTACKS

From Missiles to Malware: How Hybrid Threats Target South Korea

A strategic analysis from the Australian Strategic Policy Institute (ASPI) maps 81 hybrid threat incidents targeting South Korea between 2016 and 2025, revealing an increasingly diverse and complex threat environment. While North Korea remains the dominant actor, responsible for 54% of documented activities, including missile tests, cyberattacks, and territorial provocations, China's involvement has escalated, particularly in economic coercion, cyber intrusions, and narrative operations. Threat vectors include military pressure, IP theft, sabotage, and AI-enhanced disinformation. In response, Seoul has adopted a five-pronged approach encompassing military readiness, diplomacy, legislation, capability development, and public awareness. However, the report raises concerns about gaps in legal frameworks and underdeveloped responses to non-North Korean actors. The election of President Lee Jae-myung may signal a shift in posture, with an emphasis on domestic AI development and improved EU cooperation. ASPI underscores the value of multilateral coordination, especially as hybrid threats become regionally and technologically more diffuse.

Source: Australian Strategic Policy Institute (ASPI), Fitriani, D., Shih, S. & Wai, A., 2025. Mapping a decade's worth of hybrid threats targeting South Korea. [online] Available at: https://www.aspistrategist.org.au/mapping-a-decades-worth-of-hybrid-threats-targeting-south-korea/

[Back to Table of Contents]

GENERAL REPORTS

Climate Misinformation Fuels Effort to Undermine EPA Science

A news article by E&E News examines the Trump administration's proposal to revoke the EPA's endangerment finding, which underpins federal climate regulations, and finds it grounded in discredited science and misinformation. The proposed rule leans on fringe studies and arguments from Department of Energy personnel known for denying climate consensus. Experts highlight that the proposal misrepresents peer-reviewed climate science, selectively cites data, and ignores the overwhelming evidence linking greenhouse gas emissions to global warming. Claims that climate models are flawed, that CO₂ boosts agriculture, or that warming has net public health benefits are all refuted by decades of robust research. Critics warn that this attempt to undermine foundational climate policy endangers public trust in scientific integrity and risks reversing critical mitigation efforts at a time when climate impacts (rising temperatures, sea levels, and extreme weather) are accelerating and measurable.

Source: E&E News (by Politico), Harvey, C. & Waldman, S., 2025. EPA attacks climate science. Here are the facts. [online] Available at: https://www.eenews.net/articles/epa-attacks-climate-science-here-are-the-facts/

[Back to Table of Contents]

Synthetic Media Escalates Post-FBI Memo on Epstein Case

A news report by NewsGuard reveals that AI-generated deepfake videos falsely depicting former President Donald Trump alongside Ghislaine Maxwell and underaged girls continue to proliferate, despite official denials of any criminal implication. Using detection tools like IdentifAI, NewsGuard identified two new deepfakes within a week of naming the Trump-Epstein fabrications its "False Claim of the Week," bringing the total to three videos and seven images.
These synthetic media assets gained over 7.2 million cumulative views on social platforms, underscoring the challenge of curbing virality even after takedown efforts. The surge in such content followed a July 7 FBI-Justice Department memo stating no evidence exists of a client list implicating high-profile individuals in Epstein-related crimes. While Trump was publicly linked to Epstein via social engagements and flight records, he has not been charged. The persistence of these deepfakes illustrates the escalating role of AI in disinformation targeting public figures.

Source: NewsGuard, Olson, J., 2025. Trump-Epstein Deepfakes Keep Coming. [online] Available at: https://www.newsguardrealitycheck.com/p/trump-epstein-deepfakes-keep-coming

[Back to Table of Contents]

C2PA, GANs, and the Expanding Scope of Digital Forensics

In an expansive academic article published by PNAS Nexus, Hany Farid of UC Berkeley dissects the evolving landscape of manipulated media, warning that generative AI has propelled the risks of digital deception to an unprecedented scale. While early manipulation relied on rudimentary tools, today's deepfakes, powered by GANs and diffusion models, now cross the uncanny valley, producing images and voices indistinguishable from reality. Farid examines the compounded harms: NCII, AI-generated CSAM, financial fraud, disinformation, and the emergent "liar's dividend." He offers a multilayered mitigation strategy that encompasses content provenance standards (such as C2PA), regulatory guardrails, industry liability reform, and K-12 media literacy. Importantly, he critiques the academic culture that accelerates innovation without foreseeing misuse. Farid asserts that without globally coordinated intervention, society risks entering a post-truth era where the real and the synthetic are functionally indistinguishable.

Source: PNAS Nexus (Oxford Academic), Farid, H., 2025. Mitigating the harms of manipulated media: Confronting deepfakes and digital deception. [online] Available at: https://academic.oup.com/pnasnexus/article/4/7/pgaf194/8209913

[Back to Table of Contents]

How Hidden Light Patterns Could Expose Fake Videos

In a journal article published by TechXplore, Cornell researchers unveil a breakthrough in video authentication: a watermarking system that encodes secret data directly into imperceptible light fluctuations during filming. This "noise-coded illumination" embeds time-stamped signals into programmable light sources, such as screen displays or lighting fixtures, enabling forensic analysts to detect tampering or AI-generated forgeries by analyzing the light-encoded patterns. Unlike previous watermarking efforts that relied on camera compliance or post-processing, this technique requires no special equipment from the videographer. Lead researcher Peter Michael demonstrated that even multiple light sources can carry unique codes, significantly raising the bar for potential adversaries. While not impervious to future circumvention, this method provides a practical, hardware-level defense against the growing threat of deepfakes and misinformation.
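The published system goes well beyond this sketch, but the core intuition is simple: hide a pseudorandom code in brightness fluctuations too small to notice, then test a suspect clip for that code by correlation. The numpy toy below assumes a video reduced to a per-frame mean-brightness series, a 1% modulation amplitude, and a single coded light; it is an illustration of the principle, not the Cornell implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
N_FRAMES = 600     # ~20 s of video at 30 fps
AMPLITUDE = 0.01   # ~1% brightness wiggle, imperceptible to viewers

# Secret per-light code: a pseudorandom +/-1 sequence known to the verifier.
code = rng.choice([-1.0, 1.0], size=N_FRAMES)

def record_scene(scene_brightness, lit=True):
    """Simulate filming: per-frame scene brightness, modulated by the coded
    light (if present), plus camera noise."""
    signal = scene_brightness * (1 + AMPLITUDE * code) if lit else scene_brightness
    return signal + rng.normal(0, 0.005, N_FRAMES)

def code_score(video):
    """Correlate the clip's brightness fluctuations with the secret code."""
    residual = video - video.mean()
    return float(np.dot(residual, code) / len(code))

# Slow "natural" brightness variation of the scene itself.
scene = 0.6 + 0.05 * np.sin(np.linspace(0, 4 * np.pi, N_FRAMES))

authentic = record_scene(scene, lit=True)   # filmed under the coded light
forged = record_scene(scene, lit=False)     # synthetic / re-shot footage

print("authentic score:", code_score(authentic))  # ≈ AMPLITUDE * mean brightness
print("forged score:   ", code_score(forged))     # ≈ 0
```

A real verifier would run the correlation over short windows rather than the whole clip, so that spliced or AI-inserted segments show up as local drops in the code score.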
Source: TechXplore / Cornell University, Waldron, P., 2025. Hiding secret codes in light can protect against fake videos. [online] Available at: https://techxplore.com/news/2025-07-secret-codes-fake-videos.html

[Back to Table of Contents]

India Blocks 1,400 URLs to Thwart Pakistan's Misinformation

In an official press release by the Press Information Bureau (PIB), the Indian government detailed its coordinated effort to counteract a surge of Pakistani disinformation during Operation Sindoor. Through a centralized, 24x7 Control Room comprising representatives from the armed forces and media units, India actively debunked fake content targeting its military, disseminated accurate information, and blocked over 1,400 URLs under Section 69A of the IT Act. The PIB Fact Check Unit (FCU) verified and flagged misleading content, much of it generated by Pakistan-based social media accounts, and issued advisories to prevent real-time coverage of military movements. The government's rapid and institutionalized response highlights a growing emphasis on digital counter-propaganda frameworks to protect national security narratives.

Source: Press Information Bureau (PIB), Government of India, 2025. Government Debunks Pakistani Propaganda Against India and Armed Forces. [online] Available at: https://www.pib.gov.in/PressReleasePage.aspx?PRID=2150213

[Back to Table of Contents]

From Fabrication to False Context: CAIR's Disinfo Response Framework

In a recent briefing by CAIR Los Angeles, the organization emphasized the urgent need for media literacy as a defense against the proliferation of misinformation and disinformation in today's hyperconnected world. With 86% of U.S. adults consuming news through digital platforms, CAIR warns that fabricated, manipulated, and misleading content, especially involving geopolitics and Islamophobia, can shape public perception and policy. The report distinguishes between misinformation and disinformation, outlines their tactics, and presents recent examples, including disinformation surrounding U.S. and Israeli military actions in Iran. CAIR also highlights AI-generated content as a rising threat and provides practical tools for detecting false narratives. By advocating for critical media consumption and highlighting its own work monitoring biased coverage, CAIR positions media literacy as essential to resisting manufactured consent for state violence and combating anti-Muslim narratives.

Source: CAIR Los Angeles, CAIR Staff, 2025. Media Literacy 101: Countering Disinformation in the Modern World. [online] Available at: https://ca.cair.com/updates/media-literacy-101-countering-disinformation-in-the-modern-world/

[Back to Table of Contents]

TAKEAWAY

The fusion of generative AI with influence operations points toward an increasingly volatile information environment. As AI tools become cheaper, more realistic, and easier to deploy, both state and non-state actors will escalate their use of synthetic media to shape strategic narratives. Regulatory frameworks, such as the EU's scrutiny of X and the expansion of provenance standards like C2PA, will likely tighten, but enforcement will remain inconsistent. Without interoperable global mechanisms, this fragmented response will perpetuate asymmetric vulnerabilities, especially in conflict and high-risk zones.

[Back to Table of Contents]

[Download Report]
- Tackling Disinformation Campaigns in Indonesia
In his latest article, Dr. Eka Nugraha Putra examines a recent case of coordinated disinformation in Indonesia to assess how the legal framework may serve as a tool to counter such phenomena. The author argues that coordinated disinformation campaigns in Indonesia could be addressed as forms of organized crime. He points to the country's newly reformed criminal code as a potential turning point, since it could enable legal responsibility to be extended to organizations, not just individuals. By shifting the legal focus toward the networks and entities behind digital manipulation, the article offers a forward-looking approach grounded in the Indonesian context. At the same time, it contributes to the broader international debate on how states can confront hostile influence and disinformation campaigns.

[About the Author]

Dr. Eka Nugraha Putra is the author of the book "Free Speech in Indonesia: Legal Issues and Public Interest Litigation". Find more information here: https://ekanugrahaputra.com/.

[Download Full Report here]
- Weekly Report: Cyber-based influence campaigns 14th - 20th July 2025
[Listen to the Podcast] [Introduction] During the 14th to the 20th of July 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report provides a summary of the key events we consider most significant. This week's review highlights the convergence of the cyber, information, and cognitive warfare domains. The EU's sanctioning of a Russian military unit for GPS jamming underscores the kinetic potential of hybrid threats, while multinational operations target pro-Kremlin hacktivist groups. Simultaneously, actors leverage AI and inauthentic networks to inflame domestic political tensions from Europe to Asia. These events signal a complex security environment in which state and non-state actors exploit technological and societal vulnerabilities, necessitating a coordinated, multifaceted response from Western alliances.

[Highlights]
- According to a joint analysis, a botnet of over 3,600 automated accounts is conducting a targeted "informational and cultural occupation" in Russian-occupied Ukraine. – Atlantic Council
- The EU has sanctioned a Russian military unit for its role in widespread GPS jamming over the Baltic Sea, an action that has repeatedly disrupted civilian aviation. – EEAS Delegation to Ukraine
- Despite economic fragility, China is showing signs of returning to tighter control over its AI sector, with intensified oversight of leading firms and the introduction of new content-labeling regulations. – Carnegie Endowment
- Influence operations in the Philippines have specifically targeted the feud between the Marcos and Duterte factions, demonstrating a tactic of amplifying existing domestic divisions. – Foreign Policy
- A Doublethink Lab report identified 51 inauthentic Threads accounts, likely PRC-linked, that posed as Taiwanese citizens by using stolen photos of influencers to amplify anti-government narratives. – DFRLab
- A bot network, which had previously shown unified support for MAGA figures, fractured and began posting contradictory messages following the controversy over the Epstein files. – NBC News
- A Conversation article examines "cognitive warfare" as an emerging domain of conflict where adversaries manipulate human perception through disinformation, often below the threshold of traditional war. – The Conversation
- Austria has become a hotspot for persistent disinformation, with narratives often amplified by far-right political actors like the FPÖ. – EU Disinfo Lab

> TABLE OF CONTENTS
HOSTILE INFLUENCE CAMPAIGNS
- Kremlin Weaponizes History in Disinformation Campaign Against Germany
- Russia-Linked Group Impersonates Journalists in European Disinfo Push
- How Russia Tailors Propaganda for an 'Informational Occupation'
- New EU Sanctions Target Russian FIMI, From State Media to Military Units
- Control vs. Growth: The New Dilemma Shaping China's AI Ambitions
- Pakistan's Alliance with China: A Partnership with Hidden Costs
- From Deepfakes to 'Influence for Hire': China's Evolving Tactics
- Fake Accounts Impersonate Taiwanese on Threads
AI RELATED ARTICLES
- Algorithmic Lies: AI News Channels Undermine Canada's Election
- AI Bot Network Fractures Amid MAGA Epstein Fallout
- Digital Democracy in Decline: Global Trends and Consequences
GENERAL REPORTS
- Cognitive Warfare: The Silent Frontline of Modern Conflict
- Austria's Disinformation Landscape: Narratives, Actors, and Impacts
- International Sweep Disrupts Pro-Russian Cybercrime Network
- EU Observatory Expands Fight Against Online Disinformation
TAKEAWAYS
DOWNLOAD SECTION

HOSTILE INFLUENCE CAMPAIGNS Kremlin Weaponizes History in Disinformation Campaign Against Germany A report from EU vs. Disinfo details a coordinated Kremlin information manipulation campaign aimed at portraying Germany as a resurgent, aggressive military power. The campaign's central narrative, promoted through state-controlled media such as Rossiya 1 and Vesti FM, distorts Germany's defensive policies into an offensive "rearmament" that betrays post-war commitments. The author states this strategy involves direct demonization of political figures, including baselessly labeling Chancellor Friedrich Merz a "descendant of Nazis." The campaign weaponizes historical trauma to justify Russia's invasion of Ukraine to domestic audiences while simultaneously seeking to undermine German public support for Ukraine and weaken transatlantic security cooperation. The report concludes that this is a top-down strategy, endorsed by Kremlin officials such as Dmitry Peskov, with clear objectives in both domestic and foreign policy. Source: EUvsDisinfo, 2025. Summoning the ghost of the Reich. [online]. Available at: https://euvsdisinfo.eu/summoning-the-ghost-of-the-reich/ [Back to Table of Contents] Russia-Linked Group Impersonates Journalists in European Disinfo Push Researchers have identified a Kremlin-linked threat actor, Storm-1516, conducting a sophisticated disinformation campaign by impersonating journalists and spoofing news websites across Europe. A report from The Record details how the campaign targets countries including France, Armenia, Germany, Moldova, and Norway with false narratives designed to discredit political leaders and sow discord. The group's method involves using the names and photos of real reporters on fabricated articles to lend them unearned credibility. The campaign's impact is considered significant, with French authorities labelling the group a threat to European public debate. The narratives have ranged from fabricated corruption scandals involving Moldovan and Armenian leaders to false environmental crises aimed at disrupting international forums, such as the Internet Governance Forum (IGF). Source: The Record, Antoniuk, D., 2025. Russia-linked group spoofing European journalists to spread disinformation. [online]. Available at: https://therecord.media/russia-group-spoofing-journalists-disinfo [Back to Table of Contents] How Russia Tailors Propaganda for an 'Informational Occupation' Between January 2024 and April 2025, a network of over 3,600 automated accounts flooded Telegram channels in Russian-occupied Ukraine with pro-Kremlin comments. According to a joint analysis by OpenMinds and DFRLab, this botnet represents a targeted "informational and cultural occupation" running parallel to the military one.
The campaign tailors its messaging by disproportionately pushing narratives that praise Russian culture and governance, a different emphasis than used for domestic Russian audiences. This strategy suggests a specific goal beyond simple propaganda: the report concludes the effort is aimed at manufacturing the illusion of local support for Russia's presence, effectively creating a fabricated consensus to legitimize its control. Source: Atlantic Council, Dukach, Y., Adam, I. & Furbish, M., 2025. Digital occupation: Pro-Russian bot networks target Ukraine’s occupied territories on Telegram. [online]. Available at: https://www.atlanticcouncil.org/in-depth-research-reports/report/report-russian-bot-networks-occupied-ukraine/ [Back to Table of Contents] New EU Sanctions Target Russian FIMI, From State Media to Military Units The Russian military unit linked to widespread GPS jamming over the Baltic Sea, which has disrupted civil aviation, is now under EU sanctions. This action, reported on by the EEAS Delegation to Ukraine, is part of a broader package announced on July 15, 2025, targeting Russia's hybrid warfare and information manipulation campaigns. The new listings also include the state media network RTRS, which is intended to supplant Ukrainian broadcasters in occupied regions, as well as several entities created by the late Yevgeny Prigozhin. One such group, the Foundation to Battle Injustice, is cited for spreading disinformation that accused French soldiers of kidnapping children in Niger. The sanctions demonstrate the EU's strategy of targeting the full spectrum of Russia's FIMI apparatus, from military electronic warfare units to individual social media influencers. Source: Press and Information Team, Delegation to Ukraine (EEAS), 2025. Russian hybrid threats: EU lists nine individuals and six entities responsible for destabilising actions in the EU and Ukraine. [online]. Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-eu-lists-nine-individuals-and-six-entities-responsible-destabilising-actions_en [Back to Table of Contents] Control vs. Growth: The New Dilemma Shaping China's AI Ambitions A report from the Carnegie Endowment for International Peace argues that China's AI policy follows a cyclical pattern, oscillating between prioritizing economic growth when it feels technologically vulnerable and asserting ideological control when it feels strong. The author states that the early 2025 breakthrough of the DeepSeek-R1 model has initiated a new, unprecedented "Crossroads Era." A core tension defines this period: China now possesses high technological confidence in its AI capabilities, but its lackluster economy creates a conflicting imperative. While evidence suggests a return to control, seen in intensified oversight of DeepSeek and new content-labeling regulations, economic fragility and US export controls may compel Beijing to adopt a more pragmatic, growth-oriented approach. Source: Carnegie Endowment, Singer, S. & Sheehan, M., 2025. China’s AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era. [online]. Available at: https://carnegieendowment.org/research/2025/07/chinas-ai-policy-in-the-deepseek-era?lang=en [Back to Table of Contents] Pakistan's Alliance with China: A Partnership with Hidden Costs A Doublethink Lab report by Dr. Haroon ur Rasheed Baloch examines the extensive influence of the People's Republic of China (PRC) in Pakistan, primarily driven by the China-Pakistan Economic Corridor (CPEC). 
The author asserts that this deep integration across Pakistan's economic, military, technological, and academic sectors, while offering benefits, poses significant risks to national sovereignty and social stability. The report highlights a lack of transparency in CPEC agreements, resulting in economic burdens such as soaring electricity tariffs. It also examines the PRC's soft-power campaign, which shapes media and academic discourse to favor Beijing's narratives, as well as a growing military collaboration centered on Gwadar Port that raises regional security concerns for the US and India. Source: Doublethink Lab, Dr. Haroon ur Rasheed Baloch, 2025. China Index Spotlight: PRC's Soft and Hard Power Influence in Pakistan. [online]. Available at: https://medium.com/doublethinklab/prcs-soft-and-hard-power-influence-in-pakistan-5f7c454912ab [Back to Table of Contents] From Deepfakes to 'Influence for Hire': China's Evolving Tactics Recent analyses from Foreign Policy and Doublethink Lab reveal coordinated, PRC-linked disinformation campaigns targeting domestic politics in the Philippines and Taiwan. The reports detail how these operations utilize vast networks of inauthentic accounts across platforms such as Facebook, X, and Threads to exploit internal political rivalries. In the Philippines, the campaign reportedly uses generative AI and deepfakes to inflame feuds between the Marcos and Duterte factions. In Taiwan, a similar operation impersonates locals using stolen photos to criticize the ruling Democratic Progressive Party (DPP). The author of the Doublethink Lab report suggests that these networks may be part of a commercial "influence for hire" ecosystem, blending political messaging with unrelated content to build their personas. Source: Foreign Policy, Aspinwall, N., 2025. The Philippines Is a Petri Dish for Chinese Disinformation. [online]. Available at: https://foreignpolicy.com/2025/07/14/china-philippines-disinformation-elections/ [Back to Table of Contents] Fake Accounts Impersonate Taiwanese on Threads Between March and April 2025, Doublethink Lab identified 51 inauthentic Threads accounts posing as Taiwanese citizens and targeting domestic political discourse. The accounts, likely linked to the People's Republic of China (PRC), used stolen profile photos, traditional Chinese text, and localized content to amplify anti-Democratic Progressive Party (DPP) narratives. Evidence of simplified Chinese usage, Hong Kong-linked phone numbers, and copy-pasted political slogans bolstered attribution. The campaign mixed political messaging with sexually suggestive content and commercial spam, consistent with a pattern of "influence-for-hire" operations. Although engagement was limited to the Threads platform, political posts saw significantly higher interaction rates, suggesting partial success in breaching the inauthentic content bubble. Source: Doublethink Lab, Digital Intelligence Team, 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online]. Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13 [Back to Table of Contents] AI RELATED ARTICLES Algorithmic Lies: AI News Channels Undermine Canada's Election In the days after Canada's 2025 election, dozens of YouTube videos surfaced claiming ballot box theft and recount conspiracies in ridings that no longer exist: fabrications generated by AI and viewed millions of times.
DFRLab traced this coordinated campaign to 42 faceless, AI-powered channels posing as Canadian news outlets. These channels pushed partisan narratives favoring Conservative politicians, amplified Alberta separatism, and spread election disinformation under the guise of breaking news. While YouTube suspended many of the accounts, the incident highlights how "AI slop" is rapidly shaping digital discourse, exploiting platform algorithms with nearly zero human oversight or factual grounding. Source: DFRLab, Digital Forensic Research Lab, 2025. AI-generated news channels spread election fraud and separatist narratives in Canada. [online]. Available at: https://dfrlab.org/2025/07/17/ai-generated-news-channels-spread-election-fraud-and-separatist-narratives-in-canada/ [Back to Table of Contents] AI Bot Network Fractures Amid MAGA Epstein Fallout NBC News reports on a network of over 400 suspected AI-driven bot accounts on X (formerly Twitter) that automatically respond to conservative users with pro-Trump content. Tracked by researchers at Alethea and Clemson University, the network exhibits signs of coordinated inauthentic behavior, including the repetition of messages, the use of irrelevant hashtags, and the exclusive posting of replies (heuristics of this kind are sketched at the end of this report). The bots initially maintained unified support for Trump and MAGA figures but fractured following controversy over the Epstein files, with contradictory messages appearing simultaneously. Experts suggest the AI was trained on real MAGA content and mirrors organic shifts in sentiment. The incident reflects broader concerns about AI-amplified influence operations on poorly moderated platforms. Source: NBC News, Collier, K., 2025. A MAGA bot network on X is divided over the Trump-Epstein backlash. [online]. Available at: https://www.nbcnews.com/tech/internet/maga-ai-bot-network-divided-trump-epstein-backlash-rcna219167 [Back to Table of Contents] Digital Democracy in Decline: Global Trends and Consequences The Carnegie Endowment article outlines three converging threats that undermine digital democracy: the shrinking of civic space, declining funding for digital rights, and the erosion of Western legitimacy. Civic space is increasingly suppressed through surveillance technologies, repressive legislation, and collusion between platforms and governments. Meanwhile, global funding for digital rights organizations has declined sharply due to shifts in the political landscape and nationalist policies. The article also critiques Western hypocrisy in promoting digital freedoms abroad while enabling surveillance and repression domestically, leading many Global Majority countries to turn to China and Russia for digital infrastructure. The piece concludes by advocating for sustainable, decentralized funding and renewed legitimacy through consistent, rights-based engagement. Source: Carnegie Endowment for International Peace, Sesan, ‘Gbenga, 2025. Shrinking Civic Space, Digital Funding, and Legitimacy in a Post-Truth Era. [online]. Available at: https://carnegieendowment.org/research/2025/07/shrinking-civic-space-digital-funding-and-legitimacy-in-a-post-truth-era?lang=en [Back to Table of Contents] GENERAL REPORTS Cognitive Warfare: The Silent Frontline of Modern Conflict The Conversation article examines the concept of cognitive warfare (or "cog war") as an emerging domain of conflict in which adversaries manipulate human perception and behavior through disinformation and psychological tactics, often below the threshold of traditional armed conflict.
Using examples from COVID-19 and the Ukraine war, the article illustrates how false narratives, sometimes supported by AI-driven microtargeting, can lead to real-world harm. As such operations increasingly erode the boundaries between the digital, cognitive, and physical domains, legal frameworks lag behind. Current laws of war do not address psychological harm, raising calls for expanded protections under human rights law. Source: The Conversation, Gisselsson Nord, D. & Rinaldi, A., 2025. Cognitive warfare: why wars without bombs or bullets are a legal blind spot. [online]. Available at: https://theconversation.com/cognitive-warfare-why-wars-without-bombs-or-bullets-are-a-legal-blind-spot-260607 [Back to Table of Contents] Austria’s Disinformation Landscape: Narratives, Actors, and Impacts A recent report from EU DisinfoLab outlines how Austria has become a hotspot for diverse and persistent disinformation narratives, often tied to political opportunism and ideological movements. Key themes include anti-migrant sentiment, COVID-19 conspiracy theories, pro-Russian framing of the Ukraine war, and hostility toward renewable energy and EU regulations. Disinformation has circulated widely through both alternative and mainstream media, often amplified by far-right actors such as the FPÖ. Despite repeated fact-checking efforts and legal responses, narratives such as “migrants abuse welfare” or “vaccines cause turbo cancer” continue to shape public opinion and political discourse, especially ahead of upcoming elections. Source: EU Disinfo Lab, Schäfer, C., 2025. Disinfo landscape in Austria. [online]. Available at: https://www.disinfo.eu/wp-content/uploads/2025/07/20250717_Disinfo-landscape-in-Austria-v2.pdf [Back to Table of Contents] International Sweep Disrupts Pro-Russian Cybercrime Network Operation Eastwood, a multinational cybercrime crackdown coordinated by Europol and Eurojust, targeted the pro-Russian group NoName057(16), known for ideological DDoS attacks across Europe. The operation, involving 25 countries and private sector partners, led to two arrests, seven international warrants, and the disruption of over 100 servers. Germany identified six suspects as Russian nationals and issued multiple arrest warrants. The group’s activities escalated from attacks on Ukrainian targets to those against NATO-aligned states, including recent incidents in the Netherlands and Switzerland. Authorities highlighted the group’s use of gamified recruitment methods, cryptocurrency incentives, and decentralized operations relying on ideological volunteers. Source: Europol, 2025. Global operation targets NoName057(16) pro-Russian cybercrime network. [online]. Available at: https://www.europol.europa.eu/media-press/newsroom/news/global-operation-targets-noname05716-pro-russian-cybercrime-network [Back to Table of Contents] EU Observatory Expands Fight Against Online Disinformation In 2020, eight regional hubs across Europe were funded to bolster a coordinated response against digital disinformation, marking the second phase of the European Digital Media Observatory (EDMO). The project, led by the European University Institute and involving partners from Greece, Denmark, and Italy, builds on an infrastructure launched in 2019. Its goal is to provide secure, privacy-conscious data access for researchers, boost media literacy, support fact-checking networks, and inform policymakers. Independent from EU authorities, EDMO exemplifies a pan-European attempt to consolidate fragmented anti-disinformation efforts into a cohesive, evidence-based ecosystem. [Back to Table of Contents] TAKEAWAYS This week's events illustrate a broad strategic pivot by state actors toward manufacturing inauthentic consensus. From botnets fabricating pro-Kremlin support in occupied Ukraine to AI-generated content simulating grassroots political movements in North America, the objective is not just to sow discord but to create the illusion of popular will. This poses a fundamental challenge to discourse in democratic societies: as distinguishing between genuine public opinion and artificially generated narratives becomes increasingly difficult, the very foundation of legitimate governance is eroded. [Back to Table of Contents] [Download Report]
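A footnote to the Alethea/Clemson item above ("AI Bot Network Fractures Amid MAGA Epstein Fallout"): the three behavioral signals it names can be combined into a crude bot-likeness score. The minimal Python sketch below is an illustration under assumed inputs; the Account record, the equal weighting, and the example data are our own simplifications, not the researchers' actual methodology.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical minimal account record; field names are illustrative
    # and do not come from any platform API or from the cited research.
    posts: list[str]       # text of the account's recent posts
    hashtags: list[str]    # hashtags used across those posts
    reply_count: int       # how many of the posts are replies
    topic_tags: set[str]   # hashtags actually relevant to the threads joined

def bot_likeness(acct: Account) -> float:
    """Crude 0..1 score combining the three reported signals:
    repeated messages, irrelevant hashtags, and reply-only posting."""
    if not acct.posts:
        return 0.0
    # Signal 1: share of posts that duplicate an earlier post.
    counts = Counter(p.strip().lower() for p in acct.posts)
    repetition = sum(c - 1 for c in counts.values() if c > 1) / len(acct.posts)
    # Signal 2: share of hashtags unrelated to the conversation topics.
    irrelevant = (
        sum(1 for h in acct.hashtags if h not in acct.topic_tags) / len(acct.hashtags)
        if acct.hashtags else 0.0
    )
    # Signal 3: whether the account posts nothing but replies.
    reply_only = 1.0 if acct.reply_count == len(acct.posts) else 0.0
    # Equal weighting is an arbitrary simplification for illustration.
    return round((repetition + irrelevant + reply_only) / 3, 3)

# Example: an account that posts the same off-topic reply twice.
suspect = Account(
    posts=["Great point! Stand strong!", "Great point! Stand strong!"],
    hashtags=["#cats", "#crypto"],
    reply_count=2,
    topic_tags={"#politics"},
)
print(bot_likeness(suspect))  # 0.833: high on all three signals

Real detection systems calibrate such scores against labeled data and many more features; the point here is only that the reported signals are simple, measurable behaviors rather than judgments about content.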
- Weekly Report: Cyber-based influence campaigns 21st - 27th July 2025
[Listen to the Podcast] [Introduction] During the 21st to the 27th of July 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. The week’s reporting illuminates a strategic tension between rapidly advancing disinformation capabilities, often state-aligned, and the slower-moving, fractured responses of democracies and institutions tasked with defending truth. China's AI chatbots echo state lines with near-automated efficiency, while Russia’s hybrid warfare blends digital deception with kinetic force. Western platforms and governments appear either unwilling or ill-equipped to mount an equally agile defense, as evidenced by X’s non-compliance with EU sanctions and the U.S. military’s lagging AI adoption. This asymmetry of adaptation risks ceding the informational high ground to authoritarian actors adept at shaping perceptions before facts catch up.

[Highlights]
- A recent investigation by Alliance4Europe revealed that X is failing to enforce EU sanctions against Russian state-affiliated media and influence operations, allowing widespread dissemination of disinformation.
- As published by EU vs. Disinfo, Imedi TV, one of Georgia’s most-watched channels, has become a central platform for pro-government propaganda and disinformation, mirroring Kremlin-style tactics to discredit the European Union and democratic institutions.
- DFRLab reports that pro-Russian TikTok influencers are spreading disinformation by portraying occupied Donbas as thriving under Russian control, using curated videos to legitimize the occupation and distort public perception.
- A NewsGuard audit revealed that five leading Chinese AI models frequently echoed false pro-Beijing narratives, failing to provide accurate information in 60 percent of the cases tested in both English and Mandarin.
- RAND reports that the U.S. Department of Defense is exploring the use of generative AI for influence operations, raising concerns about coordination, oversight, and ethical risks in military messaging.
- GIOR reports that a transnational network of national conservative actors, backed in part by Russian influence operations, is promoting coordinated illiberal narratives that threaten democratic institutions across Europe and the U.S.
- Google’s Threat Analysis Group reports that in Q2 2025, it dismantled nearly 11,000 YouTube channels and other platform accounts tied to coordinated disinformation campaigns by state-linked actors from China, Russia, Iran, and others aiming to manipulate global narratives.

> TABLE OF CONTENTS
SOCIAL MEDIA PLATFORMS
- X Fails to Enforce EU Sanctions Against Russian State Media
STATE ACTORS
- A Megaphone for Power: How Imedi TV Undermines Georgia’s EU Path
- NoName057(16) Conducts Sustained DDoS Campaign Against Ukraine's Allies
- Manufacturing a Reality: The Kremlin's Influencer War in Occupied Ukraine
- The Propaganda in the Machine: China's AI Echoes State Narratives
- Fabricating a Call to Arms: The False Jordan "Attack Israel" Quote
AI RELATED ARTICLES
- Chasing Digital Ghosts While the Democratic Foundations Crumble
- Unlocking Generative AI’s Power in U.S. Military Influence Campaigns
- When AI Repeats the Lie Differently, People Believe It More
GENERAL REPORTS
- Google Disrupts Global State-Linked Disinformation Networks at Scale
- Disinformation Storm: How Platforms Amplify Climate Crisis Falsehoods
- National Conservatism’s Global Rise and the Weimar Warning
- Transformer-Based Model Elevates Deepfake Detection Standards
- Fighting on Two Fronts: Ukraine’s Volunteers and the Information War
- Australia and Japan Launch Joint Initiative Against Information Threats
TAKEAWAYS
DOWNLOAD SECTION

SOCIAL MEDIA PLATFORMS X Fails to Enforce EU Sanctions Against Russian State Media A recent report by Science Feedback and Alliance4Europe reveals that the social media platform X is systemically failing to comply with EU sanctions against Russian state media. Researchers identified hundreds of sanction-violating posts and formally reported a sample of 125 to X using its designated tool for illegal content. The platform's response was profoundly inadequate: only 57% of the reports received an acknowledgement, a breach of Digital Services Act (DSA) obligations. Furthermore, only a single post was removed. In all other cases, X claimed no violation of EU law was found, despite clear evidence to the contrary. The investigation concludes that X’s moderation mechanisms are either insufficiently equipped or the platform is unwilling to enforce sanction-related policies, posing a significant risk to the EU’s information environment. Source: Science Feedback, Alliance4Europe, Terroille, C., Nazari, S., Casandjian, E., 2025. Flagged and Ignored: Testing X’s Response to EU Sanction Violations. [online] Available at: https://strapi.wemove.eu/uploads/Flagged_and_Ignored_Testing_X_s_Response_to_EU_Sanction_Violations_1_872cbb7718.pdf [Back to Table of Contents] STATE ACTORS A Megaphone for Power: How Imedi TV Undermines Georgia’s EU Path An article from EUvsDisinfo details how Imedi TV, one of Georgia's most-watched television channels, operates as a powerful propaganda machine for the ruling Georgian Dream party. The outlet systematically promotes anti-Western narratives and Kremlin-style disinformation to undermine the country's European aspirations. According to the analysis, Imedi TV, owned by a businessman with alleged links to Russian elites, demonizes the political opposition and civil society. It also orchestrates smear campaigns against EU diplomats, including Ambassador Paweł Herczyński, accusing them of political interference and serving a "Deep State." By framing European institutions as corrupt and hostile, the channel functions as a key instrument in Georgia's democratic backsliding and its obstruction of a European future. Source: EUvsDisinfo, 2025. Imedi TV: The propaganda megaphone undermining Georgia’s EU aspirations. [online] Available at: https://euvsdisinfo.eu/imedi-tv-the-propaganda-megaphone-undermining-georgias-eu-aspirations/ [Back to Table of Contents] NoName057(16) Conducts Sustained DDoS Campaign Against Ukraine's Allies A new report from Recorded Future’s Insikt Group analyzes the pro-Russian hacktivist group NoName057(16), which has waged a large-scale distributed denial-of-service (DDoS) campaign against more than 3,700 entities over the last 13 months. The group, which emerged just after Russia’s 2022 invasion of Ukraine, primarily targets government and public-sector organizations in European nations that oppose the invasion. The analysis reveals a high operational tempo, averaging 50 unique targets daily, and a resilient multi-tiered command-and-control (C2) infrastructure.
Pattern-of-life analysis strongly suggests the operators work from within a Russian time zone. The report concludes that such hacktivist operations are a persistent feature of modern geopolitical conflict, requiring robust defensive strategies and situational awareness from at-risk organizations. Source: Recorded Future, Insikt Group, 2025. [online] Available at: https://go.recordedfuture.com/hubfs/reports/cta-2025-0722.pdf [Back to Table of Contents] Manufacturing a Reality: The Kremlin's Influencer War in Occupied Ukraine In occupied Ukrainian territories, the Kremlin is training a new generation of TikTok bloggers in specialized schools to ensure their content reflects official messaging. This tactic, detailed in a recent analysis by the DFRLab, is part of a sophisticated strategy to project an image of revival and prosperity in war-torn regions like Mariupol. Pro-Russian influencers create curated video tours of allegedly rebuilt cities, deliberately countering reports of widespread devastation. These campaigns exploit the information vacuum and the persuasive power of visual content to manufacture consent for the occupation. The ultimate goal is to create a powerful, alternative reality that erodes trust in Ukrainian and Western sources while cementing Russia's control. Source: DFRLab, 2025. Manufacturing reality: how pro-Russian TikTok accounts promote Donbass ‘revival’. [online] Available at: https://dfrlab.org/2025/07/21/manufacturing-reality-how-pro-russian-tiktok-accounts-promote-donbass-revival/ [Back to Table of Contents] The Propaganda in the Machine: China's AI Echoes State Narratives A NewsGuard audit of five leading Chinese-backed AI models found they failed to provide accurate information 60 percent of the time when prompted with false narratives promoted by Beijing. The analysis tested chatbots from firms like Baidu, Alibaba, and Tencent, revealing a systemic tendency to repeat pro-China propaganda or provide evasive non-answers. This behavior was consistent across both English and Mandarin, indicating the bias is built into the models' fundamental design. The audit raises significant concerns about the normalization of censorship and state propaganda as these AI systems see growing adoption by international universities and banks, posing a geopolitical risk by echoing the Chinese Communist Party's narratives to a global audience. Source: NewsGuard, Lin, C., and Sadeghi, M., 2025. Chinese AI Models: 60 Percent Fail Rate, Repeating Pro-China Claims. [online] Available at: https://www.newsguardtech.com/special-reports/chinese-ai-models-60-percent-fail-rate-pro-china-claims/ [Back to Table of Contents] Fabricating a Call to Arms: The False Jordan "Attack Israel" Quote In July 2025, a provocative quote claiming Jordan had declared it "a good time to attack Israel" began spreading across social media. The claim, amplified by pro-Iranian accounts on X, was entirely false, according to a fact-check by NewsGuard Reality Check. The article clarifies that a review of official statements from the Jordanian government and its state news agency shows no record of such a declaration. Instead of calling for war, Jordan's leaders have consistently pushed for a diplomatic solution, urging an immediate ceasefire and the delivery of humanitarian aid to Gaza. The incident highlights how disinformation is used to inflame regional tensions by fabricating bellicose statements from key diplomatic actors. Source: NewsGuard Reality Check, Wolf, M., 2025.
Concocted Jordan Statement Threatens to Inflame Regional Tensions. [online] Available at: https://www.newsguardrealitycheck.com/p/concocted-jordan-statement-threatens [Back to Table of Contents] AI RELATED ARTICLES Chasing Digital Ghosts While the Democratic Foundations Crumble A blog post by the Oxford Internet Institute (OII) and the University of Zurich warns that alarmist narratives about generative AI’s impact on elections risk diverting attention from deeper systemic threats to democratic integrity. The authors argue that mass political persuasion through generative AI (GenAI) tools such as ChatGPT and Gemini is far less effective than often portrayed, while long-standing structural weaknesses, such as poor electoral governance, campaign finance imbalances, voter suppression tactics, and declining press freedom, pose greater dangers. The piece highlights how focusing narrowly on GenAI can obscure other harms, including AI-enabled harassment targeting women and minorities, and may encourage excessive regulations that inadvertently chill political discourse. Moreover, overemphasizing AI threats could erode public trust in electoral processes, especially when perceptions of manipulation are amplified by media coverage. The authors caution that such narratives could damage voter confidence and democratic stability more than GenAI itself. Source: Oxford Internet Institute, Simon, F. and Altay, S., 2024. Consequences of a Skewed Discourse Around Generative AI and Elections. [online] Available at: https://www.oii.ox.ac.uk/news-events/consequences-of-a-skewed-discourse-around-generative-ai-and-elections/ [Back to Table of Contents] Unlocking Generative AI’s Power in U.S. Military Influence Campaigns In the face of accelerating competition from China and Russia, the U.S. military’s influence community finds itself unprepared to fully leverage one of the most transformative technologies of the moment: generative AI. A RAND Corporation research report, commissioned by the Irregular Warfare Technical Support Directorate, reveals that while generative AI could dramatically improve content production, audience targeting, and operational planning, its adoption across DoD influence units is hampered by fragmentation, inadequate funding, and a lack of strategic guidance. Despite promising capabilities such as sentiment analysis, audience segmentation, and planning support, AI remains a tool waiting for a coherent doctrine, structured acquisition pathways, and senior-level buy-in. RAND concludes that for the U.S. to maintain an edge in information operations, the DoD must shift from isolated pilot efforts to a unified, well-resourced, and adaptive AI implementation strategy across its influence enterprise. Source: RAND Corporation, Robinson, E.; Bicksler, B.; Mele, J.; et al., 2024. Accelerating the Integration of Generative Artificial Intelligence into Influence Activities: Findings and Recommendations for the Department of Defense. [online] Available at: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3100/RRA3157-1/RAND_RRA3157-1.pdf [Back to Table of Contents] When AI Repeats the Lie Differently, People Believe It More A peer-reviewed study published in PNAS Nexus examines how generative AI alters the dynamics of repetitive propaganda on social media by comparing traditional CopyPasta tactics to AI-paraphrased variants dubbed "AIPasta."
While prior campaigns relied on exact-message repetition to exploit the illusory truth effect, this research shows that AIPasta, which increases lexical variety while maintaining semantic similarity, may be more effective at enhancing perceptions of social consensus, especially among politically aligned and less familiar audiences (see the sketch at the end of this report). Although AIPasta did not significantly increase belief in false claims overall, it proved more persuasive among less familiar users in specific contexts. Crucially, AI detectors failed to identify AIPasta as synthetic, raising concerns about scale and detectability. The study highlights a critical evolution in information operations: generative AI can subtly shape opinion not by changing individual beliefs, but by simulating popular consensus around disinformation narratives. Source: PNAS Nexus, Dash, K. et al., 2025. From CopyPasta to AIPasta: How Generative AI Alters the Persuasive Dynamics of Repetition in Social Media Campaigns. [online] Available at: https://academic.oup.com/pnasnexus/article/4/7/pgaf207/8209914 [Back to Table of Contents] GENERAL REPORTS Google Disrupts Global State-Linked Disinformation Networks at Scale A quarterly bulletin from Google’s Threat Analysis Group reveals a sweeping crackdown on coordinated influence operations across YouTube and other platforms from April to June 2025. The largest operations were linked to the People’s Republic of China (PRC), accounting for over 7,700 terminated YouTube channels, followed by Russia with more than 2,500. These campaigns consistently pushed pro-state narratives in multiple languages while attacking rival states, democratic institutions, and regional critics. The Russian operations included consulting firms, state-sponsored entities, and media outlets like RT, and targeted audiences across Europe, Asia, and the Americas. Azerbaijan emerged as a significant actor, with over 1,000 channel terminations tied to anti-Armenian and anti-dissident content. Smaller campaigns linked to Iran, Turkey, Romania, Ghana, and Israel also exploited YouTube, Blogger, and ad networks to spread politically motivated messaging. The bulletin underscores the growing industrialization and geographic diversification of coordinated digital propaganda. Source: Google, Threat Analysis Group, 2025. TAG Bulletin: Q2 2025. [online] Available at: https://blog.google/threat-analysis-group/tag-bulletin-q2-2025/ [Back to Table of Contents] Disinformation Storm: How Platforms Amplify Climate Crisis Falsehoods A report by the Center for Countering Digital Hate (CCDH) exposes a massive failure across Meta, YouTube, and X in moderating false or misleading content about extreme weather events. Analyzing 300 high-engagement posts from the Online Deniers Dataset between April 2023 and April 2025, the study found these posts amassed over 221 million views while lacking substantive fact-checking labels or context. Wildfires and hurricanes dominated the narratives, often intertwined with conspiracy theories and political disinformation. Verification badges and monetization policies on all three platforms enhanced the reach and credibility of false claims. Notably, 88% of X posts, 73% on YouTube, and 64% on Meta came from verified accounts. YouTube was also found to recommend further disinformation in one-third of the analyzed cases. Across the platforms, misleading posts not only undermined public trust but also posed tangible risks during crises, misdirecting aid, threatening public safety, and fueling real-world violence.
Source: Center for Countering Digital Hate (CCDH), 2025. Extreme Weather Disinformation Report. [online] Available at: https://counterhate.com/research/extreme-weather-false-claims [Back to Table of Contents] National Conservatism’s Global Rise and the Weimar Warning A report from the Global Influence Operations Report (GIOR) examines the Global National Conservative Alliance (GNCA), a transnational coalition of far-right parties and influencers, arguing it echoes the Weimar-era conservative miscalculation that empowered Hitler. The report details how parties like Hungary’s Fidesz, France’s National Rally, and Germany’s AfD form a modern “respectable radicalism,” bolstered by foreign autocracies such as Russia and China via engineered feedback loops. Hungary, serving as the ideological hub, leverages institutions like the Mathias Corvinus Collegium to spread illiberal ideology across the EU and the U.S. The study outlines how regulatory capture, legal harassment, and disinformation campaigns erode democratic institutions in Hungary, the U.S., Germany, and Israel. Drawing parallels to 1930s Germany, the report warns that conservative elites repeatedly overestimate their ability to control authoritarian movements, underestimating the acceleration digital platforms provide. The GNCA thrives where democratic institutions fail to defend themselves. Source: Global Influence Operations Report, GIOR Staff, 2025. National Conservatism’s Weimar Echo (Series Part 3 of 3). [online] Available at: https://www.global-influence-ops.com/gior-analysis-weimar-echo-national-conservative/ [Back to Table of Contents] Transformer-Based Model Elevates Deepfake Detection Standards In a recent article by Tech Xplore, researchers from UC Riverside and Google unveiled a cutting-edge system to combat video-based disinformation. Their tool, UNITE (Universal Network for Identifying Tampered and synthEtic videos), detects manipulated content by analyzing entire video frames, extending beyond traditional face-based deepfake detection. UNITE leverages transformer-based architecture and innovations like “attention-diversity loss” to identify inconsistencies in spatial and temporal features, even when no human faces are present. The system is trained on diverse synthetic content using the SigLIP framework and is designed to detect forgeries ranging from simple face swaps to fully AI-generated videos. Presented at the 2025 CVPR conference, this model positions itself as a universal detector at a time when generative tools for fabricating realistic video content are increasingly accessible, presenting risks to public trust, democratic discourse, and institutional integrity. Source: Tech Xplore, David Danelski, 2025. AI system identifies fake videos beyond face swaps and altered speech. [online] Available at: https://techxplore.com/news/2025-07-scientists-tool-fake-videos.html [Back to Table of Contents] Fighting on Two Fronts: Ukraine’s Volunteers and Information War As drone strikes lit up Kyiv in early 2025, Steffen Berr documented the aftermath not as a journalist, but as a soon-to-be soldier, one of thousands of foreign volunteers driven to Ukraine by conviction and a refusal to accept disinformation. In a feature by Tech Xplore, Berr’s transition from fundraiser to fighter reveals how international volunteers counter Russian narratives through personal testimony. These frontline witnesses wield a kind of “lived credibility” that resists Moscow’s attempts to label them as extremists or NATO puppets.
The second half of the piece shifts to technology, introducing UNITE, a cutting-edge AI model co-developed by UC Riverside and Google. Built to detect tampered videos beyond facial recognition, UNITE is a response to the rapid evolution of generative disinformation. From human witnesses to universal detectors, the article highlights how truth is defended in both physical and digital arenas, each fighting the same war over perception. [Back to Table of Contents] Australia and Japan Launch Joint Initiative Against Information Threats As AI-generated disinformation proliferates in the Indo-Pacific, where over five billion people are exposed to rising digital threats, a new partnership between ASPI and JNI, announced on July 23, 2025, signals a joint resolve to push back. This agreement between two premier institutions, Australia’s research-driven ASPI and Japan’s field-focused JNI, builds a framework to detect, analyze, and respond to state-sponsored influence campaigns that increasingly blur the lines between war, diplomacy, and deception. By merging strengths in open-source intelligence and strategic communication, the partners aim to build regional capacity against hybrid threats, elevate cross-border information security, and shore up trust in democratic systems. The move reinforces the Australia–Japan security axis while acknowledging that today’s information conflicts demand not just national but multilateral, technology-enabled defenses. Source: Australian Strategic Policy Institute (ASPI), 2025. ASPI and JNI establish strategic partnership to counter state-sponsored information operations and hybrid threats in the Indo-Pacific. [online] Available at: https://www.aspi.org.au/news/aspi-and-jni-establish-strategic-partnership-to-counter-state-sponsored-information-operations-and-hybrid-threats-in-the-indo-pacific/ [Back to Table of Contents] TAKEAWAYS Recent developments indicate that generative AI will likely become central to both offensive and defensive operations in the information domain, as seen in deepfake detection tools and AI-driven propaganda systems. However, structural lags in institutional response, especially among democratic states, will likely continue. Unless governance, acquisition, and platform accountability mechanisms accelerate, the information space may increasingly tilt toward authoritarian exploitation. Expect escalating reliance on AI tools by both malign actors and defenders, with the balance shaped by agility more than intent. [Back to Table of Contents] [Download Report]
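To make the CopyPasta/AIPasta distinction discussed above concrete, here is a minimal, standard-library-only Python sketch of why exact-duplicate fingerprinting catches classic CopyPasta but misses AI-paraphrased variants whose wording changes while the word content largely survives. The example messages and the word-overlap proxy for semantic similarity are illustrative assumptions on our part; the PNAS Nexus study does not prescribe this implementation.

import hashlib
from difflib import SequenceMatcher

def fingerprint(text: str) -> str:
    """Exact-duplicate fingerprint: normalize whitespace and case, then hash."""
    norm = " ".join(text.lower().split())
    return hashlib.sha256(norm.encode()).hexdigest()

def surface_similarity(a: str, b: str) -> float:
    """Character-level similarity (1.0 means identical wording)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def content_overlap(a: str, b: str) -> float:
    """Crude meaning proxy: Jaccard overlap of the two word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

original  = "the election was stolen and the media is hiding it"
copypasta = "the election was stolen and the media is hiding it"
aipasta   = "media outlets are hiding that the election was stolen"

# CopyPasta: identical fingerprints, trivially detected as duplicates.
print(fingerprint(original) == fingerprint(copypasta))  # True
# AIPasta: the fingerprint match fails...
print(fingerprint(original) == fingerprint(aipasta))    # False
# ...surface similarity drops, yet word-set overlap remains substantial,
# which mirrors the "lexical variety with semantic similarity" the study describes.
print(round(surface_similarity(original, aipasta), 2))
print(round(content_overlap(original, aipasta), 2))     # 0.5 in this example

Production pipelines would replace the word-overlap proxy with embedding-based semantic similarity, but the asymmetry the sketch demonstrates, fingerprints collapsing while meaning persists, is the core of the detectability concern the study raises.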
- Weekly Report: Cyber-based influence campaigns 7th - 13th July 2025
[Listen to the Podcast] [Introduction] During the 7th to the 13th of July 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report provides a summary of the key events we consider most significant. Across Europe and beyond, the war for narrative control escalates. Russian influence efforts, from weaponized religious networks to disinformation about NATO, intensify alongside Iran’s targeting of dissident voices and Qatar’s courtship of U.S. media. This week's review reveals a trend among state actors to innovate within constrained environments, exploiting gray zones in law, information, and civil society. The strategic stakes are no longer limited to borders, but extend to belief systems and electoral legitimacy.

[Highlights]
- According to an article by The Record, France's intelligence chief warned that Russian operations now blend physical sabotage with online disinformation to deliberately "disrupt our information space."
- As reported by NewsGuard's Reality Check, despite being banned in the EU, Russian state media RT is being promoted in Italy through a campaign involving billboards and public screenings of pro-Kremlin documentaries.
- A Global Influence Operations Report (GIOR) analysis describes several cases in which Russian Orthodox Church (ROC) clergy were found in possession of dossiers on prominent individuals, with the information likely intended for blackmail or recruitment purposes by Russian intelligence services.
- According to an article by CyberNews, the pro-Iranian hacktivist group Handala justified an attack on Iran International, allegedly exfiltrating personal data of over 70,000 supporters and staff, branding the independent outlet a "propaganda weapon" for Mossad.
- A report by NewsGuard notes that Tucker Carlson did not challenge any of Pezeshkian's false claims, which were then widely amplified by Iranian, Russian, and Chinese state media.
- The Washington Examiner reports that following the 2024 election, Qatari foreign agents shifted their media outreach, with communications to conservative outlets surging from 10% to over 50% of their total.

> TABLE OF CONTENTS
STATE ACTORS
[Russia]
- Coffins by the Eiffel Tower: Russia’s War of Influence Escalates
- Russian Propaganda Finds New Life on Italian Streets
- The Kolbasna Arsenal: A Flashpoint in Russia’s War on Moldova
- The Kremlin's Holy Spies: Russia Weaponizes Its Orthodox Church
- Russia-Linked Network Spreads Biolab Hoax in Armenia
- Russia’s Faux Peace Rhetoric Paints Europe as Aggressor
[Iran]
- Hacktivist Group Handala Threatens Journalists After Alleged Breach
- Assassination Plots on UK Soil Reveal Escalating Threat from Iran
- How a Carlson Interview Became a 'Propaganda Victory' for Tehran
GENERAL REPORTS
- ISD Report Details Coordinated Disinformation in German Election
- Qatar's Conservative Media Influence Operations
TAKEAWAYS
DOWNLOAD SECTION

STATE ACTORS [Russia] Coffins by the Eiffel Tower: Russia’s War of Influence Escalates An operation leaving coffins draped in the French flag near the Eiffel Tower exemplifies the evolving nature of Russia's "war of influence," according to France's intelligence chief, Nicolas Lerner. In a report from The Record, Lerner warned that these are not amateur acts but calculated efforts to disrupt Europe's information space. His assessment places such incidents within a broader campaign of Russian-linked hybrid threats, including cyberattacks on NATO-related bodies and an arson attack in London directed by the Wagner Group.
This synthesis of physical sabotage and digital disruption, Lerner asserts, constitutes a medium- and long-term "existential threat" to European security and democratic values. Source: The Record, Antoniuk, D., 2025. French intel chief warns of evolving Russian hybrid operations, ‘existential threat’ to Europe. [online] Available at: https://therecord.media/french-intelligence-chief-russia-threat [Back to Table of Contents] Russian Propaganda Finds New Life on Italian Streets A billboard in Bologna proclaims, “They ban the truth, we show it,” signaling the quiet return of Russia’s state media RT to EU territory, despite an official ban. In defiance of a 2022 EU regulation, a wave of RT-aligned documentaries has begun screening across Italy, disseminated through public venues and promoted by pro-Kremlin figures operating from occupied Donetsk. Films such as Children of the Donbass and I Am Alive advance Moscow’s narrative on the Ukraine conflict, often under the guise of peace activism or alternative media. While organizers deny formal ties to RT, pro-democracy think tanks like ISD point to a coordinated effort amplified by social media and Telegram channels, such as Donbass Italia. As EU leaders call for investigations into the campaign’s funding, public protests have already derailed several screenings. Yet, with over 13 million online engagements this year, the campaign illustrates how disinformation actors adapt tactics to penetrate democratic societies, even under regulatory constraints. Source: NewsGuard, Pozzi, G. & Badilini, S., 2025. ‘They Ban the Truth, We Show It’: Russia’s RT Makes a Comeback in Italy. [online] Available at: https://www.newsguardrealitycheck.com/p/they-ban-the-truth-we-show-it-russias [Back to Table of Contents] The Kolbasna Arsenal: A Flashpoint in Russia’s War on Moldova Russia is intensifying hybrid operations aimed at destabilizing Moldova ahead of its September 2025 parliamentary elections, according to multiple sources cited by the Jamestown Foundation. The Kremlin is deploying a range of non-military tactics, disinformation, propaganda, bribery, and influence operations to weaken Moldova’s pro-EU government and potentially install a pro-Russian regime. Officials fear Moscow may exploit separatist regions like Transnistria and Gagauzia to launch provocations or false-flag operations, especially given Moldova’s strategic location bordering Ukraine and Romania. Russian analysts have expressed concern over the vast Kolbasna ammunition depot in Transnistria potentially falling into Ukrainian hands, amplifying geopolitical tensions. Moldova has responded by restoring bomb shelters and strengthening internal defenses. While experts agree that large-scale Russian troop deployments are unlikely due to logistical constraints, they warn that Moscow could foment internal unrest to create a conflict similar to the one in Donbas. The Kremlin’s strategy appears modeled on past influence campaigns and may escalate if pro-Russian parties lose electoral ground. Source: Jamestown Foundation (Eurasia Daily Monitor), Kirillova, K., 2025. Moscow Seeks to Retain Influence Over Moldova. [online] Available at: https://jamestown.org/program/moscow-seeks-to-retain-influence-over-moldova/ [Back to Table of Contents] The Kremlin's Holy Spies: Russia Weaponizes Its Orthodox Church A Global Influence Operations Report (GIOR) analysis concludes that the Russian Orthodox Church (ROC) has been weaponized into a central instrument of Kremlin foreign policy.
Operating beyond its religious mandate, the ROC disseminates pro-Kremlin propaganda, champions "traditional values" to build conservative networks abroad, and coordinates with Russian state agencies to undermine Western integration in nations like Estonia and Bulgaria. The report details how the Church's infrastructure is also used for intelligence operations, including recruiting agents, gathering blackmail material, and placing facilities near sensitive infrastructure. This fusion of spiritual authority and state power enables the Kremlin to conduct multifaceted, hybrid operations under the guise of religious activity. Source: GIOR Analysis, 2025. The Russian Orthodox Church – Putin Weaponizes Religion. [online] Available at: https://www.global-influence-ops.com/gior-analysis-the-russian-orthodox-church-putin-weaponizes-religion/ [Back to Table of Contents] Russia-Linked Network Spreads Biolab Hoax in Armenia A Kremlin-linked disinformation campaign has targeted Armenia with a false narrative alleging that U.S.-backed biolabs are conducting military experiments on Armenian civilians. According to DFRLab, the campaign, led by the Russian Foundation to Battle Injustice (R-FBI), disseminated content in multiple languages via a coordinated network of fringe websites and Telegram channels. Originating from an English-language article on [londontimes.live], the hoax accused Armenian officials of colluding with Western pharmaceutical companies. This was followed by further fabricated reports suggesting U.S. biolab operations were hidden behind international conferences. The campaign involved media laundering by a 56-website network linked to RT, with synchronized publishing patterns and reuse of RT-hosted assets. Amplification occurred across platforms in at least seven languages, reaching audiences in Europe, the U.S., Iran, and Azerbaijan. Researchers connect the operation to entities linked to Yevgeny Prigozhin’s influence apparatus, highlighting Russia’s continued use of biolab conspiracies as tools to erode public trust and undermine pro-Western governance in Armenia. [Back to Table of Contents] Russia’s Faux Peace Rhetoric Paints Europe as Aggressor The Kremlin is escalating its disinformation campaign by portraying Europe as an aggressor to justify Russia's war aims and undermine Western support for Ukraine, according to EUvsDisinfo. This narrative shift, following the NATO summit, reframes European defense initiatives as "militarization" and depicts France and Germany as primary threats. Despite claiming to seek peace, pro-Kremlin outlets saturate media with war rhetoric, suggesting NATO and EU states are preparing attacks on Russia. These efforts coincide with absurd claims about Russia’s humanitarianism, often invoking genocide narratives and alleged Western oppression. The Kremlin simultaneously fabricates reports of human rights abuses in Europe while suppressing civil society at home. Additionally, recurring falsehoods such as NATO’s alleged plans to occupy Odesa and the proliferation of Ukrainian arms on black markets continue to be recycled to erode public trust. These themes reflect a broader strategy: presenting Russia as a besieged moral power while deflecting attention from its aggression and repression.
Source: EUvsDisinfo, 2025. The Kremlin’s tale of the European bogeyman. [online] Available at: https://euvsdisinfo.eu/the-kremlins-tale-of-the-european-bogeyman/ [Back to Table of Contents] [Iran] Hacktivist Group Handala Threatens Journalists After Alleged Breach Cybernews highlights that on Tuesday, the pro-Iranian hacktivist group Handala declared “Operation Handala” a success, claiming it had infiltrated the entire communications infrastructure of Iran International, one of the few independent news outlets serving Iranians. From its dark web blog and short-lived Telegram channels, the group issued a barrage of threats, targeting journalists by name, including UK-based reporter Mojtaba Pourmohsen, and warning viewers that their identities were logged and under surveillance. While Handala offered no proof of the breach, it claimed to possess a vast trove of personal data and framed the attack as retaliation against what it called “media-based espionage.” Analysts trace the group’s roots to Iran’s expanding hacktivist ecosystem, now swelling in the wake of regional tensions. This marked shift, from targeting foreign entities to striking Iranian dissident media, signals an escalation in Tehran-linked cyber repression and a troubling willingness to weaponize doxxing and digital fear against perceived domestic adversaries abroad. Source: Cybernews, Schappert, S., 2025. Iranian hacktivists target Iran's only independent news outlet, threaten London journalists. [online] Available at: https://cybernews.com/news/iran-international-cyberattack-handala-hacktivists-threaten-journalists/ [Back to Table of Contents] Assassination Plots on UK Soil Reveal Escalating Threat from Iran A UK parliamentary report by the Intelligence and Security Committee (ISC) concludes that Iran poses a persistent, wide-ranging, and unpredictable threat to UK national security. Drawing on evidence from 2021–2023, the report outlines Iran’s use of asymmetric tactics, including espionage, cyber operations, interference, and support for militant proxies. Iran's intelligence services, shaped by internal factionalism and directed by the Supreme Leader, were found to be increasingly active in targeting dissidents in the UK, with at least 15 assassination or kidnapping attempts since 2022. Cyber threats remain a key concern, particularly given Iran’s aggressive posture and low-cost, high-impact capabilities. While the UK has strengthened its legislative tools and intelligence partnerships, the report criticizes the government’s crisis-driven Iran policy, its lack of long-term strategic vision, and inconsistent resourcing. It calls for reform of the Official Secrets Act, a stronger cyber deterrence posture, and urgent consideration of proscribing the IRGC. Source: UK Intelligence and Security Committee of Parliament, 2025. Press Notice: Intelligence and Security Committee of Parliament [Report on Iran]. [online] Available at: https://isc.independent.gov.uk/wp-content/uploads/2025/07/Intelligence-and-Security-Committee-of-Parliament-Iran-Press-Notice.pdf [Back to Table of Contents] How a Carlson Interview Became a 'Propaganda Victory' for Tehran A report by NewsGuard analyzes how Iranian state media leveraged a high-profile interview between President Masoud Pezeshkian and U.S. commentator Tucker Carlson to advance disinformation. 
The analysis details several false claims made by Pezeshkian that Carlson left unchallenged, including a denial that Iran had issued a fatwa against Donald Trump, just weeks after two were publicly announced. Other falsehoods involved Iran's cooperation with the IAEA and the meaning of the "Death to America" slogan. Iranian, Russian, and Chinese state media subsequently amplified these claims. The interview is framed as a strategic "propaganda victory" for Tehran, successfully using a Western media platform to launder its narratives and bolster its legitimacy. Source: NewsGuard, Sadeghi, M., 2025. Iranian President’s Interview with Tucker Carlson Generates False Claims in Iranian State Media. [online] Available at: https://www.newsguardrealitycheck.com/p/iranian-presidents-interview-with [Back to Table of Contents] GENERAL REPORTS ISD Report Details Coordinated Disinformation in German Election A new report by ISD and partners analyzes foreign information manipulation and interference (FIMI) during Germany’s 2025 federal election, revealing a high volume of coordinated disinformation campaigns, primarily orchestrated by Russia-aligned actors. These efforts exploited social fault lines around migration, the economy, and national identity, using AI-generated content, impersonation of institutions, and bot networks to manipulate public discourse. Notable operations included Storm-1516, Operation Overload, and Doppelgänger. Domestic actors, including the far-right AfD party, also adopted similar tactics, further blurring the boundary between legitimate and deceptive political communication. Despite platform efforts, enforcement gaps allowed sanctioned outlets, such as RT DE, to reach German audiences. The report advocates for a whole-of-society approach, encompassing regulatory reform and media literacy, to foster resilience against such threats. It also highlights deficiencies in policy enforcement, platform accountability, and long-term strategy across government and civil institutions. Source: ISD (Institute for Strategic Dialogue) & contributing organisations, n.d. Country Report: Assessment of Foreign Information Manipulation and Interference (FIMI) in the 2025 German Federal Election. [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/07/Germany-CERA_Final.pdf [Back to Table of Contents] Qatar's Conservative Media Influence Operations As the new U.S. Attorney General, a former lobbyist for Qatar, moves to limit the enforcement of foreign influence laws, Qatar is simultaneously ramping up its campaign to win over conservative American media. A Washington Examiner investigation found that since the 2024 election, the Gulf state's registered agents have dramatically increased their outreach to right-wing outlets, securing friendly interviews with figures like Tucker Carlson and successfully pitching pro-Qatar stories. This surge in overt influence, combined with a new Justice Department policy that critics say grants "greater license to engage in unregistered foreign lobbying," highlights a confluence of events that could significantly expand Qatar's influence in Washington. Source: Washington Examiner, Schmad, R., 2025. Conservative media targeted by Qatari foreign influence operations. [online] Available at: https://www.washingtonexaminer.com/news/investigations/3414894/conservative-media-targeted-qatari-foreign-influence-operations/ [Back to Table of Contents] TAKEAWAYS This week's primary trend is the increasing integration of hybrid threats.
Operations in France, Moldova, and Germany demonstrate that disinformation, cyber-attacks, and physical acts are not separate tactics but often components of a single, coordinated strategy. A key question for policymakers is whether current defences, often siloed by agency or threat type, are adequate to counter such combined attacks. The ISC's critique of the UK's crisis-driven Iran policy illustrates the point, underscoring the urgent need for a more unified, long-term strategic vision. [Back to Table of Contents] [Download Report]
- Weekly Report: Cyber-based influence campaigns 30th June - 6th July 2025
[Listen to the Podcast] [ Report Highlights ] RAND highlights that Russia is aggressively expanding its influence in Armenia, Georgia, and Moldova, exploiting their uncertain EU prospects to deepen division and control. With Western support weakening, these countries are increasingly vulnerable to Moscow’s strategic pressure and manipulation. The Institute for the Study of War reports that Russia employs cognitive warfare to manipulate how foreign audiences perceive reality, using disinformation, selective truths, and psychological tactics to shape decision-making and undermine adversaries without direct conflict. Following an in-depth investigation by ABC News and the Organized Crime and Corruption Reporting Project (OCCRP), Australia has sanctioned Pravfond, a Russian-backed group linked to disinformation and espionage. This move highlights Australia’s growing efforts to counter covert foreign influence within its borders. The Telegraph reports that the United Kingdom has been targeted by a long-running covert influence campaign by Iran, involving disinformation, ideological manipulation, cyber operations, regime-linked NGOs, and hidden financial networks aimed at destabilizing British society. The European Union announced it has renewed its sweeping sanctions against Russia until January 2026 in response to the Kremlin’s ongoing aggression in Ukraine. > TABLE OF CONTENTS < DIGITAL HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] The Waiting Zone: Strategic Neglect on Europe’s Eastern Edge In the Shadow of Matryoshka: Disinformation Ahead of the EU-Summit Russia's Cognitive Warfare: A Strategic Threat [The War in Ukraine] Once Upon a Time in the Classroom: Russia’s New History Textbooks A Question of Faith: False Claims Against Ukraine From Russia with Funds: Australia Sanctions Pravfond [Iran] Covered Hostilities: Iran’s Disinformation War in the UK GENERAL REPORTS NewsGuard Launches Reality Gap Index FRAMEWORKS TO COUNTER DISINFORMATION EU Sanctions Fight Russian Disinformation UK launches Foreign Influence Registration Scheme EU's Push for Accountability to Counter Disinformation GLOSSARY DOWNLOAD SECTION STATE ACTORS [Russia] The Waiting Zone: Strategic Neglect on Europe’s Eastern Edge RAND reports that Russia is attempting to exert influence over Armenia, Georgia, and Moldova, three states without a clear EU accession timeline. In Moldova, it spreads disinformation, interferes in elections, and stirs unrest in the separatist region of Transnistria. Georgia is burdened by an authoritarian government that increasingly leans toward Russia, suppresses the opposition, and undermines civil society. In Armenia, Moscow uses the aftermath of the 2023 Nagorno-Karabakh crisis to retain a military presence and increase pressure through the threat of further Azerbaijani attacks. These countries are especially vulnerable because their path to the EU remains uncertain, and key Western actors, such as USAID, NED, and Radio Free Europe, have scaled back or suspended their support. This has created a vacuum that Russia exploits through disinformation and coercion. Moscow has nothing to offer: no functioning economy, no credible security guarantees, and no vision for development. The EU could respond with measures including granting candidacy status to Armenia, accelerating integration steps for Moldova, and taking a clear stance against democratic backsliding in Georgia. However, internal divisions, particularly vetoes from Hungary or Slovakia, hinder unified action.
That’s why national responses matter. Countries like the United Kingdom, though no longer part of the EU, retain global diplomatic networks, financial leverage, and strategic partnerships in the region. With targeted sanctions, media support, and financial aid, they could help strengthen pro-European forces. “History has also shown that any compromise, appeasement, or apathy in the response to Vladimir Putin's strategic ambitions on the continent only encourages further aggression.” Source: RAND Corporation, Kennedy, J. & Dunbar, W., 2025. Countering Russian Influence: Support for Armenia, Georgia, and Moldova in the 'Waiting Room of the West'. [online] Available at: https://www.rand.org/pubs/commentary/2025/07/countering-russian-influence-support-for-armenia-georgia.html [Back to Table of Contents] In the Shadow of Matryoshka: Disinformation Ahead of the EU-Summit In recent coverage, EUvsDisinfo has traced the latest surge in Russian information warfare surrounding the 2025 EU-Moldova Summit back to the “Matryoshka network”, which has been active since 2023. The disinformation network operates in two stages: First, it produces fake content, including manipulated videos, fabricated news stories, and counterfeit imitations of Western media content. In the second phase, a swarm of affiliated accounts (“quoters”) amplifies the material by replying to posts from public figures or media outlets, often tagging them to draw attention. A central target is Moldova's President Maia Sandu, who is attacked through fabricated scandals and conspiracy theories, particularly in the lead-up to national elections and the EU-Moldova Summit in Chișinău. EU countries, such as Estonia, Romania, and France, have also been drawn into the campaign through fabricated content designed to suggest diplomatic conflicts and security threats. Source: EUvsDisinfo, 2025. Matryoshka’s Moldovan Manipulation. [online] Available at: https://euvsdisinfo.eu/matryoshkas-moldovan-manipulation/ [Back to Table of Contents] Russia's Cognitive Warfare: A Strategic Threat A recent study by the Institute for the Study of War (ISW) examines the nature, purpose, and methods of cognitive warfare as a central threat to U.S. national security. Cognitive warfare is defined as the deliberate manipulation of an opponent’s thinking, decision-making, and behavior to achieve strategic objectives without resorting to direct military force. Russia, China, Iran, and North Korea use this method systematically against the United States and its allies, with Russia identified as the leading actor and a model for others. Russia’s approach goes far beyond traditional disinformation. It relies on half-truths and selective messaging to construct a false sense of reality. These narratives are disseminated through media outlets, social networks, diplomatic channels, international institutions, and individual influencers. The information campaign is reinforced by physical actions such as cyberattacks, sabotage, and military posturing. Russia’s operations are long-term, and narratives are adapted, paused, and reactivated over the years. The aim is to influence foreign decision-making in Russia’s favor by fostering confusion, passivity, or distorted perceptions of threats. Moscow employs cognitive warfare to bridge the gap between its ambitions and its limited military and economic capabilities. The authors warn that simply debunking false claims is insufficient.
What matters is identifying and rejecting the underlying assumptions that the Kremlin seeks to implant in Western thinking. Only by countering these embedded narratives can cognitive warfare be effectively neutralized. This is not a war of weapons, but a war over perception and reasoning, and it is already underway. Source: ISW, Bugayova, N. & Stepanenko, K., 2025. A Primer on Russian Cognitive Warfare. [online] Available at: https://www.understandingwar.org/sites/default/files/ISW%20CW%20Russian%20Cognitive%20Warfare%2C%20June%2030%2C%202025%20PDF%20FINAL_0.pdf [Back to Table of Contents] [The War in Ukraine] Once Upon a Time in the Classroom: Russia’s New History Textbooks According to a report by EU vs. Disinfo, Russia’s newly issued state-approved history textbooks for grades 10 and 11 represent a significant escalation in the Kremlin’s disinformation campaign. Written in part by Putin’s aide Vladimir Medinsky, these books include entire chapters on the still-ongoing war in Ukraine, which is framed not as aggression but as a noble “special military operation.” Far from offering historical analysis, the textbooks promote propaganda, recasting the invasion as a civilizational defense and portraying Ukraine as both a threat and an artificial construct. Emotional language, conspiracy theories, and aggressive revisionism serve to indoctrinate students with a distorted, state-approved worldview. Disinformation is at the heart of this educational overhaul. The books glorify war, create a cult of heroism around known war criminals, and push the narrative that Russia is under constant threat from the West. They claim that NATO and the U.S. orchestrated everything from the fall of Yugoslavia to the Euromaidan revolution in Ukraine, with the aim of “dismembering” Russia. These narratives are presented not only as historical facts but also as urgent truths for young Russians to internalize and reflect upon. The language urges students to “draw their conclusions,” while steering them firmly toward Kremlin-approved ideology. Perhaps most striking is the erasure of Ukrainian identity itself. Ukrainians are repeatedly referred to as “Little Russians,” and their statehood is depicted as the product of Western manipulation. Ukraine is presented as a puppet of hostile powers, with its nationalism portrayed as a manufactured tool to harm Russia. Source: EUvsDisinfo, 2025. Putin’s final chapter: how war became Russia’s national curriculum. [online] Available at: https://euvsdisinfo.eu/putins-final-chapter-how-war-became-russias-national-curriculum/ [Back to Table of Contents] A Question of Faith: False Claims Against Ukraine DisinfoWatch has identified a false claim made by U.S. Representative Anna Paulina Luna, who stated on X that the Ukrainian government under Zelensky has banned the Orthodox Church entirely. She invoked this Kremlin-aligned disinformation narrative to argue for ending U.S. military aid to Ukraine. In reality, no such blanket ban exists. The law, enacted in 2024 and titled “On the Protection of the Constitutional Order in the Field of Activities of Religious Organizations,” allows the government to restrict religious groups affiliated with aggressor states. It is grounded in national security concerns, not in religious suppression. The measure has been widely understood to apply to the Russian Orthodox Church (ROC), which has played a well-documented role in promoting Kremlin propaganda, legitimizing the war against Ukraine, and spreading anti-Western conspiracy narratives.
Source: DisinfoWatch, 2025. U.S. Republican Lawmaker Echoes False Russian Claim About Orthodox Church Ban in Ukraine. [online] Available at: https://disinfowatch.org/disinfo/u-s-republican-lawmaker-echoes-false-russian-claim-about-orthodox-church-ban-in-ukraine/ [Back to Table of Contents] From Russia with Funds: Australia Sanctions Pravfond According to an article by ABC News, Australia has sanctioned Pravfond, a Russian-backed organization accused of supporting disinformation and espionage. Australia’s measures followed an investigation by ABC and the Organized Crime and Corruption Reporting Project (OCCRP). Pravfond, officially the "Foundation to Support and Defend the Rights of Compatriots Living Abroad," has funded the legal defenses of high-profile figures such as alleged pro-Putin propagandist Simeon Boikov ("Aussie Cossack"), Russian arms dealer Viktor Bout, and convicted assassin Vadim Krasikov. The investigation uncovered thousands of internal emails revealing Pravfond’s direct financial and political support for Boikov, who has promoted Russian propaganda while sheltering in Sydney’s Russian consulate. Authorities say Pravfond operates under the guise of humanitarian aid while enabling Russia’s influence operations abroad, including disinformation campaigns linked to the Ukraine war. The Australian government’s sanctions freeze the organization’s assets and bar any financial dealings, citing its role in manipulating public opinion in favor of Russia’s illegal invasion. Despite these findings, Pravfond and Russian officials have dismissed the claims as “fake stories,” framing the organization as a legitimate human rights group unfairly targeted by Western governments. Sources: ABC News, Nicholls, S., 2025. Russia-backed fund Pravfond, which bankrolled 'Aussie Cossack', was sanctioned by Australia over anti-Ukraine propaganda. [online] Available at: https://www.abc.net.au/news/2025-07-02/australia-sanctions-pravfond-over-anti-ukraine-propaganda/105481074 OCCRP, Lozovsky, I. & Laine, M., 2025. Russian Foundation, Aimed at Helping ‘Compatriots’ Abroad, Supports Spies, Criminals, and Propagandists. [online] Available at: https://www.occrp.org/en/project/dear-compatriots/russian-foundation-aimed-at-helping-compatriots-abroad-supports-spies-criminals-and-propagandists [Back to Table of Contents] [Iran] Covered Hostilities: Iran’s Disinformation War in the UK The Telegraph reports that the United Kingdom has long been the target of a covert influence campaign by the Islamic Republic of Iran. This shadow war includes disinformation, ideological manipulation, cyber operations, regime-linked NGOs, and concealed financial networks. A central propaganda tool was the state-controlled broadcaster Press TV, whose UK license was revoked in 2012. Nevertheless, it continues to spread anti-British content online. Programmes like Palestine Declassified have glorified Hamas’s armed resistance. In recent years, Iran has increasingly shifted its efforts online. Between 2013 and 2014, Iranian-linked actors ran a Facebook page promoting separatist views ahead of the Scottish independence referendum, mirroring Russian disinformation strategies. Similar Iranian activity reappeared during the 2021 Scottish parliamentary elections.
These campaigns are reinforced by regime-affiliated organisations such as the Islamic Human Rights Commission (IHRC), which organises annual Quds Day rallies and has reportedly supported Palestine Action, a group involved in sabotage against Israeli-linked sites in the UK. Despite documented ties to Tehran, the IHRC remains active. Iran’s financial networks also play a key role. Bank Melli and Bank Saderat, both linked to the financing of Hamas, Islamic Jihad, and Kataib Hezbollah, still maintain offices in London’s financial district. Although officials such as MI5’s director, along with senior politicians, have acknowledged the threat, concrete responses remain limited. Source: The Telegraph, Ramani, S., 2025. The Iranian Threat Already Inside Britain. [online] Available at: https://www.telegraph.co.uk/news/2025/06/30/iranian-threat-already-inside-britain/ [Back to Table of Contents] GENERAL REPORTS NewsGuard Launches Reality Gap Index On July 1, 2025, NewsGuard introduced the Reality Gap Index, the first ongoing metric designed to track how effectively false claims circulating online are penetrating American public belief. Developed in collaboration with YouGov, the Index surveys a representative sample of 1,000 Americans each month to assess belief in three of the most widespread false claims identified through NewsGuard’s False Claim Fingerprints database. The June 2025 survey found that 49% of Americans believed at least one of the month’s top false claims. The selected false claims for June included widely debunked narratives: that bricks were planted to arm protesters during anti-deportation demonstrations in Los Angeles, that U.S. senators spent $800,000 of taxpayer money on hotels during a Ukraine trip, and that white South Africans are being systematically killed in a so-called "white genocide." Only 7% of respondents were able to identify all three statements as false, while 74% were unsure about at least one of them. The Reality Gap Index will be updated monthly and published in NewsGuard’s Reality Check newsletter. NewsGuard co-CEO Steven Brill emphasized that while misinformation efforts are constantly tracked, the new index uniquely measures the extent to which these falsehoods influence public opinion. Source: NewsGuard, 2025. NewsGuard Launches ‘Reality Gap Index’ to Track How False Claims Are Penetrating Through to the American Mainstream. [online] Available at: https://www.newsguardtech.com/press/reality-gap-index/ [Back to Table of Contents] FRAMEWORKS TO COUNTER DISINFORMATION EU Sanctions Fight Russian Disinformation The European Union announced it has renewed its sweeping sanctions against Russia until January 2026 in response to the Kremlin’s ongoing aggression in Ukraine. Originally imposed in 2014 and expanded significantly since the 2022 invasion, these measures target key sectors, including trade, finance, energy, and technology. Crucially, they also include bans on Russian state-backed media outlets known for spreading disinformation, as well as measures to prevent circumvention of sanctions. Russian disinformation has played a central role in Russia’s war effort, not only justifying the invasion but also seeking to destabilize European unity and public opinion. The EU recognized that this information warfare violates the principles of international law just as severely as military aggression.
To counter this, sanctions are being used not only to restrict Russia’s economic capacity but also to silence channels that propagate state-sponsored lies and conspiracy theories. Source: Council of the European Union, 2025. Russia’s war of aggression against Ukraine: Council extends economic restrictive measures for a further 6 months. [online] Available at: https://www.consilium.europa.eu/en/press/press-releases/2025/06/30/russia-s-war-of-aggression-against-ukraine-council-extends-economic-restrictive-measures-for-a-further-6-months/ [Back to Table of Contents] UK launches Foreign Influence Registration Scheme According to GOV.UK, on July 1, 2025, the UK officially launched the Foreign Influence Registration Scheme (FIRS) as part of its Plan for Change, aiming to protect national security and democratic integrity from covert foreign interference. Introduced under the National Security Act 2023, FIRS creates transparency around political activities conducted on behalf of foreign powers, particularly those of adversarial states such as Iran and Russia, which have been placed under the scheme’s most stringent “enhanced tier.” FIRS is a two-tier scheme: a political tier, which requires the registration of political influence activities such as lobbying and communications, and an enhanced tier that mandates strict reporting for activities directed by hostile states, including their military and intelligence services. Failure to register can result in criminal prosecution, although registration itself does not imply illegitimacy or the need to cease the activity. The UK government emphasized that while legitimate foreign engagement is welcome, covert manipulation of the political system will not be tolerated. To ensure the scheme is proportionate, FIRS includes exemptions, including for recognized news publishers, legal professionals acting during legal proceedings or providing legal advice, diplomats and their families, and arrangements involving the UK government. Source: UK Government, Jarvis, D., 2025. UK launches Foreign Influence Registration Scheme. [online] Available at: https://www.gov.uk/government/news/uk-launches-foreign-influence-registration-scheme [Back to Table of Contents] EU's Push for Accountability to Counter Disinformation A report by Tech Policy Press states that Europe’s Code of Conduct on Disinformation officially came into effect on July 1, 2025, as part of the Digital Services Act (DSA). Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are now required to meet stricter transparency and auditing obligations. This includes annual independent audits to ensure that disinformation risks are adequately addressed. Full compliance with the Code is considered a key risk mitigation tool and a benchmark for DSA compliance. Through these measures, the EU holds platforms more accountable in order to create a safer and more transparent online environment. Criticism primarily comes from the United States, especially from MAGA-aligned politicians and tech companies. They accuse the EU of censorship, fearing that the Code and DSA could restrict freedom of expression and establish global standards for content moderation. The concern is that content might be systematically removed or suppressed, leading to increased monitoring and control of speech. The European Commission firmly rejects these accusations.
It emphasizes that the Code is not a tool for censorship, but a framework that promotes transparency, fairness, and safety online while upholding fundamental rights, including freedom of expression. The focus is on systemic accountability of platforms rather than removing individual pieces of content. In particular, the aim is to combat opaque recommendation algorithms and advertising networks that amplify polarizing and manipulated content. The Code was initially voluntary, and compliance with it remains so; the DSA’s obligations, by contrast, are legally binding, and adherence to the Code now counts as a risk mitigation measure that is taken into consideration during audits. Experts warn that without clear audit frameworks and transparent data access, the effectiveness of these measures will suffer. Overall, the pressure on platforms to take responsibility is increasing, as the EU strives to balance security with freedom of expression. Source: Tech Policy Press, Jahangir, R., 2025. EU Disinformation Code Takes Effect Amid Censorship Claims and Trade Tensions. [online] Available at: https://www.techpolicy.press/eu-disinformation-code-takes-effect-amid-censorship-claims-and-trade-tensions/ [Back to Table of Contents] [Download Report]
- Weekly Report: Cyber-based influence campaigns 23rd - 29th June 2025
[Listen to the Podcast] [ Report Highlights ] CheckFirst's third report, dated June 26, 2025, reveals how the Russian disinformation campaign Operation Overload specifically targets six countries: France, Germany, Moldova, Poland, Ukraine, and the United States. By flooding media outlets and fact-checkers with targeted false information, the campaign aims to overwhelm and paralyze their efforts. The Robert Lansing Institute reports that Russia orchestrated a failed coup attempt in Serbia using disinformation, paramilitary networks, and religious influencers to destabilize the country and obstruct its pro-Western trajectory. The UK Defence Journal reports that dozens of pro-Scottish independence accounts on X, believed to be part of an Iranian disinformation campaign to weaken the UK, went dark simultaneously after Israeli airstrikes disrupted Iranian cyber infrastructure, exposing a direct link between physical attacks and online influence operations. A recent study by the Atlantic Council's Digital Forensic Research Lab (DFRLab) revealed that Elon Musk’s AI chatbot, Grok, played a troubling role in spreading disinformation during the early days of the Israel-Iran conflict. A New York Times investigation highlights how artificial intelligence has evolved beyond novelty into a weaponized disinformation tool, deployed by foreign regimes and domestic actors to sow distrust in democracies worldwide. NewsGuard Reality Check reports that a false claim on X about China sending military aid to Iran, stemming from misinterpreted flight data, was amplified by a pro-Iran commentator, some mainstream media, and notably, multiple AI chatbots, revealing a significant flaw in how misinformation spreads. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] From Headlines to Lies: Global Events as Vehicles for Disinformation History, Rewritten – Generation, Redefined Operation Overload: Attacking Democracy’s Immune System Selective Truths: RT Takes Aim at the New MI6 Director “Nobody Leaves the Family”: The Coup Attempt in Serbia [China] China's Use of Quanzhen Taoism to Spread Disinformation [Iran] Manufacturing Victory: Iran’s Disinformation Efforts The Fiction of Retreat: Iran’s Disinformation on U.S. Withdrawal Hybrid by Design: Iranian Hacktivists Target Saudi Games The Usual Suspects Are Missing: Tehran’s Fake Scots Go Silent AI-RELATED ARTICLES Grok, We Have a Problem: Disinfo in the Israel–Iran War The Post-Truth Machine: AI-driven Disinformation Threatens Democracy The Flight That Never Was: AI Boosts False China-Iran Claim GENERAL REPORTS Disinformation Undermines Polish Democracy Amid Contentious Election Trump and Hannity's Post-Bombing Disinformation Digital Fog of War: AI Slop and Information Control in the Iran Conflict Strategic Rivals Celebrate US’s ‘Soft Power Suicide’ Nationalist Networks and Global Threats: The GNCA’s Role in Disinformation Campaigns GLOSSARY DOWNLOAD SECTION STATE ACTORS [Russia] From Headlines to Lies: Global Events as Vehicles for Disinformation The article by EU vs. Disinfo shows how the Kremlin uses global attention focused on high-level geopolitical events, ranging from the Israel-Iran conflict to NATO’s historic summit and EU support for Ukraine, to ramp up its disinformation efforts. Faced with a rapidly evolving international landscape, Russian propaganda reverted to familiar falsehoods: portraying NATO as an aggressor, Europe as Russophobic, and Russia as an eternal victim.
These narratives were strategically promoted to capitalize on global events' visibility and validate Russia's confrontational worldview. The Kremlin recycled long-standing myths of Western hostility, falsely claiming that NATO's defense initiatives threaten Russia and that NATO's very existence is expansionist. Pro-Kremlin voices attempted to undermine Western unity by exaggerating internal divisions within NATO and framing increased defense spending as fiscally irresponsible or indicative of impending collapse. Simultaneously, Europe was vilified for supporting Ukraine and tightening sanctions on Russia, reinforcing the illusion of a hostile West bent on weakening Moscow. Source: EUvsDisinfo, 2025. The Kremlin’s self-fulfilling curse. [online] Available at: https://euvsdisinfo.eu/the-kremlins-self-fulfilling-curse/ [Back to Table of Contents] History, Rewritten – Generation, Redefined EU vs. Disinfo highlights how the Kremlin’s disinformation strategy has infiltrated the Russian education system to indoctrinate youth with a distorted view of national history. New textbooks, co-authored by regime-loyal writers, present a highly manipulated narrative of Russia’s past. The country is depicted as an “eternal victim” and a “reluctant warrior.” This campaign of historical revisionism reframes acts of aggression as noble acts of defense, from Soviet invasions in the 20th century to the annexation of Crimea and the war against Ukraine. Complex historical contexts are simplified, omitted, or reinterpreted. Military expansion is portrayed as liberation, while war crimes, dissent, and public debate are ignored. This rewriting of history is not merely about fostering patriotism; it functions as a long-term instrument of state-sponsored disinformation. The textbooks glorify Russian militarism, downplay Western contributions to global conflicts, and criminalize criticism of the Red Army. At the same time, they reinforce the narrative that Russia has never initiated a war, but merely responded to threats. The intended outcome: a generation shaped by a state-driven historical narrative, prepared to interpret future military actions as necessary and legitimate responses to external threats. Source: EUvsDisinfo, 2025. Russia’s Military History: Never in the Wrong. [online] Available at: https://euvsdisinfo.eu/russias-military-history-never-in-the-wrong/ [Back to Table of Contents] Operation Overload: Attacking Democracy’s Immune System In its third report, researchers from CheckFirst examine the development of Operation Overload, a Russian information campaign targeting six countries: France, Germany, Moldova, Poland, Ukraine, and the United States. The campaign seeks to attract the attention of media and fact-checkers by overloading their capacity. First documented in June 2024, it has since expanded in scope and platform presence. Narrative themes include anti-Ukrainian rhetoric, election interference, gender-based disinformation, smear campaigns, and calls to violence. A key tactic is content amalgamation: publishing the same message across platforms to simulate credibility. Between January 2024 and May 2025, 997 deceptive emails were sent to more than 245 media and research organizations, 704 of which arrived after September 2024. Spikes occurred around major political or global events, such as the Paris Olympics or national elections. The actors also operate on Telegram, X (formerly Twitter), Bluesky, and, since May 2025, on TikTok.
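CheckFirst's own tooling is not reproduced in this summary, but the amalgamation signature it describes, near-identical wording pushed through nominally unrelated accounts on different platforms, is straightforward to illustrate. The Python sketch below is purely illustrative: the sample posts, field names, and similarity threshold are assumptions for the demo, not CheckFirst's data or methodology.

```python
# Illustrative sketch: flagging "content amalgamation", i.e. the same message
# republished across platforms to simulate independent corroboration.
# Sample posts, field names, and the threshold are assumptions for this demo.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    {"platform": "telegram", "text": "Leaked memo PROVES the minister forged documents!"},
    {"platform": "x",        "text": "Leaked memo proves the minister forged documents"},
    {"platform": "bluesky",  "text": "BREAKING: memo shows the minister forged documents!!"},
    {"platform": "tiktok",   "text": "Local bakery wins the regional bread award."},
]

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivial edits don't mask reuse."""
    return " ".join("".join(c for c in text.lower() if c.isalnum() or c.isspace()).split())

THRESHOLD = 0.8  # tunable; 1.0 would require identical text after normalization

for a, b in combinations(posts, 2):
    if a["platform"] == b["platform"]:
        continue  # amalgamation is specifically cross-platform repetition
    score = SequenceMatcher(None, normalize(a["text"]), normalize(b["text"])).ratio()
    if score >= THRESHOLD:
        print(f"possible amalgamation ({score:.2f}): {a['platform']} <-> {b['platform']}")
```

In practice, a similarity signal like this would be paired with posting-time correlation, since near-simultaneous publication is what separates coordinated seeding from ordinary resharing.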
AI-generated content is increasingly used to impersonate well-known public figures, mainly journalists and academics. Since September 2024, around 600 content items have been identified, a 1.5-fold increase over the previous year. Logos from 180 institutions and the identities of more than 180 individuals were misused. CheckFirst warns of declining platform moderation and calls for stronger enforcement and legal action, particularly under the EU Digital Services Act. Source: CheckFirst, Atanasova, A., Poldi, F. & Kuster, G., 2025. Operation Overload: More Platforms, New Techniques, Powered by AI. [online] Available at: https://checkfirst.network/wp-content/uploads/2025/06/Overload%C2%A02_%20Main%20Draft%20Report_compressed.pdf [Back to Table of Contents] Selective Truths: RT Takes Aim at the New MI6 Director DisinfoWatch has reviewed the facts and concludes that the allegations against Blaise Metreweli are part of an influence campaign by the Russian state broadcaster RT. RT claimed that Metreweli’s grandfather, Constantine Dobrowolski, was a Nazi collaborator during World War II. This information is historically documented, but it is presented without essential context. Metreweli’s father, born Dobrowolski in 1943 in occupied Ukraine, was raised in England by his stepfather and took the surname Metreweli. He had no affiliation with or knowledge of his biological father’s Nazi past. Ms Metreweli never knew or met her paternal grandfather. The tactic recalls past efforts, including those targeting Chrystia Freeland, where ancestry is used to delegitimize public figures. Source: DisinfoWatch, 2025. RT Recycles Nazi Allegations to Undermine New MI6 Chief Through False Guilt by Association. [online] Available at: https://disinfowatch.org/disinfo/rt-recycles-nazi-allegations-to-undermine-new-mi6-chief-through-false-guilt-by-association/ [Back to Table of Contents] “Nobody Leaves the Family”: The Coup Attempt in Serbia The Robert Lansing Institute outlines how Russian influence campaigns helped pave the way for the failed coup attempt in Serbia. According to the report, a key trigger for Russia’s actions was Serbia’s alleged military support for Ukraine, which Moscow perceived as a symbolic challenge to its traditional influence in the Balkans. The report points to long-standing ties between Russian intelligence and sectors of Serbian society, including the military (with officers trained in Russia), security services (notably infiltration risks within the BIA), the Orthodox Church, and far-right political movements. The coup attempt occurred amid economic instability, political fragmentation, and growing public dissatisfaction. These conditions have made Serbia especially vulnerable to external manipulation. Russian influence operations targeted the Serbian public through state-backed media such as RT Balkan and Sputnik Serbia, along with nationalist networks, veteran groups, and clerics aligned with Moscow. EU integration and normalization with Kosovo were framed as betrayal and “spiritual surrender” to the West. Drawing parallels with Armenia, the report concludes that Russia reacts to perceived geopolitical drift by deploying disinformation, ideological pressure, and covert tactics to derail reform, create chaos, and reassert control. Source: Robert Lansing Institute, 2025. The Coup Attempt in Serbia — Kremlin Influence, Balkan Instability, and Strategic Fallout. [online] Published 26 June 2025.
Available at: https://lansinginstitute.org/2025/06/26/the-coup-attempt-in-serbia-kremlin-influence-balkan-instability-and-strategic-fallout/ [Back to Table of Contents] [China] China's Use of Quanzhen Taoism to Spread Disinformation A recent analysis by ReligioScope reveals how the Chinese Communist Party (CCP) strategically leverages religious institutions, specifically Quanzhen Taoism, as instruments of political influence beyond mainland China. At the core of this effort are the United Front Work Department (UFWD) and the Chinese Taoist Association (CTA), which align religious practice with Party objectives. In Taiwan, Quanzhen practitioners have reportedly come under growing pressure to participate in CCP-organized events, including ideologically framed “religious exchanges,” visits to Party-affiliated temples, and subtle expectations to echo Beijing’s positions publicly. These tactics form part of a broader campaign to project influence through cultural and spiritual channels, extending the CCP’s reach into politically and religiously autonomous societies. Source: ReligioScope, Swenson Daly, M., Infiltrating the Tao. [online] Available at: https://www.religioscope.org/papers/03.pdf [Back to Table of Contents] [Iran] Manufacturing Victory: Iran’s Disinformation Efforts In an updated analysis, NewsGuard outlines how Iranian state media and affiliated channels launched a coordinated disinformation campaign following the Israeli strikes on nuclear and military facilities in Tehran on June 13, 2025. The aim was to downplay Israel’s military success while portraying Iran’s retaliation as effective. To date, 26 specific false claims have been identified and disseminated across 78 websites. These included AI-generated images and fabricated reports of captured Israeli soldiers (find more information in the Weekly Review W25). The primary sources were channels linked to the Islamic Republic of Iran Broadcasting (IRIB) and military-affiliated Telegram accounts. Platforms such as YouTube, TikTok, and X (formerly Twitter) were primarily used for distribution. The strategy reflects a familiar pattern in Iran’s information operations: official outlets, anonymous websites, and digital platforms push pro-Iranian narratives, project regime stability, bolster Tehran’s strategic interests, and mislead the international public. Source: NewsGuard, Sadeghi, M., Howard, S. & Lin, C., 2025. Iranian State-Affiliated False Claims Tracker: 26 Myths about the War and Counting. [online] Available at: https://www.newsguardtech.com/special-reports/israel-iran-conflict/ [Back to Table of Contents] The Fiction of Retreat: Iran’s Disinformation on U.S. Withdrawal NewsGuard has documented another targeted instance of Iranian disinformation amid the recent escalation between the United States and Iran. At the center is a false claim that the U.S. Joint Chiefs of Staff had ordered a complete withdrawal of American troops from the Middle East. This baseless narrative emerged shortly after the U.S. airstrikes on Iranian nuclear facilities on June 21, 2025, and quickly spread on social media, particularly on X (formerly Twitter), where it garnered hundreds of thousands of views. The claim was amplified primarily by pro-Iranian and pro-Russian accounts, many of which have previously been involved in similar disinformation efforts. Notably, the timing coincided with Iran’s retaliatory missile strike on the U.S. military base Al Udeid in Qatar on June 23.
The likely aim of the disinformation was to project an image of American retreat or weakness, framing Iran’s response as bold and effective. This serves both a domestic propaganda function and an international strategic message. Official U.S. military sources, however, have denied the claim: no such statements appear on the websites or social media accounts of U.S. Central Command or the Joint Chiefs of Staff. Source: NewsGuard Reality Check, Komar, S., 2025. No, 40,000 U.S. Troops Were Not Evacuated from the Middle East. [online] Published 24 June 2025. Available at: https://www.newsguardrealitycheck.com/p/no-40000-us-troops-were-not-evacuated [Back to Table of Contents] Hybrid by Design: Iranian Hacktivists Target Saudi Games According to Infosecurity Magazine, the pro-Iranian hacktivist group Cyber Fattah orchestrated a significant data breach targeting the Saudi Games 2024 registration platform. The incident, part of a broader Iranian information operation, exposed sensitive personal and financial data of athletes and officials. Cybernews additionally reported on a broader Saudi-linked data leak and a DDoS attack on the U.S.-based social media platform Truth Social. Notably, the timing, shortly after U.S. airstrikes on Iranian nuclear facilities, suggests a coordinated cyber response. Data exfiltration, service disruption, and narrative manipulation illustrate how hacktivist groups deploy multi-pronged cyber tactics to spread uncertainty, influence public perception, and destabilize digital communication infrastructures in geopolitical conflict. This targeted action represents a complex example of hybrid warfare: it combines technical attacks with strategic disinformation to undermine trust, establish a narrative of insecurity, and exploit digital platforms as channels for geopolitical messaging against regional rivals. Sources: Infosecurity Magazine, Mascellino, A., 2025. Cyber Fattah Leaks Data from Saudi Games in Alleged Iranian Operation. [online] Available at: https://www.infosecurity-magazine.com/news/cyber-fattah-leaks-data-saudi-games/ Cybernews, Lapienytė, J., 2025. US Strike on Iran Sends Online Ripples: Major Saudi Leak, DDoS on Truth Social. [online] Available at: https://cybernews.com/cyber-war/major-saudi-leak-ddos-on-truth-social/ [Back to Table of Contents] The Usual Suspects Are Missing: Tehran’s Fake Scots Go Silent The UK Defence Journal reports the sudden disappearance of dozens of pro-Scottish independence accounts on X, immediately following the Israeli airstrikes on Iranian military and cyber infrastructure on June 12, 2025. According to the report, the network behind these accounts was operated by Iran’s Islamic Revolutionary Guard Corps (IRGC), which managed over 80 accounts posing as British users. Between 2022 and 2024, these profiles posted an estimated 250,000 tweets promoting pro-independence and anti-UK narratives. The simultaneous takedown of the accounts, coinciding with widespread power outages and internet blackouts in Iran, strongly suggests centralized control from inside the country, likely disrupted by the Israeli strikes. This incident illustrates how state-sponsored disinformation campaigns exploit domestic political divides in Western democracies to sow discord and erode national cohesion.
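A synchronized disappearance of this kind is one of the few coordination signals that can be checked from public data alone. The following Python sketch is a toy illustration under stated assumptions: the account names and timestamps are invented, and "simultaneous" is arbitrarily defined as final activity landing in the same one-hour bucket.

```python
# Toy illustration: surfacing a synchronized "go dark" event from each
# account's last observed post. All account names and timestamps are invented.
from collections import defaultdict
from datetime import datetime

last_seen = {
    "scot_voice_77":   datetime(2025, 6, 12, 23, 41),
    "indy_now_watch":  datetime(2025, 6, 12, 23, 44),
    "alba_daily_news": datetime(2025, 6, 12, 23, 52),
    "real_hobbyist":   datetime(2025, 6, 20, 9, 15),
}

MIN_CLUSTER = 3  # assumption: three or more accounts counts as "many at once"

# Bucket accounts by the hour of their final post; a network switched off
# centrally tends to collapse into one or two adjacent buckets.
buckets = defaultdict(list)
for account, ts in last_seen.items():
    buckets[ts.replace(minute=0, second=0, microsecond=0)].append(account)

for hour, accounts in sorted(buckets.items()):
    if len(accounts) >= MIN_CLUSTER:
        print(f"{len(accounts)} accounts went quiet near {hour:%Y-%m-%d %H:%M}: {sorted(accounts)}")
```

A real investigation would add controls, such as platform-wide outages and time-zone effects, before attributing such a cluster to central control.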
The incident also highlights the vulnerability of social media platforms to coordinated influence operations and shows how real-world military actions can dismantle digital propaganda networks in an instant. Source: UK Defence Journal, Allison, G., 2025. Dozens of pro-Indy accounts go dark after Israeli strikes. [online] Available at: https://ukdefencejournal.org.uk/dozens-of-pro-indy-accounts-go-dark-after-israeli-strikes/ [Back to Table of Contents] AI-RELATED ARTICLES Grok, We Have a Problem: Disinfo in the Israel–Iran War A recent analysis by the Atlantic Council’s Digital Forensic Research Lab (DFRLab) reveals that Elon Musk’s AI chatbot Grok played a problematic role in spreading disinformation during the early days of the Israel-Iran conflict. Designed to help users verify facts, Grok instead produced contradictory and inaccurate responses, particularly when asked about AI-generated content and widely circulated fake visuals. In one example, Grok gave conflicting answers within a minute regarding an airport allegedly struck by Iran. It alternately claimed the location was in Beirut, Gaza, or Tehran, none of which was accurate. In several instances, the chatbot misidentified events or confirmed fabricated claims as fact. According to DFRLab, this failure highlights a problematic trend: as platforms scale back human moderation and fact-checking, users increasingly rely on AI tools like Grok or Perplexity, only to receive misinformation dressed as authoritative responses. Both bots, for instance, incorrectly affirmed fake stories such as China supplying weapons to Iran. Grok has shown similar vulnerabilities in the past: it previously amplified the debunked far-right conspiracy theory of “white genocide” in South Africa, a striking example of how AI systems, without proper oversight, can uncritically repeat and spread harmful narratives. Source: DFRLab, Ponce de León, E. & Chenrose, A., 2025. Grok struggles with fact-checking amid the Israel-Iran war. [online] Available at: https://dfrlab.org/2025/06/24/grok-struggles-with-fact-checking-amid-israel-iran-war/ [Back to Table of Contents] The Post-Truth Machine: AI-driven Disinformation Threatens Democracy The New York Times warns that AI-driven disinformation is destabilizing democracies. Electoral manipulation via deepfakes: In Poland, a fake AI-generated video falsely showed Donald Trump endorsing far-right politician Slawomir Mentzen. Though fabricated, the clip spread rapidly on TikTok, a clear example of targeted disinformation aimed at swaying voters. The result was reinforcement of far-right mobilization. Foreign interference through AI: In Romania, a Russian influence campaign used AI to manipulate the first round of the 2024 presidential election to such an extent that the result was annulled. A fringe candidate surged ahead via inauthentic TikTok promotion. The court-ordered rerun marks a precedent for election nullification due to AI-led interference. Erosion of public trust: In Germany and the United States, AI-generated content circulated false quotes and images of political candidates. These campaigns aimed to undermine trust in democratic institutions and polarize public debate, often reinforcing far-right conspiracy narratives. Platform failures and lack of oversight: TikTok removed 7,300 AI-generated posts during Romania’s runoff election but admitted that many were not labeled as synthetic. Major platforms are too slow or ineffective in curbing such manipulation, allowing disinformation to spread unchecked.
Conclusion: The New York Times investigation outlines the impact of AI-driven disinformation on democratic processes. It shows how political discourse is increasingly undermined while effective regulatory, political, or technological responses remain lacking. The report warns that democracies must find ways to respond to prevent lasting damage from digital manipulation. Source: The New York Times, Myers, S.L. & Thompson, S.A., 2025. A.I. Is Starting to Wear Down Democracy. [online] Available at: https://www.nytimes.com/2025/06/26/technology/ai-elections-democracy.html [Back to Table of Contents] The Flight That Never Was: AI Boosts False China-Iran Claim NewsGuard reveals how, following Israeli airstrikes on Iranian targets in June 2025, a targeted wave of disinformation took hold: a false claim that a Chinese military cargo plane flew to Iran went viral, based on a misread of flight data. Anonymous accounts on X and pro-Iran commentator Jackson Hinkle pushed the narrative, despite explicit denials from Flightradar24 and Cargolux. Yet the false story found traction in outlets like The Telegraph and Epoch Times. Much of the claim’s reach came from generative AI chatbots like Grok, Perplexity, and Meta’s bot, which repeatedly confirmed the false claims without verification. This highlights how AI tools amplify disinformation and make it seem more credible to users. As platforms scale back human fact-checking, more users turn to these AI systems, falling into the trap of targeted misinformation. This case exemplifies modern hostile influence tactics: combining human manipulation with automated dissemination makes disinformation faster, broader, and harder to control, especially in geopolitical crises. Such operations deliberately undermine democratic discourse, an urgent challenge for policymakers, society, and technology alike. Source: NewsGuard Reality Check, Lin, C., 2025. False Claim that China is Supporting Iran in the War with a Chinese Military Cargo Plane; Chat Bots Boost It. [online] Available at: https://www.newsguardrealitycheck.com/p/false-claim-that-china-is-supporting [Back to Table of Contents] GENERAL REPORTS Disinformation Undermines Polish Democracy Amid Contentious Election According to a report by Global Issues, Poland’s recent presidential election, narrowly won by nationalist Karol Nawrocki, has become a case study in how disinformation and foreign interference can influence democratic processes. The campaign was marred by coordinated online manipulation, with over 2,400 fake accounts targeting liberal candidate Rafał Trzaskowski or promoting Nawrocki. Investigations revealed a flood of misleading content on TikTok and Facebook, heavily skewed toward far-right narratives, often laced with anti-Ukrainian and anti-immigration conspiracy theories. These efforts contributed to an increasingly polarized electorate and undermined confidence in the electoral process. The campaign mirrored Kremlin-style influence operations and coincided with unprecedented international support for Nawrocki from far-right circles, including U.S. President Donald Trump and the Conservative Political Action Conference. With Prime Minister Donald Tusk surviving a confidence vote but facing a hostile presidency, Poland now confronts potential institutional paralysis. Judicial reforms crucial to restoring EU funding will likely stall, and Nawrocki’s foreign policy stance could weaken Poland’s support for Ukraine. Source: Global Issues, Pousadela, I.M., 2025.
Poland’s Democratic Deadlock. [online] Available at: https://www.globalissues.org/news/2025/06/25/40264 [Back to Table of Contents] Trump and Hannity's Post-Bombing Disinformation Wired reported that President Donald Trump and his closest supporters, including Fox News host Sean Hannity, have been using digital disinformation campaigns to portray the US airstrikes on Iranian nuclear facilities as a complete and decisive victory. These narratives were primarily spread through Trump’s platform, Truth Social, and other social media channels. Instead of relying on information from his intelligence agencies, satellite imagery, or on-the-ground reporting, Trump posted on Truth Social a screenshot of an anonymous X account claiming to conduct open-source intelligence, stating that “Fordow is gone.” Sean Hannity amplified this false claim by sharing a video of an explosion on Instagram, which was footage from an Israeli airstrike in Syria. While military officials and experts contradicted Trump’s portrayal and cautioned against premature assessments, Trump continued to assert that the Fordow facility had been “completely obliterated.” His early declaration on Truth Social shaped public discourse and inspired supporters who hailed the bombing as the end of the conflict. At the same time, Trump later raised the possibility of an extended military engagement and even “regime change,” a stance disputed within his administration. Political opponents criticized both Trump and Hannity for spreading misleading information that damages public discourse and undermines democratic oversight. This case exemplifies how digital platforms and social media can be weaponized as tools of hybrid warfare to advance political agendas, erode trust in reliable information, and deepen societal divisions. Source: Wired, Myers, S.L. & Thompson, S.A., 2025. Donald Trump and Sean Hannity Set Off a Wave of Disinformation After Iran Bombing. [online] Available at: https://www.wired.com/story/donald-trump-sean-hannity-disinformation-iran-bombing/ [Back to Table of Contents] Digital Fog of War: AI Slop and Information Control in the Iran Conflict POLITICO’s Weekly Cybersecurity warns that amid escalating Israel-Iran tensions, AI-generated “slop”—including deepfakes, manipulated images of destroyed military hardware, and synthetic videos falsely depicting attacks—has proliferated across social media. These fabricated visuals, some shared by world leaders, state-backed outlets, and partisan influencers, spread rapidly, exploiting algorithms and emotional resonance to shape public perception before fact-checkers can respond. The strategy combines synthetic media production, rapid bot amplification, and state-driven narrative control, especially with information blackouts or censorship designed to limit counter-narratives. “The combination of state censorship and AI-powered misinformation is a new digital battlefield, and the collateral damage is public trust,” said Dave Gerry, CEO of cybersecurity firm Bugcrowd. The implications are significant: democracies now face a multifront information battlefield where trust in visual evidence is eroded, fact-checking defenses lag behind AI-enabled manipulation, and authoritarian regimes gain an advantage through coordinated, real-time influence operations. Source: Politico, Nickel, D., 2025. AI Slop Spreads in Israel-Iran War.
[online] Available at: https://www.politico.com/newsletters/weekly-cybersecurity/2025/06/23/ai-slop-spreads-in-israel-iran-war-00417791 [Back to Table of Contents] Strategic Rivals Celebrate US’s ‘Soft Power Suicide’ As stated in a New York Times article, under the Trump administration, the United States scaled back or dismantled many of its key global communication tools, including Voice of America and Radio Free Asia, platforms central to promoting democratic values and countering authoritarian propaganda. This retreat was celebrated by rivals like Russia and China, who saw an opportunity to expand their influence. Kremlin-backed RT and China's Global Times openly rejoiced at the weakening of U.S. media infrastructure. At the same time, nations like Hungary, Cambodia, and Cuba followed suit in applauding America's withdrawal from the global information battlefield. In the absence of U.S. leadership, authoritarian states moved to fill the vacuum. Russia, China, Turkey, and others ramped up investments in state-run global media outlets, disinformation campaigns, and cultural outreach, deploying fake accounts, algorithmic manipulation, and state-aligned influencers to flood international platforms with narratives that distort truth and undermine democratic ideals. Fact-based reporting was increasingly replaced by polarizing, often deceptive messaging aimed at reshaping global perceptions in favor of authoritarian models. Experts warn this U.S. "soft power suicide" has not only weakened American global credibility but also emboldened adversaries to weaponize disinformation unchecked. As China's Xinhua and Russia’s Sputnik expand reach in Africa and Asia, and Western trust in U.S. messaging declines, the struggle for global influence has entered a new phase, one where truth competes against algorithm-boosted falsehoods, and where the United States, once a leader in promoting free expression, is increasingly sidelined in the battle for hearts and minds. Source: New York Times, Hsu, T., 2025. As U.S. Dismantles Voice of America, Rival Powers Hope to Fill the Void. [online] Available at: https://www.nytimes.com/2025/06/24/business/media/us-china-russia-global-communications.html [Back to Table of Contents] Nationalist Networks and Global Threats: The GNCA’s Role in Disinformation Campaigns An article by Global Influence Ops examines the Global National Conservative Alliance (GNCA), a burgeoning global political movement uniting right-wing and far-right factions. A key aspect of the GNCA's disinformation activity is its strategic use of influence operations and the exploitation of its networks by foreign actors, notably Russia and China, to spread disinformation and undermine democratic institutions. This involves eroding checks and balances and establishing patronage networks, which can then be leveraged to propagate narratives disguised as legitimate political discourse that serve authoritarian interests. The actors involved are the various components of the Global National Conservative Alliance, including movements like MAGA in the US, and foreign states such as Russia and China, who act as amplifiers and exploiters of these networks. The broader implication is a significant threat to the integrity of democracy worldwide. By championing national sovereignty, protectionist trade, and cultural exclusion, the GNCA creates fertile ground for foreign interference and the proliferation of misleading information, ultimately weakening democratic norms and institutions on a global scale.
Source: The Conversation, Sinclair, H.C., 2025. Most Americans believe misinformation is a problem — federal research cuts will only make the situation worse. [online] Available at: https://theconversation.com/most-americans-believe-misinformation-is-a-problem-federal-research-cuts-will-only-make-the-problem-worse-255355

[Back to Table of Contents]

[Download Report]

GLOSSARY

Information Operations
"The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.

Hybrid Warfare
A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influencing methods, fake-news dissemination, diplomacy, lawfare, and foreign electoral intervention.

Cyber Warfare
Commonly understood as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.

Cyfluence Attack
A cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.

Soft Warfare
All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.

CIB
Meta’s terminology for Coordinated Inauthentic Behavior on its platforms, emphasizing both the coordination and the inauthentic behavior.

FIMI
The EU’s terminology for Foreign Information Manipulation and Interference, emphasizing the foreign origin of the activity.

Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID)
A non-hostile effort to influence discourse. The term is usually used in marketing articles; here, it illustrates the opposite of a HIC.

Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

Disinformation
Misleading information that is spread deliberately to deceive; a subset of misinformation. Both terms have often been associated with the concept of "fake news," which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent."

Inauthentic Behavior
Defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events) to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards". We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users
AKA avatars: a generic term describing all types of users who are not legitimate social media users, i.e., they are bots, are operated by humans but not under their real identity, or are operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs.

Unidentified users
A generic term for users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).

Sockpuppet accounts
A sock puppet, or sockpuppet, is an online identity used for deception.

Bots
Autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns; sometimes, automated spam coexists alongside organic activity on the same group of accounts.

Repurposed accounts
Social media accounts that were hacked or purchased and then used for purposes different from their original ones.

Fake website
A website designed for fraudulent or scam activity that hides its real purpose.

Deep Assets
Non-human deep-cover assets, divided into two sub-categories. Deep avatars are avatars that require considerable effort to look like real people (background story, pictures, quality friends, quality content, the technical capability to take phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of whoever is behind them (unattributed), for example, a news website with daily articles and videos and a social media presence maintained by users who identify as the website's representatives.

Real platforms
Actual entities (companies, NGOs, websites, etc.) based on real, attributed people doing real work, for example, a private-sector influence research center that publishes research on influence operations, either globally or locally.

Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.

Cyberbullying
Occurs when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.

[Back to Table of Contents]

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy.
The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions, and opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

[Back to Table of Contents]