  • Cracking the Code: How AI Detects Social Media Impersonation

    Author: Osnat Mokryn. The rise of online communication and the increasing anonymity of internet interactions have led to growing concerns around authorship attribution and impersonation detection. Social platforms like Twitter, Facebook, and Reddit have further opened the door to deceptive behaviors—ranging from fake identities and manipulated reviews to coordinated misinformation campaigns. Identifying who is behind a post—or whether the same individual operates multiple accounts—has become increasingly challenging. A new AI-driven approach offers a breakthrough by analyzing the words people tend to use and the ones they consistently avoid. This technique constructs a unique linguistic “signature” for each user, enabling accurate authorship attribution and revealing hidden links between seemingly unrelated accounts.
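    The teaser above describes the idea only at a high level, and the article provides no code. As a rough, hypothetical sketch of the general approach (not the author's actual method), the Python snippet below builds a per-account "signature" from the common words an account uses and the common words it consistently avoids, then compares accounts with a simple Jaccard-based similarity. The tokenizer, the choice of common vocabulary, the equal weighting, and the toy data are all illustrative assumptions.

```python
# Minimal sketch of word-usage/word-avoidance "signatures" for authorship comparison.
# Illustrative only; not the method described in the article.
from collections import Counter
from typing import Dict, List, Set

PUNCT = ".,!?;:\"'()"


def tokenize(text: str) -> List[str]:
    # Naive whitespace tokenizer with basic punctuation stripping (illustrative only).
    return [w.strip(PUNCT).lower() for w in text.split() if w.strip(PUNCT)]


def build_signature(posts: List[str], common_vocab: Set[str]) -> Dict[str, Set[str]]:
    # Signature = common words the account actually uses + common words it consistently avoids.
    used = set()
    for post in posts:
        used.update(tokenize(post))
    return {"used": used & common_vocab, "avoided": common_vocab - used}


def jaccard(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0


def signature_similarity(sig_a: Dict[str, Set[str]], sig_b: Dict[str, Set[str]]) -> float:
    # Equal weighting of "used" and "avoided" overlap; a real system would tune or learn this.
    return 0.5 * jaccard(sig_a["used"], sig_b["used"]) + 0.5 * jaccard(sig_a["avoided"], sig_b["avoided"])


if __name__ == "__main__":
    # Toy corpus: accounts a and b share word habits, account c does not.
    corpus = {
        "account_a": ["honestly this policy is nonsense", "honestly i doubt that claim"],
        "account_b": ["honestly the claim makes no sense", "i doubt this policy helps"],
        "account_c": ["great weather today", "lovely game last night"],
    }
    # "Common vocabulary" here is simply every word seen in the toy corpus.
    counts = Counter(w for posts in corpus.values() for p in posts for w in tokenize(p))
    common_vocab = set(counts)

    sigs = {user: build_signature(posts, common_vocab) for user, posts in corpus.items()}
    print("a vs b:", round(signature_similarity(sigs["account_a"], sigs["account_b"]), 2))
    print("a vs c:", round(signature_similarity(sigs["account_a"], sigs["account_c"]), 2))
```

    In practice, the common vocabulary would be estimated from a large reference corpus, and the weighting between used and avoided words would be tuned or learned rather than fixed at 0.5.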

  • Weekly Report: Cyber-based influence campaigns, 5th - 11th of May 2025

    [Listen to the Podcast] [Report Highlights] An article by EU vs. Disinfo detailed how Russia ramped up its disinformation efforts ahead of the May 9 Victory Day parade at home and abroad, including interference in Romania’s May 4 presidential election. Jamestown  and The Record  report that Poland is facing an unprecedented wave of Russian disinformation and cyberattacks aimed at disrupting the 2025 presidential election, destabi-lizing critical infrastructure, and undermining public trust, as part of broader Kremlin efforts to weaken both national stability and Western alliances. As published by NewsGuard's Reality Check , amid escalating tensions between India and Pakistan in May 2025, pro-India social media users circulated two deepfake videos falsely portraying Donald Trump as threatening to destroy or erase Pakistan if it attacked India. According to an article published by the Atlantic Council , Latvia has been a target of Russian information-manipulation campaigns for over a decade. Since Russia’s full-scale invasion of Ukraine in 2022, these campaigns have grown more aggressive, prompting Latvia to prioritize the protection of its information space at the heart of its national security strategy. DFRLab  reports that a coordinated disinformation campaign in South Africa used paid influencers to spread anti-Zelenskyy and pro-Russian messages on social media, aiming to manipulate public perception around Ukraine’s rejection of a Victory Day ceasefire. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS - STATE ACTORS [ Russia] Latvia's Fight Against Russian Disinformation Russia's Disinformation Surge Around Victory Day Russian Disinformation Campaigns Threaten Poland's Stability Azerbaijan Blames Russian State Hackers for Cyberattacks on Local Media Kremlin Sources Concoct WWII Falsehood Against Ukraine Paid South African Influencers Targeting Zelenskyy AI RELATED ARTICLES Deepfake Trump Threatens Pakistan if It Attacks India CYFLUENCE ATTACKS India Experiences Surge in Hacktivist Group Activity Amid Military Tensions GENERAL REPORTS Caribbean Media Faces New Challenges in the Age of AI The Impact of the Digital Services Act on Disinformation on Facebook FRAMEWORKS TO COUNTER DISINFORMATION California Governor Fights Disinformation with New Fact-Checking Website EEAS Efforts Against FIMI and Disinformation GLOSSARY DOWNLOAD SECTION REPORT STATE ACTORS [Russia] Latvia's Fight Against Russian Disinformation A report by the Atlantic Council  outlines how Latvia is confronting intensified Russian hostile influence , particularly since Russia’s invasion of Ukraine in 2022. Russia’s objectives in Latvia are to sow discord within Latvian society and the transatlantic alliance, discredit Ukraine, and erode trust in democratic institutions. Russian-speaking minorities, around 25% of Latvia’s population, are primarily targeted. The Kremlin spreads narratives portraying the Latvian government as illegitimate and the West as hostile, while glorifying the Soviet past. These messages were distributed through Russian state-controlled media (now banned), social media platforms, bots, trolls, VPNs, and increasingly AI-generated content. In response, Latvia officially designated the information space as a domain of national defense, on par with military and civil preparedness. Media literacy is integrated into school and community programs. Public awareness is strengthened through campaigns and educational materials. 
A dedicated strategic communications unit coordinates messaging across ministries and works with tech platforms to curb disinformation. Independent media receive financial and political support. In 2021, although vague legal definitions have limited convictions, Latvia became the first Baltic state to apply criminal law against deliberately disseminating harmful falsehoods. Civil society plays a vital role: NGOs, investigative journalists, and volunteer groups like the Baltic Elves  monitor and debunk falsehoods. NATO cooperation enhances the detection of emerging threats like deepfakes. Latvia deliberately avoids offensive information operations. Its focus remains on protecting democratic discourse. The remaining challenges include the lack of credible Russian-language content, the need for clearer legal tools, and sustained international backing. Sources:   Atlantic Council, Vrobļevska, E. and Irdi, B., 2025. What’s the best way to deal with Russian information manipulation? Latvia has answers. [online] Available at: https://www.atlanticcouncil.org/blogs/new-atlanticist/whats-the-best-way-to-deal-with-russian-information-manipulation-latvia-has-answers/ [Back to Table of Contents] Russia's Disinformation Surge Around Victory Day The EU vs. Disinfo  project highlights  how the Kremlin used May 9, “Victory Day,” to advance its hostile influence operations. While the focus at home lay in distorting the historical narrative of World War II, particular attention abroad was directed at the Romanian presidential election held on May 4, 2025. Following the annulment of Romania’s November 2024 election due to verified foreign interference (For more details, see our report  on „ The Romanian Presidential Elections 2024: Analysis of Information Operations and Longterm Influence Efforts“) , Russian Foreign Information Manipulation and Interference (FIMI) patterns were again identifiable. In the lead-up to the May vote, a coordinated disinformation campaign unfolded across social media platforms. Its aims included discrediting pro-European candidates, amplifying extremist voices, and eroding public confidence in the electoral process. A network of 25 interconnected pages placed political advertisements worth over €260,000 without transparency or attribution. The Kremlin’s tactics followed a familiar pattern: saturating the information environment with emotionally charged and often contradictory content; promoting political extremes while targeting democratic centrists; and systematically undermining trust in institutions. The Romanian case reflects a broader trend in Russia’s foreign influence strategy. For the Kremlin, elections are not democratic exercises but strategic opportunities to destabilize and weaken democratic governance abroad. Source:    EUvsDisinfo, 2025. Red Square lies: Propaganda on parade, online, and the ground. [online] Available at: https://euvsdisinfo.eu/red-square-lies-propaganda-on-parade-online-and-on-the-ground/ [Back to Table of Contents] Russian Disinformation Campaigns Threaten Poland's Stability  The Record reports   that Poland has accused Russia of launching an unprecedented disinformation and cyberattack campaign aimed at disrupting its upcoming presidential election in May 2025. According to Poland’s digital affairs minister, Janusz Cieszynski, Russian-linked actors have intensified efforts to destabilize critical infrastructure, including water and sewage systems, power plants, and government agencies. 
Additionally, Russia is reportedly attempting to recruit Polish citizens to spread disinformation, a strategy similar to its use of local influencers during Romania’s recent elections. (For more details, see our report on „ The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts“) Russia denies any involvement in cyberattacks or election interference in either country. A Jamestown Foundation  article highlights  that Russia’s shadow war against Poland combines low-level sabotage, insider espionage, informational warfare, and cyberattacks. Between 2010 and 2025, Polish authorities closed 30 subversion cases, leading to the arrests of 61 individuals—19 cases and 49 arrests since 2021—accounting for roughly 35% of Europe’s Russian-linked espionage and sabotage arrests. Recruits for these operations have shifted from ethnic Poles to predominantly Russian, Belarusian, and Ukrainian nationals. Their missions aim to reduce support for Ukraine, disrupt decision-making, erode social trust, and stoke extreme politics. Countering this threat will require comprehensive measures, including media literacy, institutional strengthening, and increased NATO intelligence cooperation. Source:    The Record, Antoniuk, D., 2025. Poland accuses Russia of ‘unprecedented’ interference ahead of presidential election. [online] Available at:   https://therecord.media/poland-elections-russia-hybrid-threats-disinformation Jamestown Foundation, Kang-Stryker, A. & Bugajski, J., 2025. Poland on the Frontlines Against Russia’s Shadow War. [online] Available at: https://jamestown.org/program/poland-on-the-frontlines-against-russias-shadow-war/ [Back to Table of Contents] Azerbaijan Blames Russian State Hackers for Cyberattacks on Local Media As reported in an article by The Record , Azerbaijan has attributed a cyberattack on multiple local media outlets to the Russian state-sponsored hacking group APT29, labeling it a politically motivated act of retaliation. The attack occurred after Azerbaijan shuttered the Russian House cultural center in Baku, citing espionage and legal violations, and drastically reduced the staff at Sputnik Azerbaijan, a Kremlin-backed media outlet. Azerbaijani officials claim that the hackers had infiltrated the media networks years earlier, activating their attack on the morning of February 20, 2025, starting with Baku TV and spreading to other news platforms. Officials stated that the objective was to spread disinformation, disrupt media infrastructure, and delete or manipulate content. In March, the Ukrainian military intelligence agency (HUR) also reported that Russia was spreading disinformation, accusing it of trying to instigate an armed conflict between Armenia and Azerbaijan. Similar disinformation-driven cyberattacks have previously targeted media in Poland and Ukraine. Russia has rejected the allegations, calling them part of a baseless disinformation campaign. Source:    The Record, Antoniuk, D., 2025. Azerbaijan blames Russian state hackers for cyberattacks on local media. [online], Available at: https://therecord.media/azerbaijan-blames-media-cyberattacks-russia-apt29 [Back to Table of Contents] Kremlin Sources Concoct WWII Falsehood Against Ukraine According to a report by NewsGuard's Reality Check , ahead of Victory Day, which Ukraine also celebrates, pro-Kremlin sources circulated a fabricated leaflet claiming that Ukraine's government instructed World War II veterans to hide their Soviet-era medals. 
The image, falsely attributed to Ukraine’s Ministry of National Memory, included a diagram allegedly showing how to conceal the awards inside a jacket. The goal was to depict Ukraine as disrespecting its veterans and erasing Soviet contributions to the war. The image first appeared on a pro-Russian Telegram account and quickly spread across social media and Kremlin-linked websites, including those in the Pravda disinformation network. The Ukrainian Institute of National Memory denied any link to the leaflet, calling it likely Russian propaganda. Ukrainian law does ban Nazi and communist symbols but explicitly exempts pre-1991 war medals and awards. ​ Source:    NewsGuard Reality Check, Maitland, E., 2025. Kremlin Sources Concoct WWII Victory Day Falsehood to Smear Ukraine. [online] Available at: https://www.newsguardrealitycheck.com/p/kremlin-sources-concoct-wwii-victory [Back to Table of Contents] Paid South African Influencers Targeting Zelenskyy A recent DFRLab investigation  reveals  that a coordinated disinformation campaign in South Africa targeted Ukrainian President Volodymyr Zelenskyy for rejecting Russia's proposed Victory Day ceasefire. Utilizing a network of paid influencers through a South African influencer marketplace, the campaign amplified anti-Zelenskyy and pro-Russian narratives on X, promoting hashtags like [ #ZelenskyyIsWar] and [#May09Truce]  to trend nationally. This effort involved 42 accounts generating 840 posts, amassing approximately 290,000 views within two hours. Many of these influencers had previously participated in similar campaigns advancing pro-Russian narratives. This operation underscores the strategic use of local influencers to disseminate foreign propaganda, exploiting regional platforms to sway public opinion on international conflicts. By leveraging South Africa's historical anti-colonial sentiments, such campaigns aim to erode support for Ukraine and legitimize Russian actions. The incident highlights the broader implications of influence-for-hire networks (Find more information about this subject here on our Blog Post : Commercial Hostile Influence Networks) in shaping geopolitical narratives and the necessity for heightened media literacy and regulatory measures to counteract such disinformation efforts. ​ Source:    Digital Forensic Research Lab (DFRLab), Adam, I., 2025. South African influencers-for-hire attack Zelenskyy for declining Victory Day ceasefire. [online] Available at: https://dfrlab.org/2025/05/09/south-african-influencers-for-hire-attack-zelenskyy-for-declining-victory-day-ceasefire/   [Back to Table of Contents] AI Related Articles Deepfake Trump Threatens Pakistan if It Attacks India As published by NewsGuard's Reality Check , amid escalating tensions between India and Pakistan in May 2025, pro-India social media users circulated two deepfake videos falsely portraying Donald Trump as threatening to destroy or erase Pakistan if it attacked India. These AI-manipulated clips featured fabricated voice-overs synced to altered footage of Trump from a 2016 speech at the Economic Club of New York. Contrary to the claims, Trump never made such remarks, and independent AI-detection tools confirmed the videos were digitally manipulated. The videos emerged after a deadly militant attack in Indian-controlled Kashmir raised fears of a broader conflict. Pakistan denied involvement, but diplomatic relations deteriorated sharply. 
Additionally, according to an article by Bellingcat, another deepfake further muddied the information landscape during this volatile period. A manipulated video falsely showing Pakistani army spokesperson Ahmed Sharif Chaudhry admitting the loss of two aircraft was shared nearly 700,000 times on X, and picked up by several mainstream Indian media outlets before being debunked. Source:    NewsGuard Reality Check, Komar, S., 2025. Trump Deepfake Threatens to ‘Destroy’ or ‘Erase’ Pakistan if It Attacks India. [online] Available at: https://www.newsguardrealitycheck.com/p/trump-deepfake-threatens-to-destroy# Bellingcat, Chaudhuri, P. & Higgins, E., 2025. India-Pakistan Conflict: How a Deepfake Video Made it Mainstream. [online] Available at: https://www.bellingcat.com/news/rest-of-world/2025/05/09/india-pakistan-conflict-how-a-deepfake-video-made-it-mainstream/ [Back to Table of Contents] CYFLUENCE ATTACKS India Experiences Surge in Hacktivist Group Activity Amid Military Tensions Cyble investigated  a coordinated cyber campaign against India that followed the April 22, 2025, terror attack in Jammu and Kashmir and India's retaliatory strikes under Operation Sindoor . The campaign, conducted under the hashtag #OpIndia , involved website defacements, DDoS attacks, and online propaganda. The attacks were deliberately timed to coincide with military operations. Although the attacks caused only temporary disruptions to government, law enforcement and healthcare websites, the primary objective appeared to be psychological rather than technical. The campaign prioritized volume over technical sophistication: over 50% of incidents were DDoS attacks, while 36% involved website defacements—both tactics aimed at maximizing visibility and psychological impact. Political and religious messages were disseminated, often aligning with pro-Pakistan narratives. Involved groups such as Keymous+ , AnonSec , Nation of Savior , and Electronic Army Special Forces  used social media to publicize their actions, frequently exaggerating their actual impact to amplify anti-India messaging. Source:   Cyble, 2025. India Experiences Surge in Hacktivist Group Activity Amid Military Tensions. [online] Available at: https://cyble.com/blog/india-experience-hacktivist-group-activity/ [Back to Table of Contents] GENERAL REPORTS Caribbean Media Faces New Challenges in the Age of AI   Published by Misinfocon  and originally contributed by Global Voices, the 2025 World Press Freedom Day spotlighted the impact of artificial intelligence (AI) on journalism. While AI offers benefits such as efficiency, multilingual capabilities, and data-driven analysis, both Global Voices and the Media Institute of the Caribbean (MIC) warn of serious risks, including disinformation, deepfakes, surveillance, and algorithmic bias. MIC emphasized that free, AI-generated content increasingly competes with high-quality journalism, which is expensive to produce—a challenge particularly acute in the Caribbean, where shrinking ad revenues and fragile markets threaten media viability. Between 15% and 25% of advertising income is already diverted to tech giants like Meta and Google, whose platforms dominate access to information and undermine the financial foundations of independent journalism. MIC President Kiran Maharaj has called for fair AI governance to protect democratic discourse and the sustainability of public interest media. At the same time, regional media face the dual burden of environmental crises and digital threats. 
Misinformation during natural disasters can have devastating consequences. While AI can enhance emergency response through real-time alerts and forecasting, its misuse remains a serious concern. To address this, MIC has proposed policies including taxing technology companies and reinvesting the revenue into journalism, exploring AI-driven revenue models, and—in line with UNESCO’s AI Road Map—establishing a regional AI Ethics Task Force to audit algorithmic bias and promote content verification standards. Source:    MisinfoCon, Mendes-Franco, J., 2025. World Press Freedom Day: Caribbean media faces new challenges in the age of AI. [online] Available at: https://misinfocon.com/world-press-freedom-day-caribbean-media-faces-new-challenges-in-the-age-of-ai-90e1c3cc335e [Back to Table of Contents] The Impact of the Digital Services Act on Disinformation on Facebook   A recent study  by the NATO Strategic Communications Centre of Excellence  assessed the early impact of the EU Digital Services Act (DSA) on harmful content on Facebook, focusing on Polish and Lithuanian accounts. Using a multi-stage AI analysis of over 2,300 posts from 2023 and 2024, the research found that hate speech, particularly targeting protected groups, remains the dominant form of harmful content, accounting for 90% of flagged posts in both years. While Lithuania saw an 11% decline in such content in 2024, Poland experienced a 55% increase, with a dramatic 128% surge within Facebook groups. This highlights significant moderation gaps in group environments. Antisemitic disinformation related to the Israel–Hamas conflict was especially prevalent in Polish content. The platform showed some progress: fact-checking activity rose in 2024, likely due to increased investment in moderation. However, the overall rate of harmful content removal declined, particularly for Lithuanian-language posts. The study concludes that despite the DSA’s promise, Facebook’s efforts yielded only partial improvements. ​ Source:    NATO Strategic Communications Centre of Excellence, Haiduchyk, T., Khusnutdinov, M., Belafatti, F., Kubś, J., Šuminas, A., 2025. Impact of the Digital Services Act: A Facebook Case Study. [online] Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Impact-of-the-Digital-Services-Act-DIGITAL.pdf?zoom=page-fit [Back to Table of Contents] Frameworks to Counter Disinformation California Governor Fights Disinformation with New Fact-Checking Website Politico reports  that California Governor Gavin Newsom has launched CaliforniaFacts.com , a website aimed at combating statements deemed disinformation. The site targets explicitly narratives propagated by conservative media and influencers. Funded by his political action committee, Campaign for Democracy, it addresses misinformation spread by figures such as Donald Trump, Elon Musk, and anonymous X accounts. Newsom, who has criticized Democrats for failing to break through right-wing media ecosystems, presents the website as part of a broader strategy that includes social media responses, media appearances, and a podcast. ​ Source:    Politico, Jones, B., 2025. Newsom jabs at ‘MAGA trolls’ as he broadens information war. [online] Available at: https://www.politico.com/news/2025/05/07/gavin-newsom-fact-check-maga-00334581 [Back to Table of Contents] EEAS Efforts Against FIMI and Disinformation The European External Action Service (EEAS) presents  strategic communication as a key tool to counter foreign information manipulation and interference (FIMI). 
To implement this approach globally, it has established regional Task Forces that promote EU values, support local partners, and enhance societal resilience against disinformation. The East Stratcom Task Force (ESTF) focuses on the Eastern Partnership and Central Asia, working with civil society and media to deliver targeted campaigns such as “Share your Light,” particularly emphasizing Ukraine-related communication. In the Western Balkans, the WBTF engages in public diplomacy through initiatives like “Europeans in Action” and supports media literacy and independent journalism. The Task Force South (TFS) covers the Middle East and North Africa (MENA) region, monitoring disinformation, coordinating Arabic-language outreach, and assisting local journalists. The newest unit, the Sub-Saharan Africa Task Force (SSA TF), launched in 2023, empowers youth and media professionals through region-specific strategies like “Above the Noise.” All Task Forces are linked through the “Connecting Media Communities initiative,” launched in 2023. It brings journalists from various regions together to exchange best practices, build professional networks, and strengthen collective resilience to FIMI. Through these coordinated efforts, the EEAS works to uphold democratic values and foster informed, engaged societies worldwide. ​ Source:    European External Action Service, 2025. EEAS Strategic Communication Task Forces. [online] Available at: https://www.eeas.europa.eu/eeas/eeas-strategic-communication-task-forces_en [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation sought to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. 
Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation A false, inaccurate, or misleading information communicated regardless of the intention to deceive. Misformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. 
Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect. [Back to Table of Contents]

  • Weekly Report: Cyber-based influence campaigns, 19th - 25th of May 2025

    [Listen to the Podcast] [Report Highlights] Cybernews reports  that Telegram founder Pavel Durov has accused France’s foreign intelligence chief of pressuring him to block conservative voices in Romania ahead of national elections—a request he rejected. The DGSE denies any attempt at political interference. According to a report published by Graphika , a covert influence network aligned with Chinese interests has been uncovered on X. The network involves over 1,000 fake accounts designed to manipulate online discourse about U.S. Tariffs and Trade Policies. As described in the media outlets The Record , NewsGuard's Reality Check, and DFRLab Romania, the recent presidential election has become entangled in a wave of disinformation and unproven allegations, as defeated far-right candidate George Simion calls for the results to be annulled. DFRLab reports that a disinformation network linked to the French company DirectWay promoted false claims of election interference in Romania's 2025 presidential race to support a nationalist candidate and undermine democratic trust. According to a report by ABC News , during the recent conflict sparked by a deadly attack in Pahalgam, Indian-administered Kashmir, disinformation surged online with alarming speed and sophistication. In a recent report, The Record  reveals that the European Union has introduced a new sanctions package targeting individuals and organizations involved in Russia’s hybrid warfare operations, including disinformation, sabotage, and espionage activities across Europe and Africa. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [ X]    Telegram’s Durov Accuses France of Political Censorship Attempt STATE ACTORS [ Russia] Russia-Linked Disinformation Targets Elections in Romania and Poland Russia-Aligned TAG-110 Evolves Tactics in Ongoing Disinformation Campaign Targeting Tajikistan [ The War in Ukraine]    RAND Study Finds Limited Impact of Russian Propaganda Narratives [ China]    Graphika Exposes Chinese-Aligned Hostile Influence Campaign on X GENERAL REPORTS   The Potential and Risks of Meta’s Community Notes Program Far-Right Candidate Challenges Romania Election Results French-Linked Network Amplifies False Romanian Election Claims Europe at a Crossroads: Balancing AI Innovation and Regulation Disinformation Arises in India-Pakistan Conflict FRAMEWORKS TO COUNTER DISINFORMATION EU Renews Mission in Moldova to Combat Disinformation and Hybrid Threats EU Sanctions Target Russia's Hybrid Warfare Threats GLOSSARY DOWNLOAD SECTION REPORT HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [X] Telegram’s Durov Accuses France of Political Censorship Attempt Cybernews reports  that Pavel Durov, founder of the messaging app Telegram, accused Nicolas Lerner, head of France’s foreign intelligence agency (DGSE), of asking him to block conservative voices in Romania ahead of national elections. The meeting allegedly occurred this spring at the Hôtel de Crillon in Paris, where Durov is under judicial supervision. He said he refused, stating that Telegram does not censor protest movements in any country, including Russia, Belarus, and Iran. The DGSE denied the accusation, stating that meetings with Durov were strictly to remind him of his responsibilities in combating terrorism and child exploitation. It firmly rejected any involvement in electoral interference. 
Elon Musk reacted by reposting Durov’s statement on X with the comment: “Wow.” Musk has repeatedly criticized European governments for alleged suppression of right-wing political voices. Sources:   CyberNews, 2025. France asked the Telegram founder to ban conservative Romanian voices, he says . [online] Available at: https://cybernews.com/news/france-asked-the-telegram-founder-to-ban-conservative-romanian-voices-he-says/ [Back to Table of Contents] STATE ACTORS [Russia] Russia-Linked Disinformation Targets Elections in Romania and Poland   A report  by The Record highlights  increased Russian disinformation in Romania and Poland during their presidential elections. The Kremlin-backed campaign, known as Doppelgänger , used known tactics such as cloning official websites of institutions and media outlets to spread false narratives. In Romania, the campaign aimed to erode trust in democratic institutions. It pushed claims of government abuse, large-scale electoral fraud, and false reports that the election had been cancelled. Authorities had warned of such activity before the first round. Despite the efforts, centrist candidate Nicușor Dan won the May 18 runoff, defeating far-right nationalist George Simion with 53.6% of the vote. His victory reaffirmed Romania’s pro-EU and pro-NATO course. The campaign sought to undermine support for pro-European and pro-Ukrainian policies in Poland. Authorities detected foreign-funded disinformation on Facebook ahead of the first round. According to Ukraine’s military intelligence (HUR), the Doppelgänger campaign intensified in Poland, using fake accounts and bots on platforms like X to impersonate voters and amplify false messages. Key narratives included opposition to support for Ukraine, calls to exit the EU, and attacks on government policy. The election now heads to a runoff on June 1 between centrist Rafał Trzaskowski and nationalist Karol Nawrocki. The outcome will shape Poland’s EU role and stance on Ukraine. Source:    The Record, Antoniuk, D., 2025. Russia-linked disinformation floods Poland, Romania as voters cast ballots . [online] Available at: https://therecord.media/russia-disinformation-poland-presidential-election [Back to Table of Contents] Russia-Aligned TAG-110 Evolves Tactics in Ongoing Disinformation Campaign Targeting Tajikistan In a May 2025 assessment , Recorded Future’s Insikt Group analyzed  a cyber operation by the Russia-aligned actor TAG-110 targeting Tajikistan’s public sector. While the primary method was technical—phishing emails and macro-enabled Word documents—the campaign had a hostile influence dimension. The attackers embedded malware into files disguised as official communications, such as election schedules or defense-related notices. When opened, these files installed persistent code that granted long-term access to government, research, and educational systems. This access allowed for more than surveillance. The timing and content of the attacks indicate an intent to influence internal decision-making during politically sensitive periods, including elections and military activity. The operation blurred the line between information and manipulation by impersonating trusted documents. The goal was not simply to gather data, but to shape perception and disrupt institutional integrity, subtly guiding outcomes in ways favorable to Russian strategic interests. TAG-110’s campaign demonstrates how cyber capabilities can serve broader geopolitical objectives. 
The hostile influence aspect lies in using digital tools not just for intrusion, but to quietly steer political processes from within, without overt interference, yet with significant impact. Source: Recorded Future, 2025. Russia-Aligned TAG-110 Targets Tajikistan with Macro-Enabled Word Templates. [online] Available at: https://go.recordedfuture.com/hubfs/reports/cta-2025-0522.pdf [Back to Table of Contents] [The War in Ukraine] RAND Study Finds Limited Impact of Russian Propaganda Narratives In a May 2025 study, researchers at the RAND Corporation analyzed the spread and impact of Russia’s most extreme propaganda narratives related to the war in Ukraine. The focus was on four core themes: claims of “denazification,” the dehumanization of Ukrainians through slurs and hate speech, antisemitic narratives targeting President Zelenskyy, and anti-Western rhetoric portraying Ukraine as a puppet of liberal or LGBTQ agendas. While these narratives have circulated widely on platforms like X and Telegram, RAND’s analysis reveals that their influence is more limited than often assumed. The study examined over 43 million posts from 3.8 million users across 30 languages. The most virulent content, especially dehumanizing language, was primarily concentrated in Russian-language communities and struggled to gain traction internationally. On X, most users posting such content did not engage in dialogue; most extreme posts came from anonymous, unpopular accounts with little reach. On Telegram, similar dynamics were observed: Russian-language channels were active and often widely forwarded, but the most toxic narratives failed to break out of niche audiences. In contrast, many pro-Ukrainian voices had larger followings and greater visibility, effectively challenging Russian messaging in digital spaces. Overall, RAND concludes that while Russian propaganda is aggressive in scale, its resonance beyond Russian-speaking networks remains shallow. Source: RAND Corporation, Treyger, E., Williams, H. J., & D'Arrigo, A., 2025. Measuring the Reach of Russia’s Propaganda in the Russia-Ukraine War. [online] Available at: https://www.rand.org/pubs/research_briefs/RBA3450-2.html [Back to Table of Contents] [China] Graphika Exposes Chinese-Aligned Hostile Influence Campaign on X A new report by Graphika has identified a covert, pro-Chinese influence network operating on X (formerly Twitter). The network appears to have been designed to shape international discourse around U.S. tariffs and trade policy. According to Graphika, over 1,000 fake accounts were identified. Using stolen content and carefully constructed counterfeit personas, the operators posed as authentic users from the United States, Canada, the United Kingdom, and Japan. At the heart of the campaign was an effort to undermine the United States' trade policies under President Donald Trump. The fake profiles also promoted the narrative that Japan, Canada, and the United Kingdom were resisting political pressure from Washington—a portrayal deliberately framed as grassroots discourse. Beyond trade-related topics, the network disseminated content aligned with China’s broader geopolitical agenda. This included critical narratives about the U.S. military presence in Japan and promotional content for Chinese government-backed tourism initiatives. 
While the network could not be definitively linked to a specific state actor, Graphika concluded that the combination of tactics, content, and behavioral patterns strongly suggests a pro-Chinese influence operation, with notable similarities to previously documented activities attributed to Chinese state actors. Despite temporarily easing U.S.–China trade tensions, the report warns that covert efforts to sway Western public opinion will likely persist.  Source: Graphika, le Roux, J., 2025. Tariff Tirade: China-Aligned Network Poses as Grassroots Voices in Effort to Covertly Boost Online Narratives Critical of US Tariffs and Trade Policies . [online] Available at: https://public-assets.graphika.com/reports/graphika_report_tariff_tirade.pdf [Back to Table of Contents] GENERAL REPORTS The Potential and Risks of Meta’s Community Notes Program According to an article  by The Conversation , Meta is preparing to launch its Community Notes program in Canada following its rollout in the U.S. in March 2025. The initiative allows users to add context to misleading posts. Notes are only made public if they receive approval from users with differing perspectives, reflecting a decentralized, consensus-based approach. Key insights come from X (formerly Twitter), which has operated a similar system, launched initially as “Birdwatch”, since 2021. Studies indicate that Community Notes on X can lead to the voluntary deletion of flagged posts and encourage contributors to use more moderate, fact-based language. One of the program’s most widely praised features is transparency: X has made its data and algorithms publicly accessible, allowing independent researchers to monitor and evaluate the system. However, significant weaknesses have also emerged. Fewer than 9% of submitted notes are published due to the high threshold for cross-perspective agreement. In politically polarized environments, this model often fails. Moreover, there is a real risk of manipulation by coordinated groups aiming to discredit accurate content through mass reporting. Another critical limitation is that neither X nor Meta penalizes users who spread misinformation. Platforms avoid direct intervention, shifting responsibility to users. Critics argue that without consequences, Community Notes risks becoming a symbolic gesture rather than a meaningful tool against disinformation. For Community Notes to be effective in Canada, Meta must address these structural flaws, learning from failures seen on X. Source:    The Conversation, Borwankar, S., 2025. Meta’s Community Notes program is promising, but needs to prioritize transparency. [online] Available at: https://theconversation.com/metas-community-notes-program-is-promising-but-needs-to-prioritize-transparency-248324 [Back to Table of Contents] Far-Right Candidate Challenges Romania Election Results A recent Reality Check from NewsGuard details that after Romania’s presidential election on May 18, 2025, pro-Kremlin and nationalist X users spread false claims of electoral fraud in favor of Nicușor Dan, who defeated pro-Russian candidate George Simion in the runoff. Baseless allegations of election fraud were amplified by French conspiracy influencers and by the Pravda network, a known Russian-controlled disinformation outlet. These claims accused France, Moldova, and the European Union of orchestrating the alleged manipulation. Romanian authorities and OSCE observers confirmed the election was free, fair, and held under democratic conditions. 
The Record  further reports that Simion called for the election to be annulled in response to the result. He cited claims by Telegram founder Pavel Durov, who alleged that French authorities had pressured the platform to silence “conservative voices in Romania.” French officials dismissed the accusation. Earlier this year, Romanian authorities annulled the first round of voting due to confirmed Russian interference, including a coordinated disinformation campaign on TikTok and other platforms. Another report by DFRLab outlines  how George Simion , once known for his anti-Russian unionist activism in support of Romanian-Moldovan reunification, has increasingly echoed Kremlin-aligned narratives. He has opposed military aid to Ukraine and has framed the Russia-Ukraine war as a “fraternal conflict.” Simion’s shift toward pro-Russian messaging has drawn praise from Kremlin-aligned Moldovan figures such as Igor Dodon and support from disinformation networks linked to fugitive oligarch Ilan Shor, known for spreading false narratives online ( See our Weekly Report, W20, May 2025, for further details ). ​ Source:    NewsGuard's Reality Check, Badilini, S., 2025. After Romania Elects Pro-EU Candidate, Pro-Russian Accounts Claim Election Was Stolen. [online] Available at:   https://www.newsguardrealitycheck.com/p/after-romania-elects-pro-eu-candidate The Record, Antoniuk, D., 2025. Defeated Romanian far-right candidate calls for court to annul election over alleged interference. [online] Available at:   https://therecord.media/romania-election-annul-simion-george DFRLab, Olari, V., 2025. From Bucharest to Chisinau: How pro-Kremlin networks shaped Romania’s 2025 election . [online] Available at: https://dfrlab.org/2025/05/16/pro-kremlin-networks-shaping-romania-2025-election/ [Back to Table of Contents] French-Linked Network Amplifies False Romanian Election Claims A 2025 Digital Forensic Research Lab (DFRLab) investigation  uncovered a network of 15 websites, three of which actively repost Romanian-language content from far-right and fringe sources, linked to the France-based company DirectWay . The company operates the news aggregator Ziar[.]com, identified as a source of disinformation, and the X account @_Direct_News, which promoted claims of election interference by the European Union and France during Romania’s presidential elections. During the 2025 vote, both platforms circulated false claims declaring nationalist candidate George Simion the winner. Official results confirmed the victory of pro-European candidate Nicușor Dan. The campaign relied on material from outlets such as Realitatea Plus , which was fined for partisan election-day coverage. Technical analyses showed that the network shares Google Analytics and AdSense codes, indicating centralized control. Historical data revealed that the network had targeted up to 13 African countries. Most of these domains were later redirected to direct[.]news, which publishes region-specific content for 55 African and 47 Asian countries. An AdSense reverse lookup traced the network's administration to a Romanian national based in Lyon, France, listed as DirectWay ’s director in the French corporate registry. The case study demonstrates how a modular, transnational digital infrastructure can be repurposed to spread digital hostile influence campaigns across regions, challenging electoral integrity and democratic stability. Source:    DFRLab, Châtelet, V., 2025. Online network with French ties promotes election interference claims in Romania. 
[online] Available at: https://dfrlab.org/2025/05/23/online-network-with-french-ties-promotes-election-interference-claims-in-romania/ [Back to Table of Contents] Europe at a Crossroads: Balancing AI Innovation and Regulation In a recent analysis, Carnegie Europe characterizes the European Union as a global pioneer in AI governance, citing the AI Act as the first comprehensive legal framework for artificial intelligence. Complemented by instruments like the Digital Services Act and the GDPR, this approach reflects the EU’s commitment to ethics and fundamental rights. However, the Union is increasingly shifting toward innovation, with initiatives such as AI factories and the EuroStack project to enhance technological sovereignty. The deregulatory turn has been framed as a necessary response to geopolitical pressure and technological competition from the U.S. and China. Yet the report suggests that Europe’s key barriers to innovation may lie more in structural weaknesses—such as limited access to venture capital, fragmented markets, and reliance on foreign infrastructure—than in regulation itself. Recent policy changes, including the withdrawal of the proposed AI liability directive and the inclusion of national security exemptions in the AI Act, may risk weakening oversight and fundamental rights protections. The EU now faces a strategic dilemma: balancing its role as a global standard-setter in ethical AI with the need to remain technologically competitive. The regulation of dual-use AI, applicable in civilian and military contexts, remains particularly unresolved. According to Carnegie Europe, a viable path forward would involve greater investment, sovereign digital infrastructure, and a binding framework for dual-use AI. A balanced approach linking innovation with responsible regulation may be key to preserving Europe’s autonomy and democratic values. Source: Carnegie Endowment for International Peace, Csernatoni, R., 2025. The EU’s AI Power Play: Between Deregulation and Innovation. [online] Available at: https://carnegieendowment.org/research/2025/05/the-eus-ai-power-play-between-deregulation-and-innovation?lang=en [Back to Table of Contents] Disinformation Arises in India-Pakistan Conflict ABC News reports that following the deadly attack in Pahalgam and the subsequent military escalation between India and Pakistan, a parallel wave of disinformation spread rapidly across platforms like X, WhatsApp, Facebook, and YouTube. AI-generated deepfakes, recycled footage, and fabricated stories distorted public perception and fueled nationalist sentiment (we previously covered the developments between India and Pakistan in our Weekly Reviews 19 and 20). Prominent examples included doctored images of Rawalpindi Stadium in ruins, a deepfake video of a Pakistani general appearing to admit defeat, and video game clips shared as real airstrikes. A fake Daily Telegraph front page praising Pakistan’s air force was also widely circulated. ABC highlights how even mainstream media broadcast unverified content. One video showed a couple dancing in Kashmir, falsely framed as their final moments before death. Despite the couple confirming they were alive, the footage continued to spread. The Digital Rights Foundation recorded a surge in hate speech, while India’s blocking of 17 Pakistani YouTube channels and several X accounts, including those of journalists, raised censorship concerns. Fact-checkers like BOOM Live and AFP were overwhelmed. 
Of 437 X posts reviewed, 179 were from verified accounts, yet only 73 included community notes. Experts warn that disinformation will remain a powerful weapon in digital-age conflicts without stronger moderation and verification tools. Source: ABC News, Hogan, L., 2025. Misinformation war rages online amid India-Pakistan tensions. [online] Available at: https://www.abc.net.au/news/2025-05-24/misinformation-online-war-kashmir-conflict-india-pakistan/105318696 [Back to Table of Contents] FRAMEWORKS TO COUNTER DISINFORMATION EU Renews Mission in Moldova to Combat Disinformation and Hybrid Threats The European Union has extended  the mandate of the EU Partnership Mission in Moldova (EUPM Moldova) until May 2027. The mission aims to strengthen Moldova’s resilience against hybrid threats, with a strong focus on foreign disinformation and information manipulation. Launched in 2023 at the request of the Moldovan government, EUPM Moldova is the EU’s first civilian mission explicitly designed to counter such threats. It provides strategic advice and operational support in crisis management, cybersecurity, and communication integrity. One key achievement is the support for establishing Moldova’s Centre for Strategic Communication and Countering Disinformation (StratCom Centre). The mission provided training, tools, and EU best practices to help identify and respond to false or manipulative narratives. Since its launch, EUPM Moldova has carried out over 60 capacity-building activities. Specialized teams work closely with Moldovan authorities to improve institutional responses. The mission has become a strategic partner in building sustainable security and protecting democratic processes from foreign interference. Source: European External Action Service, 2025. EUPM Moldova: Moving forward towards sustainable security resilience in Moldova. [online] Available at: https://www.eeas.europa.eu/eupm-moldova/eupm-moldova-moving-forward-towards-sustainable-security-resilience-moldova_en EU Sanctions Target Russia's Hybrid Warfare Threats According to a report  by The Record , the European Union has introduced a new sanctions package targeting individuals and entities involved in Russia’s hybrid warfare operations. The measures focus on actors linked to disinformation, sabotage, and espionage activities across Europe and Africa. Those sanctioned include members of Russia’s military intelligence agency (GRU), individuals spreading pro-Kremlin narratives on social media, and companies providing technical infrastructure, such as web hosting and GPS jamming technologies, that support these efforts. A key target of the sanctions is Voice of Europe, a media outlet reportedly secretly funded by pro-Russian Ukrainian politician Viktor Medvedchuk. The platform allegedly ran influence operations across the continent, including attempts to covertly finance candidates in the 2024 European Parliament elections. The sanctions list also includes media figures involved in disseminating Russian disinformation in African countries and the operators of Stark Industries, a U.K.-based hosting provider. The company is believed to have supported Kremlin-aligned cyber and influence campaigns, including the well-known Doppelgänger operation to manipulate public opinion in the West. Sources: The Record, Antoniuk, D., 2025. EU sanctions target individuals, organizations behind Russia’s disinformation and sabotage operations. 
[online] Available at: https://therecord.media/eu-sanctions-orgs-individuals-tied-to-russia-disinformation [Back to Table of Contents] [Download Report] [Back to Table of Contents]

  • Weekly Report: Cyber based influence campaigns 16th - 22nd of June 2025

    [Listen to the Podcast] [ Report Highlights ] As published by La Derecha Diario , Argentina’s State Intelligence Secretariat (SIDE) has uncovered a covert Russian spy network operating within the country, focused on disinformation and influence campaigns aligned with Kremlin interests. According to a Graphika report, a covert network of over 1,000 X accounts has been exposed for engaging in coordinated disinformation efforts surrounding the U.S.-China trade war. New Eastern Europe reports that foreign meddling was lower than expected despite Poland's "election umbrella" strategy of monitoring, training, and cybersecurity to counter anticipated Russian interference in its June 1st presidential election. The ICT reports that Iran is using AI-generated disinformation on TikTok to fabricate battlefield victories, ridicule Israeli leadership, and shape global perceptions following Israel’s June 2025 strikes. The Oxford Internet Institute reports that Republican tweets on X were flagged as misleading nearly twice as often as Democratic ones, highlighting a partisan imbalance in misinformation identified by the platform’s Community Notes system. Logically and EU DisinfoLab report that CIB networks are spreading climate disinformation across Europe through conspiratorial, culture-war, and pro-fossil fuel narratives to undermine public trust and stall climate action. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS SOCIAL MEDIA PLATFORMS [X] X Corp. Challenges New York Law on Disinformation Reporting STATE ACTORS [Russia] Argentina Identifies Russian Agents Involved in Hostile Influence Africa’s Health Crisis and Russian Influence The Kremlin Exploits Crises for Hostile Influence [China]    Graphika Report Reveals Hostile Chinese Influence Operation Chinese Influence Efforts in Latin America [Iran] Iran's AI-Fueled Disinformation After Israel Attacks Social Media Actors Fuel Confusion Amid the Israel-Iran Conflict Islamic Republic of Iran Broadcasting's Global Propaganda Efforts Revealed Evolution of Iran’s AI-Powered TikTok Campaign AI-RELATED ARTICLES   South Korea’s AI Strategy Prioritizes Innovation, Overlooks National Security Disinformation and Deepfakes in the 2025 Canadian Election GENERAL REPORTS Sudan’s Digital War: A Laboratory for Hostile Influence Operations Partisan Asymmetry in Misinformation Sharing on X Unpacking Climate Disinformation: Key Insights from the HEAT Project FRAMEWORKS TO COUNTER DISINFORMATION Disinformation Research Under Political Attack in the U.S. Poland Fights Russian Election Interference GLOSSARY DOWNLOAD SECTION SOCIAL MEDIA PLATFORMS [X] X Corp. Challenges New York Law on Disinformation Reporting As published by TechXplore , Elon Musk’s X Corp. has filed a lawsuit against a New York law requiring large social media companies to report how they define moderate disinformation, hate speech, and other harmful content. X argues the law violates the First Amendment by compelling disclosure of internal moderation practices, claiming it pressures platforms to limit constitutionally protected speech. The law, known as the “Stop Hiding Hate Act,” requires companies with over $100 million in annual revenue to submit semiannual transparency reports; noncompliance is punishable by daily fines. X contends that the law reflects “viewpoint discriminatory motives,” pointing to lawmakers’ refusal to amend the bill following a similar law's rejection in California. 
The platform positions itself as defending user rights and editorial independence, stating this legal battle is about resisting government overreach into content moderation. Critics, however, argue that transparency regarding disinformation and hate moderation is essential and does not infringe on free speech. Advocacy groups, such as Reporters Without Borders, argue that requiring platforms to explain how they handle misinformation is the bare minimum for accountability in digital spaces. New York legislators insist the law is narrowly focused on transparency, not censorship, allowing users to make informed choices about the platforms they engage with. Source: TechXplore, Zinin A., Musk’s X sues to block New York social media transparency law, 2025, [online]; Available at: https://techxplore.com/news/2025-06-musk-sues-block-york-social.html [Back to Table of Contents] STATE ACTORS [Russia] Argentina Identifies Russian Agents Involved in Hostile Influence La Derecha Diario states that Argentina’s State Intelligence Secretariat (SIDE) has uncovered a covert Russian spy network operating within the country, focused on disinformation and influence campaigns aligned with Kremlin interests. The findings are detailed in a report that SIDE has not yet released. The operation, “La Compañía,” was allegedly tied to the notorious “Lakhta Project,” a global interference initiative previously led by Russian oligarch Yevgeniy Prigozhin. Russian nationals Lev Andriashvili and Irina Iakovenko, residing in Argentina, reportedly coordinated these efforts, which included recruiting local citizens, promoting false narratives, and establishing connections with sympathetic organizations. The network's activities included creating and disseminating manipulated political content on social media, infiltrating civil society groups, organizing focus groups to gather sensitive data, and collecting political intelligence to aid foreign operations. Authorities warn that these disinformation tactics were designed to destabilize Argentina’s institutions and distort democratic processes, posing a serious threat to national sovereignty and public discourse. In response, Argentine officials reaffirmed their commitment to safeguarding internal security and resisting foreign influence. Source: La Derecha Diario, Editorial Team, These are the Russian spies SIDE caught in Argentina, 18 June 2025, [online]; Available at: https://derechadiario.com.ar/us/argentina/these-are-the-russian-spies-that-side-argentina-captured [Back to Table of Contents] Africa’s Health Crisis and Russian Influence EUvsDisinfo reports that Russian-linked actors are exploiting the mpox outbreak in Africa for a disinformation campaign. The virus, primarily found in Central and West Africa, saw a new strain spread rapidly in 2023, leading the WHO to declare a global health emergency in 2024. The campaign, mainly active in francophone regions, spreads false claims that mpox was deliberately created by Western powers to harm African populations and benefit pharmaceutical companies. It also falsely links outbreaks to LGBTIQ+ communities, increasing stigma. Kremlin state media praises Russian vaccines while casting doubt on Western aid, aiming to portray the West and organizations like the WHO as responsible for outbreaks and unsafe vaccines, positioning Russia as Africa’s trusted partner. This disinformation fuels fear around diseases such as mpox, Marburg, and Ebola, deepening social divisions.
The campaign resembles the 1980s Soviet “Operation Denver,” which falsely claimed HIV/AIDS was a U.S. bioweapon and contributed to over 330,000 preventable deaths in South Africa due to delayed treatment. Source: EUvsDisinfo, Mpox, myths and media manipulation, 2025, [online]; Available at: https://euvsdisinfo.eu/mpox-myths-and-media-manipulation/ [Back to Table of Contents] The Kremlin Exploits Crises for Hostile Influence EUvsDisinfo highlights that Russia strategically leverages current crises to strengthen its geopolitical influence. A recent example is the escalation between Israel and Iran, which Russian media use to portray Iran as a peaceful nation while criticizing Israeli military actions. At the same time, Russia’s invasion of Ukraine receives limited coverage. This selective application of international law reinforces Russia’s position and weakens its opponents. Additionally, Russia seeks to present itself as a mediator in the Middle East, primarily serving its strategic interests. Despite this stance, reports of civilian casualties and attacks on critical infrastructure in Ukraine remain largely unaddressed. The crisis in the Middle East also benefits Russia economically, with rising oil prices supporting its strained economy, while international attention is diverted away from Ukraine. These developments illustrate how Russia uses ongoing conflicts to shape international norms and pursue its geopolitical goals. Source: EUvsDisinfo, Never let a crisis go to waste, 2025, [online]; Available at: https://euvsdisinfo.eu/never-let-a-crisis-go-to-waste/ [Back to Table of Contents] [China] Graphika Report Reveals Hostile Chinese Influence Operation Graphika’s findings describe a coordinated digital disinformation campaign involving over 1,000 fake X accounts that sought to influence public perception of the US-China trade conflict and global tariffs. The accounts used counterfeit identities, copied content from legitimate users, and amplified specific hashtags and media articles. The campaign aimed to portray US trade policies, particularly President Trump’s tariffs, as harmful to the American and global economies. Sub-networks focused on countries such as the UK, Canada, and Japan, depicting them as resisting US pressure. Beyond trade issues, the network also promoted China’s broader geopolitical agenda, including criticism of the US military presence in Japan and favorable coverage of Chinese tourism initiatives. Although no direct attribution to a known actor was established, the content and tactics indicate a likely alignment with Chinese state interests. Source: Graphika, China-Aligned Network Poses as Grassroots Voices in Effort to Covertly Boost Online Narratives Critical of US Tariffs and Trade Policies, 2025, [online]; Available at: https://graphika.com/reports/tariff-tirade#download-form [Back to Table of Contents] Chinese Influence Efforts in Latin America According to the Global Influence Operations Report (GIOR), China is rapidly expanding its propaganda network in Latin America through sophisticated disinformation strategies aimed at shaping public opinion and suppressing criticism. As reported by Diálogo Américas on June 13, 2025, Beijing employs various tactics, including journalist training programs, strategic media partnerships, and covert networks.
These efforts involve specific actors, such as the Chinese publishing group Caixin Media, state-funded training programs for Latin American journalists, and collaborations with local outlets, including Peru's América TV and El Comercio. They also leverage platforms like BYD's Zhengzhou plant tours and state media groups like China Media Group. The broader implications of this expanding Chinese influence operation highlight a comprehensive campaign to dominate media landscapes and erode trust in independent journalism across the region. China utilizes strategies like the "Paperwall" campaign, involving fake local newspaper websites (123 sites across 30+ nations), and "borrow a boat to go out to sea" tactics to disseminate pro-Beijing content in foreign media, both openly and covertly, often camouflaging it with commercial press releases to obscure its true origin. This strategic manipulation of information aims to neutralize local media on issues related to Beijing, devalue professional journalistic values, and ultimately advance China's political and economic interests in Latin America by controlling narratives and suppressing critical voices. Sources: GIOR, Beijing’s Expanding Propaganda Network in Latin America, 2025, [online]; Available at: https://www.global-influence-ops.com/beijings-expanding-chinese-propaganda-network-latin-america/ Diálogo Américas, China’s Influence Operations in Latin American Media, 2025, [online]; Available at: https://dialogo-americas.com/articles/chinas-influence-operations-in-latin-american-media/ [Back to Table of Contents] [Iran] Iran's AI-Fueled Disinformation After Israel Attacks The recent developments in the Israel-Iran conflict vividly illustrate how digital hostile influence has become a key instrument in modern warfare. Following Israeli airstrikes in June 2025, Iranian state-controlled media orchestrated a coordinated effort, leveraging advanced AI technology to create a manipulated information environment. NewsGuard identified  at least 16 targeted false claims across more than 50 platforms, accompanied by AI-generated images and videos, including fabricated reports of downed Israeli F-35 fighter jets and Prime Minister Netanyahu fleeing ( NewsGuard ). These Iranian efforts use digital technologies to boost the credibility of their narratives and strategically shape public perception. Simultaneously, Russian actors intensify their digital campaigns aimed at undermining the trust in Western weapons systems ( GIOR ). In the Israel-Iran context, methods are employed to question the reliability of Israeli defense systems—such as the Iron Dome—and Western arms supplied to Israel, aiming to weaken confidence in Israel’s military capabilities and indirectly strengthen Iran’s position. France 24, citing AFP , reports that advanced AI tools like Google’s Veo 3 increasingly blur the line between reality and fabrication. AI-generated videos and clips repurposed from war-themed video games rapidly circulate on social media platforms and are sometimes falsely validated by AI chatbots like xAI’s Grok. Weak content moderation on major platforms exacerbates the trust crisis and facilitates the rapid spread of deceptive content. This case underscores how digitally hostile influence, empowered by technological innovation, is growing increasingly sophisticated, more complex to detect, and globally interconnected. 
A combined approach of advanced detection technologies, international cooperation, robust regulation, and strengthened societal resilience is necessary to confront this escalating threat effectively. Sources: NewsGuard, Sadeghi M., Howard S., Lin C., Iranian State-Affiliated False Claims Tracker: 22 Myths about the War and Counting, 2025, [online]; Available at: https://www.newsguardtech.com/special-reports/israel-iran-conflict/ NewsGuard, Sadeghi M., When Down is Up: Top 10 Iranian State Media False Claims of Losses by Israel, 2025, [online]; Available at: https://www.newsguardrealitycheck.com/p/when-down-is-up-top-10-iranian-state Global Influence Operations Report (GIOR), Israel-Iran Conflict and AI Disinformation: 100 Million Views on Fake Content, 2025, [online]; Available at: https://www.global-influence-ops.com/israel-iran-conflict-ai-disinformation-100m-views-on-fake-content/ France 24, Tech-fueled misinformation distorts Iran-Israel fighting, 2025, [online]; Available at: https://www.france24.com/en/live-news/20250621-tech-fueled-misinformation-distorts-iran-israel-fighting [Back to Table of Contents] Social Media Actors Fuel Confusion Amid the Israel-Iran Conflict The Centre for Information Resilience (CIR) article on the Israel-Iran escalation highlights  key challenges for OSINT practitioners. It provides essential insights into the complexity of modern information environments in conflict situations. It emphasizes the need for a systematic approach, technical expertise, and critical analysis to ensure reliable source evaluation. Distinguishing authentic from false information in real time proves especially challenging when direct on-the-ground verification is unavailable. The widespread use of outdated or misattributed images and videos can significantly distort situational assessments. Examples of widely circulated inaccurate content are highlighted, alongside the increasing presence of highly realistic AI-generated images. This situation demands that OSINT analysts consistently refine their detection techniques and uphold a critical mindset toward potentially manipulated material. Furthermore, the analysis reveals that even social media accounts with large followings are unreliable sources. Careful verification and cross-referencing with multiple trusted sources are therefore essential. Finally, the report stresses the critical role of satellite and geospatial data as independent and dependable tools for verifying events and damages. Their systematic and methodical integration into OSINT workflows is crucial for producing accurate and reliable situational awareness. Sources: CIR, Israel–Iran: Monitoring updates, verifying damage and debunking misinformation, 2025, [online]; Available at: https://www.info-res.org/articles/israel-iran-monitoring-updates-verifying-damage-and-debunking-misinformation/ [Back to Table of Contents] Islamic Republic of Iran Broadcasting's Global Propaganda Efforts Revealed The Global Influence Operations Report (GIOR) reveals  how the Iranian state broadcaster IRIB airs coerced televised confessions produced in collaboration with the Ministry of Intelligence and the Islamic Revolutionary Guard Corps (IRGC). These confessions are used to intimidate domestic dissent and deliberately shape international narratives. The report highlights IRIB’s extensive reach, with 19 national channels and broadcasts in 32 languages. 
Additionally, the broadcaster conducts covert online influence campaigns involving propagandistic proxy websites and botnets, amplifying regime-aligned messages while concealing their origins. Key actors include IRIB and its security-affiliated partners, such as the IRGC and intelligence services, digital proxies, and seemingly “independent” media outlets distributing content globally. FDD researchers Bridget Toomey and Toby Dershowitz emphasize  the role of IRIB-produced coerced programming—often called “Torture TV”—as a core component of Tehran’s psychological warfare. Sources: Global Influence Operations Report (GIOR), Iran’s IRIB Propaganda Network: “Torture TV” Operations Exposed, 2025, [online]; Available at: https://www.global-influence-ops.com/iran-irib-torture-network-exposed/ Foundation for Defense of Democracies (FDD), Toomey B., Dershowitz T., 10 Things to Know About Tehran’s Propaganda Network, the Islamic Republic of Iran Broadcasting, 2025, [online]; Available at: https://www.fdd.org/analysis/2025/06/18/10-things-to-know-about-tehrans-propaganda-network-the-islamic-republic-of-iran-broadcasting/ [Back to Table of Contents] Evolution of Iran’s AI-Powered TikTok Campaign A recent article  by the ICT reveals  a marked escalation in Iran’s use of AI-generated content on TikTok, reflecting a strategic evolution in its information warfare operations during the current conflict. Compared to previous influence campaigns, including those following the October 7 attacks, this latest attack demonstrates a significant leap in scale and technical execution. This transformation is driven by a convergence of increased Iranian capability and the proliferation of advanced generative tools. The campaign’s multilingual architecture suggests a deliberate attempt to target segmented audiences with tailored narratives. Arabic and Farsi content promotes regional solidarity and anti-Israel messaging. Hebrew-language materials aim to induce psychological pressure within Israel. English-language videos target global public opinion, while content in East Asian languages appears designed to exploit local narratives and broaden geographic reach. The report identifies several core narrative frames: simulated large-scale destruction in Israeli urban centres, a portrayal of sustained Iranian military momentum, and the use of satire to delegitimize Israeli and U.S. leadership. These themes are amplified through synthetic video and imagery to manipulate perception and erode adversarial morale. Perhaps most remarkable is the emerging tactic of weaponizing AI-powered fact-checking tools to authenticate fabricated content falsely . This reflects an adaptive approach to undermining trust in legitimate information channels.  Source: International Institute for Counter-Terrorism (ICT), Haberfeld D., Azani E., Iranian TikTok Campaign Seeks to Shape War Perceptions Using AI, 2025, [online]; Available at: https://ict.org.il/iranian-tiktok-campaign-seeks-to-shape-war-perceptions-using-ai/ [Back to Table of Contents] AI RELATED ARTICLES South Korea’s AI Strategy Prioritizes Innovation, Overlooks National Security  As published  by CSIS , in January 2025, South Korea passed the "AI Basic Act," a comprehensive legislative effort to promote artificial intelligence development and build public trust in its use. Set to take effect in January 2026, the law outlines national policies supporting AI innovation, industry development, data infrastructure, ethical standards, transparency requirements, and safety obligations. 
It also introduces regulations for high-risk and generative AI systems. Experts argue that South Korea’s institutional approach remains fragmented despite its scope. The current strategy mainly focuses on ethics and economic growth, leaving critical gaps in addressing malicious information operations and national security threats. A significant limitation is the act’s exclusion of defence and security applications. Article 4 explicitly states that AI's national defence and security uses fall outside the law’s scope, based on the assumption that other agencies will manage these areas separately. However, there is no legal framework beyond the act to guide the use of AI in defence or security, making coordinating and implementing effective strategies challenging. Experts recommend a security-centred update to South Korea’s AI policy to address these shortcomings. This would include establishing a dedicated institutional framework for AI in national defence, clarifying the roles of relevant agencies, and potentially creating a new organization focused on AI-related national security threats. Strengthening technical capabilities in AI-driven threat detection and cyberattack prediction is also essential. Ultimately, for South Korea to fully leverage AI, it must recognize the technology not only as a driver of innovation but also as a strategic asset in national defence. Sources:    Center for Strategic and International Studies (CSIS), Bae S., Kim S.J., AI Security Strategy and South Korea’s Challenges, 2025, [online]; Available at: https://www.csis.org/analysis/ai-security-strategy-and-south-koreas-challenges [Back to Table of Contents] Disinformation and Deepfakes in the 2025 Canadian Election DFR Lab published an article  reporting  on the viral deepfake video that circulated, depicting Canadian Prime Minister Mark Carney announcing sweeping auto regulations ahead of Canada’s 2025 federal election. Although labelled as AI-generated and debunked by fact-checkers, the clip still racked up millions of views on TikTok and X, showing how misinformation can thrive even after exposure. The video’s reach extended well beyond Canada. Many U.S.-based users, already immersed in their political tensions, shared and reacted to the false narrative with alarm. Influencers played a key role in amplifying the content, often reposting the claims without attribution or context. Even after the platforms removed the original footage, echoes of the misinformation persisted. This incident highlights the growing challenge of AI-generated disinformation during politically sensitive periods. Fabricated content that imitates public figures or references polarizing policies can spread faster than social media platforms can respond. Experts caution that as AI tools become more sophisticated and widely available, traditional content moderation may be insufficient to contain the spread of false but convincing narratives. DFR claims that the Carney deepfake offers a warning: in the digital age, political perception can be easily manufactured and manipulated. 
Source: Digital Forensic Research Lab (DFRLab), Deepfake video of Canadian Prime Minister reaches millions, 2025, [online]; Available at: https://www.dfrlab.org/2025/06/19/deepfake-video-of-canadian-prime-minister-reaches-millions-on-tiktok-x/ [Back to Table of Contents] GENERAL REPORTS Mistrustful and Skeptical, Australians Still Turn to Social Media for News According to an article by The Conversation, as traditional news consumption declines, more Australians are turning to social media for information, particularly from influencers, yet trust in these sources remains low. According to the report, 57% of Australians see influencers as the leading source of misinformation, surpassing concerns about activists, political actors, and even the news media. Despite social media’s growing role, especially among younger users on platforms like TikTok, Australians remain deeply concerned about what is real or fake online, with 74% expressing worry. A third of respondents believe platforms like TikTok and YouTube aren’t doing enough to remove harmful or misleading content. At the same time, recent political efforts to regulate misinformation, such as the now-withdrawn ‘Combatting Misinformation and Disinformation Bill’, illustrate the complexity of the debate over who should be responsible for truth online: platforms, governments, or users themselves. The Conversation suggests that improving media literacy is a vital solution. The survey found that individuals who have received news literacy education are more likely to verify information, trust credible sources, and engage with the news more frequently overall. However, only 24% of Australians reported receiving such training. Source: Jamestown Foundation, Bolkunets D., YouTube Restrictions on Belarus and Russia Strengthen Minsk’s Propaganda, 2025, [online]; Available at: https://jamestown.org/program/youtube-restrictions-on-belarus-and-russia-strengthen-minsks-propaganda/ [Back to Table of Contents] Sudan’s Digital War: A Laboratory for Hostile Influence Operations An article by LSE details how Sudan’s civil war has escalated into the digital domain. LSE claims that Sudan has, in effect, become a laboratory for hostile digital influence campaigns that are unregulated, coordinated, and increasingly effective. Armed actors, particularly the Rapid Support Forces (RSF), have turned social media into a battleground for narrative control. Platforms like TikTok, WhatsApp, and Telegram are used to distribute emotionally charged, stylized content crafted to bypass critical scrutiny and provoke viral reactions. Disinformation is not incidental but systematic, multilingual, and designed to sway local and global audiences. Campaigns such as [#TheArmyKillsCivilians] and [#ProtectSudaneseCivilians] flood the digital space with polarizing messages, while hashtags like [#HumanitarianAid] target international sentiment to shape diplomatic outcomes. Arabic content tends to evoke loyalty and fear, while English-language posts are tailored to elicit sympathy and frame the RSF in a more favourable light abroad. These operations are timed with military actions or negotiations, revealing a deliberate strategy to manipulate perception and influence political trajectories. In the absence of digital governance, Sudan exemplifies how fragile states become proving grounds for aggressive influence operations. What is unfolding in Sudan is not just narrative warfare.
It is an unregulated testbed for the next generation of digital conflict, where truth is malleable and perception can be weaponized. Source: LSE, Ahmed Adam L.M., Digital propaganda and the steering of mass narratives have shaped the war in Sudan, 2025, [online]; Available at: https://blogs.lse.ac.uk/africaatlse/2025/06/16/digital-propaganda-and-the-steering-of-mass-narratives-have-shaped-the-war-in-sudan/ [Back to Table of Contents] Partisan Asymmetry in Misinformation Sharing on X In a new study published in PNAS, the Oxford Internet Institute (OII) reports that two-thirds of posts flagged by X’s Community Notes as potentially misleading were written by Republicans, compared to just one-third written by Democrats, underscoring a significant partisan asymmetry in misinformation sharing. The OII analysis covered English-language tweets and found that Republican posts were disproportionately flagged across topics such as health, politics, science, and the economy—for example, they accounted for 81% of flagged health-related tweets and 73% of those on politics. “Our findings provide strong evidence of a partisan asymmetry in misinformation sharing which cannot be attributed to political bias on the part of fact-checkers or academic researchers”, said Professor Mohsen Mosleh, Associate Professor of Social Data Science at the Oxford Internet Institute. At the same time, the study credits X’s crowd-sourced fact-checking mechanism, Community Notes, for reliably identifying misleading content through a diverse rating system that minimizes partisan bias. Source: Oxford Internet Institute, New study finds Republicans flagged for posting misleading tweets twice as often as Democrats on X/Twitter’s Community Notes, 2025, [online]; Available at: https://www.oii.ox.ac.uk/news-events/new-study-finds-republicans-flagged-for-posting-misleading-tweets-twice-as-often-as-democrats-on-x-twitters-community-notes/ [Back to Table of Contents] Unpacking Climate Disinformation: Key Insights from the HEAT Project The HEAT project (Harmful Environmental Agendas & Tactics), produced by Logically and EU DisinfoLab, has recently concluded its comprehensive study of climate-related misinformation narratives across Germany, France, and the Netherlands. This targeted investigation offers crucial insights into the actors, platforms, and techniques actively shaping climate discourse in these diverse European contexts. Four critical findings emerged: Persistent Conspiratorial Narratives: A pervasive distrust in institutions and science fuels the popularity of conspiratorial narratives, particularly around geoengineering and HAARP. These stories erode public confidence in climate action. Climate Policy as "Culture War": Climate policy is increasingly framed as a "culture war" issue. Partisan attacks and populist rhetoric brand climate action as authoritarian, often exploiting national anxieties about economic stability or regional autonomy to amplify skepticism. Russia-Linked Amplification ("Portal Kombat"): The project verified Russia-linked "Portal Kombat's" role in amplifying disinformation. Its strategy—rebranding international content for local audiences via Telegram and low-engagement Facebook "copypasta"—demonstrates an agile model for narrative laundering that scales rapidly. "Big Oil" Influence: While narratives aligning with fossil fuel interests were prominent, direct "Big Oil" disinformation campaigns were limited. These narratives were primarily diffused through partisan or ideological networks, indicating an indirect yet significant influence.
These findings underscore an urgent need for enhanced platform accountability and harmonized, EU-level regulatory responses. Understanding how shared disinformation frames engage with local conditions is paramount for countering efforts to undermine informed public debate on climate. Source: The HEAT project (Logically and EU DisinfoLab), A look at France, Germany, and the Netherlands, 2025, [online]; Available at: https://logically-web.cdn.prismic.io/logically-web/aFkoPnfc4bHWim7c_20250623-heat-harmful-environmental-agendas-tactics.pdf [Back to Table of Contents] FRAMEWORKS TO COUNTER DISINFORMATION Disinformation Research Under Political Attack in the U.S. According to a report by The Conversation, the field of misinformation and disinformation research in the U.S. faces unprecedented political pressure, jeopardizing efforts to understand and counter online manipulation. Following a controversial executive order by President Donald Trump, ostensibly to "restore free speech," the National Science Foundation cancelled numerous grants for studying misleading content. This move, framing legitimate research as censorship, directly clashes with public sentiment: a staggering 95% of Americans recognize misinformation as a significant problem. Despite a 2024 Supreme Court ruling upholding the constitutionality of such research, Republican lawmakers continue their aggressive efforts to discredit academics, painting them as politically biased. This narrative persists even as studies consistently show that conservative content is more prone to containing misinformation, which explains the higher labelling rates. Tactics like "accusation in a mirror" are employed, casting researchers as censors while political actors themselves engage in suppressing information. This politically driven backlash has led to harassment of researchers, especially women, and threatens to stall meaningful work that helps inform the public about manipulation tactics and media literacy. Fact-checking, proven to be one of the most effective tools against disinformation, is also under attack, despite being widely accepted as accurate, even by skeptical audiences when shown specific results. Source: The Conversation, Sinclair H.C., Most Americans believe misinformation is a problem — federal research cuts will only make the situation worse, 2025, [online]; Available at: https://theconversation.com/most-americans-believe-misinformation-is-a-problem-federal-research-cuts-will-only-make-the-problem-worse-255355 [Back to Table of Contents] Poland Fights Russian Election Interference As reported by New Eastern Europe, on June 1st, 2025, Poland elected Karol Nawrocki, supported by the national-conservative Law and Justice (PiS) party. Throughout the campaign, Warsaw's ruling coalition consistently warned of foreign interference, specifically from Russia. Given Poland's strategic significance and the shifting dynamics of support for Ukraine, an intense digital hostile information campaign from Moscow was widely anticipated, aiming to push Eurosceptic, anti-Western, and pro-Russian narratives. However, experts noted that the impact of Russian interference efforts was surprisingly subdued.
This outcome suggests that Poland's diverse social media landscape, the limited constitutional powers of the presidency, and crucial lessons learned from Romania's recent electoral experience played a significant mitigating role. Indeed, following both election rounds, the feared "Russian storm" of disinformation largely failed to materialize. As one expert, Agnieszka Lipińska of NASK, noted to New Eastern Europe, disinformation levels were significantly lower than during the 2024 European Parliament election. Despite this relative quiet during the immediate electoral period, experts caution that the objective of foreign state-sponsored disinformation is often a more insidious, long-term erosion of public perception on key issues, rather than an immediate, overt impact on election results. Source: New Eastern Europe, Pyka A., Resisting foreign interference: Poland’s presidential election and the Russian challenge, 2025, [online]; Available at: https://neweasterneurope.eu/2025/06/10/resisting-foreign-interference-polands-presidential-election-and-the-russian-challenge/ [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation sought to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation A false, inaccurate, or misleading information communicated regardless of the intention to deceive. Misformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. 
The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. 
Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid. [Back to Table of Contents]

  • Influence Rebranded: The African Initiative

    1. Introduction This week, we will examine a recent VIGINUM report (June 2025), which analyzes the activities and organizational structure of the " African Initiative," an organization operating across several African countries. We will explore only selected aspects of the piece in more detail. You may find the full report here . 2. Origins Russia’s full-scale invasion of Ukraine on February 24, 2022, led to Moscow’s growing international isolation. As a result, Africa gained new importance in the Kremlin’s foreign policy agenda. With the strategic shift outlined in Russia’s 2023 Foreign Policy Concept, there was a notable expansion of activities related to foreign information manipulation and interference (FIMI) on the African continent . (1) Initially, Russia pursued two parallel influence strategies: one overt, through state-controlled media outlets such as RT and Sputnik, and the other covert, via operations like “Project Lakhta.” These efforts were planned and executed under the leadership of PMC Wagner. However, following the death of Yevgeny Prigozhin on August 23, 2023, and the partial dismantling of the Wagner Group, control over the existing influence infrastructure was transferred to various Russian state actors. According to a New York Times investigation , the Russian Foreign Intelligence Service (SVR) has taken control of Project Lakhta . The PMC Wagner’s activities are now closely monitored by the Russian military intelligence service (GRU ), notably through the creation of the “Africa Corps” (Afrikansky Korpus). Alongside Africa Corps, a new Russian media outlet named "African Initiative" has emerged . This entity was established as a Moscow-based "news agency" in September 2023, with the self-declared mission to create an “information bridge between Russia and Africa,” which is a euphemistic way of describing the systematic dissemination of anti-Western and pro-Kremlin propaganda. (2) Fig.1 - Structure and members of the African Initiative, Courtesy of Viginum (3)   A central figure within the African Initiative is Artyom Kureyev , who serves as the organization’s editor-in-chief. The European Union has linked him to coordinated disinformation campaigns targeting both Europe and Africa. He is believed to be a former officer associated with the Russian intelligence services (RIS) . Kureyev previously worked with Russian "think tanks" that functioned as front organizations for European influence operations. (4) Another key individual is Viktor Lukovenko , known by the alias Viktor Vasilyev. A former operative involved in Project Lakhta, which has been active in propaganda efforts in Ukraine and across Africa, utilizing Telegram channels and collaborating with Russian cultural institutions. According to his statements, he is no longer part of the African Initiative. (5) The African Initiative benefits from strong personnel links to the Russian state apparatus, particularly the Ministry of Foreign Affairs , Defense, and various intelligence agencies ( see the biographies of the executive team at 7. Appendices ). (6) These connections undoubtedly provide operational support, granting access to resources, networks, and expertise within Russia’s broader sphere of influence. 3. Overt Activities Outwardly, the African Initiative presents itself as a non-governmental organization (NGO) that recruits influencers, journalists, and grassroots activists to disseminate and amplify pro-Russian narratives among local populations. 
The initiative employs a dual strategy, combining online and offline presence. By expanding its reach and establishing a local presence in strategically important regions, the initiative aims to build and strengthen Russian influence across the African continent. Operational offshoots and affiliated structures have been identified in Burkina Faso, Mali, and Niger. (7) These influence operations are continually advanced in close coordination with the state-run agency Rossotrudnichestvo and the network of "Russian Houses." A key element of the initiative’s offline structure is the creation of a network of local NGOs, bloggers, artists, activists, and civil society groups, primarily in Burkina Faso. These actors reinforce and circulate pro-Russian narratives at the regional level. (8) In addition, the Initiative organizes various sporting, cultural, religious, health-related, and educational events designed to integrate into communities and build trust. By encouraging local ownership of these projects, it fosters a social environment that operates independently of its media platform, likely serving as a buffer against external criticism. (9) The online structure operates through messaging services, websites, and social media platforms, using multilingual channels tailored to different target audiences. The African Initiative’s website, [afrinz.ru], was registered on September 13, 2023. It publishes articles in Russian, English, French, Spanish, and Arabic. As of April 30, 2025, 18,127 articles had been published across all languages. However, VIGINUM describes the platform’s overall reach and impact as limited: according to data from SimilarWeb, the site recorded only 104,818 unique visits between December 2024 and February 2025. (10) In addition to its website, the African Initiative operates a network of six “official” Telegram channels, which republish the site’s content in Russian, English, French, Portuguese, Spanish, and Arabic. These channels collectively reach 69,720 subscribers. (11) A Facebook page, a VK account, and two Rutube channels further support this digital ecosystem. The African Initiative also operates a TikTok account targeting Spanish-speaking audiences, expanding its multilingual outreach strategy across diverse social media platforms. (12) 4. Covert Activities In addition to its overt media presence and cultural outreach, the African Initiative appears to be deeply involved in covert digital disinformation operations. Investigations have uncovered an information manipulation network known as “AI-Freak” (also referred to as “StopNews”), which serves as a technological extension of the African Initiative’s broader influence ecosystem and operates primarily out of public view. (13) AI-Freak primarily targets English- and French-speaking audiences in Africa through a coordinated disinformation strategy. It employs artificial intelligence to produce text, imagery, and video content, which is then distributed via fake news websites, inauthentic social media accounts, and manipulative search engine optimization tactics (“Black Hat SEO”). (14) Fig. 2 - Diagram of the “AI-Freak” Network, Courtesy of VIGINUM (15) As part of the so-called “Africa Cluster,” the website Newstop Afrique [newstop.africa] plays a central role within a pro-Russian influence ecosystem, specifically targeting African audiences. The French-language site was registered on December 19, 2023, and is hosted on a server located in South Africa.
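The signals VIGINUM uses to tie sites in this cluster together, each documented in the paragraphs that follow, are simple enough to check programmatically: a mismatched phone country code, the Open Graph og:locale value, the HTML lang attribute, stray Cyrillic strings in nominally English- or French-language pages, and shared favicons. The Python sketch below is purely illustrative and is not VIGINUM's tooling; the two domains are those named in the report, while the fetching approach, the SHA-256 favicon hashing, and the output fields are assumptions made for demonstration.

import hashlib
import re

import requests
from bs4 import BeautifulSoup

CYRILLIC = re.compile(r"[\u0400-\u04FF]")

def fingerprint(domain: str) -> dict:
    """Collect locale and favicon signals for a single domain (illustrative only)."""
    resp = requests.get(f"https://{domain}", timeout=15)
    soup = BeautifulSoup(resp.text, "html.parser")

    # Open Graph locale, e.g. og:locale set to "ru_RU" on a nominally French news site.
    og = soup.find("meta", property="og:locale")
    og_locale = og.get("content") if og else None

    # Declared document language, e.g. <html lang="ru-RU">.
    html_tag = soup.find("html")
    html_lang = html_tag.get("lang") if html_tag else None

    # Cyrillic characters left in the markup (form labels such as "Контактная форма").
    cyrillic_count = len(CYRILLIC.findall(resp.text))

    # Favicon hash; identical hashes across domains hint at shared infrastructure.
    favicon_hash = None
    try:
        icon = requests.get(f"https://{domain}/favicon.ico", timeout=15)
        if icon.ok:
            favicon_hash = hashlib.sha256(icon.content).hexdigest()
    except requests.RequestException:
        pass

    return {
        "domain": domain,
        "og_locale": og_locale,
        "html_lang": html_lang,
        "cyrillic_chars": cyrillic_count,
        "favicon_sha256": favicon_hash,
    }

if __name__ == "__main__":
    # Domains taken from the VIGINUM report; results are indicators, not proof of attribution.
    for domain in ("newstop.africa", "britishtalks.com"):
        print(fingerprint(domain))

In practice, analysts would typically pivot on such indicators through services like URLScan or WHOIS records rather than ad hoc scripts; the sketch only makes the individual checks described below concrete.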
According to its self-description, Newstop Afrique presents itself as a “trusted source of information” covering sports, music, and cinema topics. (16) Despite this seemingly benign appearance, the website shows several signs of inauthenticity. Although it claims to be based in Niger, it uses a telephone number with Nigeria’s country code (+234) instead of Niger’s proper code (+227). Furthermore, open-source searches for the provided contact email address [info@newstop.africa] yield no verifiable information about the actual operators behind the site. In addition to the website itself, Newstop Afrique maintains a presence across several social media platforms, including X (formerly Twitter), YouTube, and Telegram. (17) Activities with a European connection are also being conducted through the “UK Cluster” (see Fig. 2). VIGINUM demonstrates this through technical analysis, including favicon matching and URLScan investigations (see the graphics below). (18) Fig. 3a - The “UK Cluster”, Courtesy of VIGINUM; (19) Fig. 3b - Redirection observed via the URLScan service, Courtesy of VIGINUM; (20) The image below displays the Open Graph code [og_tags] from the WordPress settings of [newstop.africa] and [britishtalks.com]. In both cases, the variable [og_locale] is set to "ru_RU," indicating that the respective admin interface's default language is Russian. While this technical setting is invisible to readers, it points to the linguistic, and likely personnel, origin of the site operators. Fig. 4 - WordPress administration language, Courtesy of VIGINUM; (21) The following image (Fig. 5) shows a snippet from the HTML source code of the contact page on [britishtalks.com]. Here, too, the language attribute is set to "ru-RU". Additionally, the form label contains the Russian term “Контактная форма” (contact form). The use of Cyrillic characters within what is intended to be an English-language interface further indicates that the site was configured by Russian speakers. Fig. 5 - Source Code Analysis Sample, Courtesy of VIGINUM; (22) 5. Conclusion The report delivers a clear and well-researched overview of the African Initiative’s setup and operations. With solid technical analysis and a firm grasp of the broader context, it shows how Russia systematically builds local influence networks across Africa. The findings provide valuable insights for anyone addressing foreign interference, information security, or African policy. Beyond that, the report offers a solid foundation for deeper exploration and encourages both professionals and interested readers to take a closer look. Footnotes: (1) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.4. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (2) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.4. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (3) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.24. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (4) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.6.
5. Conclusion

The report delivers a clear and well-researched overview of the African Initiative's setup and operations. With solid technical analysis and a firm grasp of the broader context, it shows how Russia systematically builds local influence networks across Africa. The findings provide valuable insights for anyone addressing foreign interference, information security, or African policy. Beyond that, the report offers a solid foundation for deeper exploration and encourages both professionals and interested readers to take a closer look.

Footnotes:
(1) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.4. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(2) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.4. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(3) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.24. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(4) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.6. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(5) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. pp.6-7. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(6) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. 7 Appendices, pp.41-45. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(7) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.4. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(8) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.3. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(9) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.16. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(10) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.3. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(11) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.5. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(12) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.5. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(13) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.4. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(14) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. pp.25-28. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(15) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.25. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(16) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.36. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf
(17) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations. p.26.
[online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (18) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.27. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (19) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.27 [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (20) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.28 [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (21) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.34 [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf (22) SGDSN, VIGINUM, FCDO & EEAS, 2025. African Initiative: From Public Diplomacy to Covert Influence Operations . p.34 [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250612_TLP-CLEAR_VIGINUM_FCDO_EEAS_Report_African_Initiative_EN.pdf DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. 
Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber based influence campaigns 23rd - 29th of June 2025

[Listen to the Podcast]

[Report Highlights]

CheckFirst's third report, dated June 26, 2025, reveals how the Russian disinformation campaign Operation Overload specifically targets six countries: France, Germany, Moldova, Poland, Ukraine, and the United States. By flooding media outlets and fact-checkers with targeted false information, the campaign aims to overwhelm and paralyze their efforts.

The Robert Lansing Institute reports that Russia orchestrated a failed coup attempt in Serbia using disinformation, paramilitary networks, and religious influencers to destabilize the country and obstruct its pro-Western trajectory.

The UK Defence Journal reports that dozens of pro-Scottish independence accounts on X, believed to be part of an Iranian disinformation campaign to weaken the UK, went dark simultaneously after Israeli airstrikes disrupted Iranian cyber infrastructure, exposing a direct link between physical attacks and online influence operations.

A recent study by the Atlantic Council's Digital Forensic Research Lab (DFRLab) revealed that Elon Musk's AI chatbot, Grok, played a troubling role in spreading disinformation during the early days of the Israel-Iran conflict.

A New York Times investigation highlights how artificial intelligence has evolved beyond novelty into a weaponized disinformation tool, deployed by foreign regimes and domestic actors to sow distrust in democracies worldwide.

NewsGuard Reality Check reports that a false claim on X about China sending military aid to Iran, stemming from misinterpreted flight data, was amplified by a pro-Iran commentator, some mainstream media, and notably, multiple AI chatbots, revealing a significant flaw in how misinformation spreads.

> TABLE OF CONTENTS <

HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
From Headlines to Lies: Global Events as Vehicles for Disinformation
History, Rewritten – Generation, Redefined
Operation Overload: Attacking Democracy's Immune System
Selective Truths: RT Takes Aim at the New MI6 Director
"Nobody Leaves the Family": The Coup Attempt in Serbia
[China]
China's Use of Quanzhen Taoism to Spread Disinformation
[Iran]
Manufacturing Victory: Iran's Disinformation Efforts
The Fiction of Retreat: Iran's Disinformation on U.S. Withdrawal
Hybrid by Design: Iranian Hacktivists Target Saudi Games
The Usual Suspects Are Missing: Tehran's Fake Scots Go Silent
AI-RELATED ARTICLES
Grok, We Have a Problem: Disinfo in the Israel-Iran War
The Post-Truth Machine: AI-driven Disinformation Threatens Democracy
The Flight That Never Was: AI Boosts False China-Iran Claim
GENERAL REPORTS
Disinformation Undermines Polish Democracy Amid Contentious Election
Trump and Hannity's Post-Bombing Disinformation
Digital Fog of War: AI Slop and Information Control in the Iran Conflict
Strategic Rivals Celebrate US's 'Soft Power Suicide'
Nationalist Networks and Global Threats: The GNCA's Role in Disinformation Campaigns
GLOSSARY
DOWNLOAD SECTION

STATE ACTORS

[Russia]

From Headlines to Lies: Global Events as Vehicles for Disinformation

The article by EU vs. Disinfo shows how the Kremlin uses the global attention focused on high-level geopolitical events, ranging from the Israel-Iran conflict to NATO's historic summit and EU support for Ukraine, to ramp up its disinformation efforts. Faced with a rapidly evolving international landscape, Russian propaganda reverted to familiar falsehoods: portraying NATO as an aggressor, Europe as Russophobic, and Russia as an eternal victim.
These narratives were strategically promoted to capitalize on the visibility of global events and to validate Russia's confrontational worldview. The Kremlin recycled long-standing myths of Western hostility, falsely claiming that NATO's defense initiatives threaten Russia and that the Alliance's very existence is expansionist. Pro-Kremlin voices attempted to undermine Western unity by exaggerating internal divisions within NATO and framing increased defense spending as fiscally irresponsible or indicative of impending collapse. Simultaneously, Europe was vilified for supporting Ukraine and tightening sanctions on Russia, reinforcing the illusion of a hostile West bent on weakening Moscow.

Source: EUvsDisinfo, 2025. The Kremlin's self-fulfilling curse. [online] Available at: https://euvsdisinfo.eu/the-kremlins-self-fulfilling-curse/

[Back to Table of Contents]

History, Rewritten – Generation, Redefined

EU vs. Disinfo highlights how the Kremlin's disinformation strategy has infiltrated the Russian education system to indoctrinate youth with a distorted view of national history. New textbooks, co-authored by regime-loyal writers, present a highly manipulated narrative of Russia's past. The country is depicted as an "eternal victim" and a "reluctant warrior." This campaign of historical revisionism reframes acts of aggression as noble acts of defense, from Soviet invasions in the 20th century to the annexation of Crimea and the war against Ukraine. Complex historical contexts are simplified, omitted, or reinterpreted. Military expansion is portrayed as liberation, while war crimes, dissent, and public debate are ignored.

This rewriting of history is not merely about fostering patriotism; it functions as a long-term instrument of state-sponsored disinformation. The textbooks glorify Russian militarism, downplay Western contributions to global conflicts, and criminalize criticism of the Red Army. At the same time, they reinforce the narrative that Russia has never initiated a war but has merely responded to threats. The intended outcome: a generation shaped by a state-driven historical narrative, prepared to interpret future military actions as necessary and legitimate responses to external threats.

Source: EUvsDisinfo, 2025. Russia's Military History: Never in the Wrong. [online] Available at: https://euvsdisinfo.eu/russias-military-history-never-in-the-wrong/

[Back to Table of Contents]

Operation Overload: Attacking Democracy's Immune System

In its third report, researchers from CheckFirst examine the development of Operation Overload, a Russian information campaign targeting six countries: France, Germany, Moldova, Poland, Ukraine, and the United States. The campaign seeks to attract the attention of media outlets and fact-checkers in order to overload their capacity. First documented in June 2024, it has since expanded in scope and platform presence. Narrative themes include anti-Ukrainian rhetoric, election interference, gender-based disinformation, smear campaigns, and calls to violence. A key tactic is content amalgamation: publishing the same message across multiple platforms to simulate credibility. Between January 2024 and May 2025, 997 deceptive emails were sent to more than 245 media and research organizations, 704 of which arrived after September 2024. Spikes occurred around major political or global events, such as the Paris Olympics or national elections. The actors also operate on Telegram, X (formerly Twitter), Bluesky, and, since May 2025, TikTok.
AI-generated content is increasingly used to impersonate well-known public figures, mainly journalists and academics. Since September 2024, around 600 content items have been identified, a 1.5-fold increase over the previous year. Logos from 180 institutions and the identities of more than 180 individuals were misused. CheckFirst warns of declining platform moderation and calls for stronger enforcement and legal action, particularly under the EU Digital Services Act.

Source: CheckFirst, Atanasova, A., Poldi, F. & Kuster, G., 2025. Operation Overload: More Platforms, New Techniques, Powered by AI. [online] Available at: https://checkfirst.network/wp-content/uploads/2025/06/Overload%C2%A02_%20Main%20Draft%20Report_compressed.pdf

[Back to Table of Contents]

Selective Truths: RT Takes Aim at the New MI6 Director

DisinfoWatch has reviewed the facts and concludes that the allegations against Blaise Metreweli are part of an influence campaign by the Russian state broadcaster RT. RT claimed that Metreweli's grandfather, Constantine Dobrowolski, was a Nazi collaborator during World War II. This information is historically documented, but it is presented without essential context. Metreweli's father, born Dobrowolski in 1943 in occupied Ukraine, was raised in England by his stepfather and took the surname Metreweli. He had no affiliation with or knowledge of his biological father's Nazi past, and Metreweli herself never knew or met her paternal grandfather. The tactic recalls past efforts, including those targeting Chrystia Freeland, in which ancestry is used to delegitimize public figures.

Source: DisinfoWatch, 2025. RT Recycles Nazi Allegations to Undermine New MI6 Chief Through False Guilt by Association. [online] Available at: https://disinfowatch.org/disinfo/rt-recycles-nazi-allegations-to-undermine-new-mi6-chief-through-false-guilt-by-association/

[Back to Table of Contents]

"Nobody Leaves the Family": The Coup Attempt in Serbia

The Robert Lansing Institute outlines how Russian influence campaigns helped pave the way for the failed coup attempt in Serbia. According to the report, Serbia's alleged military support for Ukraine was a key trigger for Russia's actions, as Moscow perceived it as a symbolic challenge to its traditional influence in the Balkans. The report points to long-standing ties between Russian intelligence and sectors of Serbian society, including the military (with officers trained in Russia), the security services (notably infiltration risks within the BIA), the Orthodox Church, and far-right political movements. The coup attempt occurred amid economic instability, political fragmentation, and growing public dissatisfaction, conditions that have made Serbia especially vulnerable to external manipulation. Russian influence operations targeted the Serbian public through state-backed media such as RT Balkan and Sputnik Serbia, along with nationalist networks, veteran groups, and clerics aligned with Moscow. EU integration and normalization with Kosovo were framed as betrayal and "spiritual surrender" to the West. Drawing parallels with Armenia, the report concludes that Russia reacts to perceived geopolitical drift by deploying disinformation, ideological pressure, and covert tactics to derail reform, create chaos, and reassert control.

Source: Robert Lansing Institute, 2025. The Coup Attempt in Serbia — Kremlin Influence, Balkan Instability, and Strategic Fallout. [online] Published 26 June 2025.
Available at: https://lansinginstitute.org/2025/06/26/the-coup-attempt-in-serbia-kremlin-influence-balkan-instability-and-strategic-fallout/ [Back to Table of Contents] [China] China's Use of Quanzhen Taoism to Spread Disinformation A recent analysis  by ReligioScope reveals how the Chinese Communist Party (CCP) strategically leverages religious institutions, specifically Quanzhen Taoism, as instruments of political influence beyond mainland China. At the core of this effort are the United Front Work Department (UFWD) and the Chinese Taoist Association (CTA), seemingly aligning religious practice to Party objectives. In Taiwan, Quanzhen practitioners have reportedly come under growing pressure to participate in CCP-organized events, including ideologically framed “religious exchanges,” visits to Party-affiliated temples, and subtle expectations to echo Beijing’s positions publicly. These tactics form part of a broader campaign to project influence through cultural and spiritual channels, extending the CCP’s reach into politically and religiously autonomous societies. Source: ReligioScope, Swenson Daly, M., Infiltrating the Tao . [online] Available at: https://www.religioscope.org/papers/03.pdf [Back to Table of Contents] [Iran] Manufacturing Victory: Iran’s Disinformation Efforts In an updated analysis,   NewsGuard outlines  how Iranian state media and affiliated channels launched a coordinated disinformation campaign following the Israeli strikes on nuclear and military facilities in Tehran on June 13, 2025. The aim was to downplay Israel’s military success while portraying Iran’s retaliation as effective. To date, 26 specific false claims have been identified and disseminated across 78 websites. These included AI-generated images and fabricated reports of captured Israeli soldiers ( find more information in the Weekly Review W25 ). The primary sources were channels linked to the Islamic Republic of Iran Broadcasting (IRIB) and military-affiliated Telegram accounts. Platforms such as YouTube, TikTok, and X (formerly Twitter) were primarily used for distribution. The strategy reflects a familiar pattern in Iran’s information operations: official outlets, anonymous websites, and digital platforms push pro-Iranian narratives, project regime stability, bolster Tehran’s strategic interests, and mislead the international public. Sources: NewsGuard, Sadeghi, M., Howard, S. & Lin, C., 2025. Iranian State-Affiliated False Claims Tracker: 26 Myths about the War and Counting . [online] Available at: https://www.newsguardtech.com/special-reports/israel-iran-conflict/ [Back to Table of Contents] The Fiction of Retreat: Iran’s Disinformation on U.S. Withdrawal NewsGuard has documented  another targeted instance of Iranian disinformation amid the recent escalation between the United States and Iran. At the center is a false claim that the U.S. Joint Chiefs of Staff had ordered a complete withdrawal of American troops from the Middle East. This baseless narrative emerged shortly after the U.S. airstrikes on Iranian nuclear facilities on June 21, 2025, and quickly spread on social media, particularly on X (formerly Twitter), where it garnered hundreds of thousands of views. The claim was amplified primarily by pro-Iranian and pro-Russian accounts, many of which have previously been involved in similar disinformation efforts. Notably, the timing coincided with Iran’s retaliatory missile strike on the U.S. military base Al Udeid in Qatar on June 23. 
The likely aim of the disinformation was to project an image of American retreat or weakness while framing Iran's response as bold and effective. This serves both a domestic propaganda function and an international strategic message. Official U.S. military sources, however, have denied the claim: no such statements appear on the websites or social media accounts of U.S. Central Command or the Joint Chiefs of Staff.

Sources: NewsGuard Reality Check, Komar, S., 2025. No, 40,000 U.S. Troops Were Not Evacuated from the Middle East. [online] Published 24 June 2025. Available at: https://www.newsguardrealitycheck.com/p/no-40000-us-troops-were-not-evacuated

[Back to Table of Contents]

Hybrid by Design: Iranian Hacktivists Target Saudi Games

According to Infosecurity Magazine, the pro-Iranian hacktivist group Cyber-Fattah orchestrated a significant data breach targeting the Saudi Games 2024 registration platform. The incident, part of a broader Iranian information operation, exposed sensitive personal and financial data of athletes and officials. Cybernews additionally reported on a broader Saudi-linked data leak and a DDoS attack on the U.S.-based social media platform Truth Social. Notably, the timing, shortly after U.S. airstrikes on Iranian nuclear facilities, suggests a coordinated cyber response. Data exfiltration, service disruption, and narrative manipulation illustrate how hacktivist groups deploy multi-pronged cyber tactics to spread uncertainty, influence public perception, and destabilize digital communication infrastructures in geopolitical conflict. This targeted action represents a complex example of hybrid warfare: it combines technical attacks with strategic disinformation to undermine trust, establish a narrative of insecurity, and exploit digital platforms as channels for geopolitical messaging against regional rivals.

Sources: Infosecurity Magazine, Mascellino, A., 2025. Cyber Fattah Leaks Data from Saudi Games in Alleged Iranian Operation. [online] Available at: https://www.infosecurity-magazine.com/news/cyber-fattah-leaks-data-saudi-games/

Cybernews, Lapienytė, J., 2025. US Strike on Iran Sends Online Ripples: Major Saudi Leak, DDoS on Truth Social. [online] Available at: https://cybernews.com/cyber-war/major-saudi-leak-ddos-on-truth-social/

[Back to Table of Contents]

The Usual Suspects Are Missing: Tehran's Fake Scots Go Silent

The UK Defence Journal reports the sudden disappearance of dozens of pro-Scottish independence accounts on X, immediately following the Israeli airstrikes on Iranian military and cyber infrastructure on June 12, 2025. According to the report, the network behind these accounts was operated by Iran's Islamic Revolutionary Guard Corps (IRGC), which managed over 80 accounts posing as British users. Between 2022 and 2024, these profiles posted an estimated 250,000 tweets promoting pro-independence and anti-UK narratives. The simultaneous takedown of the accounts, coinciding with widespread power outages and internet blackouts in Iran, strongly suggests centralized control from inside the country, likely disrupted by the Israeli strikes. This incident illustrates how state-sponsored disinformation campaigns exploit domestic political divides in Western democracies to sow discord and erode national cohesion.
It also highlights the vulnerability of social media platforms to coordinated influence operations and shows how real-world military actions can dismantle digital propaganda networks in an instant. Source: UK Defence Journal, Allison, G., 2025. Dozens of pro-Indy accounts go dark after Israeli strikes . [online] Available at: https://ukdefencejournal.org.uk/dozens-of-pro-indy-accounts-go-dark-after-israeli-strikes/ [Back to Table of Contents] AI RELATED ARTICLES Grok, We Have a Problem: Disinfo in the Israel–Iran War   A recent analysis by the Atlantic Council’s Digital Forensic Research Lab (DFRLab) reveals  that Elon Musk’s AI chatbot Grok played a problematic role in spreading disinformation during the early days of the Israel-Iran conflict. Designed to help users verify facts, Grok instead produced contradictory and inaccurate responses, particularly when asked about AI-generated content and widely circulated fake visuals. In one example, Grok gave conflicting answers within a minute regarding an airport allegedly struck by Iran. It alternately claimed the location was in Beirut, Gaza, or Tehran, none of which was accurate. In several instances, the chatbot misidentified events or confirmed fabricated claims as fact. According to DFRLab, this failure highlights a problematic trend: as platforms scale back human moderation and fact-checking, users increasingly rely on AI tools like Grok or Perplexity, only to receive misinformation dressed as authoritative responses. Both bots, for instance, incorrectly affirmed fake stories such as China supplying weapons to Iran. Grok has shown similar vulnerabilities in the past: it previously amplified the debunked far-right conspiracy theory of “white genocide” in South Africa, a striking example of how AI systems, without proper oversight, can uncritically repeat and spread harmful narratives. Sources:    DFRLab, Ponce de León, E. & Chenrose, A., 2025. Grok struggles with fact-checking amid the Israel-Iran war . [online] Available at: https://dfrlab.org/2025/06/24/grok-struggles-with-fact-checking-amid-israel-iran-war/ [Back to Table of Contents] The Post-Truth Machine: AI-driven Disinformation Threatens Democracy  The New York Times warns   that AI-driven disinformation is destabilizing democracies. Electoral manipulation via deepfakes: In Poland, a fake AI-generated video falsely showed Donald Trump endorsing far-right politician Slawomir Mentzen. Though fabricated, the clip spread rapidly on TikTok, a clear example of targeted disinformation aimed at swaying voters. The result was reinforcement of far-right mobilization. Foreign interference through AI: In Romania, a Russian influence campaign used AI to manipulate the first round of the 2024 presidential election to such an extent that the result was annulled. A fringe candidate surged ahead via inauthentic TikTok promotion. The court-ordered rerun marks a precedent for election nullification due to AI-led interference. Erosion of public trust: In Germany and the United States, AI-generated content circulated false quotes and images of political candidates. These campaigns aimed to undermine trust in democratic institutions and polarize public debate, often reinforcing far-right conspiracy narratives. Platform failures and lack of oversight: TikTok removed 7,300 AI-generated posts during Romania’s runoff election but admitted that many were not labeled as synthetic. Major platforms are too slow or ineffective in curbing such manipulation, allowing disinformation to spread unchecked. 
Conclusion: The New York Times investigation outlines the impact of AI-driven disinformation on democratic processes. It shows how political discourse is increasingly undermined while effective regulatory, political, or technological responses remain lacking. The report warns that democracies must find ways to respond to prevent lasting damage from digital manipulation. ​ Source:    The New York Times, Myers, S.L. & Thompson, S.A., 2025. A.I. Is Starting to Wear Down Democracy . [online] Available at: https://www.nytimes.com/2025/06/26/technology/ai-elections-democracy.html? [Back to Table of Contents] The Flight That Never Was: AI Boosts False China-Iran Claim NewsGuard reveals  how , following Israeli airstrikes on Iranian targets in June 2025, a targeted wave of disinformation took hold: a false claim that a Chinese military cargo plane flew to Iran went viral, based on a misread of flight data. Anonymous accounts on X and pro-Iran commentator Jackson Hinkle pushed the narrative, despite explicit denials from Flightradar24 and Cargolux. Yet the false story found traction in outlets like The Telegraph and Epoch Times. The real force behind the campaign lies in the use of generative AI chatbots like Grok, Perplexity, and Meta’s bot, which repeatedly confirmed the false claims without verification. This highlights how AI tools amplify disinformation and make it seem more credible to users. As platforms scale back human fact-checking, more users turn to these AI systems, falling into the trap of targeted misinformation. This case exemplifies modern hostile influence tactics: combining human manipulation with automated dissemination makes disinformation faster, broader, and harder to control, especially in geopolitical crises. Such operations deliberately undermine democratic discourse, an urgent challenge for policymakers, society, and technology alike. Source: NewsGuard Reality Check, Lin, C., 2025. False Claim that China is Supporting Iran in the War with a Chinese Military Cargo Plane; Chat Bots Boost It . [online] Available at: https://www.newsguardrealitycheck.com/p/false-claim-that-china-is-supporting [Back to Table of Contents] GENERAL REPORTS Disinformation Undermines Polish Democracy Amid Contentious Election According to a report  by Global Issues , Poland’s recent presidential election, narrowly won by nationalist Karol Nawrocki, has become a case study in how disinformation and foreign interference can influence democratic processes. The campaign was marred by coordinated online manipulation, with over 2,400 fake accounts targeting liberal candidate Rafał Trzaskowski or promoting Nawrocki. Investigations revealed a flood of misleading content on TikTok and Facebook, heavily skewed toward far-right narratives, often laced with anti-Ukrainian and anti-immigration conspiracy theories. These efforts contributed to an increasingly polarized electorate and undermined confidence in the electoral process. The campaign mirrored Kremlin-style influence operations and coincided with unprecedented international support for Nawrocki from far-right circles, including former U.S. President Donald Trump and the Conservative Political Action Conference. With Prime Minister Donald Tusk surviving a confidence vote but facing a hostile presidency, Poland now confronts potential institutional paralysis. Judicial reforms crucial to restoring EU funding will likely stall, and Nawrocki’s foreign policy stance could weaken Poland’s support for Ukraine. Source:    Global Issues, Pousadela, I.M., 2025. 
Poland’s Democratic Deadlock . [online] Available at: https://www.globalissues.org/news/2025/06/25/40264 [Back to Table of Contents] Trump and Hannity's Post-Bombing Disinformation Wired reported that President Donald Trump and his closest supporters, including Fox News host Sean Hannity, have been using digital disinformation campaigns to portray the US airstrikes on Iranian nuclear facilities as a complete and decisive victory. These narratives were primarily spread through Trump’s platform, Truth Social, and other social media channels. Instead of relying on information from his intelligence agencies, satellite imagery, or on-the-ground reporting, Trump posted on Truth Social a screenshot of an anonymous X account claiming to conduct open-source intelligence, stating that “Fordow is gone.” Sean Hannity amplified this false claim by sharing a video of an explosion on Instagram, which was footage from an Israeli airstrike in Syria. While military officials and experts contradicted Trump’s portrayal and cautioned against premature assessments, Trump continued to assert that the Fordow facility had been “completely obliterated.” His early declaration on Truth Social shaped public discourse and inspired supporters who hailed the bombing as the end of the conflict. At the same time, Trump later raised the possibility of an extended military engagement and even “regime change,” a stance disputed within his administration. Political opponents criticized both Trump and Hannity for spreading misleading information that damages public discourse and undermines democratic oversight. This case exemplifies how digital platforms and social media can be weaponized as tools of hybrid warfare to advance political agendas, erode trust in reliable information, and deepen societal divisions. Source: Wired, Myers, S.L. & Thompson, S.A., 2025. Donald Trump and Sean Hannity Set Off a Wave of Disinformation After Iran Bombing . [online] Available at: https://www.wired.com/story/donald-trump-sean-hannity-disinformation-iran-bombing/ [Back to Table of Contents] Digital Fog of War: AI Slop and Information Control in the Iran Conflict POLITICO’s Weekly Cybersecurity warns  that amid escalating Israel‑Iran tensions, AI-generated “slop”—including deepfakes, manipulated images of destroyed military hardware, and synthetic videos falsely depicting attacks—has proliferated across social media. These fabricated visuals, some shared by world leaders, state-backed outlets, and partisan influencers, spread rapidly, exploiting algorithms and emotional resonance to shape public perception before fact-checkers can respond. The strategy combines synthetic media production, rapid bot amplification, and state-driven narrative control, especially with information blackouts or censorship designed to limit counter-narratives.  “The combination of state censorship and AI-powered misinformation is a new digital battlefield, and the collateral damage is public trust,” said Dave Gerry, CEO of cybersecurity firm Bugcrowd. The implications are significant: democracies now face a multifront information battlefield where trust in visual evidence is eroded, fact-checking defenses lag behind AI-enabled manipulation, and authoritarian regimes gain an advantage through coordinated, real-time influence operations. Source: Politico, Nickel, D., 2025. AI Slop Spreads in Israel-Iran War . 
[online] Available at: https://www.politico.com/newsletters/weekly-cybersecurity/2025/06/23/ai-slop-spreads-in-israel-iran-war-00417791 [Back to Table of Contents] Strategic Rivals Celebrate US’s ‘Soft Power Suicide’ As stated in a New York Times article , under the Trump administration, the United States scaled back or dismantled many of its key global communication tools, including Voice of America and Radio Free Asia, platforms central to promoting democratic values and countering authoritarian propaganda. This retreat was celebrated by rivals like Russia and China, who saw an opportunity to expand their influence. Kremlin-backed RT and China's Global Times openly rejoiced at the weakening of U.S. media infrastructure. At the same time, nations like Hungary, Cambodia, and Cuba followed suit in applauding America's withdrawal from the global information battlefield. In the absence of U.S. leadership, authoritarian states moved to fill the vacuum. Russia, China, Turkey, and others ramped up investments in state-run global media outlets, disinformation campaigns, and cultural outreach, deploying fake accounts, algorithmic manipulation, and state-aligned influencers to flood international platforms with narratives that distort truth and undermine democratic ideals. Fact-based reporting was increasingly replaced by polarizing, often deceptive messaging aimed at reshaping global perceptions in favor of authoritarian models. Experts warn this U.S. "soft power suicide" has not only weakened American global credibility but also emboldened adversaries to weaponize disinformation unchecked. As China's Xinhua and Russia’s Sputnik expand reach in Africa and Asia, and Western trust in U.S. messaging declines, the struggle for global influence has entered a new phase, one where truth competes against algorithm-boosted falsehoods, and where the United States, once a leader in promoting free expression, is increasingly sidelined in the battle for hearts and minds. Source: New York Times, Hsu, T., 2025. As U.S. Dismantles Voice of America, Rival Powers Hope to Fill the Void . [online] Available at: https://www.nytimes.com/2025/06/24/business/media/us-china-russia-global-communications.html    [Back to Table of Contents] Nationalist Networks and Global Threats: The GNCA’s Role in Disinformation Campaigns An article  by Global Influence Ops examines  the Global National Conservative Alliance (GNCA), a burgeoning global political movement uniting right-wing and far-right factions. A key aspect of the GNCA's disinformation is its strategic use of influence operations and the exploitation of its networks by foreign actors, notably Russia and China, to spread disinformation and undermine democratic institutions. This involves eroding checks and balances and establishing patronage networks, which can then be leveraged to propagate narratives disguised as legitimate political discourse that serve authoritarian interests. The actors involved are the various components of the Global National Conservative Alliance, including movements like MAGA in the US, and foreign states such as Russia and China, who act as amplifiers and exploiters of these networks. The broader implication is a significant threat to the integrity of democracy worldwide. By championing national sovereignty, protectionist trade, and cultural exclusion, the GNCA creates fertile ground for foreign interference and the proliferation of misleading information, ultimately weakening democratic norms and institutions on a global scale. 
Source: The Conversation, Sinclair H.C., Most Americans believe misinformation is a problem — federal research cuts will only make the situation worse, 2025, [online]; Available at: https://theconversation.com/most-americans-believe-misinformation-is-a-problem-federal-research-cuts-will-only-make-the-problem-worse-255355 [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation sought to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation A false, inaccurate, or misleading information communicated regardless of the intention to deceive. Misformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. 
Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. 
The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid. [Back to Table of Contents]

  • Weekly Report: Cyber based influence campaigns 09th - 15th of June 2025

[Listen to the Podcast]

[Report Highlights]

As revealed by EU vs. Disinfo, Russia has significantly expanded its FIMI operations, particularly across Africa, using covert strategies to influence public opinion and undermine democratic institutions. The report is based on a joint investigation by the European External Action Service, the French agency VIGINUM, and the UK's Foreign, Commonwealth and Development Office.

New Eastern Europe highlights that during Poland's 2025 presidential election, authorities feared significant Russian disinformation and interference. However, experts found the impact far less than expected, likely due to effective preventative measures.

The Disinformation Observer reports that the June 2025 Los Angeles immigration protests became a testing ground for coordinated disinformation campaigns, with foreign state media, U.S. politicians, and AI-generated content spreading false narratives to portray national instability and manipulate public perception.

The Jamestown Foundation reports that following Russia's invasion of Ukraine, YouTube imposed restrictions and demonetized content creators from Russia and Belarus. Although aimed at curbing state propaganda, these measures ultimately failed to prevent disinformation.

> TABLE OF CONTENTS <

HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
Kremlin Expands Disinformation Operations Through African Front
Surge in Cyber Activity Following Israeli Strikes on Iran
Disinformation in Poland's 2025 Presidential Election
AI-RELATED ARTICLES
Social Media and AI Chatbots Fuel Disinformation About LA Protests
LA Protests: Algorithms & Foreign Influence
AI Misidentifies Plane Crash Details and Spreads Misinformation
GENERAL REPORTS
YouTube Restrictions Strengthen Kremlin-Aligned Narratives
FRAMEWORKS TO COUNTER DISINFORMATION
EU Sanctions Target Russian Disinformation Amid Enforcement Challenges
GLOSSARY
DOWNLOAD SECTION

STATE ACTORS

[Russia]

Kremlin Expands Disinformation Operations Through African Front

EUvsDisinfo reports that the African Initiative, a supposedly independent news agency, is a Russian instrument for Foreign Information Manipulation and Interference (FIMI) in Africa. The report is based on a joint investigation by the European External Action Service (EEAS), the French agency VIGINUM, and the United Kingdom's Foreign, Commonwealth and Development Office (FCDO). The initiative is led by Russian national Artem Sergeevich Kureev, allegedly linked to Russian intelligence services. It plays a central role in Russia's influence operations on the African continent, particularly following Moscow's strategic reorientation after its full-scale invasion of Ukraine in February 2022. The African Initiative operates mainly in French- and English-speaking parts of Africa, with a notable focus on countries like Burkina Faso, which serves as a regional hub. Its activities combine physical outreach—through local associations, media partnerships, and activist networks—with digital operations, including spreading disinformation via fake news sites, AI-generated content, and manipulative SEO techniques.
Sources:    EUvsDisinfo, African Initiative: Russian FIMI operation disguised as a news agency, 2025, [online]; Available at: https://euvsdisinfo.eu/african-initiative-russian-fimi-operation-disguised-as-a-news-agency/   VIGINUM, FCDO, EEAS, From Public Diplomacy to Covert Influence Operations, 2025, [online]; Available at: https://euvsdisinfo.eu/uploads/2025/06/VIGINUM_FCDO_EEAS_Technical_Report_African_Initiative.pdf [Back to Table of Contents] Surge in Cyber Activity Following Israeli Strikes on Iran In a post  on X, CyberKnow notes  a sharp rise in cyber activity following Israel’s military strikes on Iran on June 13, 2025. A total of 83 hacktivist groups are currently active: 65 are aligned with Iran, 11 are anti-Iran, and six support Israel. Three of the pro-Iran groups are identified as pro-Russian. Reported cyberattack types include DDoS, website defacements, data breaches, doxing, social media takedowns, ransomware, and attacks targeting operational technology. The number of anti-Iran groups exceeds that of pro-Israel groups, suggesting that much of the hacktivist activity is framed more around opposition to Iran than explicit support for Israel. The post does not assess the effectiveness of the attacks. However, the timing and scope of activity suggest that cyber operations are being launched rapidly in response to on-the-ground developments, underscoring their role in the broader conflict environment. Source:    @Cyberknow20, Iran and Israel War Cybertracker #1 , 2025, 15 June, 16:35, [online]; Available at: https://x.com/Cyberknow20/status/1934258425590141193 [Back to Table of Contents] Disinformation in Poland’s 2025 Presidential Election New Eastern Europe states  that disinformation and foreign interference, particularly from Russia, were significant concerns that the government and cybersecurity officials expressed during Poland’s 2025 presidential election. The Kremlin has long pursued efforts to influence European democratic processes, including disinformation campaigns in Romania, France, and Germany. Given Poland’s strategic position and its shifting stance on Ukraine, fears of pro-Russian narratives infiltrating the country were significant. However, experts and state institutions found that interference was far more limited than initially feared. In response, Poland developed a coordinated defense strategy known as the “election umbrella.” This strategy aims to counter disinformation, strengthen cybersecurity, and train key stakeholders such as election committees and journalists. Poland relied on proven measures from previous efforts and formally consolidated them into a unified approach. Despite early warnings of aggressive cyberattacks by Russian intelligence, including a high-profile breach of a ruling party’s IT system, the protective measures proved largely effective. Experts attribute this relative success to several factors: Poland’s diverse social media landscape, the limited executive powers of its presidency, and valuable lessons learned from recent attacks on elections abroad. Europe-wide initiatives like the Democracy Shield and increased collaboration with technology companies supported these defenses, albeit with varying degrees of enforcement. 
Source:    New Eastern Europe, Pyka A., Resisting foreign interference: Poland’s presidential election and the Russian challenge, 2025, [online]; Available at: https://neweasterneurope.eu/2025/06/10/resisting-foreign-interference-polands-presidential-election-and-the-russian-challenge/ [Back to Table of Contents] AI RELATED ARTICLES Social Media and AI Chatbots Fuel Disinformation About LA Protests   According to a Wired   article , disinformation surrounding the protests in Los Angeles is spreading rapidly and is further amplified by AI chatbots like Grok and ChatGPT. In recent days, many people in Los Angeles have been protesting against increased Immigration and Customs Enforcement (ICE) raids. Among the images circulating were photos of National Guard troops sleeping on the floor, initially published by the San Francisco Chronicle  and later shared by California Governor Gavin Newsom. Additionally, a picture of a pile of bricks was falsely portrayed as connected to the LA protests, though it was taken in New Jersey. On platforms like X (formerly Twitter) and Facebook, users quickly spread false claims that the protesters were paid actors or that the images were doctored or taken out of context. These assertions were further amplified by conspiracy theorists and right-wing influencers, with politicians such as Senator Ted Cruz echoing the misleading content. Many users turned to AI chatbots like Grok and ChatGPT for clarification, but often received inaccurate or false information. Grok, for example, claimed that the National Guard photos originated from Afghanistan in 2021 or the U.S. Capitol. ChatGPT also stated the images were taken at the Kabul airport during the withdrawal from Afghanistan. Regarding the brick pile photo, Grok insisted it was from Los Angeles despite fact-checkers confirming it was taken in New Jersey. Despite corrections, Grok refused to retract its erroneous statements. The role of AI chatbots is thus ambivalent. While intended to provide clarity, their inaccurate responses contribute to confusion. Combined with fast-moving political events and declining content moderation on social platforms, this creates a fertile ground for disinformation. The spread of false images and unfounded claims undermines public discourse and makes it harder to distinguish truth from fiction during times of crisis. Sources:    Wired, Gilbert D., AI Chatbots Are Making LA Protest Disinformation Worse, 2025, [online]; Available at: https://www.wired.com/story/grok-chatgpt-ai-los-angeles-protest-disinformation/ [Back to Table of Contents] LA Protests: Algorithms & Foreign Influence The Disinformation Observer highlights  that Russian and Chinese state media spread genuine and fake content during the immigration protests in Los Angeles in June 2025. They operated across various platforms and were often difficult to distinguish from domestic users. A notable example is a photo circulated by the Russian agency Sputnik, allegedly showing “pallets of bricks” at protest sites. In reality, the image originated from a construction site in New Jersey and was unrelated to the Los Angeles protests. Such false images and conspiracy theories were used to portray the protests as signs of instability and to push political messages. A key factor in the rapid spread of this disinformation is the role of social media algorithms and AI-powered chatbots. These systems prioritize content that generates strong engagement, regardless of its truthfulness. 
As a result, sensational and emotionally charged false information gains greater visibility and spreads faster. The combination of disinformation and algorithmic amplification highlights the complexity of modern information warfare and poses a significant challenge for democratic societies in the digital age.

Source: The Disinformation Observer, Phil, This Week In Disinformation: LA Riots Edition, 2025, [online]; Available at: https://thedisinformationobserver.substack.com/p/this-week-in-disinformation-la-riots

[Back to Table of Contents]

AI Misidentifies Plane Crash Details and Spreads Misinformation

CyberNews reports that Google's AI Overview wrongly identified the aircraft in the fatal Air India crash as an Airbus A330 instead of a Boeing 787. This error highlights growing concerns about AI hallucinations and misinformation during critical news events. AI systems like Google's AI Overview are non-deterministic, meaning they can produce different results for the same query. Hallucinations occur when the AI presents contextually related but false information as fact, leading to confusion and misinformation. Despite disclaimers that AI answers may contain errors, the authoritative tone of AI-generated content causes users to trust it implicitly. Such mistakes, especially during breaking news, can quickly spread false narratives and cause reputational harm.

Source: Cybernews, Walsh M., Google's AI overviews hallucinate Airbus in Air India crash – it was a Boeing, 13 June 2025, [online]; Available at: https://cybernews.com/news/google-ai-hallucinate-crash-india-boeing-airbus/

[Back to Table of Contents]

GENERAL REPORTS

YouTube Restrictions Strengthen Kremlin-Aligned Narratives

An article by the Jamestown Foundation states that following Russia's invasion of Ukraine in 2022, YouTube imposed sanctions on Russian and Belarusian channels, blocked monetization, and restricted payment methods. Independent media lost significant revenue and reach as a result. Meanwhile, state-backed platforms like RuTube and VK Video benefited from these restrictions, gaining users and influence. Belarusian state media quickly filled the gap on YouTube, significantly expanding their reach and often adopting Kremlin-aligned narratives to justify the war. Independent channels, on the other hand, were labeled "extremist," with their audiences intimidated by repression and prison sentences. Many media outlets had to scale back their activities or operate from exile. Russian opposition media saw a temporary increase in viewers, but their reach remains far behind state media. Today, the media landscape in both countries is dominated by state-controlled content. Sanctions and platform restrictions have significantly reduced the space for independent journalism and amplified the spread of state propaganda.

Source: Jamestown Foundation, Bolkunets D., YouTube Restrictions on Belarus and Russia Strengthen Minsk's Propaganda, 2025, [online]; Available at: https://jamestown.org/program/youtube-restrictions-on-belarus-and-russia-strengthen-minsks-propaganda/

[Back to Table of Contents]

FRAMEWORKS TO COUNTER DISINFORMATION

EU Sanctions Target Russian Disinformation Amid Enforcement Challenges

The Royal United Services Institute (RUSI) reported that the European Union is employing sanctions as a key strategy to counter disinformation, primarily focusing on Russian state media outlets in response to the invasion of Ukraine.
The core disinformation aspect addressed is the continuous broadcasting of Russian propaganda within the EU, often through "mirror websites" that circumvent bans. This strategy aims to limit the reach of malign narratives by directly targeting the platforms and entities that disseminate them. However, implementing and enforcing these sanctions face significant challenges, including monitoring pervasive online content and the need for a unified approach among EU member states. The broader implications of this approach highlight the complex balance between combating foreign information manipulation and upholding freedom of speech within democratic societies. The article suggests that while restrictive measures such as sanctions are necessary, they are not sufficient on their own. It underscores the critical need for complementary investments in societal resilience, fostering media literacy among the populace, and promoting independent journalism to create a more robust defense against disinformation. The debate also addresses domestic sources of misinformation, emphasizing the multifaceted nature of defending democratic information spaces. Source: RUSI, Gyimesi B., Defending Democracy: Sanctions on Disinformation, 2025, [online]; Available at: https://www.rusi.org/explore-our-research/publications/commentary/defending-democracy-sanctions-disinformation [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is "the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered a part of Soft Warfare. Hybrid Warfare A strategy that blends conventional warfare (kinetic), irregular warfare, and cyber warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles; here, it is used to illustrate the opposite of a HIC. 
Misinformation False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards". We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA avatars - a generic term describing all types of users who are not legitimate social media users, i.e., they are bots, or are operated by humans but not under their real identity, or are operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (as on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for purposes different from the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep-cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to take phone calls, etc.). Deep Platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily articles and videos and representation on social media platforms by users who identify as the website’s representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private-sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. 
Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid. [Back to Table of Contents]

  • Weekly Report: Cyber based influence campaigns 2nd - 8th of June 2025

    [Listen to the Podcast] [Report Highlights] DFRLab  found that Moldova24 (MD24), launched in July 2024, is a Kremlin-aligned Moldovan web broadcaster backed by fugitive oligarch Ilan Shor and covertly supported by Russian state media infrastructure like RT and TV Novosti. As stated in a TechXplore article, during a recent four-day conflict between India and Pakistan, social media users increasingly turned to AI chatbots like Grok, ChatGPT, and Gemini to verify claims in real time. However, instead of clarifying facts, these tools often amplify disinformation. The Virtual Manipulation Brief   2025 from NATO StratCom COE  highlighted a sharp escalation in Russian and Chinese information manipulation, driven by advances in AI. AFP Fact Check  reported that the recent South Korean election was significantly impacted by widespread online disinformation campaigns, with both progressive and conservative parties being targeted. OpenAI’s  latest report features case studies with poetic titles: "Sneer Review," "Uncle Spam ," and "Helgoland Bite." These studies expose how state-linked actors from China, Russia, and Iran weaponize generative AI. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [ Russia] Russia’s Hybrid War on Europe and Ukraine Digital Hostile Influence: Moldova24’s Media Operations [ The War in Ukraine] Russian disinformation reacts hesitantly after the Ukrainian Air Strike Kremlin Disinformation Targeting Polish Elections Russia Spreads Misinformation About Kursk Attack and “Neo-Nazis” AI-RELATED ARTICLES   Confidently Incorrect: AI Factcheckers Spread Misinformation Questions About C2PA’s Viability in Combating AI Misinformation NATO Warns of Generative ‘AI Agent Swarms’ GENERAL REPORTS South Korea Election Disinformation: Foreign Interference and Deepfakes FRAMEWORKS TO COUNTER DISINFORMATION OpenAI’s Disinformation Tracking Report GLOSSARY DOWNLOAD SECTION STATE ACTORS [Russia] Russia’s Hybrid War on Europe and Ukraine A report by the Atlantic Council states   that Russia is intensifying its hybrid warfare against Europe through cyberattacks, sabotage, disinformation, and targeted influence operations (“hostile influence”). A central tactic involves manipulating elections via coordinated disinformation campaigns, manipulated online content, and covert support for system-critical or pro-Russian actors. Countries particularly affected include Poland, Slovakia, the Baltic states, and Romania ( for more information on the manipulation of the 2024 Romanian Presidential Elections , see our detailed research report here ). Ukraine has been exposed to these tactics since 2014, but has developed an effective response model. This model combines media literacy, digital volunteer networks tasked with identifying and debunking false information, and a state-led communication strategy that provides transparent messaging and actively counters disinformation narratives. By contrast, the report finds that NATO and EU structures remain institutionally fragmented and insufficiently coordinated. A joint EU-NATO-Ukraine framework, along with targeted support for fact-checking, media education, and digital resilience, would help to form a sustainable response model. Additionally, A report by EU vs. 
Disinfo mentioned  that in recent months, pro-Kremlin disinformation has increasingly manipulated international perceptions by framing the United States, particularly under former President Donald Trump, as a rational actor seeking peace, in contrast to Europe and Ukraine, which are depicted as aggressive and intent on prolonging the war. Russian state media and affiliated outlets have praised Trump, claiming he shares views with Moscow and favors negotiations aligned with Russia’s demands. This narrative aims to sow discord within the West by portraying the U.S. as a more reasonable counterweight to a supposedly belligerent Europe. Sources:    Atlantic Council, Beznosiuk, M., Russian hybrid warfare: Ukraine’s success offers lessons for Europe, 2025, [online]; Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russian-hybrid-warfare-europe-should-study-ukraines-unique-experience/# EU vs. Disinfo, The world in black and white, 2025. [online]; Available at: https://euvsdisinfo.eu/the-world-in-black-and-white-2/ [Back to Table of Contents] Digital Hostile Influence: Moldova24’s Media Operations According to a DFRLab investigation , Moldova24 (MD24) is an ostensibly independent Moldovan web-broadcaster launched in July 2024 that operates as an extension of the Kremlin’s information machine. Streaming news shows, call-in programs, and a torrent of short social-media clips around the clock, the channel is steered by the media network of fugitive oligarch Ilan Shor but protected by the resources of Russian state outlets such as RT and TV Novosti, from whose servers it quietly transmits. MD24’s editors recycle Moscow’s talking points almost verbatim: Brussels is blamed for soaring prices, President Maia Sandu is painted as an authoritarian US puppet, and the idea of NATO cooperation is framed as a prelude to war. The tone is emotional and often sensationalistic, with dramatic music, split-second cuts, and scrolling captions designed to keep viewers angry and engaged. During the winter energy crisis of 2025, the channel uploaded nearly a thousand TikTok videos in eight weeks, saturating feeds just as household bills spiked; each clip ended with the same implied lesson: salvation lies in turning away from the West. When platforms suspend an account, the operation re-emerges under a slightly different name, supported by a lattice of mirror websites and fallback channels on Telegram and Instagram. The objective is not merely to win clicks, but to erode Moldovans’ trust in their democratic institutions, sap enthusiasm for EU membership, and clear the ground for pro-Russian parties ahead of crucial elections. In short, MD24 is Moscow’s pocket-sized but agile psychological operations tool, explicitly calibrated for Moldova’s volatile information space. Source:    DFRLAB, Olari V., Unveiling the Russian infrastructure supporting the Moldova24 TV channel, 2025. [online]; Available at: https://dfrlab.org/2025/06/03/unveiling-the-russian-infrastructure-supporting-the-moldova24-tv-channel/ [Back to Table of Contents] [The War in Ukraine] Russian disinformation reacts hesitantly after the Ukrainian Air Strike EUvsDisinfo highlights  the impact of Operation Spiderweb. In this coordinated drone offensive conducted by Ukraine, five Russian military airfields were partially hit. The attacks were unexpected and seemingly caught the Russian military leadership and Kremlin-aligned propaganda unprepared. On the Russian side, the actual effects were downplayed. 
Reports reluctantly mentioned "some fires," while Russian President Putin remained silent. Prominent propagandists like Vladimir Solovyov also showed unusual restraint. EUvsDisinfo concludes that Russian information control is fragile and dependent on Kremlin directives. Lacking clear instructions, pro-Kremlin propaganda resorted to familiar tactics: downplaying the attacks, portraying Ukrainians as terrorists, and spreading conspiracy theories that British strategists were behind the strikes to weaken Russia. This was intended to minimize the impact of the Ukrainian operation and depict the West as the mastermind. Additionally, Russia emphasized nuclear deterrence to highlight its strength. The article concludes that the Kremlin’s response to Operation Spiderweb reveals weaknesses in its authoritarian system, especially the propaganda’s dependence on Putin’s direct control. When he remains silent, information gaps cannot be convincingly filled. The repeated accusation against the United Kingdom distracts attention and aims to support Russia’s narrative of invulnerability. Overall, the successful Operation Spiderweb undermines the image of Russia’s military superiority and exposes cracks in the Kremlin’s controlled disinformation structure. Source: EU vs. Disinfo, An Unpleasant Incident, 2025. [online]; Available at: https://euvsdisinfo.eu/an-unpleasant-incident/ [Back to Table of Contents] Kremlin Disinformation Targeting Polish Elections EU vs. Disinfo states that during Poland’s 2025 presidential elections, pro-Kremlin disinformation sources actively sought to undermine public support for Ukraine and delegitimize the democratic process. Poland was portrayed as corrupt, chaotic, and imperialistic, while the election itself was framed as manipulated by foreign powers. Simultaneously, anti-Ukrainian sentiments were stoked through false claims that both presidential candidates were increasingly hostile toward Ukraine. Three core narratives dominated the influence campaign: First, the election was depicted as a battle between “Trump, Brussels, and globalist elites,” casting doubt on its democratic legitimacy. Second, it was asserted that Ukraine would be the loser regardless of the election outcome, as the candidates were supposedly anti-Ukrainian and Polish society was portrayed as weary of supporting Ukrainian refugees. Third, both candidates were labeled as “hardline Russophobes,” suggesting that relations with Russia would remain unchanged regardless of the winner. Following Karol Nawrocki’s victory, pro-Kremlin media framed his success as the onset of the “Trumpization of Europe” and a blow against the EU. Nawrocki was further depicted as a wanted figure on Russian watchlists due to his removal of Soviet-era monuments. Overall, the disinformation campaign aimed to portray Poland as a puppet of anti-Russian forces, discredit its democratic institutions, and strain relations between Poland, Ukraine, and the West. Source: EU vs. Disinfo, Polish elections in the eyes of Moscow, 2025. [online]; Available at: https://euvsdisinfo.eu/polish-elections-in-the-eyes-of-moscow/ [Back to Table of Contents] Russia Spreads Misinformation About Kursk Attack and “Neo-Nazis” According to DisinfoWatch, a recent tweet by Russia’s Ministry of Foreign Affairs falsely claimed that Ukraine committed atrocities in Russia’s Kursk region, comparing the alleged violence to Nazi crimes. 
The statement, made by spokesperson Maria Zakharova, accused Ukraine’s government and foreign fighters of killing over 300 civilians in Kursk. However, there is no credible evidence to support this narrative. Independent investigators, satellite imagery, and international media have found no indication that Ukraine targeted civilians in the region. The accusation that Ukraine is run by "neo-Nazis" is a recycled propaganda trope. In reality, Ukraine’s president is Jewish, and far-right parties hold no parliamentary seats in the country. The Kremlin’s references to Nazism aim to evoke historical trauma, misrepresent Ukraine as the aggressor, and deflect attention from ongoing Russian violence. Source: DisinfoWatch, Russian MFA Makes False Claims About Kursk and “Nazis”, 2025, [online]; Available at: https://disinfowatch.org/disinfo/russian-mfa-makes-false-claims-about-kursk-and-nazis/ [Back to Table of Contents] AI RELATED ARTICLES Confidently Incorrect: AI Factcheckers Spread Misinformation TechXplore reported that as AI chatbots like Grok, ChatGPT, and Gemini become more prominent on social platforms, users are increasingly relying on them for real-time fact-checking. Prompts like 'Hey @Grok, is this true?' are now routine on X. The report found that instead of debunking false claims, chatbots often reinforce them. Grok, for instance, misidentified archival footage from Sudan’s Khartoum airport as a missile strike in Pakistan, and described a video of a burning building in Nepal as showing Pakistani retaliation against India. A Columbia Tow Center study concludes that chatbots often provide misleading information instead of admitting uncertainty. When AFP presented Gemini with an AI-generated image of a woman, it confirmed its authenticity and invented background details. Grok likewise validated a fake 'giant anaconda' video, citing non-existent expeditions. Experts warn that with reduced human moderation, chatbot fact-checkers may be effectively exploited by hostile actors. Their tendency to craft plausible but false narratives opens them up to direct manipulation by misinformation campaigns and to the possibility of their output being shaped by politically biased training data. The vulnerability of AI chatbots to direct manipulation was dramatically acknowledged in a statement by xAI, which blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing the unsupported narrative of “white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Elon Musk as the "most likely" culprit. Sources: Techxplore, Hey chatbot, is this true? AI 'factchecks' sow misinformation, 2025. [online] Available at: https://techxplore.com/news/2025-06-hey-chatbot-true-ai-factchecks.html Columbia Journalism Review, AI Search Has A Citation Problem, 2025. [online] Available at: https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php [Back to Table of Contents] Questions About C2PA’s Viability in Combating AI Misinformation A recent RAND report casts doubt on the effectiveness of the Coalition for Content Provenance and Authenticity (C2PA), a private-sector initiative to counter the growing threat of AI-generated misinformation. Backed by industry giants such as Adobe, Google, and Microsoft, the C2PA seeks to embed cryptographic provenance into digital content, offering a technical means of restoring trust online. 
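To make the idea of embedded cryptographic provenance, and the "authenticity gap" the RAND report describes, more concrete, the following minimal Python sketch illustrates the three outcomes a provenance check can produce. It is purely illustrative and hedged accordingly: the issuer name, the HMAC-based signature, and the helper functions are invented for this example and are not the actual C2PA manifest format or any real library's API (real Content Credentials rely on signed manifests and certificate chains).

```python
# Conceptual sketch only: NOT the C2PA data model or API.
# It shows why a provenance check has three possible outcomes.
import hashlib
import hmac
from dataclasses import dataclass
from typing import Optional

SIGNING_KEY = b"demo-issuer-key"  # stand-in for an issuer's signing key (hypothetical)


@dataclass
class Credential:
    issuer: str          # who vouches for the asset
    content_hash: str    # hash of the asset bytes at signing time
    signature: str       # MAC over issuer + hash (placeholder for a real certificate chain)


def issue_credential(issuer: str, content: bytes) -> Credential:
    # Bind the issuer's identity to the exact bytes of the content.
    content_hash = hashlib.sha256(content).hexdigest()
    sig = hmac.new(SIGNING_KEY, f"{issuer}:{content_hash}".encode(), hashlib.sha256).hexdigest()
    return Credential(issuer, content_hash, sig)


def check_provenance(content: bytes, cred: Optional[Credential]) -> str:
    if cred is None:
        # The "authenticity gap": most legitimate content carries no credential,
        # so its absence proves nothing either way.
        return "unsigned: authenticity unknown"
    expected_hash = hashlib.sha256(content).hexdigest()
    expected_sig = hmac.new(SIGNING_KEY, f"{cred.issuer}:{cred.content_hash}".encode(),
                            hashlib.sha256).hexdigest()
    if cred.content_hash == expected_hash and hmac.compare_digest(cred.signature, expected_sig):
        return f"verified: untampered since signing by {cred.issuer}"
    return "credential present but invalid: content altered or signature forged"


if __name__ == "__main__":
    photo = b"raw image bytes"
    cred = issue_credential("Example Newsroom", photo)
    print(check_provenance(photo, cred))               # verified
    print(check_provenance(photo + b"edit", cred))     # invalid
    print(check_provenance(b"other image", None))      # unknown: the authenticity gap
```

The third case is the crux of the critique discussed below: because untagged content tells a viewer nothing, bad actors retain room to sow doubt about genuine material.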
Initially hailed as a potential bulwark against disinformation, the initiative now faces a fundamental design flaw. Its voluntary, closed-system model is ill-suited to the open nature of the internet. While it tags participating content with a digital “badge” of authenticity, it cannot account for the vast majority of legitimate content created outside its system. This creates what RAND calls an “authenticity gap,” where untagged content may be genuine or expertly faked, giving bad actors room to sow doubt. An outdated threat model that fails to address the accelerating sophistication of generative AI compounds the issue. Critics argue the standard must evolve to differentiate between credentials issued by verified, trusted entities and those from unvetted sources. Without such reform, the C2PA may inadvertently erode trust rather than enhance it. Source: RAND, Overpromising on Digital Provenance and Security, 2025 [online] Available at: https://www.rand.org/pubs/commentary/2025/06/overpromising-on-digital-provenance-and-security.html [Back to Table of Contents] NATO Warns of Generative ‘AI Agent Swarms’ The recent Virtual Manipulation Brief 2025 from NATO StratCom COE highlights a sharp escalation in Russian and Chinese information manipulation capabilities, driven by advances in AI. The report states that in the last year, campaigns using AI-generated content, deepfakes, and bot networks to spread anti-NATO, anti-Ukraine, and anti-U.S. narratives were more frequent, synchronized, and emotionally targeted than previously. A key finding is that Kremlin-backed disinformation has become markedly more sophisticated, using AI ever more effectively to automate content creation and tailor messaging to specific platforms and languages. This has enabled these hostile influence campaigns to exploit geopolitical developments rapidly. Telegram, VK, and YouTube have emerged as high-engagement platforms for pro-Russian messaging, while X serves as a broad-reach vector for both Russian and Chinese narratives. The report also highlights a growing alignment between pro-Russian and pro-Chinese narratives: China’s messaging closely mirrors Russia’s, framing NATO as a destabilizing force and portraying the U.S. as weak and corrupt, thereby expanding the threat beyond a single actor. Inauthentic behavior, including cross-platform amplification and echo-chamber tactics, was also frequently observed. Looking ahead, the brief warns of the potential for adversaries to coordinate "generative AI agent swarms" using emerging interoperability standards (MCP, A2A, ACP), raising concerns about the speed and scale of future influence campaigns. The NATO report concludes that countering these evolving threats is a critical challenge for democratic societies. Source: NATO Strategic Communications Centre of Excellence, May 2025, Virtual Manipulation Brief, [online] Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/VMB-Final-8325b.pdf?zoom=page-fit [Back to Table of Contents] GENERAL REPORTS Lee Jae-myung Wins Disinformation-Saturated South Korean Election The recent South Korean election on June 3, 2025, starkly illustrated how hostile digital influence campaigns are increasingly saturating democratic processes. 
According to an article by AFP Fact Check, both conservative and progressive candidates were targeted by widespread online disinformation campaigns, a trend in line with the escalating threats to electoral integrity around the world. In the run-up to the vote, progressive candidates, notably then-opposition leader Lee Jae-myung, faced a concerted effort to portray them as sympathetic to or even controlled by China. Although Lee went on to win the election, his campaign was dogged by the false claim that Chinese nationals were exploiting weaknesses in the overseas voter registration system to cast fraudulent ballots. "It's now the claim that progressives are subservient to China that resonates -- especially amid worsening US-China tensions," noted Choi Jin-bong, a media communications professor. Fabricated AI images, including one showing Lee kowtowing to a Mao statue, further fuelled these misleading narratives. Conservative candidates were targeted over alleged ties to the United States. Doctored images depicted former Prime Minister Han Duck-soo announcing his presidential campaign with Trump's endorsement. Another false claim, spread by a popular right-wing YouTuber to 1.5 million subscribers, asserted the Pentagon's endorsement of a conservative candidate, a claim that was swiftly denied. The sheer intensity and ubiquity of digital hostile influence attacks during the South Korean election can be viewed as a warning of the escalating challenges democratic nations face worldwide. In campaign environments saturated by digital misinformation, the integrity of democratic processes risks being further eroded. Source: AFP Fact Check, Shim Kyu-Seok, South Korea election hit by misinformation, 2025, [online]; Available at: https://factcheck.afp.com/doc.afp.com.48ZT8CP [Back to Table of Contents] FRAMEWORKS TO COUNTER DISINFORMATION OpenAI’s Disinformation Tracking Report A recent report by OpenAI details several covert influence operations (IOs). To counter these threats, OpenAI leverages its AI tools to support expert investigative teams in identifying and disrupting malicious activities. Over the past three months, it has uncovered and exposed operations involving social engineering, cyber espionage, and deceptive online campaigns. Operation “Sneer Review” notably involved ChatGPT accounts generating large volumes of social media comments on platforms like TikTok, X, and Reddit. These posts aligned with China’s geopolitical interests, including critical commentary on Taiwanese content and the defamation of activists, accompanied by staged, seemingly organic user engagement. Internal performance reviews were also AI-generated to manage the operation systematically. Another China-linked campaign, “Uncle Spam”, aimed to polarize U.S. public opinion by creating divisive posts on platforms such as X and Bluesky. AI-generated profiles, including those themed around U.S. veterans, were used to deepen political divides. The report also highlights influence activities by Russian and Iranian actors, such as “Helgoland Bite” in Germany, which promotes pro-AfD content, and the Iranian operation “STORM-2035,” which produced posts on U.S. immigration policy and European independence movements. The report illustrates how AI is increasingly exploited to mass-produce politically manipulative content to sway public opinion. Actors combine automated content creation, fake personas, and coordinated engagement to simulate authenticity. 
Although the actual impact of these operations varied and was sometimes limited, the analysis underscores the growing role of AI-driven influence campaigns in global information conflicts. Source: OpenAI, Disrupting malicious uses of AI: June 2025, 2025. [online]; Available at: https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf [Back to Table of Contents] [Download Report] [Back to Table of Contents]

  • To Understand The Nature of Modern Chinese Influence Operations, Study Russia First

    In her article, Maya Sobchuk analyzes the growing strategic partnership between China and Russia in the field of information and influence operations. While China’s economic and military power often takes center stage, the piece shows that Beijing is increasingly drawing on Russian expertise in psychological warfare to target foreign audiences more effectively. Concrete examples illustrate how both countries align narratives, adopt shared tactics, and develop joint strategies in the information space. This compelling read explains why understanding Russia is essential to grasping China’s evolving role in global information warfare. [ Download Full Report here ]

  • The Moldovan Government Report on Russian Election Interference

    The Moldovan Intelligence and Security Service (SIS) has published a report investigating Russian interference in the 2024 referendum on EU accession. The report addresses illegal financing, propaganda, organized street protests, and cyberattacks. It highlights that Russian actors, such as Ilan Șor, played a central role in these activities. Additionally, the report examines hybrid tactics, the influence of organized crime, and digital disinformation.

  • Romanian Election and Findings of snoop.ro

    A recent investigation by Snoop.ro sheds light on the role of influencer campaigns in Romanian politics. At the center is a pro-Georgescu campaign, initially notable for its strong presence on TikTok, now embroiled in controversy over alleged takeovers and financial irregularities.

  • Cyberattack on U.S. Treasury Attributed to Chinese Actors

    Unclassified U.S. Department of the Treasury systems were breached through vulnerabilities in a contractor’s security system. Hackers linked to the Chinese government targeted the Office of Foreign Assets Control (OFAC) and the Office of Financial Research. They accessed data on sanctions, tariffs, and internal communications. Classified networks were not affected, but the stolen information could be used for strategic purposes or disinformation campaigns.
