
  • Weekly Report: Cyber-based influence campaigns, 9th–15th of June 2025

    [Listen to the Podcast]

    [Report Highlights]

    As revealed by EU vs. Disinfo, Russia has significantly expanded its FIMI operations, particularly across Africa, using covert strategies to influence public opinion and undermine democratic institutions. The report is based on a joint investigation by the European External Action Service, the French agency VIGINUM, and the UK’s Foreign, Commonwealth and Development Office.

    New Eastern Europe highlights that during Poland’s 2025 presidential election, authorities feared significant Russian disinformation and interference. However, experts found the impact far less than expected, likely due to effective preventative measures.

    The Disinformation Observer reports that the June 2025 Los Angeles immigration protests became a testing ground for coordinated disinformation campaigns, with foreign state media, U.S. politicians, and AI-generated content spreading false narratives to portray national instability and manipulate public perception.

    The Jamestown Foundation reports that following Russia’s invasion of Ukraine, YouTube imposed restrictions and demonetized content creators from Russia and Belarus. Although aimed at curbing state propaganda, these measures ultimately failed to prevent disinformation.

    > TABLE OF CONTENTS <

    HOSTILE INFLUENCE CAMPAIGNS
    STATE ACTORS [Russia]
    Kremlin Expands Disinformation Operations Through African Front
    Surge in Cyber Activity Following Israeli Strikes on Iran
    Disinformation in Poland’s 2025 Presidential Election
    AI-RELATED ARTICLES
    Social Media and AI Chatbots Fuel Disinformation About LA Protests
    LA Protests: Algorithms & Foreign Influence
    AI Misidentifies Plane Crash Details and Spreads Misinformation
    GENERAL REPORTS
    YouTube Restrictions Strengthen Kremlin-Aligned Narratives
    FRAMEWORKS TO COUNTER DISINFORMATION
    EU Sanctions Target Russian Disinformation Amid Enforcement Challenges
    GLOSSARY
    DOWNLOAD SECTION

    STATE ACTORS [Russia]

    Kremlin Expands Disinformation Operations Through African Front

    EUvsDisinfo reports that the African Initiative, a supposedly independent news agency, is a Russian instrument for Foreign Information Manipulation and Interference (FIMI) in Africa. The report is based on a joint investigation by the European External Action Service (EEAS), the French agency VIGINUM, and the United Kingdom’s Foreign, Commonwealth and Development Office (FCDO). The initiative is led by Russian national Artem Sergeevich Kureev, allegedly linked to Russian intelligence services. It plays a central role in Russia’s influence operations on the African continent, particularly following Moscow’s strategic reorientation after its full-scale invasion of Ukraine in February 2022. The African Initiative operates mainly in French- and English-speaking parts of Africa, with a notable focus on countries like Burkina Faso, which serves as a regional hub. Its activities combine physical outreach (local associations, media partnerships, and activist networks) with digital operations, including spreading disinformation via fake news sites, AI-generated content, and manipulative SEO techniques.
    Sources:
    EUvsDisinfo, African Initiative: Russian FIMI operation disguised as a news agency, 2025, [online]; Available at: https://euvsdisinfo.eu/african-initiative-russian-fimi-operation-disguised-as-a-news-agency/
    VIGINUM, FCDO, EEAS, From Public Diplomacy to Covert Influence Operations, 2025, [online]; Available at: https://euvsdisinfo.eu/uploads/2025/06/VIGINUM_FCDO_EEAS_Technical_Report_African_Initiative.pdf

    [Back to Table of Contents]

    Surge in Cyber Activity Following Israeli Strikes on Iran

    In a post on X, CyberKnow notes a sharp rise in cyber activity following Israel’s military strikes on Iran on June 13, 2025. A total of 83 hacktivist groups are currently active: 65 are aligned with Iran, 11 are anti-Iran, and six support Israel. Three of the pro-Iran groups are identified as pro-Russian. Reported cyberattack types include DDoS, website defacements, data breaches, doxing, social media takedowns, ransomware, and attacks targeting operational technology. The number of anti-Iran groups exceeds that of pro-Israel groups, suggesting that much of the hacktivist activity is framed more around opposition to Iran than explicit support for Israel. The post does not assess the effectiveness of the attacks. However, the timing and scope of activity suggest that cyber operations are being launched rapidly in response to on-the-ground developments, underscoring their role in the broader conflict environment.

    Source:
    @Cyberknow20, Iran and Israel War Cybertracker #1, 2025, 15 June, 16:35, [online]; Available at: https://x.com/Cyberknow20/status/1934258425590141193

    [Back to Table of Contents]

    Disinformation in Poland’s 2025 Presidential Election

    New Eastern Europe states that disinformation and foreign interference, particularly from Russia, were significant concerns that the government and cybersecurity officials expressed during Poland’s 2025 presidential election. The Kremlin has long pursued efforts to influence European democratic processes, including disinformation campaigns in Romania, France, and Germany. Given Poland’s strategic position and its shifting stance on Ukraine, fears of pro-Russian narratives infiltrating the country were significant. However, experts and state institutions found that interference was far more limited than initially feared. In response, Poland developed a coordinated defense strategy known as the “election umbrella.” This strategy aims to counter disinformation, strengthen cybersecurity, and train key stakeholders such as election committees and journalists. Poland relied on proven measures from previous efforts and formally consolidated them into a unified approach. Despite early warnings of aggressive cyberattacks by Russian intelligence, including a high-profile breach of a ruling party’s IT system, the protective measures proved largely effective. Experts attribute this relative success to several factors: Poland’s diverse social media landscape, the limited executive powers of its presidency, and valuable lessons learned from recent attacks on elections abroad. Europe-wide initiatives like the Democracy Shield and increased collaboration with technology companies supported these defenses, albeit with varying degrees of enforcement.
    Source:
    New Eastern Europe, Pyka A., Resisting foreign interference: Poland’s presidential election and the Russian challenge, 2025, [online]; Available at: https://neweasterneurope.eu/2025/06/10/resisting-foreign-interference-polands-presidential-election-and-the-russian-challenge/

    [Back to Table of Contents]

    AI-RELATED ARTICLES

    Social Media and AI Chatbots Fuel Disinformation About LA Protests

    According to a Wired article, disinformation surrounding the protests in Los Angeles is spreading rapidly and is further amplified by AI chatbots like Grok and ChatGPT. In recent days, many people in Los Angeles have been protesting against increased Immigration and Customs Enforcement (ICE) raids. Among the images circulating were photos of National Guard troops sleeping on the floor, initially published by the San Francisco Chronicle and later shared by California Governor Gavin Newsom. Additionally, a picture of a pile of bricks was falsely portrayed as connected to the LA protests, though it was taken in New Jersey. On platforms like X (formerly Twitter) and Facebook, users quickly spread false claims that the protesters were paid actors or that the images were doctored or taken out of context. These assertions were further amplified by conspiracy theorists and right-wing influencers, with politicians such as Senator Ted Cruz echoing the misleading content. Many users turned to AI chatbots like Grok and ChatGPT for clarification, but often received inaccurate or false information. Grok, for example, claimed that the National Guard photos originated from Afghanistan in 2021 or the U.S. Capitol. ChatGPT also stated the images were taken at the Kabul airport during the withdrawal from Afghanistan. Regarding the brick pile photo, Grok insisted it was from Los Angeles despite fact-checkers confirming it was taken in New Jersey. Despite corrections, Grok refused to retract its erroneous statements. The role of AI chatbots is thus ambivalent. While intended to provide clarity, their inaccurate responses contribute to confusion. Combined with fast-moving political events and declining content moderation on social platforms, this creates fertile ground for disinformation. The spread of false images and unfounded claims undermines public discourse and makes it harder to distinguish truth from fiction during times of crisis.

    Source:
    Wired, Gilbert D., AI Chatbots Are Making LA Protest Disinformation Worse, 2025, [online]; Available at: https://www.wired.com/story/grok-chatgpt-ai-los-angeles-protest-disinformation/

    [Back to Table of Contents]

    LA Protests: Algorithms & Foreign Influence

    The Disinformation Observer highlights that Russian and Chinese state media spread genuine and fake content during the immigration protests in Los Angeles in June 2025. They operated across various platforms and were often difficult to distinguish from domestic users. A notable example is a photo circulated by the Russian agency Sputnik, allegedly showing “pallets of bricks” at protest sites. In reality, the image originated from a construction site in New Jersey and was unrelated to the Los Angeles protests. Such false images and conspiracy theories were used to portray the protests as signs of instability and to push political messages. A key factor in the rapid spread of this disinformation is the role of social media algorithms and AI-powered chatbots. These systems prioritize content that generates strong engagement, regardless of its truthfulness.
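    To make that mechanism concrete, the following is a deliberately simplified sketch (the posts and scoring weights are invented for illustration; no real platform’s ranking algorithm is this simple): a feed ordered purely by predicted engagement has no input for accuracy, so an emotionally charged falsehood can outrank a sober correction.

```python
# Toy engagement-weighted feed ranking (illustrative only; the weights
# and posts are assumptions, not any platform's real algorithm).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(p: Post) -> float:
    # Note: there is no term for accuracy -- truthfulness is simply
    # not an input to the ranking.
    return 1.0 * p.likes + 3.0 * p.shares + 2.0 * p.comments

feed = [
    Post("Sensational false claim about the protests", likes=900, shares=400, comments=250),
    Post("Careful fact-check of the same claim", likes=120, shares=15, comments=30),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post)), post.text)
# The falsehood (score 2600) ranks above the correction (score 225).
```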
    As a result, sensational and emotionally charged false information gains greater visibility and spreads faster. The combination of disinformation and algorithmic amplification highlights the complexity of modern information warfare. This poses a significant challenge for democratic societies in the digital age.

    Source:
    The Disinformation Observer, Phil, This Week In Disinformation: LA Riots Edition, 2025, [online]; Available at: https://thedisinformationobserver.substack.com/p/this-week-in-disinformation-la-riots

    [Back to Table of Contents]

    AI Misidentifies Plane Crash Details and Spreads Misinformation

    CyberNews reports that Google’s AI Overview wrongly identified the aircraft in the fatal Air India crash as an Airbus A330 instead of a Boeing 787. This error highlights growing concerns about AI hallucinations and misinformation in critical news events. AI systems like Google’s Overview are non-deterministic, meaning they can produce different results for the same query. These hallucinations occur when AI presents contextually related but false information as fact, leading to confusion and misinformation.
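    To illustrate why non-determinism matters here, consider a toy sketch (the token scores are invented; this is not Google’s actual system): generative models typically sample from a probability distribution over possible next tokens, so the same prompt can yield different, and sometimes wrong, answers on different runs.

```python
# Toy temperature sampling: repeated runs of the same "query" can
# surface different answers, including the wrong one. All scores here
# are invented for illustration.
import math
import random

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax with temperature, then draw one token at random.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical next-token scores for "The aircraft involved was a ...":
logits = {"Boeing": 2.0, "Airbus": 1.4, "Cessna": 0.1}
print([sample(logits) for _ in range(5)])
# e.g. ['Boeing', 'Airbus', 'Boeing', 'Boeing', 'Airbus'] -- the less
# likely (and here, incorrect) answer can still be drawn.
```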
    Despite disclaimers that AI answers may contain errors, the authoritative tone of AI-generated content causes users to trust it implicitly. Such mistakes, especially during breaking news, can quickly spread false narratives and cause reputational harm.

    Source:
    Cybernews, Walsh M., Google’s AI overviews hallucinate Airbus in Air India crash – it was a Boeing, 13 June 2025, [online]; Available at: https://cybernews.com/news/google-ai-hallucinate-crash-india-boeing-airbus/t

    [Back to Table of Contents]

    GENERAL REPORTS

    YouTube Restrictions Strengthen Kremlin-Aligned Narratives

    An article by the Jamestown Foundation states that following Russia’s invasion of Ukraine in 2022, YouTube imposed sanctions on Russian and Belarusian channels, blocked monetization, and restricted payment methods. Independent media lost significant revenue and reach as a result. Meanwhile, state-backed platforms like RuTube and VK Video benefited from these restrictions, gaining users and influence. Belarusian state media quickly filled the gap on YouTube, significantly expanding their reach and often adopting Kremlin-aligned narratives to justify the war. Independent channels, on the other hand, were labeled “extremist,” with their audiences intimidated by repression and prison sentences. Many media outlets had to scale back their activities or operate from exile. Russian opposition media saw a temporary increase in viewers, but their reach remains far behind state media. Today, the media landscape in both countries is dominated by state-controlled content. Sanctions and platform restrictions have significantly reduced space for independent journalism and amplified the spread of state propaganda.

    Source:
    Jamestown Foundation, Bolkunets D., YouTube Restrictions on Belarus and Russia Strengthen Minsk’s Propaganda, 2025, [online]; Available at: https://jamestown.org/program/youtube-restrictions-on-belarus-and-russia-strengthen-minsks-propaganda/

    [Back to Table of Contents]

    FRAMEWORKS TO COUNTER DISINFORMATION

    EU Sanctions Target Russian Disinformation Amid Enforcement Challenges

    The Royal United Services Institute (RUSI) reported that the European Union is employing sanctions as a key strategy to counter disinformation, primarily focusing on Russian state media outlets in response to the invasion of Ukraine. The core disinformation aspect addressed is the continuous broadcasting of Russian propaganda within the EU, often through “mirror websites” that circumvent bans. This strategy aims to limit the reach of malign narratives by directly targeting the platforms and entities that disseminate them. However, implementing and enforcing these sanctions face significant challenges, including monitoring pervasive online content and the need for a unified approach among EU member states. The broader implications of this approach highlight the complex balance between combating foreign information manipulation and upholding freedom of speech within democratic societies. The article suggests that while restrictive measures, such as sanctions, are necessary, they are not sufficient on their own. It underscores the critical need for complementary investments in societal resilience, fostering media literacy among the populace, and promoting independent journalism to create a more robust defense against disinformation. The debate also addresses domestic sources of misinformation, emphasizing the multifaceted nature of defending democratic information spaces.

    Source:
    RUSI, Gyimesi B., Defending Democracy: Sanctions on Disinformation, 2025, [online]; Available at: https://www.rusi.org/explore-our-research/publications/commentary/defending-democracy-sanctions-disinformation

    [Back to Table of Contents]

    [Download Report]

    GLOSSARY

    Information Operations
    “The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making.” Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.

    Hybrid Warfare
    A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.

    Cyber Warfare
    Commonly understood as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.

    Cyfluence Attack
    A cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.

    Soft Warfare
    All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.

    CIB
    Meta’s terminology for Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.

    FIMI
    The EU’s terminology for Foreign Information Manipulation and Interference, emphasizing the foreign activity.

    Hostile Influence Campaign (HIC)
    An information operation that seeks to influence a targeted audience for a hostile cause.

    Digital Impact on Discourse (DID)
    A non-hostile effort to influence discourse. Usually used in marketing articles; here, it is used to illustrate the opposite of an HIC.
    Misinformation
    False, inaccurate, or misleading information communicated regardless of an intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

    Disinformation
    Misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words “misinformation” and “disinformation” have often been associated with the concept of “fake news,” which some scholars define as “fabricated information that mimics news media content in form but not in organizational process or intent.”

    Inauthentic Behavior
    Defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards.” We have broadened this term to encompass all social media platforms, mutatis mutandis.

    Fake users
    AKA Avatars: a generic term describing all types of users who are not legitimate social media users, i.e., they are bots, or are operated by humans but not under their real identity, or are operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs.

    Unidentified users
    A generic term for users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).

    Sockpuppet accounts
    A sock puppet or sockpuppet is an online identity used for deception.

    Bots
    Autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.

    Repurposed accounts
    Social media accounts that were hacked or purchased, then used for purposes different from the original ones.

    Fake website
    A website designed for fraudulent or scam activity, hiding its real purpose.

    Deep Assets
    Non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to take phone calls, etc.). Deep Platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily articles and videos, represented on social media platforms by users who identify as the website’s representatives.

    Real platforms
    An actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private-sector influence research center that publishes research on influence operations, either globally or locally.

    Astroturfing
    Takes place when a coordinating actor creates a false impression of grassroots support.

    Cyberbullying
    When someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech.
    Bullying or harassment can be identified by repeated behavior and an intent to harm.

    [Back to Table of Contents]

    DISCLAIMER

    Copyright and License of Product
    This report (the “Product”) is the property of Cyfluence Research Center gGmbH (“Cyfluence”) and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

    Disclaimer of Warranties
    The Product is provided “as is” without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

    Accuracy of Information
    The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information’s accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

    Limitation of Liability
    To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

    Indemnification
    The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.

    Third-Party Rights
    The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

    Governing Law and Jurisdiction
    This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.

    [Back to Table of Contents]

  • Weekly Report: Cyber-based influence campaigns, 2nd–8th of June 2025

    [Listen to the Podcast]

    [Report Highlights]

    DFRLab found that Moldova24 (MD24), launched in July 2024, is a Kremlin-aligned Moldovan web broadcaster backed by fugitive oligarch Ilan Shor and covertly supported by Russian state media infrastructure like RT and TV Novosti.

    As stated in a TechXplore article, during a recent four-day conflict between India and Pakistan, social media users increasingly turned to AI chatbots like Grok, ChatGPT, and Gemini to verify claims in real time. However, instead of clarifying facts, these tools often amplify disinformation.

    The Virtual Manipulation Brief 2025 from NATO StratCom COE highlighted a sharp escalation in Russian and Chinese information manipulation, driven by advances in AI.

    AFP Fact Check reported that the recent South Korean election was significantly impacted by widespread online disinformation campaigns, with both progressive and conservative parties being targeted.

    OpenAI’s latest report features case studies with poetic titles: “Sneer Review,” “Uncle Spam,” and “Helgoland Bite.” These studies expose how state-linked actors from China, Russia, and Iran weaponize generative AI.

    > TABLE OF CONTENTS <

    HOSTILE INFLUENCE CAMPAIGNS
    STATE ACTORS [Russia]
    Russia’s Hybrid War on Europe and Ukraine
    Digital Hostile Influence: Moldova24’s Media Operations
    [The War in Ukraine]
    Russian disinformation reacts hesitantly after the Ukrainian Air Strike
    Kremlin Disinformation Targeting Polish Elections
    Russia Spreads Misinformation About Kursk Attack and “Neo-Nazis”
    AI-RELATED ARTICLES
    Confidently Incorrect: AI Factcheckers Spread Misinformation
    Questions About C2PA’s Viability in Combating AI Misinformation
    NATO Warns of Generative ‘AI Agent Swarms’
    GENERAL REPORTS
    South Korea Election Disinformation: Foreign Interference and Deepfakes
    FRAMEWORKS TO COUNTER DISINFORMATION
    OpenAI’s Disinformation Tracking Report
    GLOSSARY
    DOWNLOAD SECTION

    STATE ACTORS [Russia]

    Russia’s Hybrid War on Europe and Ukraine

    A report by the Atlantic Council states that Russia is intensifying its hybrid warfare against Europe through cyberattacks, sabotage, disinformation, and targeted influence operations (“hostile influence”). A central tactic involves manipulating elections via coordinated disinformation campaigns, manipulated online content, and covert support for system-critical or pro-Russian actors. Countries particularly affected include Poland, Slovakia, the Baltic states, and Romania (for more information on the manipulation of the 2024 Romanian presidential elections, see our detailed research report here). Ukraine has been exposed to these tactics since 2014, but has developed an effective response model. This model combines media literacy, digital volunteer networks tasked with identifying and debunking false information, and a state-led communication strategy that provides transparent messaging and actively counters disinformation narratives. By contrast, the report finds that NATO and EU structures remain institutionally fragmented and insufficiently coordinated. A joint EU-NATO-Ukraine framework, along with targeted support for fact-checking, media education, and digital resilience, would help to form a sustainable response model.
    Additionally, a report by EU vs. Disinfo mentioned that in recent months, pro-Kremlin disinformation has increasingly manipulated international perceptions by framing the United States, particularly under former President Donald Trump, as a rational actor seeking peace, in contrast to Europe and Ukraine, which are depicted as aggressive and intent on prolonging the war. Russian state media and affiliated outlets have praised Trump, claiming he shares views with Moscow and favors negotiations aligned with Russia’s demands. This narrative aims to sow discord within the West by portraying the U.S. as a more reasonable counterweight to a supposedly belligerent Europe.

    Sources:
    Atlantic Council, Beznosiuk, M., Russian hybrid warfare: Ukraine’s success offers lessons for Europe, 2025, [online]; Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russian-hybrid-warfare-europe-should-study-ukraines-unique-experience/#
    EU vs. Disinfo, The world in black and white, 2025, [online]; Available at: https://euvsdisinfo.eu/the-world-in-black-and-white-2/

    [Back to Table of Contents]

    Digital Hostile Influence: Moldova24’s Media Operations

    According to a DFRLab investigation, Moldova24 (MD24) is an ostensibly independent Moldovan web broadcaster launched in July 2024 that operates as an extension of the Kremlin’s information machine. Streaming news shows, call-in programs, and a torrent of short social media clips around the clock, the channel is steered by the media network of fugitive oligarch Ilan Shor but protected by the resources of Russian state outlets such as RT and TV Novosti, from whose servers it quietly transmits. MD24’s editors recycle Moscow’s talking points almost verbatim: Brussels is blamed for soaring prices, President Maia Sandu is painted as an authoritarian US puppet, and the idea of NATO cooperation is framed as a prelude to war. The tone is emotional and often sensationalistic, with dramatic music, split-second cuts, and scrolling captions designed to keep viewers angry and engaged. During the winter energy crisis of 2025, the channel uploaded nearly a thousand TikTok videos in eight weeks, saturating feeds just as household bills spiked; each clip ended with the same implied lesson: salvation lies in turning away from the West. When platforms suspend an account, the operation re-emerges under a slightly different name, supported by a lattice of mirror websites and fallback channels on Telegram and Instagram. The objective is not merely to win clicks, but to erode Moldovans’ trust in their democratic institutions, sap enthusiasm for EU membership, and clear the ground for pro-Russian parties ahead of crucial elections. In short, MD24 is Moscow’s pocket-sized but agile psychological operations tool, explicitly calibrated for Moldova’s volatile information space.

    Source:
    DFRLab, Olari V., Unveiling the Russian infrastructure supporting the Moldova24 TV channel, 2025, [online]; Available at: https://dfrlab.org/2025/06/03/unveiling-the-russian-infrastructure-supporting-the-moldova24-tv-channel/

    [Back to Table of Contents]

    [The War in Ukraine]

    Russian disinformation reacts hesitantly after the Ukrainian Air Strike

    EUvsDisinfo highlights the impact of Operation Spiderweb. In this coordinated drone offensive conducted by Ukraine, five Russian military airfields were partially hit. The attacks were unexpected and seemingly caught the Russian military leadership and Kremlin-aligned propaganda unprepared. On the Russian side, the actual effects were downplayed.
    Reports reluctantly mentioned “some fires,” while Russian President Putin remained silent. Prominent propagandists like Vladimir Solovyov also showed unusual restraint. EUvsDisinfo concludes that Russian information control is fragile and dependent on Kremlin directives. Lacking clear instructions, pro-Kremlin propaganda resorted to familiar tactics: downplaying the attacks, portraying Ukrainians as terrorists, and spreading conspiracy theories that British strategists were behind the strikes to weaken Russia. This was intended to relativize the impact of the Ukrainian operation and depict the West as the mastermind. Additionally, Russia emphasized nuclear deterrence to highlight its strength. The article concludes that the Kremlin’s response to Operation Spiderweb reveals weaknesses in its authoritarian system, especially the propaganda’s dependence on Putin’s direct control. When he remains silent, information gaps cannot be convincingly filled. The repeated accusations against the United Kingdom distract attention and aim to support Russia’s invulnerability narrative. Overall, the successful Operation Spiderweb undermines the image of Russia’s military superiority and exposes cracks in the Kremlin’s controlled disinformation structure.

    Source:
    EU vs. Disinfo, An Unpleasant Incident, 2025, [online]; Available at: https://euvsdisinfo.eu/an-unpleasant-incident/

    [Back to Table of Contents]

    Kremlin Disinformation Targeting Polish Elections

    EU vs. Disinfo states that during Poland’s 2025 presidential elections, pro-Kremlin disinformation sources actively sought to undermine public support for Ukraine and delegitimize the democratic process. Poland was portrayed as corrupt, chaotic, and imperialistic, while the election itself was framed as manipulated by foreign powers. Simultaneously, anti-Ukrainian sentiments were stoked through false claims that both presidential candidates were increasingly hostile toward Ukraine. Three core narratives dominated the influence campaign: First, the election was depicted as a battle between “Trump, Brussels, and globalist elites,” casting doubt on its democratic legitimacy. Second, it was asserted that Ukraine would be the loser of the election outcome, as candidates were supposedly anti-Ukrainian, and Polish society was portrayed as weary of supporting Ukrainian refugees. Third, both candidates were labeled as “hardline Russophobes,” suggesting that relations with Russia would remain unchanged regardless of the winner. Following Karol Nawrocki’s victory, pro-Kremlin media framed his success as the onset of the “Trumpization of Europe” and a blow against the EU. Nawrocki was further depicted as a wanted figure on Russian watchlists due to his removal of Soviet-era monuments. Overall, the disinformation campaign aimed to portray Poland as a puppet of anti-Russian forces, discredit its democratic institutions, and strain the relations between Poland, Ukraine, and the West.

    Source:
    EU vs. Disinfo, Polish elections in the eyes of Moscow, 2025, [online]; Available at: https://euvsdisinfo.eu/polish-elections-in-the-eyes-of-moscow/

    [Back to Table of Contents]

    Russia Spreads Misinformation About Kursk Attack and “Neo-Nazis”

    According to DisinfoWatch, a recent tweet by Russia’s Ministry of Foreign Affairs falsely claimed that Ukraine committed atrocities in Russia’s Kursk region, comparing the alleged violence to Nazi crimes.
    The statement, made by spokesperson Maria Zakharova, accused Ukraine’s government and foreign fighters of killing over 300 civilians in Kursk. However, there is no credible evidence to support this narrative. Independent investigators, satellite imagery, and international media have found no indication that Ukraine targeted civilians in the region. The accusation of Ukraine being run by “neo-Nazis” is a recycled propaganda trope. In reality, Ukraine’s president is Jewish, and far-right parties hold no parliamentary seats in the country. The Kremlin’s references to Nazism aim to evoke historical trauma, misrepresent Ukraine as the aggressor, and deflect attention away from ongoing Russian violence.

    Source:
    DisinfoWatch, Russian MFA Makes False Claims About Kursk and “Nazis”, 2025, [online]; Available at: https://disinfowatch.org/disinfo/russian-mfa-makes-false-claims-about-kursk-and-nazis/

    [Back to Table of Contents]

    AI-RELATED ARTICLES

    Confidently Incorrect: AI Factcheckers Spread Misinformation

    TechXplore reported that as AI chatbots like Grok, ChatGPT, and Gemini become more prominent on social platforms, users are increasingly relying on them for real-time fact-checking. Prompts like “Hey @Grok, is this true?” are now routine on X. The reporting found that instead of debunking false claims, chatbots often reinforce them. Grok, for instance, misidentified archival footage from Sudan’s Khartoum airport as a missile strike in Pakistan, and described a video of a burning building in Nepal as showing Pakistani retaliation against India. A Columbia Tow Center study concludes that chatbots often provide misleading information instead of admitting uncertainty. When AFP presented Gemini with an AI-generated image of a woman, it confirmed its authenticity and invented background details. Grok likewise validated a fake “giant anaconda” video, citing non-existent expeditions. Experts warn that with reduced human moderation, chatbot fact-checkers may be effectively exploited by hostile actors. Their tendency to craft plausible but false narratives opens them up to direct manipulation by misinformation campaigns and to the possibility of their output being shaped by politically biased training data. The vulnerability of AI chatbots to direct manipulation was dramatically acknowledged in a statement by xAI, which blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing the unsupported narrative of “white genocide” in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Elon Musk as the “most likely” culprit.

    Sources:
    TechXplore, Hey chatbot, is this true? AI 'factchecks' sow misinformation, 2025, [online]; Available at: https://techxplore.com/news/2025-06-hey-chatbot-true-ai-factchecks.html
    Columbia Journalism Review, AI Search Has A Citation Problem, 2025, [online]; Available at: https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

    [Back to Table of Contents]

    Questions About C2PA’s Viability in Combating AI Misinformation

    A recent RAND report casts doubt on the effectiveness of the Coalition for Content Provenance and Authenticity (C2PA), a private-sector initiative to counter the growing threat of AI-generated misinformation. Backed by industry giants such as Adobe, Google, and Microsoft, the C2PA seeks to embed cryptographic provenance into digital content, offering a technical solution to restoring trust online.
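    To give a feel for the underlying mechanism, here is a minimal sketch of signed content provenance in general, not the actual C2PA manifest format or any C2PA API (the issuer name and metadata fields are invented for illustration): a publisher signs a hash of the content plus a small claim, and anyone holding the publisher’s public key can later verify that the content is unmodified and the claim genuine.

```python
# Minimal provenance sketch (illustrative; NOT the real C2PA format).
# Requires the 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_credential(content: bytes, issuer: str, key: Ed25519PrivateKey) -> dict:
    # Bind the issuer's identity to a hash of the exact content bytes.
    claim = {"issuer": issuer, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_credential(content: bytes, cred: dict, public_key) -> bool:
    claim = cred["claim"]
    if claim["sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(cred["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # credential was forged or tampered with

key = Ed25519PrivateKey.generate()
photo = b"...image bytes..."
cred = make_credential(photo, "news-agency.example", key)

assert verify_credential(photo, cred, key.public_key())
assert not verify_credential(photo + b"edit", cred, key.public_key())
# The "authenticity gap": content carrying no credential at all cannot be
# checked either way -- it may be genuine or expertly faked.
```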
    Initially hailed as a potential bulwark against disinformation, the initiative now faces a fundamental design flaw: its voluntary, closed-system model is ill-suited to the open nature of the internet. While it tags participating content with a digital “badge” of authenticity, it cannot account for the vast majority of legitimate content created outside its system. This creates what RAND calls an “authenticity gap,” where untagged content may be genuine or expertly faked, giving bad actors room to sow doubt. An outdated threat model that fails to address the accelerating sophistication of generative AI compounds the issue. Critics argue the standard must evolve to differentiate between credentials issued by verified, trusted entities and those from unvetted sources. Without such reform, the C2PA may inadvertently erode trust rather than enhance it.

    Source:
    RAND, Overpromising on Digital Provenance and Security, 2025, [online]; Available at: https://www.rand.org/pubs/commentary/2025/06/overpromising-on-digital-provenance-and-security.html

    [Back to Table of Contents]

    NATO Warns of Generative ‘AI Agent Swarms’

    The recent Virtual Manipulation Brief 2025 from NATO StratCom COE highlights a sharp escalation in Russian and Chinese information manipulation capabilities, driven by advances in AI. The report states that in the last year, campaigns using AI-generated content, deepfakes, and bot networks to spread anti-NATO, anti-Ukraine, and anti-U.S. narratives were more frequent, synchronized, and emotionally targeted than previously. A key finding is that Kremlin-backed disinformation is increasingly sophisticated and effective in using AI to automate content creation and tailor messaging to specific platforms and languages. This has enabled hostile influence campaigns to exploit geopolitical developments rapidly. Telegram, VK, and YouTube have emerged as high-engagement platforms for pro-Russian messaging, while X serves as a broad reach vector for both Russian and Chinese narratives. The report highlights a growing alignment between pro-Russian and pro-Chinese narratives, which consistently frame NATO as a destabilizing force and portray the U.S. critically. China’s messaging aligns closely with Russia’s, portraying NATO as destabilizing and the U.S. as weak and corrupt, thereby expanding the threat beyond a single actor. Inauthentic behaviour, including cross-platform amplification and echo chamber tactics, was also frequently observed. Looking ahead, the brief warns of the potential for adversaries to coordinate “generative AI agent swarms” using emerging interoperability standards (MCP, A2A, ACP), raising concerns about the speed and scale of future influence campaigns. The NATO report concludes that countering these evolving threats is a critical challenge for democratic societies.

    Source:
    NATO Strategic Communications Centre of Excellence, Virtual Manipulation Brief, May 2025, [online]; Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/VMB-Final-8325b.pdf?zoom=page-fit

    [Back to Table of Contents]

    GENERAL REPORTS

    Jae-Myung Wins Disinformation-Saturated South Korean Election

    The recent South Korean election on June 3, 2025, starkly illustrated how hostile digital influence campaigns are increasingly saturating democratic processes.
    According to an article by AFP Fact Check, both conservative and progressive candidates were targeted by widespread online disinformation campaigns, a trend in line with the escalating threats to electoral integrity around the world. In the run-up to the vote, progressive candidates, notably then-opposition leader Lee Jae-myung, faced a concerted effort to portray them as sympathetic to or even controlled by China. Although Lee went on to win the election, his campaign was dogged by the false claim that Chinese nationals were exploiting weaknesses in the overseas voter registration system to cast fraudulent ballots. “It's now the claim that progressives are subservient to China that resonates -- especially amid worsening US-China tensions,” noted Choi Jin-bong, a media communications professor. Fabricated AI images, including one showing Lee kowtowing to a Mao statue, further fuelled these misleading narratives. Conservative candidates were targeted over alleged ties to the United States. Doctored images depicted former Prime Minister Han Duck-soo’s presidential campaign announcement with Trump’s endorsement. Another false claim, spread by a popular right-wing YouTuber to 1.5 million subscribers, asserted the Pentagon’s endorsement of a conservative candidate, a claim the Pentagon swiftly denied. The sheer intensity and ubiquity of digital hostile influence attacks during the South Korean election can be viewed as a warning of the escalating challenges democratic nations face worldwide. In campaign environments saturated by digital misinformation, the integrity of democratic processes risks being further eroded.

    Source:
    AFP Fact Check, Shim Kyu-Seok, South Korea election hit by misinformation, 2025, [online]; Available at: https://factcheck.afp.com/doc.afp.com.48ZT8CP

    [Back to Table of Contents]

    FRAMEWORKS TO COUNTER DISINFORMATION

    OpenAI’s Disinformation Tracking Report

    A recent report by OpenAI details several covert influence operations (IOs). To counter these threats, OpenAI leverages its AI tools to support expert investigative teams in identifying and disrupting malicious activities. Over the past three months, the company has uncovered and exposed operations involving social engineering, cyber espionage, and deceptive online campaigns. Operation “Sneer Review” notably involved ChatGPT accounts generating large volumes of social media comments on platforms like TikTok, X, and Reddit. These posts aligned with China’s geopolitical interests, including critical commentary on Taiwanese content and the defamation of activists, accompanied by staged, seemingly organic user engagement. Internal performance reviews were also AI-generated to manage the operation systematically. Another China-linked campaign, “Uncle Spam,” aimed to polarize U.S. public opinion by creating divisive posts on platforms such as X and Bluesky. AI-generated profiles, including those themed around U.S. veterans, were used to deepen political divides. The report also highlights influence activities by Russian and Iranian actors, such as “Helgoland Bite” in Germany, which promotes pro-AfD content, and the Iranian operation “STORM-2035,” which produced posts on U.S. immigration policy and European independence movements. The report illustrates how AI is increasingly exploited to mass-produce politically manipulative content to sway public opinion. Actors combine automated content creation, fake personas, and coordinated engagement to simulate authenticity.
    Although the actual impact of these operations varied and was sometimes limited, the analysis underscores the growing role of AI-driven influence campaigns in global information conflicts.

    Source:
    OpenAI, Disrupting malicious uses of AI: June 2025, 2025, [online]; Available at: https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

    [Download Report]

    [Back to Table of Contents]

  • To Understand The Nature of Modern Chinese Influence Operations, Study Russia First

    In her article, Maya Sobchuk analyzes the growing strategic partnership between China and Russia in the field of information and influence operations. While China’s economic and military power often takes center stage, the piece shows that Beijing is increasingly drawing on Russian expertise in psychological warfare to target foreign audiences more effectively. Concrete examples illustrate how both countries align narratives, adopt shared tactics, and develop joint strategies in the information space. This compelling read explains why understanding Russia is essential to grasping China’s evolving role in global information warfare. [Download Full Report here]

  • The Moldovan Government Report on Russian Election Interference

    The Moldovan Intelligence and Security Service (SIS) has published a report investigating Russian interference in the 2024 referendum on EU accession. The report addresses illegal financing, propaganda, organized street protests, and cyberattacks. It highlights that Russian actors, such as Ilan Șor, played a central role in these activities. Additionally, the report examines hybrid tactics, the influence of organized crime, and digital disinformation.

  • Romanian Election and Findings of snoop.ro

    A recent investigation by Snoop.ro sheds light on the role of influencer campaigns in Romanian politics. At the center is a pro-Georgescu campaign, initially notable for its strong presence on TikTok, now embroiled in controversy over alleged takeovers and financial irregularities.

  • Cyberattack on U.S. Treasury Attributed to Chinese Actors

    Unclassified U.S. Department of the Treasury systems were breached through vulnerabilities in a contractor’s security system. Hackers linked to the Chinese government targeted the Office of Foreign Assets Control (OFAC) and the Office of Financial Research. They accessed data on sanctions, tariffs, and internal communications. Classified networks were not affected, but the stolen information could be used for strategic purposes or disinformation campaigns.

  • US Sanctions Against Beijing-Based Cybersecurity Firm Integrity Tech

    The U.S. Treasury's Office of Foreign Assets Control (OFAC) has sanctioned Beijing-based cybersecurity firm Integrity Tech, alleging ties to the state-backed hacker group Flax Typhoon. Accusations include operating a massive botnet used in cyberattacks across the U.S. and Europe, raising new concerns about state-sponsored cyber threats.

  • Fact-checking and its Limitations to Combat Influence Campaigns

    Meta’s shift from independent fact-checkers to user-driven “community notes” sparks debate over the effectiveness of moderating disinformation. Critics question the transparency and impact of past fact-checking efforts, while influence campaigns often bypass these measures by manipulating context rather than spreading outright falsehoods.

  • Deepfakes and Multilingual Misinformation: The Challenges of Tracking AI-generated Disinformation

    The rapid advancement of AI-generated content has transformed the landscape of disinformation, making it harder than ever to detect and counter false narratives. Deepfakes, synthetic media, and multilingual misinformation are being weaponized to manipulate public opinion and interfere with democratic processes worldwide. With AI-powered disinformation campaigns exploiting linguistic barriers and social media algorithms, traditional fact-checking methods struggle to keep up.

  • Countermeasures and Defense Against AI-Driven Disinformation

    Artificial intelligence (AI) is transforming the world at a rapid pace and increasingly shaping everyday life. Its advances bring new possibilities and challenges with far-reaching implications for society, the economy, and communication. The technology's evolution raises fundamental questions that go beyond technical innovation, redefining our understanding of information, interaction, and control. In a constantly changing digital landscape, it remains uncertain how AI will continue to develop and what role it will ultimately play.
