
  • Weekly Report: Cyber-based influence campaigns, 21st - 27th of April 2025

[Listen to the Podcast] [Download Report]

> TABLE OF CONTENTS <

SOCIAL MEDIA PLATFORMS
[Meta] Ongoing Criticism of Meta’s Fact-Checking Policy

STATE ACTORS
[Russia] Dutch Annual Report Warns of Expanding Russian Cyber Threats
Russia’s Disinformation Offensive in Moldova’s 2024 Elections
The Kremlin Exploits Auschwitz Liberation Anniversary to Spread Disinformation
EU Pledges Support Against Russian Disinformation in Moldova
[The War in Ukraine] Disinformation Campaigns Surround Chornobyl Drone Strike
[China] Australia: The dangers of using Chinese apps in election campaigns

AI RELATED ARTICLES
AI-Fueled Misinformation Threatens Canada’s Election
Anthropic Discovered Sophisticated AI-Powered Influence Campaign

GENERAL REPORTS
Bot-Like Networks Target Canadian Election with Disinformation
Canada’s 2025 Election Targeted by Foreign Disinformation
Surge of Visual Disinformation Escalates Political Tensions
Gen Z’s High Vulnerability to Online Disinformation
Meta’s Decision to Block Canadian News: A Gateway for Misinformation
Disinformation Affecting Climate Debate in Australia’s Election

GLOSSARY

SOCIAL MEDIA PLATFORMS

[Meta]

Ongoing Criticism of Meta’s Fact-Checking Policy

TechXplore highlights recent developments regarding Meta’s decision to end its U.S. fact-checking program on Facebook in January 2025 (see Cyfluence Weekly Review, W02/2025). Since then, Meta has shifted its focus to its new “Community Notes” system. The change has sparked concern among many independent observers. The Meta Oversight Board, which reviews the company’s content moderation practices, called the move “hasty” and warned it could increase the spread of harmful and misleading content. In response, the Board issued 17 recommendations, including a call for Meta to assess how Community Notes compares to traditional third-party fact-checking. Critics like Michael Wagner from the School of Journalism and Mass Communication at the University of Wisconsin-Madison argue that relying on unpaid users to police misinformation across a platform with billions of users reflects a serious lapse in corporate responsibility and poses risks to public safety and human rights. Meta has pledged to follow the Board’s decisions in individual cases but is not required to implement its broader policy recommendations.

Source: Tech Xplore, 2025. Oversight board voices concern about Meta nixing fact checks. [online] Available at: https://techxplore.com/news/2025-04-oversight-board-voices-meta-nixing.html

[Back to Table of Contents]

STATE ACTORS

[Russia]

Dutch Annual Report Warns of Expanding Russian Cyber Threats

The Netherlands’ Military Intelligence and Security Service’s (MIVD) 2024 annual report outlines security threats posed by several states, including Russia, China, and Iran. While China and Iran are primarily associated with traditional espionage, cyber espionage, and technology theft, Russia is presented as a significantly broader and more aggressive threat. A section on digital influence appears exclusively under the Russia chapter. According to the report, Russia pursues a whole-of-society strategy. State institutions, intelligence services, private companies, and hacktivists work cooperatively. The goal is to destabilize Western societies through the spread of pro-Russian narratives, the erosion of NATO cohesion, and the disruption of democratic processes, such as the run-up to the European elections in the Netherlands.
This strategy’s key and increasingly used component is deploying artificial intelligence (AI) to automate and amplify digital campaigns. These activities form part of a hybrid warfare approach that combines digital tools, psychological manipulation, and sabotage. The MIVD cites specific incidents, including DDoS attacks on websites of political parties and public transport operators, which were intended to interfere with the European elections in the Netherlands. Additionally, Infosecurity Magazine reports that the Dutch defense minister has warned of Russia’s accelerating militarization, driven by its war economy, which is advancing faster than NATO’s current efforts. This development calls for an urgent reinforcement of Europe’s defense capabilities. By deliberately blurring the boundaries between peace and conflict, Russia employs disinformation as a tool to fragment Western cohesion and undermine collective responses to its geopolitical ambitions. Sources:   Military Intelligence and Security Service (MIVD), 2025. Public Annual Report 2024 . [online] Available at: https://www.politico.eu/wp-content/uploads/2025/04/22/MIVD_Openbaarjaarverslag2024_digitaal-1-1.pdf   Infosecurity Magazine, P. Muncaster, 2025. Dutch Warn of “Whole of Society” Russian Cyber-Threat . [online] Available at: https://www.infosecurity-magazine.com/news/dutch-warn-whole-society-russia/   [Back to Table of Contents] Russia’s Disinformation Offensive in Moldova’s 2024 Elections As stated in an EU vs. Disinfo article , during Moldova’s 2024 presidential election and EU referendum, Russia launched an aggressive and multifaceted disinformation campaign aimed at destabilizing the country’s democratic processes. Russia used covert and overt channels, including official state media like TASS and Sputnik Moldova. It intensified its Foreign Information Manipulation and Interference (FIMI) operations to undermine President Maia Sandu and discredit Moldova’s EU integration. The Kremlin recycled disinformation narratives from Ukraine, such as organ trafficking allegations, and tailored them to Moldovan audiences through a vast network of Telegram channels and fake local news sites.  To flood the information space, Russian-linked platforms created Moldova-focused domains and media outlets, many tied to RT and other state-backed entities. These outlets propagated anti-EU messages, portraying European integration as a threat to Moldova’s sovereignty, a danger to Russian-speaking communities, and a provocation toward Russia. The campaign used manipulative techniques like AI-generated content, fake documents, and fabricated fact-checks to confuse the public, while even deploying a Telegram bot to pay Moldovans to spread anti-EU propaganda—weaponizing local poverty for strategic gain.  Source:    EUvsDisinfo, 2025. In Russia’s FIMI laboratory: test case, Moldova . [online] Available at: https://euvsdisinfo.eu/in-russias-fimi-laboratory-test-case-moldova/   [Back to Table of Contents] The Kremlin Exploits Auschwitz Liberation Anniversary to Spread Disinformation  EUvsDisinfo reports that Russia was not invited to the international ceremony marking the 80th anniversary of the liberation of Auschwitz on 27 January 2025. The Auschwitz-Birkenau Museum declined to invite Russian representatives, citing the ongoing war of aggression against Ukraine, which it described as a “barbaric act.” The Kremlin responded with a coordinated disinformation campaign. 
Russian officials and state media framed the exclusion as a result of Western “Russophobia,” historical falsification, and a supposed revival of Nazi ideology in the EU and Ukraine. Russia portrayed itself as the victim of an intentional effort to erase its role in World War II. Key Kremlin narratives include the portrayal of Russia as the sole legitimate heir to the Soviet Union, the claim that only Russian soldiers liberated Auschwitz, and the regular labeling of Ukraine and its allies as “neo-Nazis.” Historical facts, such as the participation of around seven million Ukrainians in the Red Army, are deliberately omitted. This rhetoric is revisionist: it distorts historical reality, relativizes the Holocaust, and equates present-day political criticism of Russia with the crimes of Nazism. Russia attempts to position itself as a contemporary victim of a “fascist West”. One striking example of this disinformation strategy is the fabricated claim that Poland’s Chief Rabbi, Michael Schudrich, called President Zelenskyy a “dictator who has tamed Nazis on a leash.” This false statement first circulated on Russian Telegram channels and was later repeated publicly by Russian Foreign Ministry spokesperson Maria Zakharova, illustrating a typical case of Kremlin-generated and state-amplified disinformation.

Source: EUvsDisinfo, 2025. 80 years after Auschwitz – The Kremlin’s manipulative use of the Holocaust. [online] Available at: https://euvsdisinfo.eu/80-years-after-auschwitz-the-kremlins-manipulative-use-of-the-holocaust/

[Back to Table of Contents]

EU Pledges Support Against Russian Disinformation in Moldova

In a joint press conference, EU High Representative Kaja Kallas and Moldovan President Maia Sandu highlighted the escalating threat of Russian disinformation campaigns aimed at destabilizing Moldova’s democratic institutions and its path toward EU integration. They emphasized the need for coordinated efforts to counteract these hybrid threats, which include spreading false narratives and manipulating public opinion to undermine trust in democratic processes. They stressed that the disinformation campaigns targeting Moldova are primarily conducted by Russian state-sponsored actors who use various platforms to spread false or misleading narratives. These efforts aim to destabilize the country, weaken its democratic institutions, and erode its sovereignty. In response, the European Union is helping to protect Moldova’s democracy and electoral infrastructure. At the request of Moldovan authorities, the EU is deploying a Hybrid Rapid Response Team to strengthen institutional capacity and safeguard the integrity of upcoming elections. Additionally, the EU continues to assist Moldova in building resilience to hybrid threats and enhancing crisis preparedness.

Source: European External Action Service, K. Kallas and M. Sandu, 2025. Moldova: Remarks at the joint press conference. [online] Available at: https://www.eeas.europa.eu/eeas/moldova-remarks-high-representativevice-president-kaja-kallas-joint-press-conference%C2%A0-president-maia_en

[Back to Table of Contents]

[War in Ukraine]

Disinformation Campaigns Surround Chornobyl Drone Strike

As published by DFRLab, on February 14th, 2025, the drone strike on the Chornobyl nuclear power plant reignited global fears of a nuclear disaster and swiftly became a battleground for disinformation.
While Ukrainian officials confirmed that the attack, allegedly by Russia, damaged the reactor’s protective shelter without causing a radiation leak, Russia denied involvement and blamed Ukraine. This was quickly amplified by Kremlin-affiliated media and over 100 Russian and Belarusian Telegram channels, many of which had already primed audiences with conspiracy theories weeks prior, claiming Ukraine was planning a provocation to frame Russia. These narratives not only accused Ukraine of staging the attack to gain sympathy before the Munich Security Conference but also questioned the structural integrity of the reactor’s protective dome, suggesting corruption and Western orchestration. The messaging weaponized the historical trauma of the 1986 Chornobyl disaster, evoking panic by implying that any attack—even a contained one—could unleash catastrophic fallout. By invoking past tragedies, Russian disinformation outlets effectively reinforced fear while positioning themselves as victims of Western and Ukrainian schemes.  Beyond the Chornobyl incident, the same networks have recycled similar disinformation tactics, including unfounded claims of Ukrainian strikes on the Kursk NPP and alleged NATO complicity. These coordinated campaigns use anonymous sources, fabricated evidence, and high-volume repetition across social media to create confusion, deflect blame, and undermine trust in Ukrainian leadership and Western allies.  Source:    DFRLab, I. Adam and S. Rothbardt, 2025. Narrative and provocation: What Telegram had to say about the February attack on Chornobyl nuclear plant . [online] Available at: https://dfrlab.org/2025/04/25/narrative-and-provocation-what-telegram-had-to-say-about-the-february-attack-against-chornobyl-npp/   [Back to Table of Contents] [China] Australia: The dangers of using Chinese apps in election campaigns   ABC News highlights   increased political activity on Chinese-language platforms during Australia’s 2025 federal election campaign. The Liberal Party, and to a lesser extent, Labor, use WeChat and RedNote to reach voters in electorates with large Chinese-Australian communities. The Liberals have published over 220 authorised ads on WeChat since January. Labor has published about 30. Liberal candidate Scott Yung is mainly active in Bennelong, a key marginal seat. Both major parties have also run attack ads targeting Prime Minister Anthony Albanese and Opposition Leader Peter Dutton. These appeared on WeChat. RedNote poses additional problems. The platform discourages political content and uses shadow banning. Posts and candidate names do not appear in searches. Some content is technically online but remains hard to find, limiting the visibility of political messages. Not all candidates can reach voters equally. Ads may cost more or be suppressed without explanation. Such conditions open the door to foreign interference. If specific messages are blocked or disadvantaged by opaque algorithms, the fairness of the electoral process is at risk . ​ Source:    ABC News, F. Willan, 2025. Election battle on Chinese apps intensifies as Liberals target crucial voters . 
[online] Available at: https://www.abc.net.au/news/2025-04-27/chinese-social-media-apps-political-campaigns-federal-election/105201336

[Back to Table of Contents]

AI RELATED ARTICLES

AI-Fueled Misinformation Threatens Canada’s Election

CyberNews reports that, in the lead-up to Canada’s federal election on April 28, 2025, a wave of politically themed books targeting Prime Minister Mark Carney appeared on Amazon. Many of these titles were generated using artificial intelligence. The content often included factual inaccuracies, exaggerated claims, and surreal passages. Most readers were unaware that the texts were AI-generated. The books were published via Amazon’s Kindle Direct Publishing platform, which does not require editorial review or disclosure of AI authorship. While many of these books were later removed, they had already reached a broad audience. The case shows how generative AI, combined with open publishing platforms, can distort political narratives at scale and with minimal oversight. This creates a potential tool for digital hostile influence, especially during election periods when the integrity of public discourse is particularly vulnerable.

Source: Cybernews, M. Walsh, 2025. AI election interference enters the Amazon era. [online] Available at: https://cybernews.com/news/ai-amazon-canadian-election/

[Back to Table of Contents]

Anthropic Discovered Sophisticated AI-Powered Influence Campaign

A report published by Anthropic detailed the disruption of a sophisticated AI-powered disinformation operation that marked a significant evolution in online influence tactics (for more information, see our Blog). This financially motivated “influence-as-a-service” provider used Anthropic’s Claude AI to orchestrate over 100 fake social media personas on X and Facebook, managing interactions, generating responses, and coordinating image creation. Unlike traditional viral disinformation campaigns, this operation emphasized persistence, relationship building, and covert integration into real user communities to gradually shape political opinions, promoting narratives that supported or undermined European, Iranian, UAE, and Kenyan interests.

The operation demonstrated notable technical sophistication by centralizing persona management through a structured, JSON-based system. Claude AI made strategic decisions for each persona, aligning interactions with specific political goals and generating content in the persona’s native languages and styles. The coordinated use of AI tools also helped operators maintain consistent political narratives across accounts, evading traditional detection methods by mimicking authentic human behavior and engagement patterns.
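The structured, JSON-based persona system described above can be pictured with a short sketch. The Python snippet below is purely illustrative and is not taken from the Anthropic report; the field names, values, and the decide_action helper are hypothetical, and it only shows how a persona record of this kind could drive consistent, goal-aligned engagement decisions.

```python
# Illustrative sketch only: a hypothetical persona record and decision step,
# loosely inspired by the JSON-based persona management described above.
# All field names, values, and thresholds are invented for illustration.
import json

persona = {
    "persona_id": "example-commentator-07",           # hypothetical identifier
    "platform": "X",
    "language": "fr",
    "tone": "casual, mildly sarcastic",
    "narrative_goal": "promote narrative A",           # placeholder goal label
    "engagement_style": {"reply_probability": 0.4, "max_posts_per_day": 6},
    "history": [],                                     # prior interactions, kept for consistency
}

def decide_action(persona: dict, incoming_post: str) -> dict:
    """Return a hypothetical instruction an operator might pass to a language model:
    whether to reply, in which language and tone, and toward which narrative goal."""
    under_daily_limit = len(persona["history"]) < persona["engagement_style"]["max_posts_per_day"]
    return {
        "action": "reply" if under_daily_limit else "ignore",
        "language": persona["language"],
        "tone": persona["tone"],
        "goal": persona["narrative_goal"],
        "context": incoming_post,
    }

if __name__ == "__main__":
    instruction = decide_action(persona, "What do you think about the new policy?")
    print(json.dumps(instruction, indent=2, ensure_ascii=False))
```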
Anthropic’s findings highlighted a significant shift in disinformation tactics, moving from content-centric to relationship-centric strategies. The report stressed that although the campaign had limited viral impact, it effectively embedded politically aligned personas into real communities, subtly influencing conversations over time.

Source: Anthropic, K. Lebedev, A. Moix, & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf

[Back to Table of Contents]

GENERAL REPORTS

Bot-Like Networks Target Canadian Election with Disinformation

DFRLab’s investigation revealed that bot-like accounts on X aggressively targeted Canadian political parties ahead of the 2025 federal election, amplifying disinformation through repetitive spam tactics. Primarily attacking the Liberal Party and its leader, Prime Minister Mark Carney, these accounts spread narratives linking Carney to China, Jeffrey Epstein, and the World Economic Forum. The disinformation strategy relied heavily on recycled graphics from sources like Canada Proud and Conservative leader Pierre Poilievre’s official account, rather than creating new content.

Actors involved included suspected automated X accounts and politically motivated groups, with some disinformation campaigns previously linked to Chinese influence efforts. The findings underscore broader concerns about the erosion of election integrity and democratic discourse, as highlighted by Canada’s ongoing inquiry into foreign interference. DFRLab’s research adds to mounting evidence that inauthentic online activity, especially during critical political periods, poses a profound existential threat to democratic systems worldwide.

Source: Digital Forensic Research Lab, 2025. Bot-like activity targets Canadian political parties and their leaders ahead of election. [online] Available at: https://dfrlab.org/2025/04/25/bot-like-activity-targets-canadian-political-parties-and-their-leaders-ahead-of-election/

[Back to Table of Contents]

Canada’s 2025 Election Targeted by Foreign Disinformation

DFRLab investigated bot-like behavior on X (formerly Twitter) ahead of Canada’s April 2025 snap election. The analysis focused on eight English-language accounts belonging to major political parties and their leaders (Liberal, Conservative, NDP, Bloc Québécois). Between January 1 and April 17, 2025, researchers collected 3,507 posts and 2.4 million related interactions, including replies and reposts. Using behavioral indicators such as post volume, reply speed, and linguistic patterns, DFRLab identified 150 bot-like accounts, 65 of which were created in 2025, suggesting intent to influence the election. Many accounts followed generic naming conventions and demonstrated automated or coordinated behavior. Some were later restricted by X due to “unusual activity.” Roughly four of five accounts targeted the Liberal Party, especially Prime Minister Mark Carney, often through repeated image spam. Familiar narratives accused Carney of loyalty to China, ties to Jeffrey Epstein and Ghislaine Maxwell, and involvement in the World Economic Forum. Content was rarely original; bots reused graphics from groups like Canada Proud or the official account of Conservative leader Pierre Poilievre. Some replies contained text artifacts indicative of AI-generated content, such as spacing errors before punctuation.
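The behavioral indicators mentioned above (post volume, reply speed, linguistic patterns) and the punctuation-spacing artifact lend themselves to simple heuristics. The sketch below is a minimal, hypothetical illustration of such checks, not DFRLab’s actual methodology; the thresholds, weights, and input fields are assumptions made for the example.

```python
# Minimal, hypothetical illustration of bot-likeness heuristics of the kind
# described above (posting volume, reply speed, punctuation-spacing artifacts).
# Thresholds and weights are invented; this is not DFRLab's methodology.
import re
from datetime import datetime
from typing import Dict, List

def spacing_artifact(text: str) -> bool:
    """Flag whitespace before punctuation (e.g. 'hello !'), a pattern cited
    above as a possible artifact of AI-generated replies."""
    return re.search(r"\s+[,.!?;:]", text) is not None

def bot_likeness(posts: List[Dict]) -> float:
    """Combine three crude signals into a 0..1 score: posts per day,
    median reply delay in seconds, and the share of posts with spacing artifacts."""
    if not posts:
        return 0.0
    times = sorted(datetime.fromisoformat(p["created_at"]) for p in posts)
    span_days = max((times[-1] - times[0]).total_seconds() / 86400, 1.0)
    posts_per_day = len(posts) / span_days
    delays = sorted(p["reply_delay_s"] for p in posts if "reply_delay_s" in p)
    median_delay = delays[len(delays) // 2] if delays else float("inf")
    artifact_share = sum(spacing_artifact(p["text"]) for p in posts) / len(posts)

    score = 0.0
    score += 0.4 if posts_per_day > 50 else 0.0   # unusually high posting volume
    score += 0.3 if median_delay < 30 else 0.0    # near-instant replies
    score += 0.3 * artifact_share                 # AI-style text artifacts
    return round(score, 2)

if __name__ == "__main__":
    sample = [
        {"created_at": "2025-04-01T10:00:00", "reply_delay_s": 5, "text": "Totally agree !"},
        {"created_at": "2025-04-01T10:01:00", "reply_delay_s": 8, "text": "He is compromised , wake up"},
    ]
    print(bot_likeness(sample))  # prints a score between 0 and 1
```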
Conservative accounts were also targeted, but to a lesser extent. DFRLab concludes that such inauthentic activity distorts political discourse by artificially inflating certain narratives. In a high-stakes election marked by international tensions and record early voter turnout, bots present a clear risk to democratic integrity and public trust. ​ Source:    Digital Forensic Research Lab (DFRLab), 2025. Bot-like activity targets Canadian political parties and their leaders ahead of election . [online] Available at: https://dfrlab.org/2025/04/25/bot-like-activity-targets-canadian-political-parties-and-their-leaders-ahead-of-election/   [Back to Table of Contents] Surge of Visual Disinformation Escalates Political Tensions  A study by The Conversation examined the connection between visual disinformation and political violence. The aim was to investigate how manipulated images, memes, and other visual content contribute to political instability and conflict. The research team analyzed a dataset of approximately 6 million posts and 3 million images from 989 Russian military bloggers on Telegram. Using AI tools, they detected visual manipulations, focusing on the period leading up to Russia’s 2022 invasion of Ukraine. The study recorded a nearly 9,000% spike in posts and a 5,000% increase in altered images shortly before the outbreak of war. The findings suggest that visual disinformation is deliberately used to distort public perception, deepen social divisions, and justify violence. Images and memes often have a substantial emotional impact and spread more quickly than text-based content. One of the study’s key conclusions is that artificial intelligence can assist in identifying technical manipulations, but cannot reliably interpret visual content’s political or ideological meaning. Human expertise remains essential for understanding the context and intent of these messages. Source:    The Conversation, T. Weninger and E. Verdeja, 2025. Memes and conflict: Study shows surge of imagery and fakes can precede international and political violence . [online] Available at: https://theconversation.com/memes-and-conflict-study-shows-surge-of-imagery-and-fakes-can-precede-international-and-political-violence-233055   [Back to Table of Contents] Gen Z's High Vulnerability to Online Disinformation  Politico examines how Generation Z, though raised with digital technology, is highly susceptible to online misinformation. A 2021 Stanford University study found that only three out of 3,446 high school students could correctly identify the origin of a video that falsely claimed to show U.S. voter fraud but depicted footage from Russia. This result challenges the assumption that digital natives are naturally skilled at evaluating online content.   TikTok has become a key news source for many young users. At the same time, it is a space where conspiracy theories and unverified claims spread quickly. Despite expert warnings, these include doubts about Helen Keller’s life, allegations of government-controlled weather, and viral health trends such as applying beef tallow to the skin. A lack of trust in institutions, such as the media, education, and science, shapes how Gen Z processes information. Many rely on comment sections to assess accuracy rather than consulting reliable sources. These sections often reflect algorithm-driven echo chambers that reinforce existing views and filter out opposing perspectives. One example involved an AI-generated audio clip of Donald Trump that went viral. 
The correction appeared only later in the comments, after the video had been widely shared. Researchers like Rakoen Maertens from the University of Oxford point to limited life experience as another factor. Schools often teach close reading but rarely train students to verify information across sources. Experts note that this pattern may spread to older generations. As algorithms personalize information across platforms, all age groups may face growing exposure to misleading content.

Source: POLITICO, C. Kim, 2025. How Gen Z Became the Most Gullible Generation. [online] Available at: https://www.politico.com/news/magazine/2025/04/23/gen-z-media-tiktok-misinformation-00287561

[Back to Table of Contents]

Meta’s Decision to Block Canadian News: A Gateway for Misinformation

NewsGuard notes that in August 2023, Meta stated, “people using our platforms don’t come to us for news.” However, a June 2024 Reuters Institute survey found that 25 percent of Canadians use Facebook for news. The blocking of Canadian news due to the Online News Act created an information vacuum that was quickly filled by misinformation sources. With reliable news sources no longer available on Facebook and Instagram, false narratives began to spread. NewsGuard identified several false claims targeting Mark Carney, including one claiming he had called Trump supporters “a generational damage to the United States” and another claiming he had sold Canada’s gold reserves. These falsehoods reached millions of users. Although Canadian media quickly debunked the claims, the fact-checks were often not shared on Meta platforms. One example is the false claim that Carney was barred from serving in Parliament due to a Canadian law, which continued to spread as the corrections were not posted on social media. The blocking of Canadian news sources has undermined trust in democratic processes. Unchecked misinformation spreads, manipulating public opinion and contributing to societal polarization. This development illustrates the problematic nature of blocking legitimate news sources and how it can erode the integrity of public discourse.

Source: NewsGuard Reality Check, S. Howard and N. Dirks, 2025. With Canadian News Outlets Barred from Facebook and Instagram, Bad Actors Are Filling the Void with Election Misinformation. [online] Available at: https://www.newsguardrealitycheck.com/p/with-canadian-news-outlets-barred

[Back to Table of Contents]

Disinformation Affecting Climate Debate in Australia’s Election

According to The Conversation, in the run-up to the 2025 Australian federal election, misinformation about climate change has emerged as a significant issue. Misleading claims about renewable energy and gas distort public perception and hinder climate action. Three major misinformation campaigns are active in the election campaign: Clive Palmer’s “Trumpet of Patriots” campaign spread an outdated scientific claim suggesting that temperatures in Greenland were not rising, which was removed by Google after 1.9 million views. The group “Responsible Future Illawarra” spreads false information about offshore wind farms, claiming they harm marine life, despite scientific studies showing the opposite. Finally, the pro-gas campaign “Australians for Natural Gas” promotes the idea that gas is essential for Australia’s energy mix and that climate policies harm the economy, even though a shift to renewable energy is feasible.
Experts recommend protecting oneself from such misinformation by using “pre-bunking” strategies and the SIFT method, which involves verifying sources and tracing information to ensure the climate debate is based on facts rather than fiction.

Source: The Conversation, D. Lewins (AAP), 2025. These three climate misinformation campaigns are operating during the election run-up. Here’s how to spot them. [online] Available at: https://theconversation.com/these-3-climate-misinformation-campaigns-are-operating-during-the-election-run-up-heres-how-to-spot-them-253441

[Back to Table of Contents]

GLOSSARY

Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of any sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". 
Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [ Back to Table of Contents ] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. 
Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber-based influence campaigns, 23rd March – 30th March 2025

INTRODUCTION

Cyber-based hostile influence campaigns aim to influence target audiences by disseminating information and/or disinformation over the internet, sometimes in conjunction with cyberattacks, which amplify their impact (hence “Cyfluence”, as opposed to cyberattacks that seek to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, and as customary over the last decade, information is disseminated across various internet platforms, which are the different elements of the hostile influence campaign. As such, the connectivity and repetitiveness of content between these elements are the primary characteristics that define influence campaigns. Much like cyber-attacks, hostile influence campaigns have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much like in the cybersecurity arena, PR professionals and government agencies respond to negative publicity and disinformation shared through news and social media. We use the term “cyber-based hostile influence campaigns” as we also include in this definition cyber-attacks aimed at influencing (such as hacking and leaking during election time), while excluding from this term other types of more traditional kinds of influence, such as diplomatic, economic, and military.

Between March 24 and 30, 2025, we observed, collected, and analyzed endpoints related to cyber-based hostile influence campaigns, including Cyfluence attacks. The following report summarizes the key events we consider most significant. Some campaigns involve social media and news outlets, while others utilize cyber-attack capabilities.

[Listen to the Podcast] [Download Report]

TABLE OF CONTENTS

HOSTILE INFLUENCE CAMPAIGNS

STATE ACTORS
[Russia] Russian Intelligence Engages in Overt Disinformation
[War in Ukraine] Austria: Russian Influence Campaign Unveiled
[China] Global Times Rejects Allegations of Chinese Hostile Interference
China's Disinformation Campaigns and Japan's Vulnerabilities

AI RELATED ARTICLES
The Rise of Disinformation in the Age of AI
AI Spreads False Narratives and Climate Disinformation

GENERAL REPORTS
Disinformation Targets UK MPs and Bangladesh Central Bank Chief
Hostile Influence Campaigns Target Canadian Electoral Process
France Identified as Top Target of Foreign Hostile Influence in Europe
Combating Disinformation in Kenya and Senegal
Fake Accounts Drive Disinformation on Pope Francis’ Health

APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION
Disinformation Cited as Justification for Facebook Suspension in PNG

HOSTILE INFLUENCE CAMPAIGNS

STATE ACTORS

Russia

Russian Intelligence Engages in Overt Disinformation

EU vs. Disinfo warns that Russian disinformation campaigns have long aimed to portray protests in Russia and neighboring countries as being Western-orchestrated. The goal is to delegitimize democratic movements, often by alleging that protesters are paid. A typical example is the false claim that the West funded the 2020 Belarus protests with $6 billion in cryptocurrencies. After the elections in Georgia, Russian state media, including Sputnik, claimed the EU was financing protests, reinforced by an official SVR press release. While Russian intelligence usually operates covertly through proxies, leaks, and fake news platforms, its recent public messaging signals a shift.
The SVR’s open statement marks growing boldness, showing that Russian spies now act both secretly and publicly. According to EU vs. Disinfo, Russia’s hostile influence architecture comprises four interconnected layers, ranging from overt to covert. While some actors operate openly, others conceal their ties to the Kremlin.

Source: EUvsDisinfo, 2025. Russian spies as disinformation actors. [online] Available at: https://euvsdisinfo.eu/russian-spies-as-disinformation-actors/

War in Ukraine

Austria: Russian Influence Campaign Unveiled

Austria's Directorate for State Protection and Intelligence (DSN) has uncovered a Russian-led disinformation campaign orchestrated by a covert cell operating within the country. At the center of the investigation is a Bulgarian national suspected of acting as a liaison for Russian intelligence services. She allegedly coordinated the distribution of propaganda materials intended to sway public opinion against Ukraine and President Volodymyr Zelenskyy. The campaign used online content and real-world actions, such as graffiti and stickers, which falsely appeared to originate from pro-Ukrainian activists but contained far-right and nationalist messaging. The disinformation cell became active shortly after the Russian invasion of Ukraine began in 2022. The suspect has confessed to her involvement and provided evidence pointing to additional accomplices located in Russia and the UK. According to the DSN, this campaign is part of a broader strategy of hybrid warfare designed to undermine democracy, destabilize society, and manipulate the political discourse.

Source: DSN, 2025. DSN uncovers Russian disinformation campaign against Ukraine. Article No: 27890. [online] Available at: https://www.dsn.gv.at/news.aspx?id=684974546579734E4A61453D

China

Global Times Rejects Allegations of Chinese Hostile Interference

DisinfoWatch analyzes an article by the Chinese state-run newspaper Global Times, which denies any hostile influence by the Chinese government in Canada’s political process. Instead, the article claims that the Conservative Party of Canada is using these allegations to discredit the Liberal Party’s electoral successes. It further suggests that Canadian authorities exaggerate the issue to distract from trade and security tensions with the United States. The Global Times also criticizes the language used in official reports, arguing that terms like “highly likely” indicate a lack of credible evidence. DisinfoWatch classifies these claims as misleading: Chinese interference efforts have been well-documented by intelligence agencies, parliamentary committees, and independent experts, particularly in the 2019 and 2021 elections. The criticized terminology is standard in intelligence assessments and reflects analytical caution, not a lack of evidence. These narratives reflect China’s geopolitical strategy to undermine and deflect international criticism of its foreign influence operations.

Source: DisinfoWatch, 2024. PRC State Media Falsely Accuse CSIS, Conservatives of Hyping Foreign Interference Concerns. [online] Available at: https://disinfowatch.org/disinfo/prc-state-media-falsely-accuse-csis-conservatives-of-hyping-foreign-interference-concerns/

China's Disinformation Campaigns and Japan's Vulnerabilities

A 2025 report by DFRLab highlights that China’s expanding digital influence strategy poses a significant challenge to Japan.
Due to economic rivalry, territorial disputes in the East China Sea, and its close alliance with the U.S., Japan has become a key target of Chinese disinformation campaigns. These efforts serve Beijing’s broader geopolitical aims by spreading false narratives, including misinformation about the Fukushima wastewater release and separatist claims concerning Okinawa ( also see our Weekly Report: 17th March – 23rd March 2025 ). Coordinated campaigns amplify such messages across social media, often using fake accounts. Japan’s outdated digital infrastructure and limited institutional coordination hinder an adequate response. Although the Digital Agency was created to modernize government systems, it struggles to keep pace with evolving threats. While Japan raises concerns about disinformation internationally, its domestic response remains underdeveloped. Public demand for stronger regulation and digital defense is growing. To boost democratic resilience, Japan must accelerate its digital transformation, improve cybersecurity, and implement coordinated countermeasures. Source:   DFRLab, D.J. Plung and K. Thibaut, 2025. Japan’s technology paradox: the challenge of Chinese disinformation. [online] Available at: https://dfrlab.org/2025/03/25/japan-tech-chinese-disinformation/ [ Back to Table of Contents ] AI RELATED ARTICLES The Rise of Disinformation in the Age of AI In an article on TechXplore , computer scientist Andreea Pocol discusses the growing challenges posed by generative AI, also known as GenAI. These inexpensive and widely accessible tools create highly realistic images and videos, known as "deepfakes," which are increasingly used for disinformation campaigns. The dangers include intensifying political conflicts, election interference, increased societal polarization, and erosion of trust in media and democratic institutions. According to a 2022 study by Pocol, participants could distinguish between authentic and AI-generated images with only 61% accuracy. This rate is likely to have declined further today. Previous detection indicators, such as unrealistic eyes, teeth, or hair, have become less reliable due to technological advancements. Algorithms that detect deepfakes often fail, particularly with low-resolution or poorly lit images. Despite calls for stronger regulation, the technology is advancing faster than reliable detection methods. Source: TechXplore, A. Pocol, 2025. As generative AI becomes more sophisticated, it becomes increasingly difficult to distinguish the real from the deepfake. [online] Available at: https://techxplore.com/news/2025-03-generative-ai-sophisticated-harder-distinguish.html AI Spreads False Narratives and Climate Disinformation A report by NewsGuard analyzed an article published by ScienceofClimateChange.org, a site known for climate change skepticism. It claimed that solar activity, rather than human actions, is the primary driver of global warming. Grok 3, an AI tool developed by Elon Musk’s platform "X," was credited as the lead author. The piece garnered widespread attention among climate change skeptics. However, AI is not always a reliable source of factual information. Tools like Grok generate responses based on training data and the structure of user prompts. The output can reflect those distortions if the prompts are biased or misleading. Additionally, Grok 3 co-authored the article alongside known climate skeptics such as David Legates, whose influence likely shaped the direction and tone of the AI-generated content. 
NewsGuard also notes that Grok contradicted the article’s claims when prompted neutrally and confirmed that human activities are a major driver of climate change. Source: NewsGuard, N. Dirks, 2025. No, Grok AI-Written Study Does Not Prove that Global Warming is a Natural Phenomenon. [online] Available at: https://www.newsguardrealitycheck.com/p/no-grok-ai-written-study-does-not [ Back to Table of Contents ] GENERAL REPORTS Disinformation Targets UK MPs and Bangladesh Central Bank Chief The Guardian reports  that British MPs have raised concerns about a disinformation campaign targeting Ahsan Mansur, the Governor of the central bank of Bangladesh. The campaign began shortly after he arrived in the UK, where he is investigating assets allegedly laundered by the regime of Prime Minister Sheikh Hasina. Fabricated articles and emails sent by individuals posing as journalists contain unsubstantiated claims against Mansur and his family. His daughter is a particular focus, facing allegations of unclear sources of wealth. The effort appears aimed at damaging Mansur’s credibility as an anti-corruption advocate. British politicians, including former Minister Tulip Siddiq, are also mentioned, raising concerns that the campaign aims to stoke political tension and hinder cooperation. Mansur, a former IMF official, believes those involved in money laundering are behind the attacks. MPs have forwarded the material to cybersecurity teams and foreign affairs committees, calling for a full investigation. Source: The Guardian, R. Davies, 2025. MPs think they may have been targets of ‘disinformation’ over Bangladesh inquiry. [online] Available at: https://www.theguardian.com/politics/2025/mar/24/mps-think-they-may-have-been-targets-of-disinformation-over-bangladesh-inquiry Hostile Influence Campaigns Target Canadian Electoral Process A report by DisinfoWatch warns of coordinated foreign influence targeting Canada’s electoral process. The main actors identified are China, Russia, and U.S.-based extremist networks. While the Chinese Embassy in Ottawa and the state-run Global Times deny any interference, as noted earlier in our weekly report, Canadian intelligence agencies have documented targeted influence operations. These include disinformation campaigns, covert financing, and intimidation. In December 2024, a $1 million bounty was reportedly placed on a Conservative candidate. Russia, meanwhile, uses state media outlets like Sputnik to discredit Liberal Party leader Mark Carney, particularly for his pro-Ukraine stance. AI-generated fake images—some showing Carney with Jeffrey Epstein—have circulated online and were reportedly created in part using Grok 3, the AI platform developed by Elon Musk’s X. U.S. conspiracy theorist Alex Jones has also intervened, publicly endorsing PPC leader Maxime Bernier while promoting far-right narratives. This transnational entanglement of extremist actors fuels radicalization, polarization, and public distrust in democratic institutions. Source:   DisinfoWatch, 2025. Canadian 2025 Election Report #1. [online] Available at: https://disinfowatch.org/canadian-2025-election-report-1/ [ Back to Table of Contents ] France Identified as Top Target of Foreign Hostile Influence in Europe According to Le Monde , France has become one of the main targets of foreign hostile influence in Europe—second only to Ukraine. 
Prime Minister François Bayrou cited a European External Action Service report indicating that nearly one-third of the 505 documented incidents across Europe between 2023 and 2024 targeted France. While Russia is the primary actor, China has also been linked to coordinated influence efforts. A February report by France’s disinformation watchdog, Viginum, confirmed that Russia has conducted a sustained information campaign over the past three years to justify its invasion of Ukraine and manipulate public opinion in France (a review of the Viginum report is available on our blog). France’s support for Ukraine likely makes it a strategic target. On March 28, President Macron hosted a European security summit. However, ongoing digital influence operations may continue to undermine these diplomatic efforts. Source: Le Monde with AFP, 2025. PM says France is the top EU target for foreign disinformation campaigns. [online] Available at: https://www.lemonde.fr/en/international/article/2025/03/28/disinformation-pm-says-france-is-the-top-eu-target-for-foreign-campaigns_6739602_4.html Combating Disinformation in Kenya and Senegal A study by The Conversation highlights  how media professionals in Kenya and Senegal are countering disinformation. Political and health-related topics are particularly affected. Journalists employ source cross-checking, reverse image searches, geolocation tools such as Google Maps, and specialized tools like InVID for video analysis to verify content. Fact-checking platforms, such as PesaCheck, Piga Firimbi, and AfricaCheck, play a crucial role. At the same time, there is an ongoing debate over who should have the authority to define what is “true”—governments, tech companies, or independent actors. In the long term, media literacy is the most effective defense against disinformation. In Kenya, outlets produce public tutorials; in Senegal, AfricaCheck collaborates with radio stations in Wolof. However, structural barriers remain: a lack of curricula, language gaps, and limited political engagement. Experts argue that media literacy must be integrated into schools, workplaces, and religious institutions to build a resilient public capable of resisting manipulative narratives. Source: The Conversation, L. Diop, 2024. Fighting fake news: how media in Kenya and Senegal check facts. [online] Available at: https://theconversation.com/fighting-fake-news-how-media-in-kenya-and-senegal-check-facts-251123 Fake Accounts Drive Disinformation on Pope Francis’ Health An analysis by Cyabra revealed  that AI-driven fake profiles heavily influenced online discussion about Pope Francis’ health. 31% of the accounts on X were fake and responsible for spreading false claims about the Pope’s death and manipulating official Vatican communications. The disinformation campaign centered on three narratives: fabricated death reports, fake news shared by seemingly credible accounts, and distortion of official statements to sow distrust. Approximately 42% of the overall conversation was negative. Fake profiles used emotional language, repeated messaging, and hijacked high-engagement threads to amplify their false narratives. The Vatican was forced to respond publicly, dedicating time and resources to counter these rumors. The case highlights the growing role of AI-powered disinformation in shaping public perception and distorting legitimate communication.   Source:    Cyabra, R. Baruchin, 2025. Pope Francis’ Health Misinformation Fueled by Fake Profiles. 
[online] Available at: https://cyabra.com/blog/pope-francis-health-misinformation-fueled-by-fake-profiles/

[Back to Table of Contents]

APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION

Disinformation Cited as Justification for Facebook Suspension in PNG

ABC News reports that Papua New Guinea has temporarily blocked access to Facebook. The government described the move as a test aimed at countering disinformation, hate speech, and harmful content, carried out under anti-terrorism laws. According to media reports, the ban was imposed without a specified duration; access has since been restored. Officials argued that the measure was necessary to protect public safety and reduce the spread of false information, violent material, and pornography. The decision sparked criticism from opposition politicians, journalists, and civil society groups, who raised concerns about its implications for freedom of expression. They viewed the action as disproportionate and warned it could set a precedent for future restrictions. The government has previously accused Facebook of enabling the spread of misinformation. Observers noted that such actions could affect long-term access to digital platforms and reduce transparency in online public discourse.

Source: ABC News, N. Sas & B. Kora, 2025. Papua New Guinea shuts down Facebook in test to stop 'pornography, misinformation, hate speech'. [online] Available at: https://www.abc.net.au/news/2025-03-25/facebook-shut-down-in-papua-new-guinea-in-test/105092564

[Back to Table of Contents]

  • Weekly Report: Cyber based influence campaigns 31st March – 6th April 2025

    INTRODUCTION Cyber-based hostile influence campaigns aim to influence target audiences by disseminating information and/or disinformation over the internet, sometimes in conjunction with cyberattacks, which amplify their impact (hence the term 'Cyfluence', as opposed to cyberattacks that seek to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, and as customary over the last decade, information is disseminated across various internet platforms, which are the different elements of the hostile influence campaign. As such, the connectivity and repetitiveness of content between these elements are the primary characteristics that define influence campaigns. Much like cyberattacks, hostile influence campaigns have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much like in the cybersecurity arena, PR professionals and government agencies respond to negative publicity and disinformation shared through news and social media. We use the term 'cyber-based hostile influence campaigns' as we also include in this definition cyber-attacks aimed at influencing (such as hacking and leaking during election time), while excluding from this term other types of more traditional kinds of influence, such as diplomatic, economic, and military. Between March 31st and April 6th, 2025, we observed, collected, and analyzed endpoints related to cyber-based hostile influence campaigns, including Cyfluence attacks. The following report summarizes the key events we consider most significant. Some campaigns involve social media and news outlets, while others utilize cyber-attack capabilities. [Listen to the Podcast] [Download Report] TABLE OF CONTENTS HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Disinformation Strategy Russian Disinformation and Hostile Campaigns in Georgia [War in Ukraine] Conclusions on AI’s Influence in State-Sponsored Disinformation Campaigns Russia Escalates Disinformation War to Undermine Ukraine’s Global Support GENERAL REPORTS The Complexities of Disinformation Attribution Understanding FIMI: Key Findings and Trends in Digital Warfare APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION Early Detection of Disinformation Campaigns Using AI UK’s Foreign Influence Registration Scheme to Counter Disinformation GLOSSARY DISCLAIMER HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Disinformation Strategy EUvsDisinfo reports that the 3rd EEAS (European External Action Service) report describes Russian disinformation as a structured, four-layered system. The first layer includes official state-controlled channels like RT and Sputnik, which openly represent the Kremlin’s voice. The second layer consists of state-linked platforms such as NewsFront. They try to hide their ties to the Russian state but follow its messaging closely. The third layer includes anonymous websites and accounts. They are hard to trace but show technical and behavioral signs of coordination with known pro-Kremlin sources. The fourth and deepest layer involves state-aligned actors. They cannot be directly linked to the Russian state but regularly repeat Kremlin narratives. They also use the same infrastructure and tactics as confirmed actors. Together, all layers serve Russia’s goal of shaping public opinion and spreading confusion. 
Researchers use technical clues like domain data and hosting, along with behavioral patterns like AI-generated or automated posting, to detect and track these operations. Source: EUvsDisinfo, 3rd EEAS Report on The Architecture of Russia’s FIMI Operations. [online], (2025), Available at: https://euvsdisinfo.eu/the-architecture-of-russias-fimi-operations/ [ Back to Table of Contents ] Russian Disinformation and Hostile Campaigns in Georgia EUvsDisinfo highlights that Russian media outlets such as Sputnik and Russia Today (RT) in Georgia played a central role in spreading disinformation to undermine the protests against the Georgian government’s decision to suspend EU accession talks until 2028. The protesters were labeled as "liberal-fascist" traitors and "puppets of the West," while the police’s use of force was portrayed as necessary, lawful, and proportionate. These outlets promoted the narrative of a Western-backed "color revolution" and accused the US and EU of destabilizing Georgia. Additionally, disinformation claimed that Western organizations like USAID and NED were funding the protests to undermine Georgia’s sovereignty. Russian media also sought to link the unrest to the war in Ukraine, framing Georgia as a “second front” against Russia. These campaigns aimed to depict the West as hostile while casting Russia as the stabilizing force in Georgia. These narratives align with Russia’s broader strategy of hostile influence. Source: EUvsDisinfo, "The war on truth: Russian disinformation and Georgia’s path to EU discord," (2025), [online] Available at: https://euvsdisinfo.eu/the-war-on-truth-russian-disinformation-and-georgias-path-to-eu-discord/ [ Back to Table of Contents ] [War in Ukraine] Conclusions on AI’s Influence in State-Sponsored Disinformation Campaigns PNAS Nexus, published by Oxford University Press, features a recent report that examines the impact of generative AI on disinformation campaigns, focusing on DCWeekly.org, a propaganda site. This site, identified as part of a Russian influence operation, spread pro-Russian narratives targeting a global audience, especially in West Africa, Turkey, India, and the U.S. The report shows that the use of generative AI, particularly OpenAI's GPT-3, significantly increased the production of disinformation. Before AI adoption, the content was mainly copied and edited from other sources. After integrating AI in September 2023, the articles appeared more original, though they often retained the same source base. Notably, many of these disinformation campaigns focused on anti-Ukrainian narratives, including fabricated stories about Ukrainian President Volodymyr Zelenskyy. A survey showed that these AI-generated articles were perceived as equally persuasive and credible as previous ones, highlighting the growing threat of AI-supported disinformation. Source: Oxford Academic, PNAS Nexus, Morgan Wack, C. Ehrett, D. Linvill, P. Warren, 2025. Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. [online] Published by Oxford University Press. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11950819/pdf/pgaf083.pdf [ Back to Table of Contents ] Russia Escalates Disinformation War to Undermine Ukraine’s Global Support A report from RBC-Ukraine reveals that Russia has launched a new disinformation campaign against Ukraine. 
The campaign involves key Russian media outlets such as Gazeta.ru , Sputnik , Vedomosti , and Voennoye Obozreniye , as well as foreign proxy platforms like Reseau International and Magyar Hírlap , which present themselves as independent sources. Dissemination occurs through anonymous Telegram channels, manipulated TikTok accounts, and YouTube videos impersonating Ukrainian soldiers or "independent experts." These materials are spread in various formats, including opinion pieces, fake interviews, analyses, and infographics. The primary objective of this operation is to blame Kyiv for the failure of peace talks, depict Russia as a "peacemaker," and portray Ukraine as unwilling to negotiate. Simultaneously, the campaign seeks to undermine trust in the Ukrainian government and weaken Western support for Ukraine. Source:   RBC-Ukraine, B. Babaiev, 2025. Russia blames Ukraine for stalled talks in new disinformation campaign – Ukraine's intelligence. [online] Available at: https://newsukraine.rbc.ua/news/russia-blames-ukraine-for-stalled-talks-in-1743858897.html [ Back to Table of Contents ] GENERAL REPORTS The Complexities of Disinformation Attribution Oxford Academic recently published  a study examining attribution's role in disinformation campaigns and its use as a deterrence strategy. The study highlights the political risks and uncertainties in attribution decisions, particularly in liberal democracies. The research argues that technical capabilities and domestic political contexts influence attribution by analyzing cases such as the 2016 US and 2021 German Bundestag elections. The study introduces the concept of the "uncertainty loop," which describes how varying levels of political, social, and technical uncertainty influence the timing and manner of attribution decisions. The findings suggest that while technical advancements have made attribution more feasible, political considerations, such as the risk of domestic backlash and the impact on international relations, often dictate whether or not attribution is publicly pursued. Thus, disinformation attribution serves as both a deterrence measure and a politically sensitive tool in modern international relations. Source: Oxford Academic. Hedling, E., & Ördén, H. (2025). Disinformation, deterrence and the politics of attribution , Published by Oxford University Press Available at:   https://academic.oup.com/ia/advance-article/doi/10.1093/ia/iiaf012/8100244 [ Back to Table of Contents ] Understanding FIMI: Key Findings and Trends in Digital Warfare The 3rd EEAS Threat Report examines  the growing threat of disinformation and foreign information manipulation (FIMI) in the digital age. Key actors include Russia and China, who use disinformation to deepen political divisions and undermine trust in democratic institutions. The report introduces the FIMI Exposure Matrix, a tool to identify media channels connected to FIMI operations. This matrix categorizes channels based on technical and behavioral indicators to analyze their ties to manipulative actors. The increasing use of AI-generated content and fake accounts to spread disinformation is also highlighted. The report emphasizes the role of digital platforms as the primary source of these threats and calls for enhanced institutional collaboration and improved detection methods to address these challenges. The goal is to strengthen societal resilience to FIMI and increase transparency on digital platforms. Source: European External Action Service (EEAS). 
"3rd EEAS Report on Foreign Information Manipulation and Interference Threats." European Union External Action , 2025, Available at: https://www.eeas.europa.eu/eeas/3rd-eeas-report-foreign-information-manipulation-and-interference-threats-0_en . [ Back to Table of Contents ] Appendix - Frameworks to Counter Disinformation Early Detection of Disinformation Campaigns Using AI In its report, RAND examines  using large language models (LLMs) to detect disinformation and propaganda. Unlike traditional methods, LLMs can analyze broader contexts and identify subtle propaganda patterns. They recognize classical techniques like exaggeration and deception. The report finds that fine-tuned LLMs effectively detect disinformation, especially when trained on propaganda data. It recommends including non-English sources and developing a larger corpus of propaganda instances. In conclusion, LLMs are a promising tool for detecting foreign malign information operations. Source: RAND Corporation. (2025). Defending American Interests Abroad: Early Detection of Foreign Malign Information Operations . Christopher A. Mouton, Caleb Lucas, Shaun Ee. Available at: https://www.rand.org/pubs/research_reports/RRA2853-1.html . [ Back to Table of Contents ] UK’s Foreign Influence Registration Scheme to Counter Disinformation The UK government announced  the launch of the Foreign Influence Registration Scheme (FIRS) during an oral statement to Parliament. This scheme is part of the National Security Act 2023 and addresses the increasing risk of covert foreign interference. Its main goal is to enhance transparency regarding foreign influence, particularly from countries like Russia and Iran, and to safeguard national security, democratic institutions, and the UK’s political system. Under FIRS, individuals and organisations must register if they carry out activities on behalf of foreign powers within the UK. The scheme operates on two levels: the political tier, which applies to all foreign states, and the enhanced tier, which focuses on hostile actors that pose a more significant threat. This includes foreign governments, authorities, and state-controlled political parties. FIRS will take effect on 1 July 2025, following a three-month transition period. Failing to register will be considered a criminal offence. Source : UK Government, Home Office and Dan Jarvis MBE MP, 2025. Foreign Influence Registration Scheme implementation . [online] Available at: https://www.gov.uk/government/speeches/foreign-influence-registration-scheme-implementation [ Back to Table of Contents ] Disinformation as a Geopolitical Weapon A study published in the Journal of Complex Networks  explores how disinformation spread through social media can disrupt the operation of critical infrastructure. Using a case study from New York City, the researchers show how false reports about supposedly closed subway stations can influence passenger behavior and lead to overcrowding and delays. Many people rely on social media to plan their routes. When targeted disinformation is circulated on these platforms, it can cause detours, congestion, and inefficient system use, resulting in real-world disruptions in urban transportation. The researchers developed a mathematical model to identify the most influential users in social networks. They then target these users with accurate information to prevent the spread of false narratives. The study clarifies that protecting critical infrastructure also means tackling digital disinformation campaigns. 
What begins online can quickly have tangible consequences in everyday life. Source : Journal of Complex Networks, S. Jamalzadeh, K. Barker, A.D. González, S. Radhakrishnan, E. Bessarabova, G. Sansavini, 2025. Disinformation interdiction: protecting infrastructure networks from weaponized disinformation campaigns . [online] Available at: https://academic.oup.com/comnet/article/13/2/cnaf003/8103952 [ Back to Table of Contents ] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation sought to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation A false, inaccurate, or misleading information communicated regardless of the intention to deceive. Misformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. 
We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [ Back to Table of Contents ] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. 
However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber based influence campaigns 14th – 20th of April 2025

    [Listen to the Podcast] [Download Report] >TABLE OF CONTENTS< HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Global Disinformation Network Infiltrates AI and Wikipedia Storm-1516 Disinformation Network Targets France with AI-Fueled Propaganda National Survey Shows Kremlin Disinformation Widely Believed by Americans Pravda Dashboard Exposes Russia's Massive Global Disinformation Network [War in Ukraine] U.S. Cuts Funding to Ukraine’s Independent Media Russian Disinformation Targets Ukraine and EU Politics [China] Chinese Influence Group Advances CCP Agenda Through Cultural Diplomacy [Iran] Iran Increasingly Integrates AI to Amplify Disinformation and Cyber Operations AI RELATED ARTICLES Reinforcement Learning’s Potential to Amplify Disinformation on Social Media GENERAL REPORTS State-Sponsored Disinformation Harassing Thailand’s Democratic Movement Domestic Disinformation Threatens Chinese-Australian Communities APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION EU Invests in Digital Innovation and Combating Disinformation U.S. Government Closes R/FIMI Office Federal Efforts to Combat Disinformation Face Rollback Under Trump Administration GLOSSARY HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Global Disinformation Network Infiltrates AI and Wikipedia As published by the Atlantic Council, Russia has significantly expanded its global disinformation campaign during the war in Ukraine, using sophisticated strategies to spread pro-Kremlin narratives across digital platforms. A key part of this effort is the Pravda network, a web of fake news portals active in over 80 countries. These sites recycle and repackage content from sanctioned Russian media and Kremlin-aligned Telegram channels, strategically surfacing during political events such as elections in Romania and Moldova or high-profile meetings between world leaders. The goal is to saturate global information spaces with Russian propaganda disguised as legitimate news. The campaign has now reached artificial intelligence tools and Wikipedia, enabling disinformation to seep into mainstream knowledge sources. Kremlin-aligned narratives influence AI-generated content by posing as credible references on Wikipedia and news sites cited in large language model (LLM) training data. This means that users relying on AI tools may unknowingly consume skewed, anti-Ukrainian, and anti-Western messaging—an alarming development in how information is shaped and distributed. As AI adoption grows, so does the risk of disinformation embedded in its outputs. Source: Atlantic Council, V. Châtelet, 2024. Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia. [online] Available at: https://www.atlanticcouncil.org/blogs/new-atlanticist/exposing-pravda-how-pro-kremlin-forces-are-poisoning-ai-models-and-rewriting-wikipedia/ [ Back to Table of Contents ] Storm-1516 Disinformation Network Targets France with AI-Fueled Propaganda A report by NewsGuard revealed that Russia’s disinformation campaign, orchestrated by the Storm-1516 network, has aggressively shifted its focus to France, leveraging AI-generated content to amplify false narratives across social media and AI platforms. From December 2024 to March 2025, five Kremlin-linked hoaxes targeting France went viral in nearly 39,000 posts, racking up over 55 million views. 
These included fabricated accusations involving French President Emmanuel Macron and First Lady Brigitte Macron, as well as baseless claims about Ukrainian President Zelensky and migrants, narratives timed to coincide with France’s increasing military support for Ukraine. What makes this wave of disinformation particularly dangerous is its infiltration into generative AI tools. NewsGuard found that several major AI chatbots—including Mistral’s French-made "Le Chat"—repeated Russian-fabricated stories, like the Macron affair hoax, as factual responses to user prompts. These claims, often laundered through fake whistleblower videos, phony regional news outlets, and YouTube uploads, are cited by AI systems trained on seemingly authoritative sources. Storm-1516’s playbook is now well-established: hijack real identities, use AI for face-swapping and video manipulation, spread content through multilingual platforms, and adapt narratives to current political tensions. While France has so far avoided a full-scale fake news site network like those deployed in the U.S. and Germany, officials are preparing for further escalation. As support for Ukraine increases, so does Russia’s digital assault. Source:   NewsGuard Reality Check, N. Huet, M. Sadeghi, and C. Labbe, 2025. Russian Propaganda Campaign Targets France with AI-Fabricated Scandals, Drawing 55 Million Views on Social Media . [online] Available at: https://www.newsguardrealitycheck.com/p/russian-propaganda-campaign-targets-france-with-ai-fabricated-scandals [ Back to Table of Contents ] National Survey Shows Kremlin Disinformation Widely Believed by Americans According to a NewsGuard report , a recent national survey conducted by YouGov for NewsGuard revealed the deep reach of Russian disinformation among the American public. One in three Americans believes at least one false claim promoted by Russian state media, highlighting how effectively the Kremlin’s narratives have taken hold. The study tested 1,000 adults with 10 widely circulated falsehoods, three of which were directly tied to Russian propaganda. Only 23.8% of respondents correctly identified all three Kremlin-linked claims as false. The misinformation included debunked narratives such as Ukraine stealing U.S. aid, Zelensky's alleged 4% approval rating, and claims that Ukraine sold U.S.-supplied weapons to Hamas. These fabrications, initially spread by outlets like RT and Sputnik, have not only gone viral on social media but have also been repeated by U.S.-based websites and even made their way into generative AI tools, further normalizing disinformation. NewsGuard analysts have cataloged nearly 300 such Russian falsehoods related to the war in Ukraine alone. The survey showed widespread vulnerability to misinformation on health and elections. A staggering 78% of respondents believed at least one false claim, and fewer than 1% accurately debunked all ten. Importantly, belief in disinformation crossed party lines, though Republicans were significantly more likely to accept Kremlin-backed narratives. Source:   NewsGuard Reality Check, M. Skibinski, 2025. One Third of Americans Believe Russian Disinformation, NewsGuard-YouGov Survey Finds . 
[online] Available at: https://www.newsguardrealitycheck.com/p/one-third-of-americans-believe-russian-disinformation-yougov-survey-finds [ Back to Table of Contents ] Pravda Dashboard Exposes Russia's Massive Global Disinformation Network The Pravda Dashboard, published  by the DFRLab, exposed the scale and strategy of Russia’s global disinformation network. With Finnish company CheckFirst, the DFRLab analyzed a dataset of over 3.7 million articles repurposed from Russian state media and questionable Telegram channels. Through an elaborate network of websites and automated republishing tools, the Pravda network bypassed sanctions to amplify pro-Kremlin narratives worldwide, especially in France, Germany, Ukraine, Moldova, and Serbia. Many articles sourced banned Russian outlets like TASS and RT, alongside disinformation-heavy Telegram channels such as InfoDefense and SurfNoise. The investigation revealed that the Pravda operation intensified during key political events, such as the 2024 EU Parliament elections and high-profile arrests, using surges in publication to flood news cycles with manipulated narratives. The most-cited source in the network was @Vbachir, a French-speaking Telegram account claiming ties to Russian forces, which alone appeared in more than 53,000 articles. The operation also strategically targeted linguistic regions by repackaging Russian-origin content through language-based clusters, using HTML tags to signal alternate versions for different audiences. This method enabled the network to deliver coordinated disinformation to specific countries or regions while maintaining the illusion of local news. Source:   DFRLab, V. Châtelet and A. Lesplingart, 2025. Russia’s Pravda network in numbers: Introducing the Pravda Dashboard . [online] Available at: https://dfrlab.org/2025/04/18/introducing-the-pravda-dashboard/ [ Back to Table of Contents ] [War in Ukraine] U.S. Cuts Funding to Ukraine’s Independent Media According to a report by the Atlantic Council, Ukraine’s independent media, a vital barrier against Kremlin disinformation, is facing collapse following a drastic cut in U.S. development aid. Since January 2025, the United States has quietly halted around 90% of its funding for Ukrainian media through USAID and affiliated partners. This comes at a time when Russia is intensifying its disinformation campaigns, leaving Ukrainian newsrooms like NikVesti, Vgoru, and Cykr without the resources needed to counter false narratives. These outlets, operating under war conditions, have served as a crucial line of defense by exposing propaganda and reporting on both Russian strikes and internal corruption. As these independent newsrooms downsize or shut down, Russia is rapidly exploiting the resulting information vacuum. A recent example in Sumy involved fake health warnings circulated by pro-Russian Telegram channels, falsely claiming a disease outbreak among Ukrainian soldiers. Disinformation crafted to mimic official sources is designed to mislead and erode public trust in all media. Without credible local journalism, these manipulative tactics become significantly more effective, especially in regions already isolated by war. Efforts to stay afloat through donor outreach, membership drives, and EU funding applications are underway, but are slow and uncertain. Journalists warn that without immediate support, independent media will vanish, and unchecked Russian propaganda will dominate Ukraine’s information space. Source:   Atlantic Council, M. Tahir, 2025. 
US funding cuts create openings for Russian disinformation in Ukraine. [online] Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/us-funding-cuts-create-openings-for-russian-disinformation-in-ukraine/ [ Back to Table of Contents ] Russian Disinformation Targets Ukraine and EU Politics According to an article by EU vs. Disinfo, as Russia continues its assault on Ukraine, its disinformation machinery works in overdrive to deny responsibility for deliberate attacks on civilians. The missile strike on Sumy on April 13, which killed at least 35 civilians during a religious holiday, was falsely framed by Kremlin sources as a military operation. This follows a disturbing pattern seen in previous attacks, such as the strike on a restaurant in Kryvyi Rih, where fabricated justifications are spread rapidly to obscure the truth. By immediately releasing false narratives—claiming military meetings or weapon caches at bombed civilian sites—Russian propaganda seeks to confuse public perception and delay international condemnation. Despite mounting evidence, including UN reports and security footage, the Kremlin persists in pushing these narratives, even as the International Criminal Court issues arrest warrants for those responsible for attacks on non-military infrastructure. Beyond Ukraine, this disinformation model is now being deployed to influence foreign elections. In early April, a coordinated campaign targeted Poland’s upcoming presidential election with fake news videos impersonating credible outlets, falsely claiming security threats. These efforts followed Russia’s known disinformation playbook—spreading content through fake accounts and amplifying it with strategic reposts. Source: EUvsDisinfo, 2025. Missiles and lies. Again. [online] Available at: https://euvsdisinfo.eu/missiles-and-lies-again/ [ Back to Table of Contents ] [China] Chinese Influence Group Advances CCP Agenda Through Cultural Diplomacy An article by National Review stated that the Chinese People’s Association for Friendship with Foreign Countries (CPAFFC), a group identified by U.S. intelligence as tied to Beijing’s influence operations, has intensified its efforts to promote the Chinese Communist Party’s (CCP) global vision during high-profile visits to the United States. During a U.N. event in New York, CPAFFC officials promoted Xi Jinping’s “Global Civilization Initiative,” framing it as a path toward global cooperation and understanding. However, experts warn that these initiatives are part of China’s strategic effort to reshape international norms and governance in favor of its authoritarian model. CPAFFC's activities, including speeches at the U.N. and meetings in California and Florida, mask strategic disinformation under the guise of cultural diplomacy. Despite branding itself as a non-governmental organization, CPAFFC operates within the CCP's United Front system, which subtly seeks to influence foreign entities. For example, the group has cultivated connections with American communities through programs honoring WWII’s Flying Tigers, reinterpreting history to support modern CCP narratives, and extending Beijing's ideological reach. Source: National Review, J. Quinn, 2025. Chinese ‘Friendship’ Group Visits America amid Trade War. 
[online] Available at: https://www.nationalreview.com/corner/chinese-friendship-group-visits-america-amid-trade-war/ [ Back to Table of Contents ] [Iran] Iran Increasingly Integrates AI to Amplify Disinformation and Cyber Operations According to a report by Recorded Future, Iran is increasingly integrating artificial intelligence into its national security and foreign policy strategy to amplify cyberattacks, influence operations, and domestic surveillance. While Tehran faces significant obstacles—such as economic isolation and top-down government control—it is leveraging AI as a force multiplier, especially in the context of regional conflicts and escalating tensions with Israel. Disinformation is a key component of Iran’s AI-powered influence campaigns. The regime will likely deploy generative AI and large language models to enhance social engineering efforts, spread state-sponsored propaganda, and manipulate public opinion domestically and abroad. These operations exploit cultural and linguistic familiarity to bypass detection and increase impact, particularly targeting the West's adversarial governments and tech sectors. Source:   Recorded Future – Insikt Group®, 2025. Iran’s AI Ambitions: Balancing Economic Isolation with National Security Imperatives . [online] Available at: https://go.recordedfuture.com/hubfs/reports/ta-ir-2025-0417.pdf [ Back to Table of Contents ] AI Related Articles Reinforcement Learning’s Potential to Amplify Disinformation on Social Media A recent study  by Concordia researchers presented a reinforcement learning (RL) framework designed to actively increase polarization and disagreement on social media platforms. Using the Double Deep Q-Learning algorithm, researchers developed a method to identify and take control of key user accounts in a network. These accounts are strategically chosen to maximize the spread of divisive content, effectively turning them into tools for orchestrated disinformation and societal disruption. The study models both private and public opinion dynamics, simulating how individuals form and express beliefs online. While private opinions update synchronously, public expressions are updated asynchronously, mirroring real-world social media behavior. RL agents only observe users' public posts and follower counts, yet this limited data proves sufficient for optimizing disinformation strategies. The model demonstrates how adversarial agents can effectively exploit social platforms to deepen divides and manipulate public discourse through simulations across different network structures. This research underscores the growing threat posed by intelligent, AI-driven disinformation tactics and highlights the urgent need for robust defenses against such manipulative strategies. Source:   IEEE, M. N. Zareer and R. R. Selmic, 2025. Maximizing Disagreement and Polarization in Social Media Networks using Double Deep Q-Learning . [online] Available at: https://ieeexplore.ieee.org/document/10831299 [ Back to Table of Contents ] GENERAL REPORTS State-Sponsored Disinformation Harassing Thailand’s Democratic Movement A report  published by the Citizen Lab revealed the existence and sustained activity of JUICYJAM, a state-sponsored online influence operation designed to harass and suppress Thailand’s pro-democracy movement. Since August 2020, this campaign operated across platforms such as X and Facebook using an inauthentic persona, “Ms. 
Juk Khlong Sam,” who systematically doxxed activists by revealing private information like family details and school affiliations. The posts published during the campaign employed manipulation tactics such as distraction and smearing to shift narratives and discredit individuals, particularly during politically sensitive moments like protests or elections. These tactics were not random - leaked military and police documents in March 2025 linked the campaign directly to the Royal Thai Armed Forces or the Royal Thai Police. Despite JUICYJAM’s clear inauthenticity, social media platforms failed to curb its reach. The operation amassed significant engagement, over 100,000 followers, and tens of thousands of interactions. Citizen Lab emphasized that current platform policies inadequately address state-sponsored disinformation and coordinated doxxing, particularly in authoritarian contexts. Source:   Citizen Lab, A. Fittarelli, M. Scott, and K. Wongsapakdee, 2025. JUICYJAM: How Thai Authorities Use Online Doxxing to Suppress Dissent . [online] Available at: https://citizenlab.ca/2025/04/how-thai-authorities-use-online-doxxing-to-suppress-dissent/ [ Back to Table of Contents ] Domestic Disinformation Threatens Chinese-Australian Communities New research published by The Conversation has revealed that during Australia's current election cycle, disinformation targeting Chinese-speaking communities is being driven not by foreign interference but by commercial and domestic political interests. Platforms like WeChat and RedNote (Xiaohongshu), which serve as key news sources for Chinese-Australians, are being used to spread misleading political narratives. These include manipulated statements, exaggerated threats, and racially charged content that distort policy positions and stoke fear among migrant communities. The research uncovered tactics such as misleading visuals, distorted timelines, and sensational headlines, often used to misrepresent policies on immigration, visas, and foreign relations. One WeChat article falsely claimed a surge in Coalition support, while misreporting poll results and promoting racial narratives against the Labor Party. Another RedNote post selectively quoted Prime Minister Anthony Albanese to mislead readers about immigration policy. Disinformation on these platforms is particularly harmful due to weak regulation, limited platform transparency, and the vulnerable position of marginalized communities. With automated detection tools blocked or ineffective, the burden falls on individuals and community efforts to counter these narratives. The report calls for improved civic education and tailored media literacy to help users critically assess digital content, especially in non-English media ecosystems where disinformation can flourish unchecked. Source:   The Conversation, F. Yang, L. Heemsbergen, and R. Fordyce, 2025. This election, disinformation is swirling on Chinese social media. Here’s how it spreads . [online] Available at: https://theconversation.com/this-election-disinformation-is-swirling-on-chinese-social-media-heres-how-it-spreads-253849 [ Back to Table of Contents ] Appendix - Frameworks to Counter Disinformation EU Invests in Digital Innovation and Combating Disinformation The European Commission announced it has launched four new funding calls totaling €140 million under the Digital Europe Programme (DIGITAL). These calls aim to accelerate the deployment of AI, boost digital skills, expand digital innovation hubs, and, notably, tackle disinformation. 
Of this total, €5 million is specifically allocated to creating a European Network of Fact-Checkers. This initiative will include protective measures for fact-checkers, a shared fact-checking repository, and rapid-response capabilities for misinformation, especially during crisis situations. The funding also supports broader goals, including integrating generative AI into public administration and the agri-food sector, while enhancing regulatory compliance and supporting semiconductor and data space initiatives. Through this investment, the Commission aims to foster innovation and counter disinformation more effectively, reinforcing resilience against manipulation and false narratives across the digital landscape. Source: European Commission, 2025. Commission invests €140 million to deploy key digital technologies. [online] Available at: https://digital-strategy.ec.europa.eu/en/news/commission-invests-eu140-million-deploy-key-digital-technologies [ Back to Table of Contents ] U.S. Government Closes R/FIMI Office In a move framed as a defense of free speech, the U.S. government has announced the closure of the State Department’s Counter Foreign Information Manipulation and Interference office (R/FIMI), formerly known as the Global Engagement Center. As stated in Marco Rubio's announcement, the office, initially tasked with countering foreign disinformation, has come under fire for allegedly targeting and censoring American citizens, prompting claims that it violated foundational democratic principles. According to the announcement, the center used its $50 million annual budget to suppress dissenting opinions under the guise of combating foreign influence, blurring the line between safeguarding information integrity and stifling free expression. The closure has been positioned as a corrective measure to realign national policy with constitutional freedoms, particularly the right to free speech. A report by Politico added that critics of the closure, however, argue that dismantling R/FIMI undermines U.S. national security. The office had played a key role in exposing Russian disinformation operations in Africa and in forming international coalitions to push back against information warfare from adversaries like China and Russia. Supporters across party lines, including Senators Chris Murphy and John Cornyn, stressed the strategic importance of the office in countering foreign influence campaigns designed to deceive and destabilize democratic societies. Sources: U.S. Department of State, M. Rubio, 2025. Protecting and Championing Free Speech at the State Department. [online] Available at: https://www.state.gov/protecting-and-championing-free-speech-at-the-state-department/ POLITICO, M. Miller, 2025. State Department eliminates key office tasked with fighting foreign disinformation. [online] Available at: https://www.politico.com/news/2025/04/16/state-department-shutters-gec-foreign-disinformation-00292982 [ Back to Table of Contents ] Federal Efforts to Combat Disinformation Face Rollback Under Trump Administration As published by The Free Press, a wide-ranging federal effort to combat misinformation and disinformation, especially during the Biden administration, has come under intense scrutiny, with the Trump administration now actively shutting down dozens of programs. The debate centers on the use of more than $1.4 billion in taxpayer-funded grants and contracts since 2017, over 600 of which were awarded during Biden’s term, to research and counter disinformation. 
Critics, especially from the GOP, argue that many of these initiatives effectively served as censorship tools under the guise of protecting public discourse. One such example is a now-cancelled $683,000 grant to UC Irvine to study vaccine hesitancy among Black and Latinx communities, which was withdrawn after inquiries from The Free Press . The closures follow an executive order from President Trump (for further information, see W07 March 2025 Cyfluence Report), issued on his first day back in office, calling for an investigation into federal activities that allegedly violated Americans’ free speech rights. Agencies, including the NIH, State Department, and Pentagon, are now reevaluating and terminating contracts, with language around “misinformation” being replaced by terms like “adversary propaganda.” Some programs remain active, such as a $979 million Pentagon contract with defense contractor Peraton to track foreign information threats. Others, like those involving the University of Washington and disinformation analysis firm Graphika, have faced criticism for allegedly targeting conservative voices. Although some organizations insist their work focuses solely on foreign influence or election-related falsehoods, Republicans continue to pursue deeper cuts, investigations, and policy reforms to prevent what they view as government-funded ideological enforcement masquerading as disinformation mitigation. Source:   The Free Press, G. Kaminsky and M. Rowley, 2025. Joe Biden Made More Than 600 Grants to Stop ‘Disinformation.’ Donald Trump Now Has a Plan for Them . [online] Available at: https://www.thefp.com/p/joe-biden-made-600-grants-to-stop-disinformation-misinformation-donald-trump-cancels-awards [ Back to Table of Contents ] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. 
Hostile Influence Campaign (HIC) An information operation sought to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation A false, inaccurate, or misleading information communicated regardless of the intention to deceive. Misformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. 
Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.

Cyberbullying
When someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.

[Back to Table of Contents]

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Emotional Insights and Their Potential for Tackling Fake News

    Emotional Insights and Their Potential for Tackling Fake News. Author: Osnat Mokryn. The contribution explores how emotions shape experiences and enhance their significance. It presents psychological models explaining emotions and their influence on social interactions and the sharing of experiences, particularly online. Additionally, it highlights how emotions are intentionally used to amplify the spread of fake news and how emotional signatures can help mitigate their impact.

  • Norway's Threat Assessment Report 2025

    This week, we examine Norway's most recent national threat assessment (1), which delineates key developments and emerging dynamics within the country's security environment. The central state threat actors are Russia, China, and Iran. (2)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 11; Courtesy of PST

At the CRC, our focus is on digital hostile influence. The report illustrates well that, within a broader system of hybrid warfare, digital hostile influence is one of many tools used by state actors, and that these tools are constantly adapted to shifting conditions and countermeasures. Such hybrid means may include cyber operations, sabotage, covert procurement, economic pressure, recruitment of human sources, and targeted digital hostile influence campaigns. (3) These tools do not operate in isolation; they are interconnected and mutually reinforcing. Information obtained through cyberattacks or recruited sources can feed directly into digital hostile influence efforts, for example to manipulate political and societal discourse. Acts of sabotage, including those targeting civilian infrastructure, can create fear, undermine trust in institutions, and ultimately shift public opinion. (4)

Russia
The threat posed by Russia has intensified. In 2025, the Russian intelligence services are expected to continue operating through digital influence campaigns targeting Norway's support for Ukraine. Acts of sabotage are also a concern in this context. As mentioned above, such actions are not solely aimed at the physical destruction of targets but are also intended to influence public opinion, for instance through media coverage and disinformation. There is further concern that information obtained through cyberattacks may be used in subsequent influence campaigns to undermine trust in political institutions and actors. (5) Noteworthy are historical precedents, such as a Russian information operation targeting the Nobel Peace Prize Committee. (6)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 26; Courtesy of PST

China
China's influence campaigns are increasing significantly and are part of its broader global power strategy. The country is showing a growing willingness and capability to target Western populations directly, including Norway. Digital influence operations are at the core of these efforts and are being systematically expanded. (7) A new trend is the involvement of commercial actors who, for example, sell fake user accounts, produce propaganda videos, or recruit influencers. As a result, these campaigns are gaining in professionalism and reach. The use of artificial intelligence (AI) has also grown considerably. (8) The report also references the "Viking United News" domain, attributed to the Chinese disinformation campaign known as "Paperwall". (9)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 15; Courtesy of PST

Iran
Iran seeks to exert influence through propaganda networks and by fostering radicalization. A notable example is the AnzuTeam campaign, which primarily targets Sweden; however, there are indications that individuals in Norway were also targeted via SMS. (10)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 25; Courtesy of PST

Disinformation and Influence Campaigns Ahead of the 2025 Elections
In light of the upcoming Norwegian parliamentary and Sámi parliamentary elections in 2025, an increase in threats and smears against politicians is anticipated, particularly when controversial issues garner significant public attention. Frequent public appearances during the campaign will likely make politicians more visible, and therefore more vulnerable, to confrontation. Additionally, state actors are expected to exploit the election period to exert influence, using disinformation and propaganda to shape public opinion and steer political decisions in line with their interests. (11)

[Footnotes]
(1) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(2) Ibid., p. 8.
(3) Ibid., pp. 8, 25.
(4) Ibid., p. 12.
(5) Ibid., pp. 12, 25.
(6) Ibid., p. 26.
(4) Ibid., pp. 14-15.
(5) Ibid., pp. 18-20.
(6) Ibid., p. 25.
(7) Ibid., p. 14.
(8) Ibid., p. 15.
(9) Ibid., p. 15.
(10) Ibid., p. 25.
(11) Ibid., p. 47.

  • VIGINUM REPORT: WAR IN UKRAINE

    This week, we are also reviewing the latest VIGINUM report. While it presents few new insights, it offers a structured overview of key Russian hostile influence activities connected to the war in Ukraine since February 24, 2022. The campaigns identified target various geographical regions, including France, Europe, Ukraine (as well as the Russian-occupied territories), and Africa. Both state-affiliated actors and private organizations carry out these operations. (1)

Hostile Influence Targeting France and Europe
Following the French President's speech on February 26, 2024, in which the deployment of French troops to Ukraine was discussed, the intensity of Russian influence activities increased significantly. (2)

"Reliable Recent News" (RRN)
A key influence operation targeting France (and other European countries) is the campaign known as RRN, also referred to as Doppelgänger or Ruza Flood. It utilizes a network of several hundred disinformation websites that imitate the domains of established media outlets (e.g., Le Monde, The Washington Post, Der Spiegel) and institutions (e.g., NATO, the French Ministry of Foreign Affairs) through typosquatting. The campaign also includes "pseudo-media" focused on themes such as sports, lifestyle, or EU politics. Dissemination occurs primarily via X (formerly Twitter), Facebook, and TikTok, with support from inauthentic accounts and sponsored content networks. To increase visibility, one tactic involves posting links in the comment sections of prominent accounts to redirect users to controlled sites. Another involves the "Undercut" network, which distributes short anti-Ukrainian videos using trending hashtags. Despite being publicly exposed, most notably by the U.S. Department of Justice in September 2024, RRN remains active, and the network employs sophisticated redirection techniques to evade detection by social media platforms. (3) (A minimal, illustrative sketch of how such look-alike domains can be flagged appears after this item's footnotes.)

"Stop Erdogan" Actions and Fake Anti-Ukraine Protests
In March 2023, VIGINUM identified a campaign involving staged content, such as graffiti in Paris, Nazi salutes, and a provocative anti-Erdogan banner. The material was disseminated via Facebook groups targeting the Turkish diaspora in Europe. Staged protests were also held in cities such as Paris, The Hague, Brussels, and Madrid, and were broadcast on Facebook and YouTube. The aim was to discredit Ukraine, the EU, and Turkey. Despite their high logistical and financial effort, the operations achieved minimal reach, mainly due to poor organization and unreliable execution by the individuals involved. (4)

Screenshot of staged protest; VIGINUM Report: War in Ukraine, page 7; Courtesy of VIGINUM

Targeting Ukraine and Occupied Territories
Since February 24, 2022, Russia has sought to establish propaganda-driven media in Ukrainian territories under its occupation. By June 2022, pro-Russian actors had already launched local television stations in Kherson and Zaporizhzhia, later expanding to websites with regime-aligned content.

Portal Kombat
VIGINUM highlights the website pravda-fr[.]com, launched in September 2023, as part of the IMS known as "Portal Kombat". The network comprises more than 200 websites that do not generate original content but instead disseminate large volumes of pro-Russian material from sources such as Telegram, state-run Russian media, and "military correspondents". The aim is to foster pro-Russian sentiment among Russian-speaking Ukrainians and dominate the regional information space.
Ukrainian cities targeted by the Portal Kombat websites; VIGINUM Report: War in Ukraine, page 8; Courtesy of VIGINUM

Following public condemnation by the foreign ministers of France, Poland, and Germany in February 2024, the network expanded, targeting EU countries as well as African and Asian states and prominent political figures. It aligns its content with major political events such as the 2024 European elections, the Paris Olympics, and elections in Moldova. (5)

The Mriya Project
Mriya (Ukrainian for "dream") is a Russian-language aggregator of Telegram channels that launched a website at the end of 2022. It disseminated pro-Russian content aimed at Ukrainian users, particularly in the occupied territories. The project was run by Ukrainian influencers sympathetic to Russian narratives and was amplified by RRN bot networks on X, serving as the media front for the "Representative Office of the Ukrainian People" (ROUP), a separatist organization led by Dmytro Vasilets. Internal documents from the Russian firm SDA reveal that Telegram channels of ROUP members, including Maksym Chikhaliiev, were utilized by "Center S", an internal SDA unit responsible, according to the U.S. Department of Justice, for information operations targeting Ukraine. (6)

"Project Lakhta" in Africa
In response to its growing international isolation, Russia intensified its efforts in Africa, seeking to undermine Western, mainly French, influence on the continent. A central actor is Project Lakhta, also known as the Internet Research Agency (IRA), founded in 2013 by Yevgeny Prigozhin. The network has been particularly active in Francophone Africa, conducting disinformation campaigns that support the Wagner Group. One prominent campaign targeted the African diaspora in France. (7)

Screenshots of sponsored publications on Project Lakhta's pages; VIGINUM Report: War in Ukraine, page 10; Courtesy of VIGINUM

Between April 2024 and February 2025, fake accounts on Facebook and X circulated false claims that France was planning to send African migrants to fight in Ukraine. (10) Among the disinformation materials was a forged recruitment ad from the French employment agency France Travail, allegedly seeking "database analysts and explosives engineers" for deployments in Cameroon.

Fake screenshot of a France Travail job advert; VIGINUM Report: War in Ukraine, page 11; Courtesy of VIGINUM and France Travail

The campaign combined digital manipulation (sponsored posts and fake accounts) with real-world elements, including staged demonstrations and paid media placements. (8)

Conclusion
Despite substantial technical, financial, and human resources, VIGINUM assesses the impact of Russian influence campaigns as limited, primarily due to technical shortcomings, poor content quality, and restricted reach. Although some disinformation narratives briefly gained traction, the overarching objective of these operations, namely to deepen social divisions and exploit existing tensions, was ultimately not achieved. Nevertheless, most campaigns continue unabated despite repeated public attribution, sanctions, closures, and bans. This raises the critical long-term question of how to counter hostile influence operations effectively, sustainably, and strategically.

[Footnotes]
(1) Secrétariat général de la défense et de la sécurité nationale (SGDSN) – VIGINUM, 2025. War in Ukraine: Three years of Russian information operations, p. 3. [online] Available at: https://www.sgdsn.gouv.fr/files/2025-02/20250224_TLP-CLEAR_NP_SGDSN_VIGINUM_War%20in%20Ukraine_Three%20years%20of%20Russian%20information%20operations_1.0_VF.pdf [Accessed 2 April 2025].
(2) Ibid., p. 4.
(3) Ibid., p. 4.
(4) Ibid., p. 7.
(5) Ibid., p. 8.
(6) Ibid., p. 9.
(7) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 10. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(8) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 11. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
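As a practical aside from the CRC, the typosquatting tactic described above (look-alike domains imitating outlets such as Le Monde or Der Spiegel) can be illustrated with a minimal Python sketch that flags candidate domains closely resembling a short list of legitimate media domains. This is purely illustrative and is not drawn from the VIGINUM report or its methodology: the reference domain list, the example candidates, and the 0.75 similarity threshold are assumptions chosen only for demonstration.

from difflib import SequenceMatcher

# Assumed reference list of legitimate domains (illustrative only).
LEGITIMATE_DOMAINS = ["lemonde.fr", "washingtonpost.com", "spiegel.de"]

def similarity(a: str, b: str) -> float:
    # Similarity ratio between 0 and 1 for two domain strings.
    return SequenceMatcher(None, a, b).ratio()

def flag_possible_typosquat(candidate: str, threshold: float = 0.75):
    # Return (legitimate_domain, score) if the candidate is a near-copy
    # of a known domain without being identical; otherwise return None.
    candidate = candidate.lower().strip()
    for legit in LEGITIMATE_DOMAINS:
        score = similarity(candidate, legit)
        if candidate != legit and score >= threshold:
            return legit, score
    return None

if __name__ == "__main__":
    # Hypothetical candidate domains for demonstration, not taken from the report.
    for domain in ["lemonde.ltd", "washingtonpost.pm", "example.org"]:
        print(domain, "->", flag_possible_typosquat(domain))

In practice, defenders would typically combine such string-similarity checks with registration dates, WHOIS records, and hosting data, but even this toy heuristic captures the near-copy character of the domains described in the report.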

  • Russia’s Use of genAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations

    Russia's Use of genAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations. Author: Dr. Christopher Nehring, Intelligence Director at the Cyberintelligence Institute in Frankfurt, Germany. Dr. Nehring, a recognized expert in disinformation and security communication, examines the role of generative artificial intelligence (genAI) in Russian disinformation and influence operations. His analysis focuses on the extent to which state and state-affiliated Russian actors employ genAI in digital information environments and for what strategic purposes. The study explores specific applications, such as deepfakes, automated text generation, and algorithmic dissemination, and the range of actors involved, from intelligence services to seemingly civilian networks. Particular attention is paid to the degree of automation, the effectiveness of these activities, and the presence or absence of coherent strategic intent. Through selected case studies, the report provides a structured overview of current uses of genAI in the context of informational influence. At the same time, it reflects on the dynamics, capabilities, and limitations of this development, especially given the future role of AI in geopolitical conflict and hybrid warfare.

  • Weekly Report: Cyber based influence campaigns 17th March – 23rd March 2025

    Table of Contents: Hostile Influence Campaigns State Actors Russia The Role of the Social Design Agency in Spreading Disinformation The War in Ukraine Disinformation Surrounding Russia’s Continued Attacks on Ukraine Russian Disinformation Targeting Canada and Ukraine China Disinformation on the Platform RedNote ahead of Australia’s 2025 election Graphika Highlights Coordinated Disinformation by Pro-China Network Chinese Malinformation in Okinawa AI Related Articles The Growing Threat of Deepfakes Deepfake Trump Audio Spreads Widely General Reports Disinformation Campaign on U.S. Biological Experiments in Armenia Foreign Influence Operations Targeting U.S. Corporations Intensify Anti-Israeli Disinformation on Wikipedia The Rise of Disinformation and Its Impact on Journalism in Brazil Appendix - Frameworks to Counter Disinformation The EEAS Efforts to Combat Disinformation

  • Weekly Report: Cyber based influence campaigns 10th March – 16th March 2025

    Table of Contents: Hostile Influence Campaigns State Actors Russia Russian Disinformation and Election Interference in Eastern Europe The War in Ukraine Russian Disinformation Campaign Targets Incoming Prime Minister Mark Carney China China Accuses Australian Think Tank of Spreading U.S.-Funded Disinformation China Imposes AI Content Labeling to Combat Disinformation AI Related Articles CrowdStrike Global Threat Report 2025 General Reports The Pravda Network and the Embedding of Disinformation in Public Platforms Decentralized Social Media Faces Disinformation Syria: Multi-Level Disinformation in a Fragmented Conflict Foreign Disinformation in the Pacific Hacktivist Groups Exploit Disinformation for Cyber Attack Appendix - Frameworks to Counter Disinformation NewsGuard Launches Service to Shield AI from Foreign Disinformation Misinformation Training and Its Effectiveness in Countering Disinformation
