
  • CRC Spotlight: Smart Cities: Future Urban Environments in the Crosshairs of Cyber Threats and Information Disorder

    Modern smart cities rely on extensively interconnected digital infrastructures that link not only administrative processes, but also mobility, energy systems, communication networks, urban services, and private-sector platforms. This dense connectivity creates significant exposure to hybrid threats in which technical cyberattacks overlap with strategic influence efforts, affecting both critical infrastructure and the wider informational sphere of a city. Against this backdrop, the article analyzes how smart cities evolve into environments where cyber vulnerabilities and informational fragilities reinforce one another, creating conditions for the emergence of cyfluence risks—hybrid threats that combine system intrusion with targeted narrative manipulation. As municipal infrastructure increasingly depends on IoT devices, real-time data streams, cloud-based applications, and automated urban management systems, disruptions can cascade across networks, while manipulated information circulating through public apps, digital signage, transport interfaces, and social media can amplify societal impact. Smart cities thus face a dual risk landscape in which breaches of technical systems and distortions of the information ecosystem can interact, accelerate one another, and undermine public trust at scale. [Full Report Below]

  • CRC Weekly: Cyber-based hostile influence campaigns 3rd-9th November 2025

    [Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

    [Contents] [Introduction] [Report Highlights] [Weekly Review]
    1. Kremlin-Linked Digital Campaigns Target Japanese Election and Corrupt AI Models
    2. Russian Influence Operation "Storm-1516" Impersonates Journalists and Media
    3. NATO Report Details Russia's Adaptive Strategy for Global Information Warfare
    4. Russia's RT Uses Deepfakes and AI in Global Propaganda Pivot
    5. Pro-Kremlin Channels Exploit Climate Change Discourse to Divide European Societies
    6. The Valdai Discussion Club: Putin's Propaganda Soapbox
    7. Canadian Province's Request Sparks National Call for China Interference Inquiry
    8. China Uses "Sharp Power" to Deepen Influence in Western Balkans
    9. Authoritarian Regimes Manipulate Context to Create Alternative Digital Realities
    10. EU and UK Rules Target Manipulated Content in Global Conflicts
    [CRC Glossary] [Download Report]

    [Report Highlights]
    • Russian influence operations are now deliberately flooding the web with propaganda to corrupt the training data of AI language models. - Nippon
    • Evidence of foreign influence is now surfacing even in Canada's smallest province, intensifying calls for a full national inquiry into Chinese operations. - The Hill Times
    • A Russian operation codenamed "Storm-1516" is stealing journalists' identities to publish fake articles on fabricated news websites. - Euro News
    • Authoritarian states are sculpting a "warped alternative reality" by manipulating context to exploit social media algorithms. - The Conversation
    • New EU and UK laws aim to hold platforms accountable for amplifying propaganda and deleting evidence of human rights abuses. - European External Action Service (EEAS)
    • A NATO report exposes the structure of Russia's influence machine, which pairs overt state media with deniable, covert "gray networks." - NATO StratCom COE
    • To mark its 20th anniversary, RT released a deepfake video of U.S. news anchors admitting to serving government interests. - United24Media
    • China is embedding its influence in the Balkans by creating deep technological dependency through surveillance and infrastructure. - Small Wars Journal
    • Pro-Kremlin channels are weaponizing climate change, framing Europe's green energy policies as a form of "economic self-destruction" to divide societies. - EU vs. Disinfo
    • In a major speech, Vladimir Putin claimed that Europe is fleeing "gender terrorism" as part of a campaign to undermine Western unity. - DisinfoWatch

    [Weekly Review]

    1. Kremlin-Linked Digital Campaigns Target Japanese Election and Corrupt AI Models

    Russian-linked influence operations targeted Japan's July 20 House of Councillors election using networks of bots and coordinated replies to amplify divisive content and sow doubt. An article by Nippon revealed that a significant portion of hostile comments on domestic reporting originated from a Kremlin-aligned ecosystem, with one analysis finding that 32 percent of such comments on a single post were linked to Russia-aligned accounts. The hostile influence campaign relied on a high volume of small, inauthentic accounts to infiltrate online conversations. The operation also involved deliberately flooding the web with pro-Kremlin content to "groom" large language models and other AI tools. Japanese-language hubs republished hundreds of pro-Russia items daily, a strategy designed to bias search results and the training data used by AI chatbots.
    Audits reported that leading generative AI systems subsequently returned false or misleading information on controversial topics far more often, at an average rate of 35 percent.

    Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions', Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html

    2. Russian Influence Operation "Storm-1516" Impersonates Journalists and Media

    A report from Euro News describes how the Russian influence operation codenamed "Storm-1516" steals journalists' identities and bylines, publishing fake articles on fabricated news websites that mimic legitimate media outlets.

    Source: Euro News, 'False claims and stolen bylines: The Russian propaganda strategy haunting the newsroom', Available Online: https://www.euronews.com/my-europe/2025/11/04/false-claims-and-stolen-bylines-the-russian-propaganda-strategy-haunting-the-newsroom

    3. NATO Report Details Russia's Adaptive Strategy for Global Information Warfare

    According to a report from the NATO Strategic Communications Centre of Excellence, Russia has developed a comprehensive and adaptive communications strategy that integrates state-controlled media, covert influence networks, and digital campaigns to manipulate global narratives. The publication, titled "The Collage of Kremlin ComStrat," reveals how Moscow combines traditional propaganda with modern hybrid tactics, including AI-driven content and coordinated online amplification through proxy media outlets. This approach aims to sow distrust in Western institutions and shape perceptions of geopolitical events like the war in Ukraine. The Kremlin's strategy emphasizes psychological influence and information saturation, using overt channels like RT and Sputnik alongside covert networks on social media. The report underscores that these information operations are not isolated events but part of a long-term, state-sponsored effort to weaken support for Ukraine and amplify polarization in Western societies. By blending intelligence tradecraft with digital information warfare, Russia's communication strategy demonstrates how manipulated information has become a core instrument of state power.

    Source: NATO StratCom COE, 'The Collage of the Kremlin's Communication Strategy', Available Online: https://stratcomcoe.org/publications/the-collage-of-the-kremlins-communication-strategy/324

    4. Russia's RT Uses Deepfakes and AI in Global Propaganda Pivot

    Two decades after its launch, Russia's state-controlled media outlet RT has fully transformed into a tool for global hostile influence campaigns. A report from United24Media highlights how, to mark its 20th anniversary, RT released a deepfake video using AI to impersonate prominent U.S. news anchors, falsely showing them admitting to serving U.S. government interests. This synthetic propaganda is emblematic of RT's pivot toward audiences in the Middle East, Africa, and Latin America, regions less affected by Western sanctions. Despite being banned across Europe and North America, RT has adapted its operations, functioning as what its editor-in-chief calls an "information guerrilla." The channel now utilizes mirror sites, front companies, and alternative platforms like Rumble and VK to continue its reach. It also employs AI to create fake journalists, clone voices, and automate the dissemination of propaganda. Former branches, including RT France and RT Germany, continue to shape local discourse through sympathetic media figures, ensuring the persistence of their information operations.

    Source: United24Media, '20 Years of RT: How Russia's Propaganda Hydra Survived the Ban', Available Online: https://united24media.com/anti-fake/20-years-of-rt-how-russias-propaganda-hydra-survived-the-ban-13121

    5. Pro-Kremlin Channels Exploit Climate Change Discourse to Divide European Societies

    As Europe faces worsening climate disasters, pro-Kremlin channels are actively manipulating climate discourse to undermine trust in Western institutions. A report from EU vs. Disinfo explains that while Russia's official media acknowledges climate science, its broader information networks push climate denial and distort facts. For Kremlin propagandists, climate change is a strategic weapon used to divide societies and weaken democratic consensus. Their narratives often link EU green energy transitions and sanctions against Russia to "industrial decline," framing Europe's environmental efforts as economic self-destruction.
    These misleading claims are designed to erode public support for sanctions and renewable energy by exploiting legitimate economic fears. Through coordinated messaging, Moscow's information operations also smear climate science as a "religion" and attack political leaders who address environmental realities. This strategy is part of a broader effort to portray Russia as a more responsible global actor than the EU while advancing its geopolitical goals.

    Source: EU vs. Disinfo, 'Sneaky heat: the Kremlin uses climate change to push its favourite FIMI narratives', Available Online: https://euvsdisinfo.eu/sneaky-heat-the-kremlin-uses-climate-change-to-push-its-favourite-fimi-narratives/

    6. The Valdai Discussion Club: Putin's Propaganda Soapbox

    An article in DisinfoWatch examines how Vladimir Putin used the 2025 Valdai Discussion Club forum to advance propagandistic narratives, portraying the West as culturally collapsing and Russia as a moral alternative. The speech highlighted specific claims, such as Europe fleeing "gender terrorism" and NATO hysterically militarizing, to reframe defense and diplomacy narratives. By weaponizing culture-war rhetoric, Moscow seeks to undermine Western unity and credibility, especially among vulnerable audiences. The use of the Kremlin-backed Valdai platform and state media like RT ensures these messages are amplified globally, contributing to Russia's ongoing information warfare campaign. The Valdai Discussion Club, a Moscow-based think tank, has long served as a key venue where Putin and Kremlin officials outline Russia's ideological and geopolitical positions to both domestic and international audiences, making it a central component in their strategic communications.

    Source: DisinfoWatch, 'DisinfoDigest: Decoding Putin's Valdai Speech', Available Online: https://disinfowatch.org/disinfodigest-decoding-putins-valdai-speech/

    7. Canadian Province's Request Sparks National Call for China Interference Inquiry

    Growing calls for a national inquiry into China's interference in Canada have followed Prince Edward Island (PEI) Premier Rob Lantz's request for a federal investigation into local groups allegedly linked to Beijing's United Front network. A report by The Hill Times notes that this appeal follows revelations from a recent book and a media investigation exposing how Chinese state-affiliated organizations may be influencing Canadian institutions. The report argues that only a full-scale, independent national inquiry, paired with a criminal investigation, can uncover the extent of these hostile influence campaigns. Despite repeated intelligence warnings about election interference, diaspora intimidation, and espionage, federal responses have been described as fragmented and politically cautious. PEI's call for an inquiry is being viewed as a national call to action, demonstrating that even provinces removed from the country's geopolitical epicenters are experiencing the effects of foreign influence.

    Source: The Hill Times

    8. China Uses "Sharp Power" to Deepen Influence in Western Balkans

    As published by the Small Wars Journal, China is deepening its presence in the Western Balkans through a blend of defense cooperation, technological dependence, and information manipulation, an approach described as "sharp power." Unlike overt tactics, Beijing's influence relies on subtle yet pervasive methods, with Serbia becoming the focal point of its regional strategy. The country has welcomed Chinese weapon systems, joint military exercises, and advanced surveillance technology that embeds long-term dependencies.
    Chinese state media and local affiliates amplify pro-Beijing narratives through content-sharing agreements and educational programs that promote authoritarian governance models. Through control of digital infrastructure and surveillance systems via companies like Huawei, China not only gains access to critical data but also reinforces its influence over local governments and media ecosystems. This networked approach combines information manipulation with economic leverage, making democratic institutions more vulnerable to external control.

    Source: Small Wars Journal, 'China's Rising Influence in the Western Balkans and How the West Should Respond', Available Online: https://smallwarsjournal.com/2025/11/05/chinas-rising-influence-in-the-western-balkans/

    9. Authoritarian Regimes Manipulate Context to Create Alternative Digital Realities

    An article by The Conversation explains how authoritarian regimes, particularly Russia and China, are perfecting a form of information operation that relies on manipulating context and selective truth rather than outright falsehoods. By amplifying strategically chosen facts while omitting others, these governments create a misleading "alternative reality" that portrays Western democracies as unstable and hypocritical. This strategy is executed through state-run media, influencer networks, and coordinated bot activity across social media platforms, ensuring that distorted narratives infiltrate the news feeds of both domestic and foreign audiences. The analysis argues that this is an adaptive, data-driven campaign designed to exploit the mechanics of modern social media algorithms. This form of narrative warfare reinforces cynicism and polarization, weakening trust in journalism, democratic governance, and the concept of a shared truth. The broader implication is the gradual normalization of authoritarian narratives within global discourse and the erosion of the common factual foundation necessary for democratic societies to function.

    Source: The Conversation, 'How authoritarian states sculpt a warped alternative reality in our news feeds', Available Online: https://theconversation.com/how-authoritarian-states-sculpt-a-warped-alternative-reality-in-our-news-feeds-266092

    10. EU and UK Rules Target Manipulated Content in Global Conflicts

    According to a policy brief by the European External Action Service (EEAS), manipulated information has become a key strategic weapon in modern conflicts, employed by state and non-state actors to disseminate propaganda and erode trust. Online platforms amplify these risks through algorithmic promotion of harmful content, while weak moderation in conflict zones allows hate speech and foreign information manipulation to proliferate. The brief highlights the European Union's Digital Services Act (DSA) and the United Kingdom's Online Safety Act (OSA) as emerging regulatory tools to counter these threats. These laws require platforms to assess and mitigate systemic risks, including those from hostile influence campaigns and foreign interference, through crisis response mechanisms and transparency requirements. By applying these frameworks with a conflict-sensitive approach, the EU and UK can strengthen information integrity, protect diaspora communities, and set global standards for platform accountability in times of conflict.

    Source: European External Action Service (EEAS), 'Assessing and Mitigating Conflict-Related Online Risks: Challenges for Governments, Regulators and Online Platforms', Available Online: https://www.isdglobal.org/isd-publications/assessing-and-mitigating-conflict-related-online-risks-challenges-for-governments-regulators-and-online-platforms/

    [CRC Glossary]

    The nature and sophistication of the modern Information Environment is projected to continue growing in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.

    [Download Report]
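The report above repeatedly cites audits that measure how often generative AI systems reproduce flagged propaganda, expressed as a simple share of affected responses (for example, the 35 percent average rate reported for controversial topics). A minimal sketch of that headline metric, using an entirely hypothetical blocklist of flagged domains and a naive URL-matching heuristic (all names below are illustrative assumptions, not from any cited audit), might look like this:

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist of propaganda-linked domains (illustrative only).
FLAGGED_DOMAINS = {"pravda-example.ru", "rt-mirror.example"}

def cited_domains(response_text: str) -> set:
    """Extract the domains of URLs cited in a chatbot response."""
    urls = re.findall(r"https?://\S+", response_text)
    return {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}

def contamination_rate(responses: list) -> float:
    """Fraction of responses citing at least one flagged domain."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if cited_domains(r) & FLAGGED_DOMAINS)
    return hits / len(responses)

# Four mock responses; two cite flagged domains.
responses = [
    "Per https://pravda-example.ru/a1, the claim is true.",
    "Reuters reports otherwise: https://www.reuters.com/x",
    "No sources found.",
    "See https://rt-mirror.example/story and https://www.bbc.com/y",
]
print(f"contamination rate: {contamination_rate(responses):.0%}")  # prints 50%
```

Real audits are considerably more involved, with prompt batteries across topics and languages, redirect resolution, and mirror-domain tracking; the point of the sketch is only that the reported figures reduce to a share of flagged responses over a test set.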

  • CRC Weekly: Cyber-based hostile influence campaigns 27th October - 2nd November 2025

    [Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

    [Contents] [Introduction] [Report Highlights] [Weekly Review]
    1. Russian Influence Operations Target Japan's Election and Poison AI Training Data
    2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy
    3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy
    4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative
    5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data
    6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda
    7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland
    8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations
    9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions
    [CRC Glossary] [Download Report]

    [Report Highlights]
    • An independent analysis found that nearly one-third of hostile online comments during Japan's election were linked to Russia-aligned accounts. - Nippon
    • A coordinated smear campaign has falsely attributed 14 international properties to President Zelenskyy, including estates once owned by Nazi figures. - NewsGuard's Reality Check
    • Moscow is exploiting climate issues as a front in its hybrid war, aiming to fracture EU consensus and protect its fossil fuel economy. - EUvsDisinfo
    • Beijing is justifying its censorship of domestic grievances by framing online dissent as a form of "Western ideological infiltration." - Jamestown
    • When asked about the war in Ukraine, major AI chatbots cited sanctioned Russian state media in nearly 18 percent of responses. - WIRED
    • Despite moderation policies, top AI video tools are generating synthetic propaganda, including Holocaust denial and violent imagery. - Cybernews
    • Influence campaigns are "pivoting" entire networks of old accounts, turning COVID-era anti-vaccine profiles into tools for anti-Ukrainian messaging. - EU DisinfoLab
    • As the Voice of America retreats from the global stage, Russian and Chinese state media are expanding operations to fill the information void. - GIOR

    [Weekly Review]

    1. Russian Influence Operations Target Japan's Election and Poison AI Training Data

    An article from Nippon has revealed that Russian-linked information operations targeted Japan's House of Councillors election by using networks of bots and trolls to sow doubt and amplify divisive narratives. The hostile influence campaign relied on a large number of small, inauthentic accounts to infiltrate online conversations and amplify pro-Kremlin messaging. An independent analysis by international affairs expert Ichihara Maiko identified that approximately 32 percent of hostile comments on a single post were connected to Russia-aligned accounts. Beyond direct engagement, the operation also sought to contaminate the information ecosystem by deliberately flooding the web with pro-Kremlin content to "groom" large language models. Japanese-language hubs, such as Pravda Nihon, republished hundreds of pro-Russia items daily. This strategy is designed to bias search results through query manipulation and pollute the training data used by AI chatbots, leading to the proliferation of AI slop. Independent audits confirmed that leading generative AI systems now return false or misleading information on controversial topics more frequently.
    Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions', Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html

    2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy

    Russian state media and affiliated online networks have fabricated a series of false claims as part of a smear campaign accusing Ukrainian President Volodymyr Zelenskyy of owning a real estate empire worth over $682 million. An investigation by NewsGuard's Reality Check revealed that the latest hoax alleged Zelenskyy purchased a $79 million ranch in Wyoming, a claim originating from a fake website mimicking a legitimate U.S. real estate firm. This marks the 14th property Russia has falsely attributed to the Ukrainian leader. The disinformation is disseminated using a network of fake websites, often featuring forged documents, before being amplified across major social media platforms like X, Facebook, and TikTok. Propagandists have even alleged that Zelenskyy's supposed purchases included properties once owned by Nazi figures. Despite repeated debunking, the false narratives continue to circulate widely. Both NewsGuard and Forbes have confirmed that Zelenskyy's actual assets are valued under $20 million, finding no evidence of misused Western aid.

    Source: NewsGuard's Reality Check, 'Russian Fake: Mapping Zelensky's Made-Up $682 Million Real Estate Portfolio', Available Online: https://www.newsguardrealitycheck.com/p/russian-fake-mapping-zelenskys-made

    3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy

    The Kremlin is conducting information operations that weaponize climate change narratives to advance its geopolitical goals and undermine European unity. An article by EUvsDisinfo explains that these campaigns are a component of Russia's broader hybrid warfare strategy against the West. Pro-Kremlin media outlets deliberately frame the European Union's Green Deal as an elitist policy designed to harm key sectors like agriculture, using claims that it is "killing farmers." This tactic of cognitive warfare aims to erode public support for environmental initiatives, which are seen by Moscow as a direct threat to its fossil fuel-dependent economy and its ability to exert energy-related pressure on Europe. By portraying decarbonization as self-destructive, the disinformation seeks to fracture social consensus within the EU, protect Russian energy exports, and weaken the bloc's resolve on sanctions and energy independence.

    Source: EUvsDisinfo, 'Weaponising climate change to undermine the West', Available Online: https://euvsdisinfo.eu/weaponising-climate-change-to-undermine-the-west/

    4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative

    The Cyberspace Administration of China (CAC) has launched a new "clean and bright" campaign that redefines online criticism and social frustration as "negative energy" that endangers national security. An article published by The Jamestown Foundation's China Brief explains that the campaign targets posts discussing unemployment, gender inequality, and social anxiety, portraying them as products of "Western ideological infiltration." This strategy reframes censorship as a necessary defense against cognitive warfare, empowering regulators to erase narratives that challenge the Chinese Communist Party's (CCP) image of a harmonious society. This effort is a deepening of the CCP's comprehensive system of propaganda and ideological management, or Xuanchuan. Influencers discussing youth job struggles have already been banned, reflecting the government's push to enforce an "authorized reality." By linking social stability to "total national security," Beijing normalizes censorship as a security measure and may be creating an exportable model of digital authoritarianism for other governments seeking to justify repression.

    Source: The Jamestown Foundation, 'Beijing's War on "Negative Energy"', Available Online: https://jamestown.org/program/beijings-war-on-negative-energy/

    5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data

    Generative AI systems are proving vulnerable to manipulation by Russian information warfare tactics, with leading chatbots frequently reproducing content from sanctioned state media. A study by the Institute for Strategic Dialogue (ISD), covered by WIRED, found that AI models exhibit a form of confirmation bias, delivering more pro-Kremlin content when users enter biased or manipulative prompts. This vulnerability is being actively exploited by Russian networks like the "Pravda" operation, which are deliberately working to "poison" the data that large language models (LLMs) are trained on. By flooding the information ecosystem with false narratives, these actors ensure their propaganda is ingested and later presented as factual by Western AI tools. The findings highlight a significant challenge for platform regulation, as the very architecture of current AI systems can be turned into a vector for disseminating state-sponsored disinformation.

    Source: WIRED, 'Chatbots Are Pushing Sanctioned Russian Propaganda', Available Online: https://www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/

    6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda

    The proliferation of synthetic propaganda is being accelerated by the failure of leading AI video generators to block the creation of extremist and hateful content.
    A new study from the Anti-Defamation League (ADL) found that top platforms produced antisemitic content, including Holocaust denial and violent tropes, in at least 40% of test cases when prompted with hateful text. Cybernews reports that despite stated moderation policies, these systems consistently failed to filter out harmful narratives, demonstrating a significant vulnerability in their design. The ADL warns that this capability not only allows malicious actors to create high volumes of disinformation but also poses a direct threat to historical memory and online safety. The findings illustrate how AI tools, trained on vast and often unfiltered datasets from the internet, can become powerful engines for amplifying and normalizing extremist ideologies.

    Source: Cybernews, Anti-Defamation League (ADL), 'Popular AI video generators amplify antisemitic tropes', Available Online: https://cybernews.com/ai-news/ai-videos-antisemitism/

    7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland

    A factsheet published by EU DisinfoLab details how Poland's information space has been shaped by recurring disinformation waves pushed by far-right activists, politicized media, and Russia-aligned networks. The hostile influence campaigns have focused on anti-immigrant, anti-vaccine, anti-Ukrainian, and culture-war themes. Common tactics include "narrative pivoting," where repurposed accounts from the COVID era were switched to anti-Ukrainian messaging, and the amplification of rumors through bot and troll activity. Fabricated materials, such as a forged ministry letter and an AI-generated video, have also been used to inflame grievances. These information operations aim to polarize society, degrade trust in institutions, and distort policy debates on migration, public health, and EU agreements. Russia's invasion of Ukraine served as an accelerant, with Kremlin-linked narratives exploiting economic strains and election cycles. The response remains fragmented, and the politicization of public broadcasting risks laundering these narratives into the mainstream, contributing to widespread information disorder.

    Source: EU DisinfoLab, 'The disinformation landscape in Poland', Available Online: https://www.disinfo.eu/publications/disinformation-landscape-in-poland/

    8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations

    The scaling back of U.S. international broadcasting is weakening American soft power and ceding narrative control to adversarial states in the global information war. An article in the Global Influence Operations Report (GIOR) details how the operational reductions at Voice of America (VOA) are creating a strategic vacuum that is being actively filled by Russia's RT/Sputnik apparatus and the China Media Group. These state-sponsored actors are expanding their own information operations into regions where VOA was once a primary source of independent news. This shift represents a significant setback for U.S. strategic communications, as it removes a credible voice from contested information ecosystems and emboldens authoritarian regimes. By relinquishing its role in these environments, the U.S. allows hostile actors to more easily shape perceptions and advance their geopolitical objectives without counterbalance.

    Source: GIOR, 'Voice of America Shutdown Benefits Russia, China: GOP Warns', Available Online: https://www.global-influence-ops.com/voice-of-america-shutdown-benefits-russia-china-gop-warns/

    9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions

    Blockchain analysis has revealed a coordinated effort to manipulate political prediction markets, representing a novel vector for platform-enabled foreign interference.
According to a report in The New York Post, investigators found that a small number of digital wallets, funded overwhelmingly from offshore exchanges in China and the Middle East, were responsible for a disproportionate volume of bets on a New York City mayoral candidate. This activity, which appears automated and is not financially rational, constitutes a form of digital astroturfing designed to artificially inflate the candidate's perceived support. Experts warn that because media outlets and campaigns often cite these markets as indicators of public sentiment, such manipulation can distort the political narrative and potentially discourage voter turnout by creating a false sense of inevitability. The incident raises serious questions about the integrity of data from unregulated financial platforms in an electoral context. Source: The New York Post, ‘Foreign betting markets could influence NYC election — as data shows pro-Mamdani bets from China, Middle East skewing odds’ Available Online: https://nypost.com/2025/10/28/business/pro-mamdani-bets-from-china-middle-east-skewing-market-odds-experts/?utm_campaign=nypost&utm_medium=social&utm_source=twitter Top of Page [CRC Glossary] The modern Information Environment is projected only to grow more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus.
We encourage you to engage with this initiative and welcome contributions via the CRC website. Top of Page [Download Report] Top of Page
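Item 9's wallet-concentration finding describes a simple, quantifiable signal: a handful of wallets accounting for most of a market's betting volume. The sketch below shows one minimal way such a signal could be computed. The data, function name, and threshold are our own illustrative assumptions, not the investigators' actual methodology.

```python
from collections import defaultdict

def wallet_concentration(bets, top_k=3):
    """Return the share of total bet volume placed by the top_k wallets.

    bets: iterable of (wallet_id, amount) pairs. A share close to 1.0
    coming from very few wallets is one signal of coordinated,
    non-organic market activity.
    """
    volume = defaultdict(float)
    for wallet, amount in bets:
        volume[wallet] += amount
    total = sum(volume.values())
    if total == 0:
        return 0.0
    top = sorted(volume.values(), reverse=True)[:top_k]
    return sum(top) / total

# Illustrative data: three large wallets dominate an otherwise broad market.
bets = [("w1", 50_000), ("w2", 30_000), ("w3", 20_000)] + \
       [(f"retail{i}", 100) for i in range(100)]
print(round(wallet_concentration(bets, top_k=3), 3))
```

A share near 1.0 from a tiny wallet set is only one indicator; on its own it does not prove coordination, and real analyses would combine it with funding-source and timing data.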

  • CRC Weekly: Cyber-based hostile influence campaigns 20th-26th October 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] Matryoshka Campaign Deploys Synthetic Media to Attack Journalism Credibility Russia Trains Local Journalists to Spread Pro-Kremlin Narratives in Africa Russia Pushes False Arctic Narrative to Mask Arctic Military Expansion Pro-Kremlin Actors Use AI and Data Collection to Target Ukraine-EU Relations Beijing Combines Cultural Diplomacy with AI-Driven Influence in Europe American Fugitive in Moscow Runs AI-Powered Pro-Kremlin Fake News Network Russia Engages in 'LLM Grooming' to Manipulate AI Chatbots Climate Action Hindered by Coordinated Disinformation and Greenwashing Campaigns EU-Funded 'Digital Detectives' Initiative Trains Uzbek Journalists to Counter Falsehoods Europe's Counter-Disinformation Efforts Face External Threats and Internal Resistance [CRC Glossary] [Download Report] [Report Highlights] An American fugitive in Moscow is behind a network of 141 fake news sites powered by an AI programmed to insert bizarre and irrelevant praise for Vladimir Putin into unrelated articles. - NewsGuard A strategy dubbed "LLM grooming" aims to manipulate AI chatbots by flooding the internet with pro-Kremlin content, effectively weaponizing the models to reproduce false narratives. - EUvsDisinfo Posing as legitimate news agencies, covert Russian entities are expanding hybrid warfare in Africa by training local journalists and influencers to spread pro-Kremlin narratives.
- European Council on Foreign Relations To mask its own aggressive military expansion, a Russian information operation inverts reality by accusing Canada and NATO of militarizing the Arctic. - DisinfoWatch China's hybrid influence campaigns in Europe combine soft-power tactics through cultural and academic channels with advanced AI-driven digital operations. - Taipei Times Recognizing that information manipulation by fossil fuel interests is a primary obstacle to progress, the COP30 climate summit will make public trust a central issue for the first time. - Global Witness An EU-funded "Digital Detectives" project is building a nationwide network in Uzbekistan by training local experts to equip journalists and fact-checkers with advanced verification skills. - EEAS [The Week In Review] Matryoshka Campaign Deploys Synthetic Media to Attack Journalism Credibility  The Russian Matryoshka network is impersonating reputable media organizations to spread fabricated stories and undermine trust in Western journalism. A report from NewsGuard  details how the hostile influence campaign uses AI-generated videos and fake social media accounts to circulate false claims about political scandals in Germany and France. The videos have falsely attributed quotes to NewsGuard executives and presented entirely invented events, such as Germany suing the organization for exposing war preparations. Matryoshka’s strategy mirrors the very information manipulation tactics it accuses others of employing. Its content relies on AI voice-overs, manipulated footage, and fictitious experts, all designed to exploit real-world controversies, like France's 2023 bedbug panic, to insert Russian narratives into public discourse. The operation highlights a sophisticated use of synthetic media to attack the credibility of established news and research entities. 
Source:  Newsguard, Why Russia Puts Words in NewsGuard’s Mouth, Available Online: ( https://www.newsguardrealitycheck.com/p/why-russia-puts-words-in-newsguards ) Top Of Page Russia Trains Local Journalists to Spread Pro-Kremlin Narratives in Africa Russia has intensified its hybrid warfare tactics in Africa, employing information operations to influence public opinion and destabilize regional politics. The Kremlin established entities like the Africa Corps and the Africa Initiative to bolster its presence and spread pro-Russian narratives across the continent. These operations involve training local journalists, influencers, and activists to disseminate content in multiple languages, including English, French, Arabic, and regional languages like Hausa and Swahili. A report   by the European Council on Foreign Relations (ECFR)  notes that the Africa Initiative operates covertly, posing as a news agency while engaging in information manipulation. The ECFR highlights the need for a coordinated European response, suggesting current anti-disinformation policies are ineffective. Recommendations include investing in local media and using platforms like WhatsApp to counteract hostile narratives, as Europe risks ceding influence to Russia in Africa's information ecosystem. Source:  European Council on Foreign Relations, The bear and the bot farm: Countering Russian hybrid warfare in Africa , Available Online: https://ecfr.eu/publication/the-bear-and-the-bot-farm-countering-russian-hybrid-warfare-in-africa/#recommendations Top Of Page Russia Pushes False Arctic Narrative to Mask Arctic Military Expansion Russian state media is amplifying a narrative that Canada and NATO are promoting "war rhetoric" in the Arctic, while portraying Russia as a peaceful actor. This information operation inverts reality, as Russia has aggressively expanded its military infrastructure in the region since 2021, whereas recent Canadian measures are defensive. 
The Kremlin uses tactics including selective omission, projection, and euphemism laundering to present its maximalist Arctic claims as benign while framing allied defensive actions as provocative. The campaign is amplified through Russian diplomatic channels, Telegram, and pro-Kremlin outlets, reflecting a broader strategic goal of weakening allied cohesion and chilling Canadian Arctic policy. A DisinfoWatch report   notes that by framing Russia as restrained, the campaign seeks to normalize its jurisdictional ambitions and discourage deterrence investments, following a recurring Kremlin pattern of "peaceful Russia/militarizing NATO." Source:  DisinfoWatch, Russian MFA Accuses West and Canada of Militarizing The Arctic , Available Online: https://disinfowatch.org/disinfo/russian-mfa-accuses-west-and-canada-of-militarizing-the-arctic/ Top Of Page Pro-Kremlin Actors Use AI and Data Collection to Target Ukraine-EU Relations Pro-Kremlin propagandists have intensified information operations aimed at undermining Ukraine-EU relations and demoralizing Ukrainians. According to a report   by the Delegation of the European Union to Ukraine and the DARE Project, these campaigns use Telegram channels, Facebook groups, and fake news websites to spread false narratives. The fabricated stories include claims that the EU is "prolonging the war," accusations of aggressive policies toward Russia, and false stories about refugee conditions and child trade schemes. The report highlights that pro-Kremlin actors are using sophisticated strategies, including emotional manipulation, AI-generated visuals, and fake media outlets. Regional patterns revealed tailored falsehoods in Kherson, Donetsk, and Odesa, with claims about "combat moths" imported from the EU and the sale of cities to foreign interests. Some campaigns also collected personal data, illustrating a dual strategy of psychological influence and opportunistic exploitation. 
Source:  EEAS, Results of pro-Russian information manipulation and disinformation monitoring targeting Ukraine-EU relations during June – August, 2025 , Available Online: https://www.eeas.europa.eu/delegations/ukraine/results-pro-russian-information-manipulation-and-disinformation-monitoring-targeting-ukraine-eu_en Top Of Page Beijing Combines Cultural Diplomacy with AI-Driven Influence in Europe Concerns are growing over Beijing's disinformation and hybrid influence campaigns across Europe, even as some nations distance themselves diplomatically. A recent Italian Senate conference highlighted how China continues to exert pressure through psychological manipulation, propaganda, and economic coercion, despite Italy’s 2023 withdrawal from the Belt and Road Initiative. As published by the Taipei Times , Chinese influence persists through academic and cultural channels, including Confucius Institutes and the suppression of performances by groups critical of the Chinese Communist Party. The digital dimension of these operations leverages platforms like DeepSeek and AI-driven tools to manipulate public perception and amplify state-controlled messaging. This technological aspect has raised alarms among European governments, which now view China's use of AI and data tracking as a severe national security threat, prompting new measures to strengthen democratic resilience and curb foreign manipulation. Source:  Taipei Times, EU facing increased interference from China , Available Online: https://www.taipeitimes.com/News/editorials/archives/2025/10/26/2003787875 Top Of Page American Fugitive in Moscow Runs AI-Powered Pro-Kremlin Fake News Network John Mark Dougan, a former Florida deputy now based in Moscow, has become a key figure in Russia's digital influence operations, using a self-trained generative AI system to create large volumes of fake news. An investigation from NewsGuard  identifies Dougan as part of the pro-Kremlin influence group Storm-1516. 
His recent campaign involves 141 French-language websites spreading Russian propaganda and false claims aimed at undermining Western democracies. A notable feature of the AI-generated articles is the consistent insertion of exaggerated and irrelevant praise for Russian President Vladimir Putin, regardless of the topic. Evidence from cybersecurity researchers suggests Dougan's AI is programmed with a pro-Russia, anti-West bias, even leaving behind visible AI prompts that instruct it on how to frame content. While Dougan denies responsibility, he has publicly boasted about receiving a Russian state honor for his "work in the information sphere." Source:  NewsGuard, Russian AI Sites Can’t Stop Gushing About Putin , Available Online: https://www.newsguardtech.com/special-reports/ai-driven-john-mark-dougan-pro-kremlin-disinformation-campaign/ Top Of Page Russia Engages in 'LLM Grooming' to Manipulate AI Chatbots Russia has shifted its information warfare tactics to target artificial intelligence, deliberately manipulating large language models (LLMs) through a strategy known as "LLM grooming." This involves flooding the internet with millions of low-quality articles and content from pro-Kremlin websites, including the Pravda network, to ensure AI chatbots reproduce false narratives. The goal is to weaponize AI to spread misleading information, such as fabricated claims about Ukraine's President Zelenskyy. According to analysis by EUvsDisinfo , the campaigns involve multiple actors, including Russian state media, pro-Kremlin influencers, and offshoots of the Internet Research Agency. The broader significance lies in the Kremlin's ability to shape digital information ecosystems, erode trust in AI-generated knowledge, and amplify global security risks as automated disinformation becomes harder to detect and counter, threatening the integrity of online fact-finding. 
Source:  EUvsDisinfo, Large language models: the new battlefield of Russian information warfare , Available Online: https://euvsdisinfo.eu/large-language-models-the-new-battlefield-of-russian-information-warfare/ Top Of Page Climate Action Hindered by Coordinated Disinformation and Greenwashing Campaigns Information manipulation has become one of the most significant obstacles to meaningful climate action, as fossil fuel companies and their allies use influence campaigns to cast doubt on climate science and delay policy responses. These tactics range from outright denial to more insidious strategies like greenwashing, where polluters portray themselves as environmentally responsible while expanding fossil fuel production. Social media algorithms amplify such content, rewarding polarization over accuracy. The growing recognition of this threat has pushed information integrity into the spotlight, with COP30 set to make public trust a central issue for the first time. A Global Witness article   states that while informing people of the fossil fuel industry's deception can increase support for accountability, Big Tech's failure to curb falsehoods continues to erode public understanding. Experts now call for stronger oversight and education, arguing that defending information integrity is inseparable from defending the planet. Source:  Global Witness, What does information integrity have to do with climate? , Available Online: https://globalwitness.org/en/campaigns/digital-threats/what-does-information-integrity-have-to-do-with-climate/ Top Of Page EU-Funded 'Digital Detectives' Initiative Trains Uzbek Journalists to Counter Falsehoods A new initiative in Uzbekistan, the "Digital Detectives" project, aims to strengthen the country's defenses against disinformation and promote media literacy. 
Funded by the European Union and implemented by the Modern Journalism Development Centre, the project has launched its first Training of Trainers session in Tashkent to establish a nationwide network of experts. These trainers will assist journalists and fact-checkers across Uzbekistan in identifying and countering false information more effectively. As published by the EEAS , participants explored key fact-checking strategies, including promise tracking, detecting fake news, and utilizing digital verification tools such as the Wayback Machine. They also discussed the importance of storytelling as a method for strengthening credibility and public trust. By empowering local media professionals, the project represents a proactive effort to create a more resilient information environment and safeguard the public sphere against manipulation. Source:  EEAS, “Digital Detectives” Project Launches First Training of Trainers on Fact-Checking in Uzbekistan , Available Online: https://www.eeas.europa.eu/delegations/uzbekistan/%E2%80%9Cdigital-detectives%E2%80%9D-project-launches-first-training-trainers-fact-checking-uzbekistan_en Top Of Page Europe's Counter-Disinformation Efforts Face External Threats and Internal Resistance Europe's battle against information manipulation has reached a critical turning point, as new and complex challenges undermine progress. Foreign Information Manipulation and Interference (FIMI), fueled by geopolitical conflicts and hybrid warfare, continues to expand, while generative AI has lowered the barriers for malicious actors to produce large-scale propaganda. At the same time, the fight against disinformation is facing growing internal resistance, with some nationalist movements portraying counter-disinformation efforts as censorship, thereby weakening institutional trust. 
A recent article from EU DisinfoLab notes that major digital platforms have also reversed some commitments to content moderation, allowing false narratives to spread more easily. This has created a dual threat from external state-backed propaganda and domestic disengagement. The report concludes that Europe's resilience depends on enforcing regulations, empowering civil society, and achieving strategic digital autonomy. Source: EU DisinfoLab, 'Documenting the setbacks: The new environment for counter-disinformation in Europe and Germany', Available Online: https://www.disinfo.eu/publications/documenting-the-setbacks-the-new-environment-for-counter-disinformation-in-europe-and-germany/ Top Of Page [CRC Glossary] The modern Information Environment is projected only to grow more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page [Download Report] Top Of Page
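The "LLM grooming" item above describes flooding the web with mass-republished, low-quality articles so that chatbots absorb them. One generic way analysts flag such content is near-duplicate detection across domains. The following is a minimal sketch using word shingles and Jaccard similarity, with invented example texts; this illustrates the general technique, not the specific method used by EUvsDisinfo or NewsGuard.

```python
def shingles(text, k=4):
    """Return the set of lowercased word k-grams ("shingles") of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets, from 0.0 to 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented examples: the same planted narrative lightly reworded,
# versus an unrelated article. High pairwise similarity across many
# domains is one signal of mass-republished content.
doc1 = "officials reportedly confirmed the fabricated claim about the program today"
doc2 = "officials reportedly confirmed the fabricated claim about the program yesterday"
doc3 = "the weather service issued a storm warning for the coastal region"

print(jaccard(shingles(doc1), shingles(doc2)))  # high: near-duplicates
print(jaccard(shingles(doc1), shingles(doc3)))  # low: unrelated
```

At scale, exact set comparison is replaced by sketching methods such as MinHash, but the underlying similarity notion is the same.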

  • Tonga Before the Election: Influence and the Information Space

    Background On 20 November 2025, Tongans will head to the polls to directly elect 17 representatives to the Legislative Assembly, while the country’s nobles choose another nine members. The final composition of parliament will include these and up to 4 additional seats determined through the established procedure. [i] As the only constitutional monarchy in the Pacific, Tonga blends democratic governance with deeply rooted traditional structures, where the monarchy retains significant influence over national affairs. Despite its small population of roughly 105,000, Tonga holds strategic importance in the South Pacific. [ii] It sits at the crossroads of a tense China-U.S. rivalry, with Australia and New Zealand playing a key role. Tonga’s strategic location and information environment make it an interesting case study for understanding information flows and cognitive resilience in small island democracies. Influence Vectors Tonga’s internal dynamics and international relations are shaped by a combination of financial dependence, migration trends, regional security cooperation, and diaspora engagement. At the same time, the country’s media landscape has largely shifted to the digital realm [iii] , where outlets face mounting challenges as social media increasingly dominates public discourse. This environment has made Tonga more vulnerable to information disorder [iv] , illustrated by incidents such as deepfake audio clips [v] , fabricated political letters [vi] , and COVID-19 conspiracy theories. Although these cases have largely been domestic and organic rather than coordinated foreign operations, they underscore the country’s vulnerability to information manipulation. Efforts to strengthen resilience are emerging, exemplified by local fact-checkers such as “Supa Mario” [vii], who has gained attention for his debunking work, and by education programs supported by international partners like ASPI–ROI [viii] . 
Nevertheless, systematic monitoring and institutional frameworks to counter information disorder remain scarce.   Economic and Development Assistance Recently, the United States has reduced its direct presence in the Pacific, while Australia, Japan, and New Zealand remain Tonga’s primary security, development, and disaster-response partners. They maintain military and police cooperation programs that provide training, capacity-building, and regional security coordination.   Figure 1 – Development financing by partner, Courtesy of Lowy Institute [ix] Meanwhile, China’s role is increasingly apparent: roughly two-thirds of Tonga’s foreign debt (≈USD 195 million) is owed to Beijing. Loan servicing consumes about 4% of GDP annually [x] , raising concerns about long-term strategic dependency. [xi] Chinese aid projects and infrastructure investments have increased visibly in the run-up to the 2025 elections, including a new agricultural agreement signed in October 2025. [xii] Aid, Physical Support, and On-the-Ground Presence Tonga’s 150th Constitution celebrations, held from 31 October to 4 November 2025, illustrated how external actors employ visible, on-the-ground engagement to assert presence. The Chinese Embassy sponsored the official fireworks display and supported the participation of over 300 members of the Chinese community in the float parade.  Figure 2 – Posts of the Chinese Embassy in Tonga, Courtesy of Facebook  Australia demonstrated its presence through the largest float parade, combining official and community representation to underline partnership and historical connection. Both governments extended these actions to digital platforms, where their embassies documented and circulated images, official statements, and hashtags. This online communication amplified the reach of their physical presence, turning local acts of participation into enduring digital signals of influence and engagement.   
Figure 3 – Posts of the Australian Embassy in Tonga, Courtesy of Facebook   Migration and the Local Economy In recent years, Chinese immigrants have transformed Tonga’s small business landscape. Although consumers benefit from lower prices and greater availability of goods, many local businesses struggle to compete with Chinese-owned shops. Public opinion is therefore divided, with some Tongans expressing concerns over the country’s financial sovereignty. [xiii]   Diaspora Influence Tonga’s diaspora, which is larger [xiv] than its domestic population, plays an outsized role in shaping opinions back home. Communities in Australia, New Zealand, and the U.S. frequently engage in online debates about domestic politics, often injecting or amplifying narratives from afar. In contrast, external actors’ ability to leverage coordinated inauthentic behavior (CIB) is limited. Tonga’s tight-knit social networks and small population size make it harder to utilize sockpuppet accounts and operational assets effectively. In essence, diaspora-based involvement acts as a force multiplier in Tonga’s digital information ecosystem, primarily through Facebook, which reaches over 64% of the population. [xv]   Conclusion Tonga’s 2025 elections will unfold in an information environment inherently different from that of European nations, where foreign information manipulation and interference (FIMI) activities have had a significant impact. Notable examples include the recent elections in Czechia and Moldova, where interference has been attributed to Russia. Ahead of the upcoming election, there are a few key takeaways for stakeholders, particularly Cyfluence Defence practitioners: Although there’s currently no evidence indicating ongoing coordinated FIMI efforts targeting the Pacific nation and its democratic processes, past misinformation incidents reveal nascent vulnerabilities.
The limited analytical and monitoring capacity within Tonga’s media and civil society means potential influence activities could go undetected. Empowering local institutions, including independent investigative journalism, is crucial. Media literacy and cognitive resilience must be seen as strategic assets that are essential to safeguard trust in public institutions and electoral integrity, and to ensure societal cohesion. [Footnotes:] [i]  Inter-Parliamentary Union (IPU), 2025. Tonga – Legislative Assembly (Fale Alea) . [online] Available at: https://data.ipu.org/parliament/TO/TO-LC01/ [ii]  Congressional Research Service, J. G. Tupuola, 2025. Tonga: Background and Issues for Congress . [online] pp. 1-2 Published 11 September 2025. Available at: https://www.congress.gov/crs_external_products/IF/PDF/IF12866/IF12866.3.pdf [iii]  ABC International Development, 2025.  State of the Media: Tonga, 2025 . [online] Published 4 March 2025. Available at:  https://www.abc.net.au/abc-international-development/state-of-the-media-tonga-2025/105005712   [iv]  ABC International Development, T. Kami Enoka & P.’Ulikae’eva Havili, 2023.  Tonga’s Star Fact-Checker Helps Fight COVID-19 Vaccine Misinformation and Government Corruption . [online] Published 14 March 2023; updated 16 March 2023. Available at:  https://www.abc.net.au/abc-international-development/pacmas-tonga-fact-checking/102073118 [v]  Australian Strategic Policy Institute (ASPI), B. Johnson, F. Fakafanua & S. Vikilani, 2024.  As technology distorts information, Pacific governments and media must cooperate . [online] Published 17 July 2024. Available at:  https://www.aspistrategist.org.au/as-technology-distorts-information-pacific-governments-and-media-must-cooperate/#:~:text=In%20Tonga%2C%20we%20have%20also,the%20reputation%20of%20those%20involved   [vi]  Radio New Zealand (RNZ), 2017.  Tonga police investigate letter claiming to be from PM . [online] Published 24 February 2017. 
Available at:  https://www.rnz.co.nz/international/pacific-news/325222/tonga-police-investigate-letter-claiming-to-be-from-pm   [vii]  Ibid. [viii]  Royal Oceania Institute, 2024.  Training Program for Tonga: “Disinformation: Government and Media Challenges” . [online] Published 8 May 2024. Available at:  https://royaloceaniainstitute.org/2024/05/08/training-program-for-tonga-disinformation-government-and-media-challenges/   [ix]  Lowy Institute, 2025. Tonga – Pacific Aid Map . [online] Available at: https://pacificaidmap.lowyinstitute.org/country/tonga/ [x]  Congressional Research Service, J. G. Tupuola, 2025. Tonga: Background and Issues for Congress . [online] pp. 1-2 Published 11 September 2025. Available at: https://www.congress.gov/crs_external_products/IF/PDF/IF12866/IF12866.3.pdf [xi]  Pacific Media Network, A. Vailala, 2025. No debt forgiveness from China, analyst warns as Tonga faces repayment pressure . [online] Published 30 April 2025. Available at: https://pmn.co.nz/read/political/no-debt-forgiveness-from-china-analyst-warns-as-tonga-faces-repayment-pressure [xii]  Radio New Zealand (RNZ), C. Rovoi, 2025. Tonga bets on China deal to modernise farming ahead of general election . [online] Published 30 October 2025. Available at: https://www.rnz.co.nz/international/pacific-news/577307/tonga-bets-on-china-deal-to-modernise-farming-ahead-of-general-election [xiii]  Tonga Independent News, 2025. ‘Trust Is More Important Than Money’: Inside One Chinese Businessman’s Vision for Tonga . [online] Published 14 August 2025. Available at: https://tongaindependent.com/trust-is-more-important-than-money-inside-one-chinese-businessmans-vision-for-tonga/ [xiv]  United Nations, 2022. The Kingdom of Tonga: National Voluntary GCM Review – Implementing the Global Compact for Safe, Orderly and Regular Migration . [online] Published 2022. Available at: https://www.un.org/sites/un2.un.org/files/imrf-tonga.pdf [xv]  DataReportal, n.d. Digital 2024: Tonga . 
[online] Published 2024. Available at: https://datareportal.com/reports/digital-2024-tonga

  • Information Warfare in the Early Stages of the Russia-Ukraine War

    The prelude and opening stages of Russia's 2022 invasion of Ukraine were one of history's most intense periods of hostile cyber and influence activity. Alongside conventional warfare, both states engaged in a sophisticated battle for influence, deploying digital propaganda, psychological operations, and cyberattacks. This study examines the conflict's information dimension from late 2021 to April 2022 via a novel analytical paradigm adapted from strategic marketing and audience segmentation. By focusing on who the target is, when they are susceptible, and how operations are executed, analysts can systematically map cyber, influence, and hybrid (Cyfluence) operations across time and audience, identifying strategic and operational intent, as well as potential cardinal indicators for conflict escalation. Applying this analytical model to the early stages of the Russia-Ukraine Information War provides valuable insights and strategic context from a pivotal moment in the evolution of hybrid warfare. The analysis breaks down the key events and expands on their strategic and operational implications. The lessons drawn from this analysis are relevant for countries in the Southeast Asian and Indo-Pacific region as they grapple with the realization that they too may face a threat similar to the one Ukraine faced. China, for example, is closely following Russia's playbook, coordinating with Russian cyber-influence agencies, and has shown willingness to deploy its own advanced capabilities in the region. For European countries, which are more familiar with Russian doctrines of hybrid warfare, the idea of a future hybrid conflict taking place in their backyard is more immediate. They too might benefit from a new analytical model to better predict, detect, and defend against future hybrid conflicts. [Full Report Below]
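As a purely illustrative sketch of how the who/when/how segmentation could be operationalized, the snippet below tags hypothetical observed operations along the three axes and aggregates methods per conflict phase. All field names and category values are invented for illustration and are not drawn from the study itself.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Operation:
    """One observed cyber/influence event, tagged along the three axes
    the segmentation paradigm asks about: who is targeted, when (conflict
    phase), and how (method). All values here are hypothetical."""
    target: str   # who:  e.g. "domestic", "adversary", "international"
    phase: str    # when: e.g. "pre-invasion", "opening"
    method: str   # how:  e.g. "wiper", "deepfake", "narrative push"

def escalation_profile(ops):
    """Count methods per phase; a shift toward destructive methods within
    a phase is one candidate cardinal indicator of conflict escalation."""
    profile = {}
    for op in ops:
        profile.setdefault(op.phase, Counter())[op.method] += 1
    return profile

ops = [
    Operation("adversary", "pre-invasion", "narrative push"),
    Operation("adversary", "pre-invasion", "wiper"),
    Operation("adversary", "opening", "wiper"),
    Operation("adversary", "opening", "wiper"),
]
print(escalation_profile(ops))
```

The value of such a structure is less in the code than in forcing each observed event to be classified consistently, so that shifts across time and audience become countable rather than anecdotal.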

  • Cyfluence: The Latest Frontier of Cognitive Warfare

    The term 'Cyfluence' refers to the full spectrum of integrated cyber–influence operations that combine technical and informational tactics within a unified framework. It encompasses both cyberattacks conducted to shape perceptions or behavior and influence campaigns designed to facilitate or enhance cyber operations. In practice, Cyfluence represents the convergence of technical infiltration, sabotage, data exfiltration, information manipulation, and narrative campaigns - all embedded within mutually reinforcing, influence-centered kill chains. It is the comprehensive expression of how power is applied and projected across today’s interconnected information environments. In this primer, we present an updated definition of Cyfluence, reflecting the latest evolutions of the concept and the increasing convergence of cyber and cognitive domains. [ Download PDF Here ]

  • Not All-Powerful: A Granular Perspective on Influence Networks

    In many security policy debates, hostile influence campaigns by authoritarian states like China are often portrayed as hyper-efficient, strategically orchestrated, and almost omnipotent. The report "Baybridge – Anatomy of a Chinese Information Influence Ecosystem," published by the French military research center IRSEM in October 2025, challenges this general perception. i The notion of a uniformly centralized and effective Chinese disinformation apparatus is inaccurate; no such unified structure exists. Instead, a diverse range of actors operate within this ecosystem, including private, commercially driven entities that act on behalf of the state or maintain links to state resources, yet often operate without strategic coherence, professional execution, or operational efficiency. To assess influence operations strategically, the report calls for a deeper understanding of the specific actors, structures, interests, and operational logics involved, using a specific analytical approach. ii   The Actor-Specific, Granular Approach  The actor-specific, granular analytical approach views digital influence campaigns not as monolithic, centrally directed operations but as complex networks of concrete actors with varying interests, capabilities, and motivations. At its core, the approach asks: Who is actually acting, within what organizational framework, using what tools, and to what end? It focuses on digital assets such as websites, social media profiles, and technical infrastructures, examining their connections, modes of control, and content strategy. This allows for the identification of the individuals, companies, or organizations involved and their actual roles and motives within the broader campaign.  The approach follows a multi-step process: first, the network structure is mapped and technical linkages are revealed. Next, digital traces are attributed to real-world actors, and their interests are analyzed.
Simultaneously, the content is assessed for coherence, professionalism, and resonance with target audiences. Finally, the campaign's actual impact is evaluated: does it exert meaningful strategic influence, or is it merely an exercise in high-volume, low-impact output?

Case Study: The Network Around Haimai and Haixun

Using this approach, the Baybridge report examines a Chinese digital influence ecosystem centered on two companies: Shenzhen Haimai Yunxiang Media Co., Ltd. (Haimai) and Shanghai Haixun Technology Co., Ltd. (Haixun). Both market PR and media packages, run multilingual websites with seemingly journalistic content, and share identical infrastructure. The report's findings imply that this is not a centrally planned influence operation but a network that functions as a commercial system with propagandistic features.

Figure 1 – Infrastructure Overview,[iii] Courtesy of IRSEM

At the core are Wu Yanni, co-founder of Haimai and member of Shenzhen's Municipal Party Committee Propaganda apparatus, and Zhu Haisong, owner of Haixun and member of Guangdong's Propaganda Department.

Figure 2 – Activities of Wu Yanni & Zhu Haisong in the public & private sectors,[iv] Courtesy of IRSEM

The IRSEM report concludes that they are not strategic propagandists, but rather local entrepreneurs leveraging political ties for commercial gain. Their motivations appear to be primarily financial, including contract acquisition, rent-seeking, and fulfilling bureaucratic performance metrics such as article volume and reach.

Why the "Baybridge" Network Is Inefficient

Despite significant technical resources, the network exhibits major deficiencies in technical, structural, and content terms. Much of the content appears machine-translated, is riddled with character-encoding issues, and lacks editorial oversight. The result is an incoherent visual and linguistic output that undermines credibility and consistency.
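The first step of the granular approach described earlier, mapping a network's structure by linking sites through shared technical infrastructure, can be illustrated with a minimal sketch. All domain names and infrastructure values below are invented for illustration and are not taken from the IRSEM report; real analyses would use observed hosting, registration, and analytics data.

```python
from collections import defaultdict

def cluster_by_shared_infrastructure(assets):
    """Union-find clustering: two sites join one cluster when they
    share any infrastructure attribute value (IP, registrant, etc.)."""
    parent = {domain: domain for domain in assets}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index each infrastructure value to the domains that use it
    value_to_domains = defaultdict(list)
    for domain, attrs in assets.items():
        for value in attrs.values():
            value_to_domains[value].append(domain)

    # Domains sharing any value belong to the same cluster
    for domains in value_to_domains.values():
        for other in domains[1:]:
            union(domains[0], other)

    clusters = defaultdict(set)
    for domain in assets:
        clusters[find(domain)].add(domain)
    return sorted(sorted(c) for c in clusters.values())

# Hypothetical input: two sites share a hosting IP, two share a
# registrant email, so three of the four collapse into one network.
assets = {
    "news-site-a.example": {"ip": "203.0.113.5", "registrant": "pr@agency.example"},
    "news-site-b.example": {"ip": "203.0.113.5", "registrant": "other@mail.example"},
    "news-site-c.example": {"ip": "198.51.100.9", "registrant": "pr@agency.example"},
    "unrelated.example":   {"ip": "192.0.2.77",  "registrant": "solo@mail.example"},
}

print(cluster_by_shared_infrastructure(assets))
```

The output groups the three connected sites into one cluster and leaves the unrelated site alone, which is the kind of linkage evidence the subsequent attribution and interest-analysis steps would then build on.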
An identified core flaw lies in the coexistence of contradictory narratives: Chinese content promotes "Positive Energy," a state-endorsed messaging style that emphasizes harmony, optimism, and trust, while the same platforms often disseminate aggressive, conflict-driven Russian rhetoric critical of Western democracies.[v] This juxtaposition, described in the report as a "narrative cacophony," creates tonal contradictions that cancel each other out. The incoherence is particularly damaging during moments of symbolic significance for China, such as diplomatic visits, when simultaneous aggressive Russian-led messaging undercuts Beijing's intended message.[vi]

Conclusion

The IRSEM report demonstrates that Chinese information operations are neither uniformly structured nor consistently effective. The "Baybridge" case study highlights a particular model in which private-sector actors with close ties to the state carry out influence operations on behalf of government entities. However, their activities are primarily shaped by commercial incentives and bureaucratic performance indicators. Within this logic, quantitative metrics such as content volume, geographic reach, and language variation are prioritized, while actual strategic impact on target audiences is secondary.

This setup can lead to inefficient campaigns: technically elaborate but strategically incoherent and lacking in persuasive quality. The core issue lies not in the absence of central coordination, but in the disconnect between political objectives, operational execution, and content effectiveness. These shortcomings are not unique to China, but they manifest in distinctive ways within authoritarian systems.

Rather than assuming a centralized and uniformly professional influence apparatus, the report suggests an actor-specific, granular analytical approach that enables differentiation.
By mapping concrete actors, structures, and operational logics, it becomes possible to evaluate the actual relevance of an influence operation and to allocate security resources more effectively and proportionally.[vii]

[Footnotes:]
[i] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online], pp. 78-79. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[ii] Ibid., p. 79.
[iii] Ibid., p. 18.
[iv] Ibid., p. 42.
[v] Ibid., pp. 56-61.
[vi] Ibid., pp. 69-70.
[vii] Ibid., p. 79.

  • CRC Weekly: Cyber-based hostile influence campaigns 13th - 19th October 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Highlights]

A sophisticated narrative laundering operation identified by tracing a fabricated news story's journey from a fringe website through Russian state media and AI-powered search results to the U.S. Congress. - NewsGuard's Reality Check

An investigation unmasks a sprawling pro-Kremlin influence network of 139 fake news websites in France, using AI-generated content and coordinated inauthentic behavior to manipulate public discourse. - NewsGuard

Taiwan reports a significant escalation in Chinese 'cyfluence' operations, where millions of daily cyber intrusions are strategically combined with AI-driven disinformation campaigns to undermine state security and public trust. - The Record

An analysis reveals how Chinese state and private actors are using sophisticated AI tools to generate fake social media profiles for influence operations targeting India's democracy. - NDTV

A detailed report outlines Iran's campaigns in Sweden, which combine traditional espionage with cyber operations like malware-laden apps and spear-phishing. - Eurasia Review

Testing of OpenAI's Sora confirms its potential for creating synthetic propaganda, successfully producing realistic videos that advanced false narratives in 80% of test cases. - NewsGuard

NATO's top information officer issues a stark warning that 'hybrid warfare has begun,' citing a combination of cyberattacks, disinformation campaigns, and physical disruptions. - Euronews

French officials express alarm over the growing 'porosity' between the U.S.
'MAGA sphere' and Kremlin-aligned influence channels. - Le Monde

[Weekly Review]

1. From Fringe Site to US Congress: Anatomy of a Kremlin Narrative Laundering Operation
2. AI-Powered Disinformation: Uncovering a Pro-Kremlin Network of 139 Fake French News Sites
3. Estonian Politician Weaponizes Satire in Pro-Kremlin Influence Campaign
4. Kremlin Deploys Disinformation to Foment Panic with 'Kyiv Evacuation' Hoax
5. NATO Warns of China's Technologically Advanced FIMI Threat
6. Taiwan Confronts Chinese 'Cyfluence' as Cyberattacks and Disinformation Surge
7. Analysis: China's Use of AI and Private Firms Poses Influence Threat to India
8. Iran's Hybrid Threat in Sweden Combines Cyber Espionage with Dissident Targeting
9. Sora's Potential for Synthetic Propaganda Highlighted in New Analysis
10. NATO Official: Hybrid Warfare Against Europe 'Has Already Begun'
11. Investigation Reveals UK Far-Right Facebook Groups as 'Engine of Radicalization'
12. French authorities fear mounting 'MAGA sphere' intrusions into domestic politics

From Fringe Site to US Congress: Anatomy of a Kremlin Narrative Laundering Operation

A fabricated story alleging corruption within Ukrainian President Volodymyr Zelensky's inner circle illustrates a textbook case of narrative laundering. A report by NewsGuard's Reality Check traces the claim's path from a fringe, pro-Russian Turkish website to amplification by Russian state media like TASS and Sputnik. The narrative gained a veneer of credibility after being republished by smaller websites and appearing on Microsoft's MSN news platform, despite a complete lack of evidence. The digital ecosystem played a crucial role in the operation's next phase, as screenshots and AI-generated summaries on Microsoft's Bing search engine facilitated the story's spread across social media. This hostile influence campaign achieved a significant milestone when U.S. Congresswoman Anna Paulina Luna shared the claim, citing MSN as her source.
Russian state outlets then completed the propaganda feedback loop by citing the American lawmaker's statements as external validation of the original falsehood, demonstrating how contrived narratives can be pushed into mainstream discourse to achieve strategic objectives.

Source: NewsGuard's Reality Check, How Russia Laundered a Lie About Ukraine Through Congress, Available Online: https://www.newsguardrealitycheck.com/p/how-russia-laundered-a-lie-about-ukraine-through-congress

AI-Powered Disinformation: Uncovering a Pro-Kremlin Network of 139 Fake French News Sites

A network of 139 French-language websites with ties to Russia is disseminating false and misleading claims, often using AI-generated content to populate its pages. According to an article from NewsGuard, the operation is believed to be managed by John Mark Dougan, a former U.S. Marine who fled to Russia, with alleged support from Russian military intelligence (GRU). These fake websites were established between February and August 2025, using fabricated ownership details to masquerade as legitimate French media outlets. This coordinated inauthentic behavior is part of a broader Russian information operation, designated Storm-1516, which has also targeted the United States and Germany. The campaign's tactics include impersonating real journalists and spreading fabricated narratives on high-profile topics to manipulate public discourse. The operation demonstrates an evolving approach to digital propaganda that leverages a distributed network of fake platforms to generate millions of views and influence public perception on key political issues.
Source: NewsGuard, NewsGuard Rates Network of 139 Fake French News Websites with Ties to the Kremlin, Available Online: https://www.newsguardtech.com/press/newsguard-rates-network-of-139-fake-french-news-websites-with-ties-to-the-kremlin/

Estonian Politician Weaponizes Satire in Pro-Kremlin Hostile Influence Campaign

In Estonia, a pro-Kremlin politician has been repurposing satirical Russian content to spread malinformation among the nation's Russian-speaking population. A report from the Atlantic Council's DFRLab identifies Genady Afanasyev, a candidate for the KOOS party, as the central actor in this hostile influence campaign. Afanasyev adapts stories from the Russian satirical outlet Panorama.pub by localizing them to Estonian contexts, altering names and institutions to make the fabricated stories appear as factual local news. This tactic exploits gaps in media literacy by mixing political messaging with humor to cultivate anti-government sentiment and normalize pro-Kremlin narratives. The content is primarily disseminated through KOOS-affiliated Facebook groups but also spreads across VKontakte (VK), TikTok, Telegram, and X, extending its reach within the target audience. The campaign highlights how foreign satirical content can be adapted into a targeted tool for domestic political influence, raising concerns about election integrity and the manipulation of specific linguistic communities.

Source: DFRLab, Pro-Kremlin politician weaponizes satire to engage Russian population in Estonia ahead of local elections, Available Online: https://dfrlab.org/2025/10/16/pro-kremlin-politician-weaponizes-satire-to-engage-russian-population-in-estonia-ahead-of-local-elections/

Kremlin Deploys Disinformation to Foment Panic with 'Kyiv Evacuation' Hoax

Pro-Kremlin channels have been circulating a disinformation narrative claiming the West is urging an evacuation of Kyiv due to blackouts caused by Russian strikes.
This information operation, detailed in an article by EUvsDisinfo, aims to exaggerate Ukraine's energy vulnerabilities and undermine public confidence in the Ukrainian government. By propagating these falsehoods through state-linked media and messaging platforms, the campaign seeks to distort perceptions of the conflict, reduce international support, and create the impression that Ukraine cannot withstand ongoing Russian attacks. In reality, neither Ukraine nor its allies have made any such calls for evacuation. Ukrainian authorities have maintained contingency plans since 2022 and continue to demonstrate resilience against energy disruptions. EU officials have reaffirmed their full support, mobilizing hundreds of millions of euros for energy aid and civil protection. The campaign exemplifies the Kremlin's persistent use of disinformation to generate fear and uncertainty, though international support for Ukraine remains strong.

Source: EUvsDisinfo, DISINFO: The West calls on Ukraine to evacuate Kyiv amid blackouts, Available Online: https://euvsdisinfo.eu/report/the-west-calls-on-ukraine-to-evacuate-kyiv-amid-blackouts/

NATO Warns of China's Technologically Advanced FIMI Threat

China has significantly intensified its disinformation campaigns against NATO members since the COVID-19 pandemic, employing strategies designed to destabilize and weaken Western countries. According to a NATO report published by the Global Influence Operations Report (GIOR), these operations leverage advanced technologies, social media platforms like TikTok, and cooperation with Russia to amplify pro-Chinese narratives. The campaigns aim to suppress criticism of the Chinese Communist Party and infiltrate local media ecosystems, substantially increasing the speed and reach of Beijing's information operations.
The analysis emphasizes that these activities constitute a form of Foreign Information Manipulation and Interference (FIMI) that threatens Euro-Atlantic security, public trust in democratic institutions, and overall stability. By mapping key actors and tracing the tactical evolution of these campaigns, the report underscores the urgent need for coordinated countermeasures among allies to protect their populations, defend democratic processes, and mitigate the impact of Beijing's hostile influence activities.

Source: Global Influence Operations Report, NATO Report on Chinese Disinformation Reveals Escalating Threats, Available Online: https://www.global-influence-ops.com/china-disinformation-nato-report-global-influence-operations/

Taiwan Confronts Chinese 'Cyfluence' as Cyberattacks and Disinformation Surge

Taiwan's National Security Bureau (NSB) has reported a significant increase in cyberattacks and coordinated disinformation campaigns from China, aimed at undermining public trust and creating societal divisions. An article in The Record states that government networks faced an average of 2.8 million intrusions per day in 2025, a 17 percent annual increase, targeting critical infrastructure. Beijing's strategy represents a form of Cyfluence, combining these cyber intrusions with information warfare. The campaigns employ state media, an "online troll army" of fake users, and AI-generated content to spread fabricated narratives attacking the Taiwanese government and promoting pro-China messaging. The NSB report identified over 10,000 suspicious social media accounts distributing more than 1.5 million disinformation posts. This state-level strategy involves military, civilian, and private-sector hackers, with cybersecurity researchers linking activity to actors like TA415. These hybrid operations are designed to manipulate online discourse and shape public perception ahead of Taiwan's 2026 local elections.
Source: The Record, Taiwan reports surge in Chinese cyber activity and disinformation efforts, Available Online: https://therecord.media/taiwan-nsb-report-china-surge-cyberattacks-influence-operations

Analysis: China's Use of AI and Private Firms Poses Influence Threat to India

China is deploying sophisticated global influence operations that leverage disinformation, AI-generated content, and social media manipulation to polarize societies and exploit divisions within democratic systems. An opinion article published by NDTV highlights the use of Chinese state institutions and private entities like GoLaxy, which run campaigns using AI tools to generate realistic social media profiles and fabricate narratives targeting individuals in India, the U.S., and elsewhere. These operations also enlist academics, media figures, and influencers to amplify messaging and reach specific audiences. For India, the campaigns risk fueling domestic polarization, undermining democratic processes, and exerting strategic influence over regional geopolitics. The analysis emphasizes the need for India to develop proactive countermeasures, including AI-focused digital forensics, robust legal frameworks, and dedicated counterespionage strategies. As China continues to exploit the information environment, vigilance is required to protect India's domestic stability and strategic interests.

Source: NDTV, What Ashley Tellis 'Spying' Allegation Should Tell India About Chinese 'Influence Ops', Available Online: https://www.ndtv.com/opinion/what-ashley-tellis-arrest-should-tell-india-about-chinese-influence-ops-9473545

Iran's Hybrid Threat in Sweden Combines Cyber Espionage with Dissident Targeting

The Islamic Republic of Iran has conducted extensive intelligence, cyber, and influence operations in Sweden targeting dissidents, Jewish communities, and Israeli interests.
A recent analysis in Eurasia Review details how these activities are part of a broader hostile campaign to advance Tehran's geopolitical objectives. The operations employ a range of tactics, including cyber espionage through malware-laden apps and spear-phishing campaigns, assassination plots, and the infiltration of academic institutions. Iran also exploits local criminal networks and religious institutions to carry out surveillance, intimidation, and influence activities aimed at silencing opposition and evading international sanctions. These operations reveal significant vulnerabilities in Sweden's cyber defenses and immigration vetting processes. By coordinating with Russia and leveraging criminal proxies, Iran's activities threaten not only targeted communities but also the stability of Swedish society and regional security, prompting calls for more decisive countermeasures.

Source: Eurasia Review, A Growing Security Threat: Iranian Intelligence Operations In Scandinavia (Part Two: Sweden) – Analysis, Available Online: https://www.eurasiareview.com/27092025-a-growing-security-threat-iranian-intelligence-operations-in-scandinavia-part-two-sweden-analysis/

Sora's Potential for Synthetic Propaganda Highlighted in New Analysis

OpenAI's new text-to-video generator, Sora, produced realistic videos advancing false claims in 80% of test cases, including several narratives originating from Russian disinformation operations. A report from NewsGuard found that the tool allows users to create synthetic propaganda with minimal effort, enabling hostile actors to rapidly amplify misleading narratives. The analysis raises concerns about the proliferation of high-quality manipulated media and the erosion of trust in authentic content. While OpenAI has implemented guardrails such as watermarking and C2PA metadata, the investigation found these measures can be circumvented, allowing generated videos to appear authentic to unsuspecting viewers.
Sora's accessibility and speed significantly lower the barrier for creating convincing fabricated content, which could be weaponized in large-scale information operations. The findings underscore the broader implications for media integrity and the challenge of countering AI-driven falsehoods in contested information environments.

Source: NewsGuard, OpenAI's Sora: When Seeing Should Not Be Believing, Available Online: https://www.newsguardtech.com/special-reports/sora-report/

NATO Official: Hybrid Warfare Against Europe 'Has Already Begun'

Hybrid warfare, combining cyberattacks, disinformation campaigns, and physical disruptions, is already underway in Europe, with Russia suspected as a key actor. In an article from Euronews, NATO's first Chief Information Officer, Manfred Boudreaux-Dehmer, warned that recent incidents like unidentified drones forcing airport shutdowns are part of a broader strategy to disrupt daily life and weaken public morale. These non-kinetic tactics are designed to exploit digital and psychological vulnerabilities within NATO member states. Boudreaux-Dehmer noted that the Alliance is enhancing its cyber resilience through a new defense center in Belgium and increased coordination among its 32 members. He described the current environment as a constant technological and informational race between adversaries and defenders. The growing use of disinformation and other soft warfare methods highlights a strategic shift toward battles over public perception and trust, making collaboration with the private sector and academia critical for Alliance security.
Source: Euronews, Hybrid warfare has begun, senior NATO official tells Euronews, Available Online: https://www.euronews.com/2025/10/15/hybrid-warfare-has-begun-senior-nato-official-tells-euronews

Investigation Reveals UK Far-Right Facebook Groups as 'Engine of Radicalization'

A network of far-right Facebook groups in the United Kingdom is exposing hundreds of thousands of members to racist language, conspiracy theories, and extremist disinformation. An investigation by The Guardian describes these online spaces as an "engine of radicalization." The analysis of over 51,000 posts across three large public groups revealed the widespread promotion of anti-immigration tropes and dehumanizing rhetoric. A key finding is that these groups are often managed by older, otherwise ordinary Facebook users, who moderate content and disseminate disinformation across the network. This dynamic leverages peer-to-peer trust, making users more likely to perceive the content as credible compared to institutional sources. Experts warn that such online ecosystems, amplified by platform algorithms, can accelerate radicalization, a threat potentially magnified by emerging technologies like deepfakes and automated bots. Despite a review, Meta found the groups did not violate its policies, highlighting ongoing challenges in moderating extremist content at scale.

Source: The Guardian, Far-right Facebook groups are engine of radicalisation in UK, data investigation suggests, Available Online: https://www.theguardian.com/world/2025/sep/28/far-right-facebook-groups-are-engine-of-radicalisation-in-uk-data-investigation-suggests

French authorities fear mounting 'MAGA sphere' intrusions into domestic politics

French authorities are increasingly concerned by the expanding influence of the American far-right "MAGA sphere" and its convergence with Russian disinformation networks targeting Europe.
Le Monde reports that this concern grew after Elon Musk amplified a claim by Telegram's founder that French intelligence attempted to censor certain accounts, an allegation officials viewed as pro-Russian propaganda. In response, France's Foreign Ministry launched an X account to counter such online falsehoods. A French official described the phenomenon as a "porosity" between U.S. far-right and Kremlin-aligned influence channels, noting that narratives on migration, freedom of expression, and the war in Ukraine spread rapidly across these ecosystems. The French government now views the MAGA-aligned media sphere, including outlets like Breitbart News and platforms like X, as a growing source of foreign information manipulation and interference that could be used to sway upcoming French elections.

Source: Le Monde, French authorities fear mounting 'MAGA sphere' intrusions into domestic politics, Available Online: https://www.lemonde.fr/en/international/article/2025/10/14/french-authorities-fear-mounting-maga-sphere-intrusions-into-domestic-politics_6746437_4.html

[CRC Glossary]

The modern information environment is projected to continue growing in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus.
We encourage you to engage with this initiative; contributions are welcome via the CRC website.

[Download Report]

  • CRC Spotlight: Ride-Hailing Apps as Vehicles of Foreign Solidarity and Potential Influence Operations

    In August and September 2025, a series of civil and political upheavals, primarily in Asian countries, shocked regional observers and elites. Viral images featuring the 'One Piece' pirate flag, adopted as a symbol by protestors, made front-page news, and social media was soon flooded with cross-country messages of support and solidarity.

Interestingly, a key characteristic of this wave of protests was the role played by popular ride-hailing and delivery apps, as well as the 'gig economy' workers who rely on them. Platform users became central to the movement's core narratives while being supplied in real time by supportive netizens.

In this CRC Spotlight article, we examine the potential operational implications of this development: how commercial apps can serve as channels for on-the-ground support, and how they might represent a new vector for influence operations. The platforms and their users are already vulnerable to exploitation, with active "Fraud-as-a-Service" networks using tactics like account takeover (ATO) and location spoofing for financial gain.

Although this wave of protests appears to be organic, existing Tactics, Techniques, and Procedures (TTPs) could easily be repurposed from financial fraud to political interference, such as astroturfing support for unrest. This emerging threat is amplified by the difficulty of attribution inherent to the spontaneous, grassroots nature of platform-based aid.

With gig-economy platforms becoming de facto civic infrastructure worldwide, their potential for malign socio-political exploitation is outpacing the regulatory frameworks needed to mitigate the risks. Read the full report below for in-depth analysis. [ Download Full Report here ]

bottom of page