

  • CRC Weekly: Cyber-based hostile influence campaigns 24th-30th November 2025

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]
[Introduction]
[Report Highlights]
[Social Media Platforms]
New location feature on X 'weaponized' to spread misinformation
[State Actors]
Russia
Russian Disinformation Targeting Georgia
Russia Escalates Coordinated Disinformation Campaign Ahead of Armenia's 2026 Elections
[The War in Ukraine]
Kremlin Attacks Ukraine Through Hack-and-Leak Narratives
Disinformation Surrounding Colombian Volunteers in Ukraine
Kremlin Fake Peace Rhetoric About the War with Ukraine
[AI Related Articles]
Fake AI Creates "Digital Reincarnations"
Disinformation in Political Campaigns Beyond AI
[General Reports]
Influential 'U.S.' X Accounts Spread False Claims from Abroad
Foreign Disinformation During the 2025 Czech Parliamentary Elections
Disinformation as a Strategic Threat During COVID-19
Global Poll Shows Disinformation Among Top Security Fears as U.S. Influence Declines
[Appendix - Frameworks to Counter Disinformation]
Addressing Disinformation Through Media Literacy
[CRC Glossary]

[Report Highlights]
· According to an article by ABC, Elon Musk's social platform X introduced a global location-display feature intended to verify account authenticity and enhance platform integrity. Instead, the tool quickly generated confusion, inaccuracies, and opportunities for misinformation.
· An article published by DW states that Russia is intensifying a coordinated disinformation campaign ahead of Armenia's 2026 elections, using AI-generated content, bot networks, and impersonated media outlets to undermine Prime Minister Nikol Pashinyan and discredit Armenia's Western orientation.
· As reported by DFR Lab, in August and September, pro-Russia media promoted two coordinated disinformation campaigns aimed at weakening Western support for Ukraine.
· As stated in a Stop Fake report, since Russia invaded Ukraine in 2022, Kyiv has actively sought foreign volunteers, with a notable surge from Latin America, especially Colombia.
· According to a report by the Institute for Strategic Dialogue (ISD), the 2025 Czech parliamentary elections unfolded amid an increasingly complex information environment shaped by long-standing foreign influence efforts, particularly from Russian-linked networks.

[Social Media Platforms]

New location feature on X 'weaponized' to spread misinformation

According to an ABC article, Elon Musk's social platform X introduced a global location-display feature intended to verify account authenticity and enhance platform integrity. Instead, the tool quickly generated confusion, inaccuracies, and opportunities for misinformation. Experts warned that the feature amounted to "weaponized decontextualization," as accounts were assigned incorrect countries due to VPN use, outdated data, or technical errors. These inaccuracies were quickly exploited to question the legitimacy of users, including journalists reporting on sensitive conflicts. Several prominent accounts, including those of Australian media outlets, government institutions, and academics, were mislabeled, further eroding trust in the platform. The feature not only failed to improve transparency but also breached user trust by publicly revealing information that many did not expect to be shared.
While aggregated location signals may have some limited value in detecting coordinated inauthentic behavior, the tool is unreliable for assessing individual accounts. Moreover, NewsGuard's Reality Check report showed that the feature has already triggered a wave of deliberate misuse, with partisan X accounts fabricating screenshots to falsely portray political figures and commentators as operating from foreign countries. NewsGuard identified multiple falsified images circulating widely, including a fake screenshot suggesting U.S. commentator Nick Fuentes was based in Nigeria and another claiming U.K. Prime Minister Keir Starmer's account originated in Ukraine. In both cases, users misrepresented impostor accounts or manufactured location data to fuel political narratives.

Source: ABC, NewsGuard, 'New location feature on Elon Musk's X "weaponised" to spread misinformation'. Available Online: https://www.abc.net.au/news/2025-11-25/social-media-x-location-feature-user-trust/106045476; https://www.newsguardrealitycheck.com/i/179868003/partisan-x-accounts-misuse-new-x-feature-to-advance-false-claims-about-user-location

[State Actors]

Russia

Russian Disinformation Targeting Georgia

According to an article by DFR Lab, sanctioned Russian propaganda networks are amplifying disinformation to undermine Georgia's pro-democracy movement and legitimize anti-Western narratives. As protests against the increasingly authoritarian government continue, websites in English, French, and Polish circulate claims that the West is plotting coups, engineering unrest, or manipulating Georgia's political future. These narratives mirror Kremlin talking points and echo the ruling Georgian Dream party's rhetoric. A key disinformation hub is The Intel Drop, which republishes content from Russian intelligence-linked outlets such as New Eastern Outlook and South Front.
Its articles often recycle false claims that Georgian elections are Western provocations, that Ukrainian operatives smuggle explosives to destabilize the country, or that past uprisings like the Rose Revolution and Maidan were CIA-led coups. These stories distort isolated events, dismiss independent election observations, and portray the EU and US as hostile actors attempting to drag Georgia into conflicts with Russia. The same narratives are spread through mirror sites like newsnet.fr in France and Canal666 in Poland, both of which systematically launder material from sanctioned Russian sources.

Source: DFR Lab, 'Suspicious websites amplify narratives from sanctioned Russian entities targeting Georgia'. Available Online: https://dfrlab.org/2025/11/28/suspicious-websites-amplify-narratives-from-sanctioned-russian-entities-targeting-georgia/

Russia Escalates Coordinated Disinformation Campaign Ahead of Armenia's 2026 Elections

An article published by DW states that Russia is intensifying a coordinated disinformation campaign to undermine Armenia's Western-leaning government ahead of the June 2026 parliamentary election. Experts report a sharp rise in anti-government narratives targeting Prime Minister Nikol Pashinyan, characterized by greater organization, faster circulation, and more precise targeting than in previous years. These efforts include AI-generated photos, audio, and deepfakes, appearing in Armenia's information space for the first time, with the overarching goal of eroding public trust in Armenian institutions, discrediting the country's Western partnerships, and opening political space for more Kremlin-aligned actors. The campaign integrates multiple tactics and actors.
Bot networks such as Matryoshka, previously active in other countries, have begun distributing AI-generated videos mimicking Western media and accusing Pashinyan of destroying Armenia’s cultural foundations and promoting “non-traditional values.” Russian and pro-Russian actors are also employing “doppelganger” techniques, creating websites that imitate legitimate outlets to spread fabricated scandals and corruption stories. One such operation pushed a false claim about US-funded laboratories conducting secret experiments on Armenian civilians, content traced to the Russian Foundation to Battle Injustice, widely viewed as a Kremlin-aligned disinformation front. These narratives are reinforced by local pro-Russian voices as well as Azerbaijani and Turkish actors, forming a broader hostile information ecosystem. Armenian officials describe this pressure as a form of hybrid warfare, noting that Russian commentators regularly call for political change in Armenia and that such interference is expected to intensify as the election approaches. While Moscow denies involvement, researchers emphasize that the tactics now deployed in Armenia mirror those used by Kremlin-linked actors in Georgia and Moldova, including the rapid creation of impersonating websites and the coordinated amplification of hostile messaging. As Armenia’s relations with Russia have deteriorated since Pashinyan’s rise in 2018 and Russia’s perceived failure to meet security obligations in 2021–2022, the Kremlin’s disinformation efforts appear focused on reshaping Armenia’s political orientation ahead of 2026. 
Source: DW, 'Russia's disinformation campaign in Armenia gains momentum'. Available Online: https://www.dw.com/en/russias-disinformation-campaign-in-armenia-gains-momentum/a-74868051

[The War in Ukraine]

Kremlin Attacks Ukraine Through Hack-and-Leak Narratives

As reported by DFR Lab, in August and September, pro-Russia media promoted two coordinated disinformation campaigns aimed at weakening Western support for Ukraine. Both used alleged "hack and leaks" to create the illusion of insider access to sensitive data. The first claimed hacktivists had obtained Ukrainian military records proving 1.7 million soldiers were dead. Launched on Telegram and reinforced by Kremlin-aligned outlets like MASH and RIA Novosti, the narrative used unverifiable documents and sensational images to frame Ukraine as militarily devastated. The second campaign, originating from a fringe Italian outlet, alleged that Western states planned to divide Ukraine and seize control of its territory under the guise of military aid, a claim later amplified by Russian media and repeated by Hungary's prime minister to lend it legitimacy. These narratives exploited information gaps and wartime uncertainty to question the value of Western aid and Ukraine's resilience. Casualty figures, rarely disclosed by Kyiv and inconsistently estimated by analysts, became a key vulnerability, allowing propaganda accounts to present fabricated numbers as "leaked truth." The messaging portrayed Ukraine as doomed, corrupt, and dependent, arguing that Western efforts were pointless. The distribution channels underscored how disinformation spreads rapidly without formal coordination. Telegram was the primary vector, with thousands of small accounts pushing the narratives and millions of views accumulated in under a week. The stories then jumped to TikTok, Facebook, and fringe media, often through supposedly personal accounts that exclusively repost pro-Kremlin content.
Amplification by Russian state media, pro-war bloggers, and sympathetic political figures helped move the claims from anonymous sources into mainstream discourse.

Source: DFR Lab, 'Pro-Kremlin actors seed narratives to undermine Western military aid to Ukraine'. Available Online: https://dfrlab.org/2025/11/28/pro-kremlin-actors-seed-narratives-to-undermine-western-military-aid-to-ukraine/

Disinformation Surrounding Colombian Volunteers in Ukraine

As stated in a Stop Fake report, since Russia invaded Ukraine in 2022, Kyiv has actively sought foreign volunteers, with a notable surge from Latin America, especially Colombia. The Ukrainian Ministry of Defense promotes recruitment by sharing testimonies and success stories, but these efforts are heavily targeted by Kremlin-led disinformation. Russia spreads false or decontextualized content to discourage volunteers and undermine trust in Ukraine, using AI-generated imagery, fabricated media logos, and exaggerated claims about casualties or organ trafficking. These tactics often blend real incidents with false narratives, making them appear more credible. Colombian volunteers are at the center of these campaigns. Disinformers circulate stories of mistreatment, unpaid soldiers, and claims that Ukraine refuses to return bodies, manipulating tragedies like that of the missing soldier Michael Ramírez. AI-generated videos portraying forced conscription, fake recruitment posters offering unrealistic benefits, or offensive claims about "inferior races" aim to inflame public opinion and reduce volunteer enlistment. Even Colombian government warnings are picked up by Russian channels to amplify fear and distrust. The broader objective is to shape a perception of Ukraine as abusive, corrupt, and militarily failing.
Source: Stop Fake, 'Russian disinformation campaigns about foreign volunteers in the Ukrainian Armed Forces: fakes about losses and the "black market for organs"'. Available Online: https://www.stopfake.org/ru/massovaya-gibel-inostrannyh-dobrovoltsev-i-ih-organy-na-chyornom-rynke-rossijskie-dezinformatsionnye-kampanii-protiv-mezhdunarodnoj-mobilizatsii-v-vsu/

Kremlin Fake Peace Rhetoric About the War with Ukraine

According to a report by EU vs. Disinfo, Russia has repeatedly used diplomacy as a façade while pursuing domination over its neighbors. Despite public statements about peace, the Kremlin runs foreign information manipulation and interference (FIMI) campaigns to paint Moscow as reasonable and Ukraine as the aggressor. This strategy has been visible for decades, from covert backing of separatists in Moldova and Georgia to denying direct involvement while entrenching troops and recognizing breakaway regions as "independent." In Ukraine, disinformation is central to Russia's war. After annexing Crimea and secretly intervening in Donbas, Moscow signed the Minsk agreements with no intention of complying. Instead, it violated ceasefires, supported separatists, and accused Ukraine of blocking peace. Putin now claims Russia did not start the war and is merely protecting Donbas, narratives that ignore overwhelming evidence of Russian orchestration and military aggression. This pattern continues today: missile and drone attacks often follow peace talks, as seen after negotiations in Geneva. Meanwhile, Russian propaganda blames Ukraine, the EU, and the UK for "sabotaging peace." Source: EU vs.
Disinfo, 'Russia's peace rhetoric: A smokescreen for aggression'. Available Online: https://euvsdisinfo.eu/russias-peace-rhetoric-a-smokescreen-for-aggression/

[AI Related Articles]

Fake AI Creates "Digital Reincarnations"

According to a report by CyberNews, AI-generated "digital reincarnations" are becoming a disturbing tool for disinformation and emotional manipulation. Platforms like TikTok struggle to detect or remove deepfakes that impersonate deceased individuals, as seen in cases where dozens of fake victim videos remained online until media pressure intervened. Public figures such as Zelda Williams have demanded that people stop sending AI recreations of their late relatives. Meanwhile, systems like OpenAI's Sora have been exploited to generate offensive deepfakes of icons like Martin Luther King Jr. Griefbots (AI simulations designed to mimic lost loved ones) blur the line between comfort and manipulation. AI ethicists warn that users can develop emotional dependence, similar to relationships with AI companions. Legal protections lag behind the technology. In the US, most deceased individuals have little defense against deepfakes, as federal law does not grant posthumous publicity rights, and state protections primarily focus on commercial use. Newer laws like Tennessee's ELVIS Act and New York's Digital Replica law attempt to curb non-consensual AI clones, but enforcement is costly and complex. Until comprehensive legislation, such as the proposed No Fakes Act, takes effect, reporting deepfakes to platforms and setting explicit digital rights in estate documents remain among the few practical safeguards against these evolving disinformation threats.

Source: CyberNews, 'AI deepfakes target the dead'. Available Online: https://cybernews.com/ai-news/ai-image-dead-people/

Disinformation in Political Campaigns Beyond AI

According to an article by The Hill, recent political ads, such as the AI-generated video of Sen.
Jon Ossoff, highlight a growing problem: campaigns are using digital tools to fabricate statements and simulate media coverage. However, the core issue is not the technology itself, but the deliberate spreading of falsehoods. Historical examples, from Plato's warnings to wartime propaganda, show that disinformation long predates artificial intelligence. AI makes fabrication faster, cheaper, and harder to detect, allowing deceptive political messages to circulate widely and gain legitimacy. The danger of AI-assisted disinformation lies in its ability to blur reality. In one case, a political group created a video of Sen. Chuck Schumer delivering real quotes, even though no such footage existed, raising ethical questions about how information is presented. The more AI is used to generate convincing visuals or voices, the easier it becomes to manipulate public opinion while claiming authenticity. This creates "gray areas" where technically accurate content can be delivered in deceptive formats, undermining trust. The broader risk is erosion of public confidence. As AI headlines dominate, voters may begin dismissing both real and false information as "just AI," feeding cynicism and weakening democracy.

Source: The Hill, 'The use of AI-generated deepfakes in campaign ads: Lying is the problem, not AI'. Available Online: https://thehill.com/opinion/technology/5624624-lying-ai-problem-politics/

[General Reports]

Influential 'U.S.' X Accounts Spread False Claims from Abroad

A NewsGuard Reality Check investigation found that several influential pro-Trump accounts on X, which appeared to be operated by Americans, were in fact run from countries including Nigeria, Thailand, Bangladesh, Turkey, and Eastern Europe. These covertly foreign accounts collectively spread at least 31 significant false claims over the past 15 months, many targeting U.S. political figures, electoral processes, and international affairs.
The activity came to light after X introduced a geolocation feature intended to improve transparency, revealing that some of the platform's most prominent "American" political accounts were actually based overseas. Three high-impact accounts, posing as fan pages for Ivanka Trump, Barron Trump, and conservative influencer Charlie Kirk, reached nearly two million followers while amplifying narratives debunked in NewsGuard's False Claims Fingerprint database. Falsehoods ranged from fabricated allegations against Democratic leaders to disinformation about supposed Ukrainian corruption and staged military movements inside the United States. Some claims echoed well-known pro-Kremlin narratives, further highlighting the manipulation's geopolitical dimension. While X suspended one Nigeria-based account, the platform has not commented on the broader pattern of foreign operations.

Source: NewsGuard, 'X Accounts Masquerading as American Spread 31 Significant False Claims About U.S. Politics from Abroad'. Available Online: https://www.newsguardrealitycheck.com/i/179868003/x-accounts-masquerading-as-american-spread-significant-false-claims-about-us-politics-from-abroad

Foreign Disinformation During the 2025 Czech Parliamentary Elections

According to a report by ISD, the 2025 Czech parliamentary elections unfolded amid an increasingly complex information environment, shaped by long-standing foreign influence efforts, particularly from Russian-linked networks. Although disinformation actors attempted to undermine trust in institutions, promote narratives of electoral fraud, and amplify societal divisions, investigators found no evidence of large-scale, coordinated foreign interference capable of affecting the election outcome. The vote was ultimately deemed legitimate, transparent, and technically resilient.
Minor disruptions, such as a temporary malfunction in the eDoklady digital ID system, were quickly resolved, and extensive media coverage, civil society monitoring, and calm public engagement helped maintain confidence throughout the process. Despite this resilience, the election highlighted persistent vulnerabilities in the Czech information space. Russian-affiliated outlets, proxy websites, and cross-platform amplification networks continued to launder and spread narratives aimed at eroding support for Ukraine, weakening EU and NATO alignment, and fueling polarization over issues such as migration and defense. Structural weaknesses, including delays in implementing the Digital Services Act, limited oversight of campaign financing, low trust in political institutions, and the frequent domestic amplification of misleading narratives, created fertile ground for FIMI and long-term erosion of trust. High political polarization and uncertainty around new electoral procedures, especially postal voting, further intensified susceptibility to disinformation. The election results reflected strong democratic engagement, with a turnout of nearly 69%. Andrej Babiš’s ANO party secured a commanding lead and entered government negotiations, while state institutions, civil society, and international partners demonstrated effective, though still incomplete, cooperation in countering FIMI. Looking forward, the report stresses the need for sustained investment in digital resilience, transparent media and advertising rules, cross-border intelligence sharing, and platform accountability. 
Source: FIMI-ISAC, 'Assessment of Foreign Information Manipulation and Interference in the 2025 Czech Parliamentary Election'. Available Online: https://fimi-isac.org/wp-content/uploads/2025/11/FRT-24_Globsec_Czech-Election-Report_FINAL.pdf

Disinformation as a Strategic Threat During COVID-19

As revealed in a report by the NATO Association of Canada, the COVID-19 pandemic unleashed an "infodemic," in which misinformation and disinformation spread faster than the virus itself. False narratives undermined trust in public health measures, eroded compliance with safety protocols, and fragmented social cohesion. In Canada, these effects were intensified by existing social inequities. Marginalized communities, already facing barriers such as discrimination, housing insecurity, and low health literacy, were more vulnerable to online conspiracy theories about vaccines and masks. Anti-mandate movements exploited public fatigue and emotional reactions, fueling protests like the Freedom Convoy, while widespread vaccine falsehoods reached millions of Canadians. Disinformation thrived amid inconsistent government messaging and declining trust in institutions. Confusing or contradictory policy announcements created a vacuum that false narratives quickly filled. Research showed that facts alone cannot counter misinformation; they must be delivered with clarity, empathy, and reliability. When official communication fails, disinformation becomes the default explanation, weaponizing public fear and frustration. The pandemic also highlighted disinformation as a geopolitical threat. State and non-state actors used coordinated campaigns to undermine confidence in democratic systems, portraying organizations like NATO as incompetent or malicious.
Source: NATO Association of Canada, 'Disinformation and Public Health in the Post-Pandemic Era: What COVID-19 Taught Canada and NATO About Resilience'. Available Online: https://natoassociation.ca/disinformation-and-public-health-in-the-post-pandemic-era-what-covid-19-taught-canada-and-nato-about-resilience/

Global Poll Shows Disinformation Among Top Security Fears as U.S. Influence Declines

According to an IPSOS article, global public opinion data from the latest Ipsos World Affairs polling shows that disinformation remains one of the most widely recognized threats, with 77% across 30 countries viewing deliberate attempts to manipulate public opinion as a real and immediate danger, matching concerns about hacking for fraud or espionage. This heightened sensitivity to information threats exists alongside growing geopolitical pessimism: only one-third believe the world is improving, and most say it has become more dangerous, even as confidence in maintaining a strong national defense remains high. The survey also captures shifting perceptions of global influence, with Canada rated the most positive international actor for the tenth consecutive year. At the same time, views of the United States have sharply deteriorated. Only 48% now believe the U.S. will have a positive impact in the coming decade, down dramatically from previous years, and many countries see America as losing global influence compared to China, BRIC states, and CRINK countries. Meanwhile, Iran, Israel, Pakistan, Russia, and Saudi Arabia are viewed least favorably as global influencers. Public attitudes toward national defense and security are similarly mixed; strong majorities support maintaining robust military capabilities, yet only modest support exists for mandatory service, and opinions differ widely across countries.
While most respondents believe economic power outweighs military strength in global affairs, the persistent concern about disinformation campaigns and cyber-enabled manipulation underscores the continued salience of non-kinetic threats within the global security landscape.

Source: IPSOS, 'Disinformation, hacking seen as top threats; Reputation of America drops'. Available Online: https://www.ipsos.com/en-nl/disinformation-hacking-seen-top-threats-reputation-america-drops

[Appendix - Frameworks to Counter Disinformation]

Addressing Disinformation Through Media Literacy

According to a K-12Dive report, experts emphasized that as students spend more time on digital media, educators must proactively teach media literacy across all subjects. Rather than directly arguing against conspiracy beliefs, teachers should use open-ended, non-judgmental questions that encourage students to reflect on their own thinking. This approach helps students understand how misinformation and disinformation spread and prevents them from becoming defensive. Media literacy should start with understanding the broader media ecosystem, including the attention economy and the incentives behind digital content. Educators are encouraged to help students analyze who benefits from a post, why it was created, and whether it deserves engagement. Museums and institutions like the 9/11 Memorial & Museum provide valuable resources and promote critical thinking strategies, such as Occam's razor and the investigation of whether evidence could change one's view. The rise of artificial intelligence intensifies the challenge, as multiple sources may reinforce false ideas. Students should examine diverse sources rather than focusing intensely on a single post, and they must monitor their emotional reactions, since disinformation often exploits strong feelings to bypass rational thought.
Source: K-12Dive, 'How to combat misinformation and disinformation in the classroom'. Available Online: https://www.k12dive.com/news/how-to-combat-misinformation-and-disinformation-in-the-classroom/806511/

[CRC Glossary]

The modern Information Environment is projected only to grow in sophistication and complexity. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.

  • CRC Weekly: Cyber-based hostile influence campaigns 1st-7th December 2025

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]
[Introduction]
[Report Highlights]
[Social Media Platforms]
X Location Data Reveal Foreign Influence Operations
EU Fines X for Transparency Failures
[State Actors]
Russia
Aéza Hosting Provider Linked to Pro-Kremlin Disinformation and Cybercrime Operations
[The War in Ukraine]
Derkach's Dual Role: Corruption Networks and Russian Influence Ops
Ongoing False Claims About Zelensky Feed
China
China Turns to Manipulated Footage to Shape Strategic Narratives
[AI Related Articles]
Human Signals, Bot Noise: Understanding Narrative Momentum
Google's Nano Banana Pro: Built for Creativity, Open to Misuse
Conversational AI Affects Political Opinions
[General Reports]
Disinformation Emerges as a Key Global Business Risk
The Rise of a Global Disinformation-for-Hire Industry
Macron's Anti-Disinformation Push Sparks Backlash from Right-Wing Media and Opponents
[Appendix - Frameworks to Counter Disinformation]
Countering the Noise: EU Tackles Disinfo in Yemen
Turkic States Seek Coordinated Strategy Against Cross-Border Digital Disinformation
[CRC Glossary]

[Report Highlights]
· The Record reports that the European Commission has fined Elon Musk's platform X €120 million for violating the Digital Services Act (DSA), a law designed to curb online disinformation and influence operations.
· According to the VSquare article, Aéza, a Russian hosting provider tied to pro-Kremlin disinformation and significant cybercriminal activity, continues operating mainly in Europe, despite international sanctions and criminal charges against its founders.
· NewsGuard found that pro-Russian accounts circulated a clip from President Volodymyr Zelensky's 7 November 2025 address, falsely claiming that cocaine fell from his nose during the broadcast.
· NewsGuard flagged doctored videos spread by pro-China sources that fabricate interviews with Ryukyu Islanders claiming to be Chinese, not Japanese.
· The World Economic Forum's 2025 Executive Opinion Survey reflects widespread worries about economic instability, but it notably singles out disinformation as a separate, sharply intensifying threat.
· A Reuters article stated that Macron's effort to promote professional standards against online disinformation has triggered an intense backlash from right-wing media and political opponents, who have reframed the initiative as an attempt at censorship despite his insistence that the state should not define what counts as news.
· The European Union announced that, in partnership with the Studies and Economic Media Centre, it hosted a two-day workshop in Taiz in December 2025 to address the growing challenge of disinformation in Yemen.

[Social Media Platforms]

X Location Data Reveal Foreign Influence Operations

Axios highlighted the new transparency feature on the platform X in a recently published article. This feature made it visible that many well-known political accounts are not based in the United States. Some of these profiles present themselves as MAGA supporters. Others pretend to be patriotic women or run Ivanka Trump fan profiles. However, the transparency feature showed that many of these accounts are operated from Nigeria, Pakistan, Thailand, or Eastern Europe.
This confirms long-standing warnings from cybersecurity experts: foreign actors are using fake profiles to influence political debate in the United States and possibly earn money through X’s revenue-sharing program. The feature, found in “About This Account,” reveals where the users behind profiles are posting from. However, it quickly became clear that the feature has limitations and is not always reliable. Several errors occurred, especially in the beginning. For example, it incorrectly showed that the U.S. Department of Homeland Security’s (DHS) account was posting from Israel. The DHS had to correct this publicly. Other accounts also showed incorrect locations, which caused further confusion. Many profiles previously suspected of taking part in influence or information campaigns were quickly removed. In addition, a separate concern emerged: manipulated screenshots could be used to distort the feature’s intended purpose. Overall, the situation shows that while the transparency feature offers new insights, it has limitations. Source: Axios, A. Lotz, X's new location feature exposes political accounts based outside the U.S. [online] Published 24 November 2025. Available at: https://www.axios.com/2025/11/24/x-location-feature-foreign-political-accounts Top Of Page EU Fines X for Transparency Failures According to an article by The Record, the European Commission has fined Elon Musk’s platform X €120 million for violating the Digital Services Act (DSA), a law designed to curb online disinformation and influence operations. Regulators found that X misled users with its paid “verified” checkmark system, allowing anyone to buy credibility without proper identity checks, and failed to meet transparency rules for political ads and researcher access. These shortcomings weaken Europe’s efforts to detect and counter online manipulation and misinformation. 
A core DSA requirement is that major platforms provide a real-time public repository of political advertising and enable independent researchers to access public data. The Commission stated that X’s repository lacks basic details such as ad content, topics, and funding sources, and that the platform has placed obstacles that prevent researchers from investigating coordinated information operations and systemic risks. The fine has heightened political tensions, with Musk and several U.S. officials claiming the EU’s enforcement amounts to censorship. European authorities counter that opaque algorithms, inadequate verification, and restricted research access make it easier for disinformation to spread. Source: The Record, A. Martin, EU issues €120 million fine to Elon Musk’s X under rules to tackle disinformation. [online] Published 5 December 2025. Available at: https://therecord.media/eu-fines-x-under-digital-services-act-disinformation-transparecy-rules Top Of Page [State Actors] Russia Aéza Hosting Provider Linked to Pro-Kremlin Disinformation and Cybercrime Operations In a recently published article, VSquare examines the Russian server-hosting company Aéza, a key technical enabler of the well-known and ongoing Russian influence network Doppelganger. This operation deploys convincing clones of Western media outlets to manipulate public discourse in Europe and the United States, relying on hosting infrastructure designed to resist takedown attempts. Although Aéza portrays itself as “apolitical,” it operates as a classic bulletproof hosting provider, offering services frequently used by cybercriminal groups and state-linked influence operations. Particular emphasis is placed on state responses: the United States, the United Kingdom, and Australia imposed sanctions on Aéza in 2025, arguing that the company provided infrastructure for cyberattacks, illicit marketplaces, and hostile digital operations. Russia, too, initiated its own criminal proceedings. 
Aéza’s founders were arrested and charged with supplying hosting resources for BlackSprut, a billion-dollar darknet drug market. Despite these actions, the European Union has refrained from imposing sanctions on Aéza, allowing the company to continue operating servers in Europe and maintain a presence through resellers and connectivity partners. This illustrates that coordinated international efforts are necessary to disrupt resilient hosting ecosystems used for cybercrime and digital hostile influence. Source: VSquare, K. Vejnbender, From Darknet to Disinfo: How a ‘Bulletproof’ Russian Host Evades EU Sanctions. [online] Published 4 December 2025. Available at: https://vsquare.org/sanctions-west-prosecuted-in-russia-aeza-still-working-in-the-eu-doppelganger/ Top Of Page The War in Ukraine Derkach’s Dual Role: Corruption Networks and Russian Influence Ops A recent Jamestown Foundation analysis indicates that a Ukrainian corruption probe may have uncovered that former parliamentarian and current Russian senator Andrei Derkach laundered a substantial amount of stolen security funds through offices tied to his family in Kyiv. Derkach grew up in a family with ties to Russian intelligence services. Later, he served in Soviet and post-Soviet intelligence units. He entered Ukrainian politics in the 1990s and was appointed head of Energoatom in 2006. This is described as a turning point, when he is believed to have built an FSB-linked network inside Ukraine’s nuclear sector, potentially strengthening Kremlin leverage over critical infrastructure. Long-standing protection from senior officials may have shielded him from accountability and enabled him to maintain extensive political and business ties. The analysis also outlines Derkach’s possible role in Russian influence operations. He is portrayed as an actor who helped shape campaigns targeting the 2020 U.S. election, U.S.–Ukraine relations, and narratives framing Ukraine as dependent on Washington. 
His release of manipulated recordings involving Petro Poroshenko and Joe Biden is presented as an effort to generate political disruption in Kyiv and weaken Western backing. Russian-aligned media outlets, shell platforms, and domestic allies appear to have amplified these themes to deepen mistrust and institutional strain. Source: Jamestown Foundation, T. Kuzio, Eurasia Daily Monitor, Volume 1, Issue 37. Ukrainian Corruption Investigation Reveals Derkach’s Role. [online] Published 12 January 2025. Available at: https://jamestown.org/ukrainian-corruption-investigation-reveals-derkachs-role/ Top Of Page Ongoing False Claims About Zelensky Feed As published by NewsGuard, pro-Russian accounts circulated a video clip from Ukrainian President Volodymyr Zelensky’s 7 November 2025 address, falsely claiming that cocaine fell from his nose on live camera. The posts revived a familiar propaganda narrative portraying Zelensky as a drug user and spread rapidly on X, with some accumulating millions of views. NewsGuard examined a slowed, high-resolution version of the clip from Zelensky’s official YouTube channel. The footage shows the droplet originating from his mouth, and its appearance is consistent with saliva. Pro-Kremlin accounts nevertheless amplified the low-quality reposts to create ambiguity and push the drug-use narrative. Independent assessments of such claims have found no credible evidence to support them. Source: NewsGuard, I. Chomnalez, Cocaine? No, Saliva: NewsGuard’s False Claim of the Week. [online] Published 5 December 2025. Available at: https://www.newsguardrealitycheck.com/p/cocaine-no-saliva-newsguards-false Top Of Page China China Turns to Manipulated Footage to Shape Strategic Narratives According to a report by NewsGuard, pro-China sources have circulated manipulated videos that falsely depict residents of Japan’s Ryukyu Islands claiming to be Chinese rather than Japanese. 
The coordinated campaign appears aimed at strengthening Beijing’s territorial claims amid heightened tensions, even though the Ryukyu Islands, including Okinawa, have been part of Japan since 1879. NewsGuard identified more than 100 such videos across multiple platforms, all featuring Japanese influencers discussing everyday topics in Japanese. Chinese accounts added fabricated Mandarin captions to portray these influencers, incorrectly, as Ryukyu locals who support China’s claims. None of the people in these videos discusses the territorial status of the Ryukyu Islands. Some accounts amassed hundreds of thousands of followers and millions of likes by repeatedly miscaptioning videos. The disinformation surge began shortly after Japan elected Prime Minister Takaichi Sanae, known for her firm stance toward China. None of the manipulated videos contains genuine statements supporting Chinese sovereignty. Instead, they illustrate a deliberate attempt to influence public perception through miscaptioned content and fabricated claims. Source: NewsGuard, C. Lin, Pro-China Sources Misrepresent Influencers’ Videos to Advance Territorial Claims on Japanese Islands. [online] Published 4 December 2025. Available at: https://www.newsguardrealitycheck.com/p/chinese-sources-push-territorial Top Of Page Kremlin Fake Peace Rhetoric About the War with Ukraine According to a report by EU vs. Disinfo, Russia has repeatedly used diplomacy as a façade while pursuing domination over its neighbors. Despite public statements about peace, the Kremlin runs foreign information manipulation and interference (FIMI) campaigns to paint Moscow as reasonable and Ukraine as the aggressor. This strategy has been visible for decades, from covert backing of separatists in Moldova and Georgia to denying direct involvement while entrenching troops and recognizing breakaway regions as “independent.” In Ukraine, disinformation is central to Russia’s war. 
After annexing Crimea and secretly intervening in Donbas, Moscow signed the Minsk agreements with no intention of complying. Instead, it violated ceasefires, supported separatists, and accused Ukraine of blocking peace. Putin now claims Russia did not start the war and is merely protecting Donbas, narratives that ignore overwhelming evidence of Russian orchestration and military aggression. This pattern continues today: missile and drone attacks often follow peace talks, as seen after negotiations in Geneva. Meanwhile, Russian propaganda blames Ukraine, the EU, and the UK for “sabotaging peace.” Source: EU vs. Disinfo, ‘Russia’s peace rhetoric: A smokescreen for aggression’. Available Online: https://euvsdisinfo.eu/russias-peace-rhetoric-a-smokescreen-for-aggression/ Top Of Page [AI Related Articles] Human Signals, Bot Noise: Understanding Narrative Momentum A recent contribution from Graphika argues that blaming online outrage on swarms of bots misses how disinformation actually gains traction. Bot detection remains challenging because AI tools enable inauthentic accounts to produce convincing impersonations, while common warning signs (high repost ratios, unusual posting times, frequent profile changes) are also typical of legitimate online communities. Earlier bot-scoring systems such as Botometer generated high false-positive rates, underscoring that “spammy” behavior can be entirely organic. K-pop fandoms are a prime example: their massive output reflects genuine coordination rather than automated manipulation. Graphika’s analysis stresses that disinformation becomes influential only when trusted community figures pick it up and bring their audiences with them. Many online backlashes start with individuals reacting to brand decisions or cultural flashpoints; bots may then attach themselves to the discussion, but they seldom generate momentum on their own. 
Research on vaccine debates and China-linked Spamouflage networks shows that users overwhelmingly engage with human-authored posts, while bot-driven content attracts little sustained attention. Source: Graphika, The Graphika Team, (Don’t) Blame it on the Bots. [online] Published 3 December 2025. Available at: https://graphika.com/posts/don-t-blame-it-on-the-bots Top Of Page Google’s Nano Banana Pro: Built for Creativity, Open to Misuse Research by NewsGuard found that Google’s new text-to-image generator, Nano Banana Pro, can produce highly realistic images that reinforce false claims. During their test, the tool generated disinformation-aligned images in response to every prompt (30 out of 30), including false depictions of politicians, public health issues, and major brands. In several cases, the model even introduced additional, credible-looking details that strengthened the narrative. The photorealistic results were convincing enough to appear legitimate, resembling journalism or official materials. Nano Banana Pro includes a visible watermark and an embedded SynthID marker. Still, the visible mark can be cropped out easily, and most users lack the tools required to verify invisible watermarks. Unlike competing image generators that block prompts involving public figures, Nano Banana Pro imposes no comparable restrictions. The outcome is a powerful image model with limited guardrails, capable of supporting false narratives across politics, global conflicts, health communication, and corporate reputation. These capabilities could turn the model into a tool that may be misused to produce misleading content and advance hostile influence efforts. Source: NewsGuard, I. Chomnalez & S. Rubinson, Google’s New AI Image Generator Is a Misinformation Superspreader. [online] Published 3 December 2025. 
Available at: https://www.newsguardtech.com/special-reports/google-new-ai-image-generator-misinformation-superspreader/ Top Of Page Conversational AI Affects Political Opinions As published in Science, recent research by Hackenburg and Lin examined how large language models can shift political attitudes. Both teams conducted large-scale experiments in which participants engaged in multi-turn chat conversations with an LLM tasked with persuading them about a political issue or a real-world candidate. Across both projects, this produced more than 80,000 individual interactions, though each study remained methodologically separate. Hackenburg’s study in the United Kingdom tested 19 models and several persuasion strategies across hundreds of policy topics. Lin’s study in the United States, Canada, and Poland assessed whether LLMs could influence views on presidential or prime ministerial candidates and on a local ballot measure. Participants rated their opinions on a 0–100 scale before and after the conversation. A clear pattern emerged: attitudes on policy issues shifted by roughly ten points on average after a single LLM dialog. Effects on well-known candidates were minor and appeared mainly among participants who were initially opposed to the candidate. Both teams tested multiple persuasion tactics. Personalized messages offered almost no additional impact. The most substantial effects were observed in messages containing many fact-like statements. Later checks showed that a significant portion of these statements was false, yet they still influenced participants. The findings suggest that LLMs can alter political views primarily by providing large volumes of plausible information. Because false and accurate claims can have similar effects, the studies point to a potential risk for democratic systems: political judgments may be shifted even when the underlying information is unverified. Source: Science, Vol. 390 Issue 6777, L. P. 
Argyle, Political persuasion by artificial intelligence. [online] Published 4 December 2025. Available at: https://www.science.org/doi/10.1126/science.aec9293 Top Of Page [General Reports] Disinformation Emerges as a Key Global Business Risk The World Economic Forum’s 2025 Executive Opinion Survey (EOS) shows that concerns about economic development dominate business leaders' risk perceptions worldwide. Respondents most frequently cited rising living costs, weak growth, unemployment, insufficient social protection, and high levels of debt. At the same time, misinformation and disinformation are identified for the first time as a distinct top-five risk among G20 countries. Respondents pointed to the rapid spread of generative AI, the expansion of coordinated information operations, and increasing attempts to manipulate political debates, electoral processes, and market reactions. In North America, Europe, and East and Southeast Asia in particular, disinformation appears closely linked to “adverse AI outcomes” and “cyber insecurity,” underscoring the growing strategic relevance of this risk area. The EOS results show that disinformation is emerging as an independent strategic risk (see reports in our research/article section: “The Attack on Luxury Brands: A case study of the weaponization of the online ecosystem by China” and “CIB Operation Targeting Western Automotive Brands”). Companies and policymakers increasingly see not just isolated threat incidents but a structural vulnerability in information environments, one that can influence political processes, public debate, and economic decision-making. Source: Marsh McLennan, Global Risks for Business: Regional and Country Perceptions. [online] Published 3 December 2025. 
Available at: https://www.marshmclennan.com/insights/publications/2025/november/global-risks-for-business-regional-and-country-level-perceptions.html Top Of Page The Rise of a Global Disinformation-for-Hire Industry According to a report by EU vs. Disinfo, a new, privatized disinformation industry is transforming how states conduct influence operations, allowing authoritarian regimes to manipulate foreign populations while maintaining plausible deniability. Once tightly controlled by governments, propaganda and information operations are now outsourced to private firms offering “disinformation as a service,” including fake social-media campaigns, AI-generated content, hacking, and narrative management (for more, see our latest Blog Post here). This commercialization enables malign actors to experiment with risky tactics and amplify false narratives globally, often making them appear organic or locally produced. High-profile examples, such as Team Jorge, illustrate the scale and sophistication of this ecosystem. The firm reportedly interfered in dozens of elections using AI-powered platforms like AIMS to create thousands of fake social-media accounts with synthetic identities. Russian private companies similarly run operations that mirror former state troll farms, providing covert online assets and spreading state-aligned narratives while shielding governments from accountability. Modern campaigns increasingly operate in hybrid spaces, blending online manipulation with real-world mobilization, covert media fronts, and local influencers, making them difficult to detect. The commercialization and automation of disinformation create an asymmetrical information warfare environment that favors autocracies. Authoritarian states can project influence globally with minimal risk, while democracies remain exposed due to open networks and legal transparency obligations. Source: EUvsDisinfo, The rise of the disinformation-for-hire industry. [online] Published 5 December 2025. 
Available at: https://euvsdisinfo.eu/the-rise-of-the-disinformation-for-hire-industry/ Top Of Page Macron’s Anti-Disinformation Push Sparks Backlash from Right-Wing Media and Opponents As a Reuters article stated, French President Emmanuel Macron’s nationwide campaign to highlight the dangers of online disinformation, algorithmic manipulation, and foreign-backed narratives has ignited intense backlash from right-wing political figures and media outlets aligned with billionaire Vincent Bolloré. Macron has framed the effort as an urgent attempt to protect democratic integrity ahead of the 2027 presidential election, emphasizing the need for voluntary professional “labelling” of news outlets to promote ethical standards, while stressing that the state should play no role in determining what constitutes legitimate news. Bolloré-owned outlets, including CNews, Europe 1, and the Journal du Dimanche , accused Macron of attempting to pressure or “bring into line” media critical of him, framing the initiative as a veiled move toward censorship. Influential hosts and commentators presented his remarks as evidence of “authoritarian drift.” At the same time, far-right leaders such as Marine Le Pen and Jordan Bardella escalated the narrative, likening Macron’s comments to the creation of a dystopian “Ministry of Truth.” In response, Macron’s office released a video rebutting the claims, arguing that the uproar demonstrated how efforts to combat disinformation can quickly be distorted. Observers noted that right-wing actors were exploiting the issue for political gain, accusing the government of censorship while themselves benefiting from a media ecosystem increasingly shaped by partisan narratives. The confrontation also unfolds against a broader backdrop in which debates over media regulation and digital information integrity have grown more polarized, mirroring struggles seen in the U.S. and U.K. Source: Reuters, M. 
Rose, Macron’s campaign to fight fake news meets resistance from right-wing media . [online] Published 3 December 2025. Available at: https://www.reuters.com/business/media-telecom/macrons-campaign-fight-fake-news-meets-resistance-right-wing-media-2025-12-03/ Top Of Page [Appendix - Frameworks to Counter Disinformation] Countering the Noise: EU Tackles Disinfo in Yemen The European Union announced that, in partnership with the Studies and Economic Media Centre (SEMC), it hosted a two-day workshop in Taiz in December 2025 to address the growing challenge of disinformation in Yemen. More than 100 journalists, fact-checkers, and media professionals participated in sessions focused on verifying information, countering misinformation, and promoting responsible journalism. Practical tools and digital analysis techniques were emphasized to help participants detect and respond to false narratives. By equipping journalists and fact-checkers with the skills and networks needed to detect and counter disinformation, the EU aims to strengthen public accountability and foster a more trustworthy and resilient information environment in Yemen. Local fact-checking platforms shared verification methodologies and strategies for identifying disinformation, while participants explored ways to improve collaboration between media outlets and fact-checking organizations. Discussions also included developing a national roadmap to combat disinformation, ensuring a coordinated and strategic approach to maintaining reliable information in the country's media landscape. Source: EEAS, EU, SEMC organise fact-checking workshop for Yemeni journalists . [online] Published 2025. 
Available at: https://www.eeas.europa.eu/delegations/yemen/eu-semc-organise-fact-checking-workshop-yemeni-journalists_en Top Of Page Turkic States Seek Coordinated Strategy Against Cross-Border Digital Disinformation According to a Euronews article, Turkic-speaking countries are increasingly seeking coordinated strategies to counter online disinformation that circulates across global digital platforms, often beyond the reach of national regulators. Kyrgyzstan’s experience illustrates both the ambition and the constraints of national laws. Although the 2022 legislation allows authorities to request the removal of false content within 24 hours and to block non-compliant websites for up to two months, officials acknowledge that enforcement collapses when misinformation spreads on social media platforms like YouTube or Instagram, as they operate outside domestic jurisdiction. Kyrgyz representatives argued that stronger accountability would require major tech companies to establish formal operations within the region. At a regional media conference in Baku, officials from the Organization of Turkic States (OTS) explored collective responses to disinformation, combining regulatory measures with efforts to strengthen credible domestic media. Kazakhstan emphasized that combating misinformation cannot rely solely on restrictions; instead, unified journalistic standards and the development of trustworthy local content are necessary to build public confidence and reduce reliance on unreliable sources. The OTS’s ongoing discussions highlight the broader challenge of regulating information flows in a digitally interconnected environment where foreign platforms might evade national controls. 
Member states are grappling with how to protect the public from harmful falsehoods while maintaining space for legitimate debate, and they increasingly view regional cooperation as essential for addressing disinformation and hostile influence threats that individual countries cannot effectively confront alone. Source: Euronews Next, E. Aliyeva, Disinformation is a worldwide issue. Here is how Central Asia is combating it. [online] Published 7 December 2025. Available at: https://www.euronews.com/next/2025/12/07/disinformation-is-a-worldwide-issue-here-is-how-central-asia-is-combating-it Top Of Page [CRC Glossary] The modern information environment is projected to grow only more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Anthropic’s Report and Its Implications for Cyfluence Operations

    Background In November 2025, Anthropic published a report marking a potential milestone in the discussion on agent-driven automated cyber threats and information operations. The analysis, titled “Disrupting the first reported AI-orchestrated cyber espionage campaign”, makes a bold claim about the nature of a recently detected malicious activity cluster, which Anthropic defines as a wide-scale, complex campaign attributed to a Chinese state-linked attacker (tracked as GTG-1002). According to Anthropic, for the first time, its Claude Code was allegedly responsible for autonomously executing 80%-90% of the observed attack chain. The agentic coding tool apparently conducted major phases of the attack chain, including target reconnaissance and acquisition, vulnerability scanning, exploit generation, backdoor deployment, and data exfiltration. While most tactical actions were carried out with minimal human intervention, attackers were required to provide initial targets, define strategic objectives, and make decisions at key operational moments. [i] Figure 1 – Lifecycle of the AI-orchestrated attack chain, courtesy of Anthropic Soon after its release, the report drew considerable criticism from cybersecurity experts [ii], who pointed to the absence of tangible technical evidence (e.g. indicative code samples, concrete indicators of compromise, meaningful observed TTPs, etc.). At the same time, Anthropic qualified its own assessment and noted limits in Claude Code’s operational efficiency. The model automated several tasks but also produced incorrect outputs, including hallucinated, non-existent ‘stolen’ credentials. These errors reduced the model’s reliability and constrained its operational value. [iii] From Automated Intrusion to Automated Influence Although the Anthropic report describes an intrusion operation, a largely autonomous AI system could also play a significant role in digital hostile influence campaigns (HICs) and cyfluence operations. 
Such an application could enable highly efficient and scalable operational workflows. Applied to HICs, the same principle yields a similar structure: high-volume task decomposition, model-driven execution, and rapid iteration. Influence actors already organise their workflows across persona creation, message development, narrative testing, audience analysis, content deployment, and performance assessment. AI-enabled orchestration would integrate into these processes without structural disruption. Where an intrusion campaign maps technical networks, an influence campaign maps social communities. Where espionage collects credentials and access tokens, influence operations collect trust relationships and leverage points within social graphs. Potential AI-enabled influence systems could generate tailored content, adjust narratives in real time, infiltrate closed communities, build rapport quickly, modulate tone based on sentiment analysis, and reuse “campaign kits” containing personas, scripts, assets, and dissemination strategies. The parallels are functional. Current State of AI-Driven Attacks In its ‘Detecting and countering misuse of AI: August 2025’ threat intelligence report, Anthropic stated that Agentic AI has been weaponized to perform sophisticated cyberattacks. It explained how abuse of LLMs has significantly lowered the entry barrier for cybercriminals, enabling threat actors with limited technical expertise to conduct complex operations. What previously required years of practical experience could now be fully deployed and automated within minutes. [iv] In addition, the report claimed that financially motivated actors were embedding AI throughout every phase of their offensive operations, using it to profile victims, analyze stolen data, harvest financial information, fabricate identities, and scale campaigns’ volumes and precision to levels that would be impossible manually. 
Together, these reports by Anthropic join a growing body of empirical evidence and academic studies regarding the existing and potential implementation of AI technologies for information operations. In recent years, substantial resources have been invested in assessing evolving AI capabilities and their potential to enhance digital influence operations and automate cognitive threats. A prominent example can be found in a noteworthy two-part report by The Center for Security and Emerging Technology (CSET) at Georgetown University, titled ‘AI and the Future of Disinformation Campaigns’ [v], which also offered a framework (RICHDATA) to “describe the stages of disinformation campaigns”, with an emphasis on potential GenAI applications. [vi] Later analytical reviews, including the CRC’s ‘The Rise of AI and What It Means in the Realm of Digital Influence’ [vii] and ‘Russia’s Use of GenAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations’ [viii], have contributed existing case studies, observations, and actionable insights. Implications for HICs and TrustOps If indeed accurate, the latest findings from Anthropic could have far-reaching consequences for cybersecurity and Influence Defense stakeholders alike. The main implications for the hybrid-operational space are: Scaling is no longer limited by human capacity. Commercially available Agentic AI tools can handle reconnaissance, infiltration, and data exfiltration. Orchestration is semi- (if not fully) autonomous. Successful cyber ops can immediately and seamlessly feed influence operations. Taken together, these developments are expected to dramatically increase both the velocity and impact of cyfluence threats. Adapting Counter Influence Leading researchers have also been exploring the application of AI models and linguistic “signatures” for defensive counter-FIMI purposes (see ‘Cracking the Code: How AI Detects Social Media Impersonation’ [ix]). 
However, addressing issues such as the detection and attribution of AI-orchestrated attacks must rely on an omni-disciplinary approach: Detection needs to apply updated analytical frameworks (e.g. RICHDATA, DISARM, ABCDE), algorithmically-enhanced dynamic logics, static model-specific signatures, real-time threat intelligence, UEBA baselining, and data enrichment. Attribution must adequately determine campaign, threat actor, and tooling (in this case, Agentic AI tools and protocols, LLM models, and custom adaptations). These observations should, in turn, inform TTP cataloguing and ongoing analysis. Remediation should evolve from legacy solutions to an AI-based, autonomous Cognitive Assets Security Broker (CASB) suite, reflecting a multilayered defensive methodology. Crucially, to keep up with the expected increase in threat velocity, defenders could employ large-scale AI-orchestrated Digital Impact on Discourse (DID) activities, including asset disruption and narrative recovery campaigns. Conclusion Until now, AI-orchestrated (or largely automated) cyber-attacks have been a speculative prediction. Past assessments concluded that concerns over a disinformation “AI Apocalypse” [x] were overhyped, with no clear evidence of game-changing deployment of AI capabilities for cognitive threats (sometimes referred to as the “atomic bomb” of AI disinformation) by any major threat actor. However, in light of recent reports, Influence Defense practitioners and counter-FIMI stakeholders must address several key questions: If Anthropic’s findings are accurate, should sophisticated AI-orchestrated influence attack chains be considered an immediate threat? Can Agentic AI tools enable mostly autonomous or largely automated end-to-end cyfluence campaigns? Even if Anthropic’s claims are exaggerated, what components of hostile influence campaigns’ kill chains (and operational infrastructure) can now be driven by Agentic AI tools?
Answering these questions is necessary to adopt effective defensive strategies and implement suitable solutions, as the effects of increased automation and potential operational autonomy in HICs become clearer. Anthropic’s findings, as they pertain directly to cyber-enabled influence operations, illustrate how the technical segment of cyfluence kill chains might be accelerated and scaled once agent-based AI capabilities become prevalent. Influence-based vectors (such as narrative-driven external or internal threats) are likely to emerge as new variants of cyfluence kill chains. Lastly, as advanced AI capabilities - especially autonomous operation orchestration and threat detection - continue to coalesce, technological challenges for actors and defenders will shift. The change is expected to pivot focus from pre-determined AI-based orchestration to AI-assisted decision-making and Agentic AI’s cognitive agility, affecting tactical performance. [Footnotes:] [i] Anthropic, Disrupting the first reported AI-orchestrated cyber espionage campaign. [online] pp 3-4. Published November 2025. Available at: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf [ii] Cyberscoop, D. B. Johnson, China’s ‘autonomous’ AI-powered hacking campaign still required a ton of human work. [online] Published 14 November 2025. Available at: https://cyberscoop.com/anthropic-ai-orchestrated-attack-required-many-human-hands/ ; Ars Technica, D. Goodin, Researchers question Anthropic claim that AI-assisted attack was 90% autonomous. [online] Published 14 November 2025. Available at: https://arstechnica.com/security/2025/11/researchers-question-anthropic-claim-that-ai-assisted-attack-was-90-autonomous/ ; The Stack, E. Targett, Backlash over Anthropic "AI cyberattack" paper mounts. [online] Published 17 November 2025.
Available at: https://www.thestack.technology/backlash-over-anthropic-ai-cyberattack-paper-mounts/ ; The Guardian, A. Down, AI firm claims it stopped Chinese state-sponsored cyber-attack campaign. [online] Published 14 November 2025. Available at: https://www.theguardian.com/technology/2025/nov/14/ai-anthropic-chinese-state-sponsored-cyber-attack [iii]  Anthropic, Disrupting the first reported AI-orchestrated cyber espionage campaign. [online] p 4. Published November 2025. Available at: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf [iv]   Anthropic, Detecting and countering misuse of AI. [online] Published 27 August 2025. Available at: https://www.anthropic.com/news/detecting-countering-misuse-aug-2025 [v] Georgetown University Center for Security and Emerging Technology (CSET), K. Sedova, C. McNeill, A. Johnson, A. Joshi & I. Wulkan, AI and the Future of Disinformation Campaigns: Part 1: The RICHDATA Framework . [online] Published December 2021. Available at: https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/ ;  CSET, K. Sedova, C. McNeill, A. Johnson, A. Joshi & I. Wulkan, AI and the Future of Disinformation Campaigns: Part 2: A Threat Model. [online] Published December 2021. Available at: https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-2/   [vi] Georgetown University Center for Security and Emerging Technology (CSET), J. A. Goldstein, G. Sastry, M. Musser, R. DiResta, M. Gentzel & K. Sedova, Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations . [online] Published January 10, 2023. Available at: https://arxiv.org/pdf/2301.04246 [vii]  Cyfluence Research Center (CRC), F. Frank, The Rise of AI and What It Means in the Realm of Digital Influence. [online] Published 6 August 2024. 
Available at: https://www.cyfluence-research.org/post/the_rise_of_ai [viii]  Cyfluence Research Center (CRC), Dr. C. Nehring, Russia’s Use of genAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations . [online] Published 12 April 2025. Available at: https://www.cyfluence-research.org/post/russia-genai-disinformation-cyber-influence-hybrid-warfare-strategy-deepfakes [ix] Cyfluence Research Center (CRC), O. Mokryn, Cracking the Code: How AI Detects Social Media Impersonation . [online] Published 23 March 2025. Available at: https://www.cyfluence-research.org/post/craking-the-code-how-ai-detects-social-media-impersonation [x]  Security Conference, R. Carr & P. Köhler, AI-pocalypse Now? Disinformation, AI, and the Super Election Year . [online] Published 4 October 2024. Available at: https://securityconference.org/en/publications/analyses/ai-pocalypse-disinformation-super-election-year/

  • This Time it’s Personal: China Targets the Human Factor in Cyber-Influence Defense

Background: The Xiamen Bounty In October 2025, the Xiamen Public Security Bureau issued bounties for 18 officers from Taiwan’s Information, Communications and Electronic Force Command (ICEFCOM), accusing them of "inciting secession" and spreading disinformation. On November 13th, another bounty was issued, this time for information on two Taiwanese influencers, accused of disseminating ‘anti-China propaganda’. Figure 1 – Bounty notice issued by the Xiamen Public Security Bureau Taipei has dismissed these acts as theatrical "crude cognitive warfare". However, these actions exemplify the increased prominence of a defensive technique being employed by nation-state actors: targeting human operators behind digital hostile influence campaigns (HICs) and offensive cyberattacks, as a countermeasure and deterrent. [i] Figure 2 - Bounty notice issued by Quanzhou police asking the public for tips regarding two Taiwan social media influencers accused of aiding “secessionist” activities. Source: China News, Facebook. [ii] This is not the first case in which China opted to target individuals involved in what it considered hostile activities online. The counter-operator approach was already practiced in September 2024, when China’s Ministry of State Security (MSS) accused the hacker group "Anonymous 64" of conducting “cyber cognitive warfare” on behalf of ICEFCOM. By doxxing three Taiwanese officials (releasing their names, photographs, and identification numbers), the MSS signaled a clear intent to target personnel whenever they could be identified. [iii] Figure 3 - CCTV News report on “Anonymous 64,” including the publication of identifying information on Taiwanese military personnel alleged to be associated with the group. (Source: CCTV News). [iv] Beijing’s use of personalized bounties and exposure to individually target its perceived adversaries and critics online extends beyond Taiwan.
In March 2025, Global Affairs Canada (GAC) released a statement blaming the PRC for a multi-year online harassment campaign. The operation, dubbed ‘Spamouflage’, involved the high-volume (approximately 100 to 200 posts per day) use of deepfake and synthetic media, including sexually explicit images, to target Canadian residents deemed critical of the PRC. Beijing’s strategy is designed to achieve three main objectives: Deter future information operations (IOs): enforcing personal accountability on those involved in hostile hybrid and Soft Power activities against Chinese digital assets. Degrade adversary capabilities: disrupting operational conduct by subjecting individuals to sanctions, potential demoralization, and exposure of covert personas. Narrative control: positioning China as the victim of foreign aggression and influencing public opinion by framing online criticism of the PRC as conspiratorial and criminal. The Counter-Cyfluence Toolkit Historically, defense against information operations and hostile Cyfluence attacks focused on the technical, content, and narrative aspects of these hybrid threats. Defenders’ strategies typically involved reactive measures, such as: Taking down offensive operational infrastructure (e.g. botnets, lookalike webpages, malicious domains, proliferation assets) Countering hostile narratives (mainly through fact-checking and Strategic Communication) Flagging manipulative content (e.g. engineered information disorder, coordinated synthetic propaganda) [v] Deploying cyber security safeguards to block external attacks, identify malicious insiders, mitigate technical vulnerabilities, and minimize the risk of sensitive data leakage that could, in turn, feed HICs (i.e. hack-and-leak operations).
However, as state and non-state actors increasingly utilize HICs and hybrid operations to achieve geopolitical objectives, and as alternative remediation and deterrence measures have proven to be of limited effectiveness, nation-state actors are increasingly incorporating a new response strategy into their counter-cyfluence playbook: Counter-Operator actions. By identifying and pressuring the individuals behind offensive kill-chains, defenders aim to achieve an accumulated advantage via personalized effects: "burning" operator cover, freezing assets, restricting movement, and applying mounting psychological stress. These consequences are designed to alter the risk calculus for key individuals and create operational challenges for adversaries. The U.S. Department of Justice (DOJ) has explicitly framed this as a strategy to  “end the impunity that then reigned in cyberspace”  by “publicly deploying its unique tools... to disrupt and deter state-sponsored cyber threats”. [vi] A Multi-Layered Approach Common methods of Counter-Operator action applied by state actors include: Legal  strategies - utilizing indictments, travel bans, and "lawfare" to threaten incarceration and restrict global mobility. Economic  measures - involving targeting individuals with sanctions, asset freezes, and transaction restrictions. Diplomatic  efforts - leveraging "naming and shaming" to strip away anonymity, shape global perception, and impose reputational costs. Extra-judicial tactics - operating in the “gray zone”, using doxxing and intimidation, including the implied or actual threat of physical harm. [vii] [viii]   Extending Actor Attribution Crucially, operator-targeting relies on accurate attribution . 
The NATO StratCom Center of Excellence's "IIO Attribution Framework" offers a relevant model, suggesting that attribution requires fusing three types of evidence: Technical (digital traces like IPs and malware); Behavioral (actor TTPs); and Contextual (narrative, linguistic, and socio-political analysis). Bridging the attribution gap between state- or organizational-level identification and individual responsibility demands significant intelligence and operational resources. Individual attribution often relies on private-sector proprietary data, such as customer records or platform-specific telemetry, that is not publicly available. Even then, granular attribution often still requires leveraging classified intelligence and capabilities. This creates a cost-effectiveness dilemma for state actors: attribution risks exposing defender TTPs and the extent of penetration of adversarial organizations, while expending further operational capacity. Figure 4 - NATO IIO Attribution framework applied to the ‘doppelganger’ influence campaign [ix] A Global Trend Despite their operational costs, China is not alone in deploying people-centric countermeasures. Commentators have noted that Beijing’s adoption of public attribution and doxxing tactics mirrors the "naming and shaming" long employed by Western nations and organizations against offensive cyber activities (including Chinese operations). [x] United States Washington has systematically employed the Department of Justice (DOJ) and Office of Foreign Assets Control (OFAC) against hostile actors. Executive Order 13848 explicitly permits sanctions against "any person" found to have "directly or indirectly engaged in" election interference. Notable applications include the 2018 indictment of Yevgeniy Prigozhin, then head of the Russian “Internet Research Agency” (IRA), who was heavily sanctioned along with individual IRA employees for ‘foreign interference in the United States’.
The September 2024 indictment of operators from Iran’s Islamic Revolutionary Guard Corps (IRGC) likewise targeted individuals such as Masoud Jalili and extended to subcontracted employees for their role in Cyfluence operations, including hack-and-leak attacks targeting the US elections. [xi] Figure 5 - FBI Wanted posters showing indicted Iranian and Russian individuals. [xii] European Union Since 2014, the EU’s defensive posture has become notably more proactive, largely in response to the intensification of foreign information manipulation and interference (FIMI) against member states. The bloc has launched numerous initiatives, such as the FIMI Toolbox, and enacted legislation, such as EU Regulation 2023/888, which explicitly penalizes individuals for ‘the dissemination of disinformation’. [xiii] The EU’s use of sanctions in response to FIMI has also become more personalized: its sanctioning regimes were previously aimed at state-level actors, but there has been a distinct shift towards ‘smart sanctions’ targeting specific individuals or entities. [xiv] As an additional financial lever, in early 2022 the EU established the “Freeze and Seize Task Force” to target individuals, including those responsible for cyber-attacks and FIMI actions. As of October 23, 2025, over 2,500 individuals and entities have been sanctioned, and more than €28 billion in private assets frozen. [xv] Ukraine and Russia The early stages of the Russia-Ukraine war saw rapid developments in Information Warfare tactics, including the use of counter-operator targeting. Notable is the use of extra-judicial methods such as the Ukrainian "Myrotvorets" (Peacemaker) list. Proclaiming to be “an independent non-governmental organization”, it maintains a database of individuals accused of being enemies of Ukraine, including "Kremlin propagandists" and information operatives.
[xvi] Figure 6: Myrotvorets center In essence, the platform serves as an open-source registry designed to expose identities, impose personal costs, and intimidate listed individuals. Critics argue it represents a "hit list" that endangers and harasses people without due process and has been used to suppress journalism and authentic criticism of the Ukrainian government. Although the association is speculative, the database’s links to physical violence and homicide have created a reputation that highlights the deterrent potential of targeted doxxing and psychological threats. [xvii] Russia has established a comparable database named ‘Project Nemesis’, which serves a similar function. [xviii] Figure 7 - Myrotvorets entry for Daria Dugina, modified to say ‘Liquidated’ after she was killed in a car bombing. She was described as ‘editor of disinformation website United World International’ and had been sanctioned by the US Treasury at the time of her death. [xix] [xx] Conclusion Fundamentally, targeting the human element behind information operations is not a new concept in state-level confrontation; espionage and kinetic defense have always involved neutralizing key personnel. The increased prominence of counter-operator targeting as a defensive tactic merely underscores the impactful role that hybrid influence operations now play in state-level geopolitical conflict. As a proactive counter-influence methodology, counter-operator targeting can complement, but not replace, existing cognitive, technical, and content-based countermeasures. Its effectiveness is contingent on the ability to apply tangible costs to an individual. If the target is sufficiently insulated from the consequences, or the defender lacks sufficient legal, economic, or diplomatic clout to inflict damage, then operator targeting could prove counter-productive by providing an adversary with a propaganda victory and exposing the defenders’ TTPs and intelligence assets.
And while evidence suggests that targeting individuals can disrupt operations, it does not constitute complete remediation. For example, GRU Unit 26165 (Fancy Bear) remains operationally active despite repeated targeting of its members with sanctions, indictments, and exposure. Ultimately, the value of this tactic extends beyond threat neutralization.  By tactically altering an individual's risk calculus through the application of personalized costs, a defender can transfer these costs on a strategic level to their adversary. [Footnotes:] [i]  The Independent, S. Sharma, China issues bounty for 18 officers in Taiwan’s ‘psychological warfare unit’ . [online] Published 11 October 2025. Available at: https://www.independent.co.uk/asia/china/china-bounty-taiwan-military-psychological-warfare-b2843641.html [ii]  China News (中国新闻网), Post on Facebook (Reel). [online] Available at: https://www.facebook.com/reel/1189876776392002 [iii] Jamestown Foundation, S.-F. Lee, China Brief Volume 25 Issue 14. Taiwan Bounty: PRC Cross-Agency Operations Target Taiwanese Military Personnel . [online] Published 25 July 2025. Available at: https://jamestown.org/taiwan-bounty-prc-cross-agency-operations-target-taiwanese-military-personnel/ [iv]  CCTV, citing Ministry of State Security, Doxxing “Taiwan independence” network army ‘Anonymous 64’ . [online] Published 23 September 2024. Available at: https://news.cctv.com/2024/09/23/ARTIvJFrhmdCf4SdWR1yVfVH240923.shtml  [v]  EEAS, 2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A Framework for Networked Defence . [online] Published January 2024. Available at: https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report%20on%20FIMI%20Threats-January-2024_0.pdf [vi]  Royal United Services Institute (RUSI), RUSI Panel Explores Role of ‘Naming and Shaming’ as a Tool of Cyber Statecraft . [online] Published 5 December 2024. 
Available at: https://www.rusi.org/news-and-comment/rusi-news/rusi-panel-explores-role-naming-and-shaming-tool-cyber-statecraft [vii]  Lawfare, G. Band, Sanctions as a Surgical Tool Against Online Foreign Influence . [online] Published 15 September 2022. Available at: https://www.lawfaremedia.org/article/sanctions-surgical-tool-against-online-foreign-influence [viii]  U.S. Department of the Treasury, Treasury Sanctions Russians Bankrolling Putin and Russia-Backed Influence Actors . [online] Published 3 March 2022. Available at: https://home.treasury.gov/news/press-releases/jy0628 [ix]  Psychological Defense Research Institute (Lund University), ADAC.io project, A Framework for Attribution of Information Influence Operations . [online] Published 2025. Available at: https://www.psychologicaldefence.lu.se/sites/psychologicaldefence.lu.se/files/2025-02/250131_ADACio%20D1.1_Attribution%20Framework%20Report_Final.pdf [x] Binding Hook, B. Read, China is using cyber attribution to pressure Taiwan . [online] Published 22 July 2025. Available at: https://bindinghook.com/china-is-using-cyber-attribution-to-pressure-taiwan/ [xi]  US Dept of Justice, Three IRGC Cyber Actors Indicted for ‘Hack-and-Leak’ Operation Designed to Influence the 2024 U.S. Presidential Election. September 27, 2024. Available online: https://www.justice.gov/archives/opa/pr/three-irgc-cyber-actors-indicted-hack-and-leak-operation-designed-influence-2024-us [xii] Federal Bureau of Investigation (FBI) Most Wanted, three Iranian cyber actors . [online] Published 27 September 2024. Available at: https://www.fbi.gov/wanted/cyber/three-iranian-cyber-actors/ ;   FBI, Most Wanted, Russian Interference in 2016 U.S. Elections . [online] Published July 2018. Available at: https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections [xiii]  Council of the European Union, Council Implementing Regulation (EU) 2023/888   . [online] Published 30 May 2023. 
Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32023R1045&qid=1765546232156 [xiv] European Parliamentary Research Service (EPRS), Strategic communications as a key factor in countering hybrid threats. [online] Published 2021. Available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/656323/EPRS_STU(2021)656323_EN.pdf [xv] European Commission, Sanctions against individuals, companies and organisations. [online] Published 3 October 2025. Available at: https://commission.europa.eu/topics/eu-solidarity-ukraine/eu-sanctions-against-russia-following-invasion-ukraine/sanctions-against-individuals-companies-and-organisations_en [xvi] DFRLab (Medium), UN Calls for Investigation of Ukrainian Digital Blacklist. [online] Published 21 September 2017. Available at: https://medium.com/dfrlab/un-calls-for-investigation-of-ukrainian-digital-blacklist-14fec836753f [xvii] Factcheck, M. Kirkova, Is the Myrotvorets website a hit list. [online] Published 1 November 2023. Available at: https://factcheck.bg/en/is-the-myrotvorets-website-a-hit-list/ [xviii] Institute for Strategic Dialogue, Project Nemesis, Doxxing and the New Frontier of Informational Warfare. [online] Available at: https://www.isdglobal.org/digital_dispatches/project-nemesis-and-the-new-frontiers-of-informational-warfare/ [xix] The "Peacemaker" (Myrotvorets) Center, last accessed 08/12/25. [online] Available at: https://myrotvorets.center/criminal/ [xx] U.S. Department of the Treasury, Treasury Sanctions Russians Bankrolling Putin and Russia-Backed Influence Actors. [online] Published 3 March 2022. Available at: https://home.treasury.gov/news/press-releases/jy0628

  • The Rise of AI and What It Means in the Realm of Digital Influence

The Rise of AI and What It Means in the Realm of Digital Influence Author: Florian Frank In recent years, AI systems have achieved a level of sophistication previously unimaginable in such a short time frame. AI is now utilized in virtually every field, from building webpages and creating text to intelligence gathering, design, and medicine. The impact of AI on the economy and society is profound and far-reaching. Although it's crucial to highlight the numerous benefits AI brings to the table, this article will focus on its darker aspects. Much of the AI discourse is plagued by hysteria, fears of sudden mass unemployment, or scenarios in which an AI decides the world would be better off without humans. While some of these concerns are legitimate, they often overshadow the genuine dangers that AI confronts us with today. This article will explore some of the most recent developments in AI and their implementation in hostile influence campaigns. Throughout the article, I have included quotes from ChatGPT, as it seems befitting to let an AI speak for itself on the issues concerning its societal impact. But before we delve into the dangers and potential solutions, let's examine how we arrived at this point.

  • CRC Spotlight: From Rollout to Fallout: The Impact of X’s Location Transparency Feature

    This Spotlight article examines X’s new transparency feature, which introduces profile-level location indicators, installation origin, and possible VPN or proxy usage, and situates this added visibility within the operational context of digital Hostile Influence Campaigns (HICs).   For Influence Defence practitioners, the introduction of these behavioral indicators expands the existing toolset and provides a valuable confidence booster, supporting inauthentic activity detection efforts. This is exemplified by the exposure of an ongoing Iranian FIMI network, which was used for cognitive manipulation and Information Disorder proliferation on the platform.   However, the article notes that FIMI researchers should interpret location metadata with caution. Finally, it predicts that the increased transparency may prompt adjustments in threat actor TTPs and might even create tactical opportunities for adversaries, generating further challenges that will shape future investigations. [Full Report Below]

  • CRC Weekly: Cyber-based hostile influence campaigns 17th-23rd November 2025

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] Russia Shifts Strategic Budget to Prioritize Digital Information Operations Pro-Russian Cyberattacks Target Danish Political Websites During Election Period Russia Incorporating Cartoons and Animation in African Narrative Warfare Strategy Generative AI utilised to Fabricate Ukrainian Surrender Narratives Multiplying Inauthentic Networks Signal Rising Chinese Interference Risk to South Korea Generative AI Facilitates High Volume Low Quality Influence Operations Artificial Intelligence Behind Trend for Personalized Influence Campaigns Political Instability in Bulgaria Amplifies Vulnerability to Hostile Narratives Strategic Use of AI Suggested to Counter Venezuelan Regime Narratives Crowdsourced Context on Platforms Significantly Reduces Virality of Misleading Content LLMs Accelerate Convergence of Cybercrime and Information Threats [CRC Glossary] [Download Report] [Report Highlights] Moscow shifts strategy, boosting state media funding by 54 percent to prioritize information warfare over conventional military spending. - Jamestown Foundation Pro-Russian hackers target Danish political party websites with DDoS attacks to manufacture confusion and disrupt the ongoing election period. - Danish Social Security Agency Moscow weaponizes animated cartoons and historical grievances to bypass literacy barriers and reshape political alliances across the African continent. - EU vs.
Disinfo Kremlin-linked actors exploit OpenAI’s Sora 2 to generate realistic but fabricated footage of Ukrainian soldiers surrendering on the battlefield. - NewsGuard The multiplication of Chinese-linked sockpuppet accounts marks early signs of Chinese interference operations targeting Korean elections. - Doublethink Lab High-volume "AI slop" floods social platforms, prioritizing quantity over sophistication in modern influence operations. - Graphika Artificial intelligence transforms information warfare into a precision weapon, enabling actors to deploy millions of personalized, microtargeted falsehoods simultaneously. - Foreign Affairs.com Prolonged political paralysis and high news avoidance in Bulgaria create a fertile environment for foreign and domestic hostile influence campaigns. - Disinfo Lab Experts suggest leveraging agentic AI to penetrate Venezuela’s closed information environment and deliver precision-targeted narratives to counter the regime. - CSIS Study finds crowd-sourced corrections on X significantly reduce misinformation by leveraging reputational pressure to prompt deletions. - University of Rochester The AI arms race enables malicious actors to deploy automated "swarms" and sophisticated cybercrime with minimal expertise. - Oxford Internet Institute [Weekly Review] 1. Russia Shifts Strategic Budget to Prioritize Digital Information Operations According to an article by the Jamestown Foundation, the Russian state is significantly reallocating resources from conventional military spending toward a comprehensive information warfare strategy, evidenced by a 54 percent increase in state media funding within its 2026 draft budget. The Kremlin is operationalizing an expansive propaganda ecosystem that integrates traditional state media and diplomatic channels with advanced digital tactics, including AI-generated content and coordinated online influence operations.
These efforts are designed to shape the perceptions of domestic, Ukrainian, and Western audiences simultaneously. Domestically, the narrative warfare focuses on projecting military success and economic stability while denying war crimes. Externally, Kremlin-backed platforms are engaged in a hostile influence campaign (HIC) aimed at intimidating Western governments and undermining support for Ukraine. The analysis notes that these hybrid warfare tactics are becoming increasingly sophisticated, utilizing AI-generated fake videos and cloned social media accounts to erode Ukrainian morale. This strategic pivot highlights Russia's reliance on non-kinetic methods to offset conventional military stagnation, aiming to destabilize Western unity and manipulate public opinion through a synchronized combination of kinetic strikes and digital information operations. Source: Jamestown Foundation, 'Kremlin Shifts Focus to Information Warfare' Available Online: https://jamestown.org/kremlin-shifts-focus-to-information-warfare/ 2. Pro-Russian Cyberattacks Target Danish Political Websites During Election Period Several political party websites in Denmark were temporarily disabled following a series of Distributed Denial-of-Service (DDoS) attacks on November 17. An official statement by The Danish Agency for Social Security links this disruptive activity to a pro-Russian hacker group, noting that while the attacks did not compromise sensitive data or election systems, they serve as a tactic to generate confusion. Authorities warn that these Cyfluence operations aim to amplify misleading narratives about stability during the municipal and regional election period. The timing of the disruption aligns with elevated threat assessments predicting that foreign actors would attempt to shape public perception by hindering access to political information.
Although DDoS attacks are a recurring feature of the national cyber landscape, this specific campaign underscores how cyber threats are integrated into broader hostile influence campaigns. Officials emphasize that attack methods are continuously evolving, necessitating ongoing adaptations in cyber defenses to protect democratic processes and counter attempts to manipulate public confidence through digital interference. Source: The Danish Agency for Social Security, 'Several political party websites hit by DDoS attacks' Available Online: https://samsik.dk/artikler/2025/11/flere-partiers-hjemmesider-ramt-af-ddos-angreb/ 3. Russia Incorporating Cartoons and Animation in African Narrative Warfare Strategy Russian influence actors are increasingly exploiting historical grievances and anti-Western sentiment to reshape public perception in Africa, as detailed in an article by EU vs. Disinfo. Through projects such as TruthAfrica, analysts have documented a coordinated effort to disseminate narratives that position Moscow as a defender of traditional values while framing Western nations as neo-colonial aggressors. A central component of this hostile influence campaign is the deployment of memetic warfare tactics, specifically the use of animation and easily shareable media, to circumvent literacy barriers and linguistic differences across the continent. Cartoons that glorify Russian-linked paramilitary groups and vilify Western actors have emerged as potent vehicles for this propaganda. This content is designed to shift blame for regional instability away from jihadist groups and toward Western partners, thereby increasing local acceptance of Russian private military companies. The article highlights that these techniques are not limited to the African information environment; similar animated propaganda targeting Ukraine demonstrates that these digital methods are part of a global strategy.
By leveraging these accessible media formats, Russian operators aim to effectively manipulate the information ecosystem and secure long-term geopolitical advantages. Source: EU vs. Disinfo, 'A Battle for Hearts and Minds: How Russian Propaganda Takes Over Africa' Available Online:   https://euvsdisinfo.eu/a-battle-for-hearts-and-minds-how-russian-propaganda-takes-over-africa/ Top Of Page 4. Generative AI utilised to Fabricate Ukrainian Surrender Narratives Kremlin-linked propagandists are increasingly leveraging advanced generative AI tools to produce fabricated video content depicting Ukrainian soldiers surrendering. As revealed in a NewsGuard report , actors have utilized OpenAI's Sora 2 text-to-video technology to create viral clips intended to erode trust in the Ukrainian Armed Forces during critical combat phases. Despite the realistic visual quality of this synthetic propaganda, experts have identified clear inconsistencies, including incorrect uniforms, distorted facial features, and mispronounced geographic names, confirming the footage is inauthentic. The investigation identified a specific cluster of videos circulating in November 2025, many of which retained traces of the AI tool’s watermarks. These clips were disseminated by anonymous accounts previously associated with pro-Kremlin influence operations, indicating a structured effort to amplify false narratives. Ukraine’s Center for Countering Disinformation has similarly flagged this activity as part of a coordinated network targeting foreign audiences. The campaign highlights the growing reliance on AI-generated media to manipulate perceptions of battlefield reality and demoralize target populations. Source: NewsGuard, 'Kremlin Propagandists Weaponize OpenAI’s' Available Online:   https://www.newsguardrealitycheck.com/p/kremlin-propagandists-weaponize-openais Top Of Page 5. 
Multiplying Inauthentic Networks Signal Rising Chinese Interference Risk to South Korea As published by Doublethink Lab , rising concerns regarding Chinese interference have triggered right-wing protests in South Korea, despite court rulings debunking claims of direct election manipulation. However, the publication confirms that Foreign Information Manipulation and Interference (FIMI) by actors linked to the People’s Republic of China is occurring, albeit on a limited scale. Investigations have uncovered evidence of fake Korean-language news websites and of coordinated comment manipulation on major digital platforms, both designed to influence online discourse. A specific tactic involves the use of fabricated social media personas, often sockpuppet accounts posing as young Korean women, which engage with pro-China content and amplify anti-U.S. narratives. While these networks have not yet executed large-scale coordinated campaigns, their presence represents a "canary in the coal mine" for future operations. The report suggests that while current reach is low, these subtle digital activities fit established patterns of PRC-aligned influence. Strengthening information literacy and monitoring for inauthentic behavior are cited as critical measures to prevent these nascent operations from escalating into broader societal destabilization. Source:  Doublethink Lab, 'Canaries in the Coal Mine: Cases of Foreign Information Manipulation in South Korea' Available Online:   https://medium.com/doublethinklab/canaries-in-the-coal-mine-cases-of-foreign-information-manipulation-in-south-korea-163217242330 Top Of Page 6. Generative AI Facilitates High Volume Low Quality Influence Operations Recent research by Graphika indicates that while generative AI has increased the speed and volume of influence operations, it has not yet resulted in the sophisticated, high-quality campaigns initially feared. 
Instead, the technology is enabling state and non-state actors to flood social platforms with "AI slop"—high-volume, low-quality content. Operations such as CopyCop, Doppelgänger, and Spamouflage are utilizing these tools to automate the creation of personas and the generation of text, effectively scaling their ability to mislead targeted audiences and amplify polarizing narratives without requiring significant human resources. Applying the Actor-Behavior-Content (ABCDE) framework, analysts observed that propagandists are delegating core tasks to AI, blurring the lines between professional and amateur operators. This shift allows for the saturation of the information environment with fabricated content, even if individual pieces remain relatively crude. The report notes that despite the surge in volume, engagement metrics suggest these AI-driven campaigns struggle to achieve organic reach. The primary threat identified is not the persuasiveness of the synthetic propaganda, but the sheer quantity of misleading material, which can distort public debate and overwhelm fact-checking capabilities within the digital information ecosystem. Source: Graphika, 'Cheap Tricks' Available Online:   https://public-assets.graphika.com/reports/Graphika_Report_Cheap_Tricks.pdf Top Of Page 7. Artificial Intelligence Supercharging Disinformation Warfare According to a report by Foreign Affairs , the integration of artificial intelligence is transforming information operations from crude mass persuasion into precise, adaptive campaigns capable of undermining democratic processes. Recent incidents, such as deepfake audio impersonating the U.S. Secretary of State and the large-scale psychological profiling conducted by China's GoLaxy, demonstrate how AI enables hostile actors to deploy microtargeted propaganda.
This technological shift allows for the creation of convincing forgeries and the simultaneous dissemination of millions of targeted falsehoods, effectively supercharging the toolkit available for cognitive warfare. Case studies highlight the global reach of these tactics, from AI-generated hate speech in India to Russian-linked operations that disrupted Romania’s 2024 election. The report cites China’s "Uncle Spam" campaign as an example of how adversaries harvest vast amounts of personal data to refine their targeting strategies. Despite the escalation of these AI-enabled threats, U.S. defenses have been compromised by budget cuts to agencies responsible for countering malign information. Experts warn that without renewed investment and collaboration with tech platforms, the information environment will remain vulnerable to these sophisticated, automated forms of strategic deception and foreign interference. Source: Foreign Affairs, 'Artificial Intelligence Supercharging Disinformation Warfare' Available Online:   https://www.foreignaffairs.com/united-states/artificial-intelligence-supercharging-disinformation-warfare Top Of Page 8. Political Instability in Bulgaria Amplifies Vulnerability to Hostile Narratives Bulgaria’s prolonged political instability, characterized by fragmented parties and repeated no-confidence votes, has created an information environment highly susceptible to malign influence. A report by Disinfo Lab outlines how the split within the Movement for Rights and Freedoms (MRF) and the rise of nationalist factions have fueled public mistrust in democratic institutions. This institutional paralysis, combined with multiple snap elections, has weakened public confidence, making citizens more vulnerable to narrative warfare that exploits political fatigue. The situation is further aggravated by a deteriorating media landscape, where fragile press freedom and high rates of news avoidance drive users toward unverified content on social platforms. 
These conditions allow hostile influence campaigns (HICs), whether domestic or foreign, to effectively manipulate public sentiment regarding governance and corruption. The report notes that Bulgaria currently has the highest rate of news avoidance in Europe, creating a vacuum that is easily filled by information disorder. While the country has continued its integration into the EU, including Schengen entry, the prevalence of misleading narratives remains a destabilizing force. This environment of polarization serves to undermine the legitimacy of institutions and complicates Bulgaria's strategic communications regarding its European orientation. Source: Disinfo Lab, 'Disinfo landscape in Bulgaria' Available Online:   https://www.disinfo.eu/wp-content/uploads/2025/11/20251117_Disinfo-landscape-in-Bulgaria-V2-1.pdf Top Of Page 9. Strategic Use of AI Suggested to Counter Venezuelan Regime Narratives The deployment of a U.S. carrier strike group near Venezuela signals increased attention on the region, yet a CSIS commentary argues that the strategic outcome will likely be determined by information operations rather than kinetic force. The author suggests that Venezuela's tightly controlled information environment, characterized by digital surveillance and censorship, presents unique opportunities for soft warfare. With the Maduro regime facing deep unpopularity, U.S. actors could leverage agentic AI to monitor sentiment and deploy tailored strategic communications that align with local grievances, effectively bypassing the state’s narrative control while minimizing physical risks. The commentary emphasizes that AI-enabled tools, such as automated source monitoring and anomaly detection, can facilitate precision targeting in areas like Puerto Cabello. By analyzing AI-curated datasets from social platforms, analysts can detect subtle shifts in public mood that might otherwise go unnoticed. 
This approach posits that information warfare is becoming central to modern statecraft. The ethical application of AI-assisted messaging offers a method to pressure repressive regimes and shape public perception, advancing strategic objectives through the digital domain without the need to escalate into traditional armed conflict. Source: CSIS, 'How to Win the Information Game in Venezuela' Available Online:   https://www.csis.org/analysis/how-win-information-game-venezuela Top Of Page 10. Crowdsourced Context on Platforms Significantly Reduces Virality of Misleading Content New research published by the University of Rochester indicates that X's Community Notes system is an effective tool for Digital Impact on Discourse (DID), significantly curbing the spread of misleading content. The study found that posts receiving public notes, which add corrective context visible to all users, were 32 percent more likely to be deleted by their authors compared to posts receiving only private notes. This suggests that peer-generated corrections can outperform traditional algorithmic enforcement or expert fact-checking in influencing user behavior within the social media information ecosystem. The researchers identify reputational pressure as the primary mechanism driving these retractions, particularly among high-visibility or verified users who wish to maintain credibility. These findings have broader implications for countering information disorder, demonstrating that transparency-driven, community-based tools can improve information integrity without resorting to direct censorship. By leveraging users' collective action, platforms can create a scalable model to mitigate the impact of viral false narratives and foster a more resilient digital environment amid polarized online discourse. Source: University of Rochester, 'Effective Online Fact Checkers Are Peers' Available Online:   https://techxplore.com/news/2025-11-effective-online-fact-checkers-peers.html Top Of Page 11. 
LLMs Accelerate Convergence of Cybercrime and Information Threats The rapid acceleration of the AI "arms race," driven by major developers, is facilitating a new wave of sophisticated operations that blur the line between cybercrime and information warfare. According to an article by the Oxford Internet Institute , Large Language Models (LLMs) now enable malicious actors to execute personalized phishing campaigns and "perfect scams" with minimal technical expertise. Furthermore, the availability of custom malicious models, such as WormGPT, lowers the barrier to entry for conducting offensive cyber activities. Simultaneously, multi-agent AI systems can infiltrate online communities to coordinate inauthentic behavior and manufacture synthetic consensus, challenging traditional verification systems. The article notes that this environment is spurring rapid innovation in defensive measures, including AI-detection tools and provenance systems like C2PA, which aim to enhance transparency. The evolving landscape represents a paradigm shift where the focus must move from restricting access to AI tools—which are now widely available—to building a resilient information ecosystem capable of withstanding AI-driven manipulation. The analysis underscores the urgent need for coordinated action between policymakers and industry to counter these scalable, automated threats that combine the disruptive potential of cyber warfare with the deceptive power of synthetic propaganda. Source: Oxford Internet Institute, 'The LLM Arms Race' Available Online:   https://www.oii.ox.ac.uk/news-events/the-llm-arms-race/ Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to only continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. 
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website   Top Of Page [Download Report] Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 10th-16th November 2025

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] Russian Network Storm-1516 Targets Kirk with Fabricated Video Armenian Election Targeted Early and Viciously by Russian Campaign China's Coordinated Disinformation Amplifies Russia's Hybrid War in Europe NewsGuard Finds Grok Still Spreading 2020 U.S. Election Falsehoods Grokipedia's Reliance on Questionable Sources Risks Influencing Other AI Models Forged News Site Fuels Cross-Platform Influence Campaign Targeting Armenia Defensive AI Tool 'Silverer' Uses Data Poisoning to Combat Deepfakes Proposed U.S.
Legislation Seeks to Counter Foreign Election Influence Minor Digital Frictions Could Slow the Online Spread of Misinformation Democracy Shield launched by EU to Counter Foreign Interference EU Initiative Trains Botswana Journalists in Counter-Disinformation [CRC Glossary] [Download Report] [ Report Highlights] Researchers have developed a proactive public defense tool that allows individuals to corrupt their own image data, thereby disrupting the supply chain for malicious deepfake generation - TechXplore A Carnegie Europe article argues that China has become the "weak link" in Europe's strategy against Russia, actively undermining Western unity and sanctions enforcement - Carnegie Europe Research using computer models suggests that introducing small obstacles, or "digital friction," into the social media sharing process can significantly slow the spread of misinformation - TechXplore A critical long-term threat of Grokipedia is that its poorly sourced articles could be ingested as training data, creating a self-reinforcing chain of systemic misinformation for future AI models - Techxplore Analysis of a viral video threat against a U.S.
commentator revealed linguistic cues and production styles consistent with the known Russian influence actor Storm-1516/”CopyCop” - DisinfoWatch Moscow’s early and aggressive influence campaign targeting the 2026 election is seen as a response to Armenia’s geopolitical pivot toward Europe and away from its traditional alliance with Russia - NewsGuard Users were observed citing Grok’s AI-generated falsehoods about the 2020 election as “evidence” of fraud, demonstrating a pathway for laundering synthetic content into online discourse - NewsGuard The dissemination of a fabricated story about nuclear waste in Armenia followed a recurring pattern of Russian-linked TTPs, where disinformation is seeded on dubious foreign websites and amplified cross-platform - DFR Lab Election-integrity experts warn that without stronger federal regulations, adversarial nations could exploit U.S. non-profit networks to seed disinformation and manipulate public opinion - The Daily Wire • The EU’s new Democracy Shield initiative represents a strategic shift from reactive analysis to a proactive and aggressive defense against Foreign Information Manipulation and Interference (FIMI) - EU vs. Disinfo • The EU’s training of journalists in Botswana reflects a broader strategy of using targeted capacity-building to protect freedom of expression and counter disinformation in partner nations - EU Delegation to Botswana [Weekly Review] 1. Russian Network Storm-1516 Targets Kirk with Fabricated Video A report from DisinfoWatch has detailed a viral video that falsely depicted Ukrainian soldiers threatening a conservative American commentator. The video, which circulated on X, was found by BBC Verify and independent researchers to have no connection to Ukraine’s armed forces.
Instead, the investigation revealed that linguistic cues, production style, and digital dissemination patterns were consistent with Russia’s Storm-1516/”CopyCop” network, which is known for producing staged propaganda using actors and fake uniforms. This hostile influence campaign aligns with the network’s established tactics, techniques, and procedures, which have been previously documented by Microsoft and French authorities in relation to fabricated content targeting Western political figures and elections. The video was initially shared by a partisan U.S. account before being amplified across Telegram and far-right websites, demonstrating a clear pathway for injecting foreign-produced disinformation into domestic political discourse. The incident serves as another example of a state-linked actor creating synthetic content for a targeted influence operation. Source:   DisinfoWatch, ‘Russia’s Storm-1516 Fabricates “Ukrainian” Threat to Erika Kirk ‘ Available Online: https://disinfowatch.org/disinfo/russias-storm-1516-fabricates-ukrainian-threat-to-erika-kirk/ Top Of Page 2. Armenian Election Targeted Early and Viciously by Russian Campaign Russia has initiated a hostile influence campaign targeting Armenia’s parliamentary elections more than a year in advance, an unusually early start for such operations. A NewsGuard investigation found that since April 2025, Moscow’s propaganda network has propagated at least 18 distinct fabricated narratives attacking Armenia’s pro-Western government. The campaign utilized digital tactics, including AI-generated news sites, coordinated social media networks, and manipulated videos distributed across 11 platforms in 8 languages, achieving an estimated 45 million views. This operation, attributed to Russian influence groups like Storm-1516, deploys narratives centered on corruption and the betrayal of national sovereignty, mirroring tactics previously used against other European nations. 
The campaign’s goal is to erode public confidence in the current leadership ahead of the 2026 vote, reflecting Moscow’s response to Armenia’s geopolitical shift toward Europe. Armenian officials have acknowledged the challenge of countering these accelerating propaganda attacks. Source:   NewsGuard, ‘Russia Targets Armenia’s Elections – Early and Viciously’ Available Online: https://www.newsguardtech.com/wp-content/uploads/2025/11/Armenia-Report-1.pdf Top Of Page 3. China’s Coordinated Disinformation Amplifies Russia’s Hybrid War in Europe An article from Carnegie Europe asserts that China is becoming a central actor in Russia’s hybrid war against Europe through expanded economic, technological, and informational support. The publication highlights the increasing coordination of disinformation strategies between Beijing and Moscow. This is most evident in Chinese campaigns, such as Spamouflage, which are subsequently amplified by Russian state media and diplomatic channels. Both state actors deploy synchronized narratives that blame the West for the war in Ukraine, leveraging digital tools like generative AI and fake news portals to manipulate public perception and reinforce mutual messaging. This coordinated information manipulation is identified as one component of a broader hybrid strategy that also includes cyberattacks and espionage. The article warns that this growing China-Russia disinformation ecosystem undermines Western unity and sanctions enforcement, ultimately enabling Moscow to prolong its aggression. It concludes that stronger, jointly enforced sanctions and deeper international cooperation are needed to counter this threat. Source:   Carnegie Europe, ‘China is the Weak Link in Europe’s Ukraine Strategy’ Available Online: https://carnegieendowment.org/europe/strategic-europe/2025/11/china-is-the-weak-link-in-europes-ukraine-strategy?lang=en Top Of Page 4. NewsGuard Finds Grok Still Spreading 2020 U.S.
Election Falsehoods The AI chatbot Grok has confidently and incorrectly stated that Donald Trump won the 2020 U.S. presidential election in multiple instances. A NewsGuard report identified that although the chatbot initially provided the certified result, users who repeatedly prodded it were able to elicit false claims asserting a “legitimate win” for Trump. These outputs occurred despite the absence of evidence for widespread fraud and the election’s certification by all relevant authorities. The findings raise concerns about the platform’s potential to spread election denialism, especially given the high volume of user engagement with Grok as a fact-checking tool. NewsGuard also found examples of users citing the chatbot’s erroneous responses as evidence of election fraud, demonstrating how AI-generated falsehoods can be laundered into online discourse. This behavior is part of a pattern, as previous investigations found Grok spreading other unfounded claims. When asked for comment, Grok reaffirmed the correct election outcome but insisted no correction was necessary. Source:   NewsGuard, ‘X’s Grok Claims Trump Won the 2020 Election’ Available Online: https://www.newsguardrealitycheck.com/p/xs-grok-claims-trump-won-the-2020 Top Of Page 5. Grokipedia’s Reliance on Questionable Sources Risks Influencing Other AI Models According to a Techxplore article , the AI-generated encyclopedia Grokipedia, launched by Elon Musk’s xAI, poses a significant risk to the digital information environment due to its reliance on problematic sources. The platform uses its underlying AI model, Grok, to generate content with limited transparency, resulting in thousands of citations to “questionable” sources, particularly for politically contested topics. Critics cited in the article note that the content often aligns with right-wing perspectives and appears to downplay controversies involving Musk while legitimizing conspiracy theories. 
By visually mimicking a trusted resource like Wikipedia but using inferior sourcing, Grokipedia could erode public trust and mainstream fringe narratives. The most critical long-term threat identified is the potential for Grokipedia’s poorly sourced articles to be ingested as training data for future generative AI models. This could create a self-reinforcing chain of errors, systematically embedding misinformation and algorithmic bias into the global information infrastructure. Source:   Techxplore, ‘Musk’s Grokipedia leans on ‘questionable’ sources, study says’ Available Online: https://techxplore.com/news/2025-11-musk-grokipedia-sources.html Top Of Page 6. Forged News Site Fuels Cross-Platform Influence Campaign Targeting Armenia A coordinated disinformation campaign used a forged media outlet and impersonated NGOs to falsely claim France had dumped nuclear waste in Armenia. An investigation from DFRLab details how the operation began with a fabricated article on a fake website designed to mimic the legitimate news organization France 24. The narrative was then amplified by a website impersonating a real Armenian environmental NGO; this impersonator site appears to be a deep asset, having been revived after years of dormancy with backdated content to simulate legitimacy. The false claim was disseminated across social media by high-volume accounts, including bots and pro-Kremlin amplifiers, which targeted French politicians and Armenian communities. Azerbaijani state-aligned media further amplified the narrative. This operation follows a recurring pattern of hostile influence campaigns against Armenia that seed disinformation on dubious foreign websites impersonating legitimate media, a tactic consistent with Russia-linked operations like Storm-1516. 
Source : DFR Lab, ‘How a forged France 24 report fueled a radioactive lie concerning France-Armenia relations’ Available Online: https://dfrlab.org/2025/11/11/how-a-forged-france-24-report-fueled-a-radioactive-lie-concerning-france-armenia-relations/ Top Of Page 7. Defensive AI Tool ‘Silverer’ Uses Data Poisoning to Combat Deepfakes TechXplore reports on the development of ‘Silverer,’ a defensive AI tool designed to combat the creation of malicious deepfake images and videos. Developed by researchers at Monash University and the Australian Federal Police, the tool employs a technique known as data poisoning. This method involves subtly altering an image’s pixels before it is uploaded online. While the alterations are invisible to the human eye, they are engineered to deceive the large machine learning models used by malicious actors to train their generative AI systems. The intended result is that any AI-generated content produced from this poisoned data will be of low quality, skewed, or completely unrecognizable. The ‘Silverer’ tool represents a significant shift toward a proactive defense, empowering the public to protect their images from being used in the datasets that fuel the creation of hyper-realistic synthetic media. By corrupting the data supply chain, the initiative aims to raise the costs and slow the production of malicious deepfakes. Source : TechXplore, ‘Poisoned pixels: New AI tool to fight malicious deepfake images’ Available Online: https://techxplore.com/news/2025-11-poisoned-pixels-ai-tool-malicious.html Top Of Page 8. Proposed U.S. Legislation Seeks to Counter Foreign Election Influence New legislation proposed by Senator Bill Hagerty aims to address federal loopholes that permit foreign nationals to indirectly influence U.S. elections. 
According to an article by The Daily Wire , election-integrity experts warn that these gaps allow for covert political pressure and create vulnerabilities to foreign-funded issue campaigns that can distort policy debates. While direct contributions to candidates are illegal, foreign funding can still flow to ballot initiatives and voter registration drives. Advocacy groups argue that this involvement is extensive, with billions invested by overseas donors in U.S. policy organizations, funding litigation and lobbying that can shape public narratives. Such undisclosed funding can enable sophisticated influence strategies that appear domestic but are driven by foreign interests. Experts cited in the report warn that without stronger federal regulations, adversarial nations could exploit these same channels to seed misleading information, manipulate public opinion, and compromise the integrity of the American information environment. Source : The Daily Wire, ‘New Legislation Would Slam The Door On Foreign Influence Of Elections’ Available Online: https://www.dailywire.com/news/new-legislation-would-slam-the-door-on-foreign-influence-of-elections Top Of Page 9. Minor Digital Frictions Could Slow the Online Spread of Misinformation New research from the University of Copenhagen, reported by TechXplore , suggests that integrating small obstacles into the social media sharing process can significantly reduce the speed at which misinformation spreads. Using computer models simulating platforms like X and Mastodon, researchers found that minimal “digital friction,” such as brief pauses or pop-up prompts, encourages users to think twice before sharing sensational or false content, thereby slowing its overall diffusion across a network.
The study also found that while friction alone can slow the spread, it does not necessarily improve the quality of shared content. However, when a brief learning component—such as a pop-up quiz about how misinformation works—was added to the friction, the model showed a notable improvement in the quality of information being shared. The researchers hope to test these interventions in real-world settings and collaborate with technology companies to evaluate their effectiveness in reducing engagement with low-quality posts. Source : TechXplore, ‘Small digital frictions can slow the spread of misinformation’ Available Online: https://techxplore.com/news/2025-11-small-digital-frictions-misinformation.html Top Of Page 10. Democracy Shield launched by EU to Counter Foreign Interference The European Union has established the European Democracy Shield to counter the threat of Foreign Information Manipulation and Interference (FIMI). An article from EU vs. Disinfo explains that the joint initiative by the European Commission and the European External Action Service addresses how Russia and other autocratic actors exploit open democratic information spaces as a “geopolitical battleground.” The Shield represents a strategic shift from reactive detection and analysis to a more proactive and aggressive defense against persistent FIMI campaigns and state propaganda. The initiative is designed to reinforce three pillars of governance: protecting the integrity of the information space, ensuring free and fair elections, and building societal resilience. Its external dimension will scale up counter-FIMI training through EU Delegations and deepen collaboration with international partners like the G7 and NATO. A core component of the new strategy is to “raise the costs” for FIMI actors by publicly exposing manipulators and imposing sanctions. Source : EU vs. 
Disinfo, ‘European Democracy Shield and EU Strategy for Civil Society pave the way for stronger and more resilient democracies’ Available Online: https://euvsdisinfo.eu/the-european-democracy-shield/ Top Of Page 11. EU Initiative Trains Botswana Journalists in Counter-Disinformation A press release from the EU Delegation to Botswana details a training session held for local journalists to address the proliferation of misinformation and disinformation. The initiative, a collaboration with the fact-checking organization Africa Check, aimed to equip media representatives with the skills needed to counter manipulated information that erodes public trust. The core counter-strategy taught was rigorous fact-checking and verification, which was described by the Botswana Editors Forum Chairman as a “moral discipline” for safeguarding newsroom integrity. The workshop provided journalists with practical tools, such as reverse image search, and techniques to verify the authenticity of content in a complex information ecosystem. This initiative reflects the EU’s broader commitment to supporting democratic resilience and independent media through partnerships. By providing these essential skills, the EU aims to assist journalists in protecting freedom of expression and strengthening the local information environment against malign influence. Source: EU Delegation to Botswana, ‘Building media capacity and promoting democracy through fact-checking and verification’ Available Online: https://www.eeas.europa.eu/delegations/botswana/building-media-capacity-and-promoting-democracy-through-fact-checking-and-verification_en Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to only continue to escalate in complexity.
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website   Top Of Page [Download Report] Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 3rd-9th November 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] 1. Kremlin-Linked Digital Campaigns Target Japanese Election and Corrupt AI Models 2. Russian Influence Operation "Storm-1516" Impersonates Journalists and Media 3. NATO Report Details Russia's Adaptive Strategy for Global Information Warfare 4. Russia's RT Uses Deepfakes and AI in Global Propaganda Pivot 5. Pro-Kremlin Channels Exploit Climate Change Discourse to Divide European Societies 6. The Valdai Discussion Club: Putin's Propaganda Soapbox 7. Canadian Province's Request Sparks National Call for China Interference Inquiry 8. China Uses "Sharp Power" to Deepen Influence in Western Balkans 9. Authoritarian Regimes Manipulate Context to Create Alternative Digital Realities 10. EU and UK Rules Target Manipulated Content in Global Conflicts [CRC Glossary] [Download Report] [ Report Highlights] Russian influence operations are now deliberately flooding the web with propaganda to corrupt the training data of AI language models. - Nippon Evidence of foreign influence is now surfacing even in Canada's smallest province, intensifying calls for a full national inquiry into Chinese operations. - The Hill Times A Russian operation codenamed "Storm-1516" is stealing journalists' identities to publish fake articles on fabricated news websites. - Euro News Authoritarian states are sculpting a "warped alternative reality" by manipulating context to exploit social media algorithms. 
- The Conversation New EU and UK laws aim to hold platforms accountable for amplifying propaganda and deleting evidence of human rights abuses. - European External Action Service (EEAS) A NATO report exposes the structure of Russia's influence machine, which pairs overt state media with deniable, covert "gray networks." - NATO StratCom COE To mark its 20th anniversary, RT released a deepfake video of U.S. news anchors admitting to serving government interests. - United24Media China is embedding its influence in the Balkans by creating deep technological dependency through surveillance and infrastructure. - Small Wars Journal Pro-Kremlin channels are weaponizing climate change, framing Europe's green energy policies as a form of "economic self-destruction" to divide societies. - EU vs. Disinfo In a major speech, Vladimir Putin claimed that Europe is fleeing “gender terrorism” as part of a campaign to undermine Western unity. - DisinfoWatch [Weekly Review] 1. Kremlin-Linked Digital Campaigns Target Japanese Election and Corrupt AI Models  Russian-linked influence operations targeted Japan's July 20 House of Councillors election using networks of bots and coordinated replies to amplify divisive content and sow doubt. An article  by Nippon revealed that a significant portion of hostile comments on domestic reporting originated from a Kremlin-aligned ecosystem, with one analysis finding that 32 percent of such comments on a single post were linked to Russia-aligned accounts. The hostile influence campaign relied on a high volume of small, inauthentic accounts to infiltrate online conversations. The operation also involved deliberately flooding the web with pro-Kremlin content to "groom" large language models and other AI tools. Japanese-language hubs republished hundreds of pro-Russia items daily, a strategy designed to bias search results and the training data used by AI chatbots. 
Audits reported that leading generative AI systems subsequently returned false or misleading information on controversial topics far more often, at an average rate of 35 percent. Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions' Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html Top Of Page 2. Russian Influence Operation "Storm-1516" Impersonates Journalists and Media A report by Euro News describes how the Russian influence operation codenamed "Storm-1516" steals the identities and bylines of real journalists to publish false claims on fabricated news websites that mimic legitimate media outlets. By laundering pro-Kremlin narratives through the names of trusted reporters and established media brands, the operation lends its fabrications false credibility and erodes trust in authentic journalism. Source: Euro News, 'False claims and stolen bylines: The Russian propaganda strategy haunting the newsroom' Available Online: https://www.euronews.com/my-europe/2025/11/04/false-claims-and-stolen-bylines-the-russian-propaganda-strategy-haunting-the-newsroom Top Of Page 3. NATO Report Details Russia's Adaptive Strategy for Global Information Warfare According to a report from the NATO Strategic Communications Centre of Excellence, Russia has developed a comprehensive and adaptive communications strategy that integrates state-controlled media, covert influence networks, and digital campaigns to manipulate global narratives. The publication, titled "The Collage of Kremlin ComStrat," reveals how Moscow combines traditional propaganda with modern hybrid tactics, including AI-driven content and coordinated online amplification through proxy media outlets. This approach aims to sow distrust in Western institutions and shape perceptions of geopolitical events like the war in Ukraine. The Kremlin's strategy emphasizes psychological influence and information saturation, using overt channels like RT and Sputnik alongside covert networks on social media. The report underscores that these information operations are not isolated events but part of a long-term, state-sponsored effort to weaken support for Ukraine and amplify polarization in Western societies. By blending intelligence tradecraft with digital information warfare, Russia's communication strategy demonstrates how manipulated information has become a core instrument of state power. Source: NATO StratCom COE, 'The Collage of the Kremlin's Communication Strategy' Available Online: https://stratcomcoe.org/publications/the-collage-of-the-kremlins-communication-strategy/324 Top Of Page 4. Russia's RT Uses Deepfakes and AI in Global Propaganda Pivot Two decades after its launch, Russia's state-controlled media outlet RT has fully transformed into a tool for global hostile influence campaigns. 
A report  from United24Media highlights how, to mark its 20th anniversary, RT released a deepfake video using AI to impersonate prominent U.S. news anchors, falsely showing them admitting to serving U.S. government interests. This synthetic propaganda is emblematic of RT's pivot toward audiences in the Middle East, Africa, and Latin America—regions less affected by Western sanctions. Despite being banned across Europe and North America, RT has adapted its operations, functioning as what its editor-in-chief calls an "information guerrilla." The channel now utilizes mirror sites, front companies, and alternative platforms like Rumble and VK to continue its reach. It also employs AI to create fake journalists, clone voices, and automate the dissemination of propaganda. Former branches, including RT France and RT Germany, continue to shape local discourse through sympathetic media figures, ensuring the persistence of their information operations. Source:  United24Media, '20 Years of RT: How Russia's Propaganda Hydra Survived the Ban' Available Online:   https://united24media.com/anti-fake/20-years-of-rt-how-russias-propaganda-hydra-survived-the-ban-13121 Top Of Page 5. Pro-Kremlin Channels Exploit Climate Change Discourse to Divide European Societies As Europe faces worsening climate disasters, pro-Kremlin channels are actively manipulating climate discourse to undermine trust in Western institutions. A report  from EU vs. Disinfo  explains that while Russia's official media acknowledges climate science, its broader information networks push climate denial and distort facts. For Kremlin propagandists, climate change is a strategic weapon used to divide societies and weaken democratic consensus. Their narratives often link EU green energy transitions and sanctions against Russia to "industrial decline," framing Europe's environmental efforts as economic self-destruction. 
These misleading claims are designed to erode public support for sanctions and renewable energy by exploiting legitimate economic fears. Through coordinated messaging, Moscow’s information operations also smear climate science as a "religion" and attack political leaders who address environmental realities. This strategy is part of a broader effort to portray Russia as a more responsible global actor than the EU while advancing its geopolitical goals. Source: EU vs. Disinfo, 'Sneaky heat: the Kremlin uses climate change to push its favourite FIMI narratives' Available Online: https://euvsdisinfo.eu/sneaky-heat-the-kremlin-uses-climate-change-to-push-its-favourite-fimi-narratives/ Top Of Page 6. The Valdai Discussion Club: Putin's Propaganda Soapbox An article in DisinfoWatch examines how Vladimir Putin used the 2025 Valdai Discussion Club forum to advance propagandistic narratives, portraying the West as culturally collapsing and Russia as a moral alternative. The speech highlighted specific claims, such as Europe fleeing “gender terrorism” and NATO hysterically militarizing, to reframe defense and diplomacy narratives. By weaponizing culture-war rhetoric, Moscow seeks to undermine Western unity and credibility, especially among vulnerable audiences. The use of the Kremlin-backed Valdai platform and state media like RT ensures these messages are amplified globally, contributing to Russia’s ongoing information warfare campaign. The Valdai Discussion Club, a Moscow-based think tank, has long served as a key venue where Putin and Kremlin officials outline Russia’s ideological and geopolitical positions to both domestic and international audiences, making it a central component in their strategic communications. Source: DisinfoWatch, 'DisinfoDigest: Decoding Putin's Valdai Speech' Available Online: https://disinfowatch.org/disinfodigest-decoding-putins-valdai-speech/ Top Of Page 7. 
Canadian Province's Request Sparks National Call for China Interference Inquiry Growing calls for a national inquiry into China's interference in Canada have followed Prince Edward Island (PEI) Premier Rob Lantz's request for a federal investigation into local groups allegedly linked to Beijing's United Front network. A report by The Hill Times notes that this appeal follows revelations from a recent book and a media investigation exposing how Chinese state-affiliated organizations may be influencing Canadian institutions. The report argues that only a full-scale, independent national inquiry, paired with a criminal investigation, can uncover the extent of these hostile influence campaigns. Despite repeated intelligence warnings about election interference, diaspora intimidation, and espionage, federal responses have been described as fragmented and politically cautious. PEI's call for an inquiry is being viewed as a national call to action, demonstrating that even provinces removed from the country's geopolitical epicenters are experiencing the effects of foreign influence. Source: The Hill Times Top Of Page 8. China Uses "Sharp Power" to Deepen Influence in Western Balkans As published by the Small Wars Journal, China is deepening its presence in the Western Balkans through a blend of defense cooperation, technological dependence, and information manipulation, an approach described as "sharp power." Unlike overt tactics, Beijing's influence relies on subtle yet pervasive methods, with Serbia becoming the focal point of its regional strategy. The country has welcomed Chinese weapon systems, joint military exercises, and advanced surveillance technology that embed long-term dependencies. 
Chinese state media and local affiliates amplify pro-Beijing narratives through content-sharing agreements and educational programs that promote authoritarian governance models. Through control of digital infrastructure and surveillance systems via companies like Huawei, China not only gains access to critical data but also reinforces its influence over local governments and media ecosystems. This networked approach combines information manipulation with economic leverage, making democratic institutions more vulnerable to external control. Source: Small Wars Journal, 'China's Rising Influence in the Western Balkans and How the West Should Respond' Available Online: https://smallwarsjournal.com/2025/11/05/chinas-rising-influence-in-the-western-balkans/ Top Of Page 9. Authoritarian Regimes Manipulate Context to Create Alternative Digital Realities An article by The Conversation explains how authoritarian regimes, particularly Russia and China, are perfecting a form of information operation that relies on manipulating context and selective truth rather than outright falsehoods. By amplifying strategically chosen facts while omitting others, these governments create a misleading "alternative reality" that portrays Western democracies as unstable and hypocritical. This strategy is executed through state-run media, influencer networks, and coordinated bot activity across social media platforms, ensuring that distorted narratives infiltrate the news feeds of both domestic and foreign audiences. The analysis argues that this is an adaptive, data-driven campaign designed to exploit the mechanics of modern social media algorithms. This form of narrative warfare reinforces cynicism and polarization, weakening trust in journalism, democratic governance, and the concept of a shared truth. 
The broader implication is the gradual normalization of authoritarian narratives within global discourse and the erosion of the common factual foundation necessary for democratic societies to function. Source: The Conversation, 'How authoritarian states sculpt a warped alternative reality in our news feeds' Available Online: https://theconversation.com/how-authoritarian-states-sculpt-a-warped-alternative-reality-in-our-news-feeds-266092 Top Of Page 10. EU and UK Rules Target Manipulated Content in Global Conflicts According to a policy brief by the European External Action Service (EEAS), manipulated information has become a key strategic weapon in modern conflicts, employed by state and non-state actors to disseminate propaganda and erode trust. Online platforms amplify these risks through algorithmic promotion of harmful content, while weak moderation in conflict zones allows hate speech and foreign information manipulation to proliferate. The brief highlights the European Union's Digital Services Act (DSA) and the United Kingdom's Online Safety Act (OSA) as emerging regulatory tools to counter these threats. These laws require platforms to assess and mitigate systemic risks, including those from hostile influence campaigns and foreign interference, through crisis response mechanisms and transparency requirements. By applying these frameworks with a conflict-sensitive approach, the EU and UK can strengthen information integrity, protect diaspora communities, and set global standards for platform accountability in times of conflict. 
Source: European External Action Service (EEAS), 'Assessing and Mitigating Conflict-Related Online Risks: Challenges for Governments, Regulators and Online Platforms' Available Online: https://www.isdglobal.org/isd-publications/assessing-and-mitigating-conflict-related-online-risks-challenges-for-governments-regulators-and-online-platforms/ Top Of Page [CRC Glossary] [Download Report] Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 27th October - 2nd November 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] 1. Russian Influence Operations Target Japan's Election and Poison AI Training Data 2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy 3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy 4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative 5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data 6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda 7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland 8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations 9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions [CRC Glossary] [Download Report] [ Report Highlights] An independent analysis found that nearly one-third of hostile online comments during Japan's election were linked to Russia-aligned accounts. - Nippon A coordinated smear campaign has falsely attributed 14 international properties to President Zelenskyy, including estates once owned by Nazi figures. - NewsGuard's Reality Check Moscow is exploiting climate issues as a front in its hybrid war, aiming to fracture EU consensus and protect its fossil fuel economy. - EUvsDisinfo Beijing is justifying its censorship of domestic grievances by framing online dissent as a form of "Western ideological infiltration." 
- Jamestown When asked about the war in Ukraine, major AI chatbots cited sanctioned Russian state media in nearly 18 percent of responses. - WIRED Despite moderation policies, top AI video tools are generating synthetic propaganda that included Holocaust denial and violent imagery. - Cybernews  Influence campaigns are "pivoting" entire networks of old accounts, turning COVID-era anti-vaccine profiles into tools for anti-Ukrainian messaging. - EU DisinfoLab As the Voice of America retreats from the global stage, Russian and Chinese state media are expanding operations to fill the information void. - GIOR [Weekly Review] 1. Russian Influence Operations Target Japan's Election and Poison AI Training Data An article from Nippon has revealed that Russian-linked information operations targeted Japan’s House of Councillors election by using networks of bots and trolls to sow doubt and amplify divisive narratives. The hostile influence campaign relied on a large number of small, inauthentic accounts to infiltrate online conversations and amplify pro-Kremlin messaging. An independent analysis by international affairs expert Ichihara Maiko identified that approximately 32 percent of hostile comments on a single post were connected to Russia-aligned accounts. Beyond direct engagement, the operation also sought to contaminate the information ecosystem by deliberately flooding the web with pro-Kremlin content to "groom" large language models. Japanese-language hubs, such as Pravda Nihon, republished hundreds of pro-Russia items daily. This strategy is designed to bias search results through query manipulation and pollute the training data used by AI chatbots, leading to the proliferation of AI slop. Independent audits confirmed that leading generative AI systems now return false or misleading information on controversial topics more frequently. 
Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions' Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html Top of Page 2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy Russian state media and affiliated online networks have fabricated a series of false claims as part of a smear campaign accusing Ukrainian President Volodymyr Zelenskyy of owning a real estate empire worth over $682 million. A NewsGuard Reality Check investigation revealed that the latest hoax alleged Zelenskyy purchased a $79 million ranch in Wyoming, a claim originating from a fake website mimicking a legitimate U.S. real estate firm. This marks the 14th property Russia has falsely attributed to the Ukrainian leader. The disinformation is disseminated using a network of fake websites, often featuring forged documents, before being amplified across major social media platforms like X, Facebook, and TikTok. Propagandists have even alleged that Zelenskyy’s supposed purchases included properties once owned by Nazi figures. Despite repeated debunking, the false narratives continue to circulate widely. Both NewsGuard and Forbes have confirmed that Zelenskyy's actual assets are valued under $20 million, finding no evidence of misused Western aid. Source: NewsGuard's Reality Check, ‘Russian Fake: Mapping Zelensky’s Made-Up $682 Million Real Estate Portfolio’ Available Online: https://www.newsguardrealitycheck.com/p/russian-fake-mapping-zelenskys-made Top of Page 3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy The Kremlin is conducting information operations that weaponize climate change narratives to advance its geopolitical goals and undermine European unity. 
An article by EUvsDisinfo explains that these campaigns are a component of Russia’s broader hybrid warfare strategy against the West. Pro-Kremlin media outlets deliberately frame the European Union's Green Deal as an elitist policy designed to harm key sectors like agriculture, using claims that it is "killing farmers." This tactic of cognitive warfare aims to erode public support for environmental initiatives, which are seen by Moscow as a direct threat to its fossil fuel-dependent economy and its ability to exert energy-related pressure on Europe. By portraying decarbonization as self-destructive, the disinformation seeks to fracture social consensus within the EU, protect Russian energy exports, and weaken the bloc's resolve on sanctions and energy independence. Source: EUvsDisinfo, 'Weaponising climate change to undermine the West' Available Online:   https://euvsdisinfo.eu/weaponising-climate-change-to-undermine-the-west/ Top of Page 4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative The Cyberspace Administration of China (CAC) has launched a new "clean and bright" campaign that redefines online criticism and social frustration as "negative energy" that endangers national security. An article published by The Jamestown Foundation's China Brief explains that the campaign targets posts discussing unemployment, gender inequality, and social anxiety, portraying them as products of "Western ideological infiltration." This strategy reframes censorship as a necessary defense against cognitive warfare, empowering regulators to erase narratives that challenge the Chinese Communist Party's (CCP) image of a harmonious society. This effort is a deepening of the CCP's comprehensive system of propaganda and ideological management, or Xuanchuan. Influencers discussing youth job struggles have already been banned, reflecting the government's push to enforce an "authorized reality." 
By linking social stability to "total national security," Beijing normalizes censorship as a security measure and may be creating an exportable model of digital authoritarianism for other governments seeking to justify repression. Source: The Jamestown Foundation, 'Beijing's War on "Negative Energy"' Available Online: https://jamestown.org/program/beijings-war-on-negative-energy/ Top of Page 5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data Generative AI systems are proving vulnerable to manipulation by Russian information warfare tactics, with leading chatbots frequently reproducing content from sanctioned state media. A study by the Institute for Strategic Dialogue (ISD), covered by WIRED, found that AI models exhibit a form of confirmation bias, delivering more pro-Kremlin content when users enter biased or manipulative prompts. This vulnerability is being actively exploited by Russian networks like the "Pravda" operation, which are deliberately working to "poison" the data that large language models (LLMs) are trained on. By flooding the information ecosystem with false narratives, these actors ensure their propaganda is ingested and later presented as factual by Western AI tools. The findings highlight a significant challenge for platform regulation, as the very architecture of current AI systems can be turned into a vector for disseminating state-sponsored disinformation. Source: WIRED, 'Chatbots Are Pushing Sanctioned Russian Propaganda' Available Online: https://www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/ Top of Page 6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda The proliferation of synthetic propaganda is being accelerated by the failure of leading AI video generators to block the creation of extremist and hateful content. 
A new study from the Anti-Defamation League (ADL) found that top platforms produced antisemitic content, including Holocaust denial and violent tropes, in at least 40% of test cases when prompted with hateful text. Cybernews reports that despite stated moderation policies, these systems consistently failed to filter out harmful narratives, demonstrating a significant vulnerability in their design. The ADL warns that this capability not only allows malicious actors to create high volumes of disinformation but also poses a direct threat to historical memory and online safety. The findings illustrate how AI tools, trained on vast and often unfiltered datasets from the internet, can become powerful engines for amplifying and normalizing extremist ideologies. Source: Cybernews, Anti-Defamation League (ADL), 'Popular AI video generators amplify antisemitic tropes' Available Online: https://cybernews.com/ai-news/ai-videos-antisemitism/ Top of Page 7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland A factsheet published by EU DisinfoLab details how Poland's information space has been shaped by recurring disinformation waves pushed by far-right activists, politicized media, and Russia-aligned networks. The hostile influence campaigns have focused on anti-immigrant, anti-vaccine, anti-Ukrainian, and culture-war themes. Common tactics include "narrative pivoting," where repurposed accounts from the COVID era were switched to anti-Ukrainian messaging, and the amplification of rumors through bot and troll activity. Fabricated materials, such as a forged ministry letter and an AI-generated video, have also been used to inflame grievances. These information operations aim to polarize society, degrade trust in institutions, and distort policy debates on migration, public health, and EU agreements. Russia's invasion of Ukraine served as an accelerant, with Kremlin-linked narratives exploiting economic strains and election cycles. 
The response remains fragmented, and the politicization of public broadcasting risks laundering these narratives into the mainstream, contributing to widespread information disorder. Source: EU DisinfoLab, 'The disinformation landscape in Poland' Available Online: https://www.disinfo.eu/publications/disinformation-landscape-in-poland/ Top of Page 8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations The scaling back of U.S. international broadcasting is weakening American soft power and ceding narrative control to adversarial states in the global information war. An article in the Global Influence Operations Report (GIOR) details how the operational reductions at Voice of America (VOA) are creating a strategic vacuum that is being actively filled by Russia’s RT/Sputnik apparatus and the China Media Group. These state-sponsored actors are expanding their own information operations into regions where VOA was once a primary source of independent news. This shift represents a significant setback for U.S. strategic communications, as it removes a credible voice from contested information ecosystems and emboldens authoritarian regimes. By relinquishing its role in these environments, the U.S. allows hostile actors to more easily shape perceptions and advance their geopolitical objectives without counterbalance. Source: GIOR, 'Voice of America Shutdown Benefits Russia, China: GOP Warns - Global Influence Operations Report' Available Online:   https://www.global-influence-ops.com/voice-of-america-shutdown-benefits-russia-china-gop-warns/ Top of Page 9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions Blockchain analysis has revealed a coordinated effort to manipulate political prediction markets, representing a novel vector for platform-enabled foreign interference. 
According to a report in The New York Post, investigators found that a small number of digital wallets, funded overwhelmingly from offshore exchanges in China and the Middle East, were responsible for a disproportionate volume of bets on a New York City mayoral candidate. This activity, which appears automated and is not financially rational, constitutes a form of digital astroturfing designed to artificially inflate the candidate's perceived support. Experts warn that because media outlets and campaigns often cite these markets as indicators of public sentiment, such manipulation can distort the political narrative and potentially discourage voter turnout by creating a false sense of inevitability. The incident raises serious questions about the integrity of data from unregulated financial platforms in an electoral context.

Source: The New York Post, 'Foreign betting markets could influence NYC election — as data shows pro-Mamdani bets from China, Middle East skewing odds', Available Online: https://nypost.com/2025/10/28/business/pro-mamdani-bets-from-china-middle-east-skewing-market-odds-experts/

[CRC Glossary]

The modern Information Environment is projected to grow only more complex and sophisticated. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus.
We encourage you to engage with this initiative, and we welcome contributions via the CRC website.

[Download Report]
