
  • Cyber-based hostile influence campaigns 9th - 15th February 2026

[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]

[Introduction]
[Report Highlights]
[Social Media Platforms]
Facebook
Meta's Community Notes System Showed Limits in Countering Misleading Content
X
X Users Asked Grok to Unblur Photos of Children from Epstein Files
Grok Raised Regulatory Concerns About Misinformation and Platform Risk
Iranian Officials Received Premium Services on X During Protest Crackdown
[State Actors]
Russia
The Russian Federation Used Video Games to Spread State Propaganda
Russian Influence Operation Targeted Ukrainians During the Winter Olympics
CheckFirst Unveiled GRU's Information Operations Troops with OSINT and Phaleristics
France's Municipal Elections and the Risk of Russian Influence
Disinformation Continued After the New START Treaty Expired
Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors
[General Reports]
False Allegations Labeled Epstein an Israeli Spy
Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy
Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes
India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation
Foreign Hybrid Campaigns Intensify Across the European Union
TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar
G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index
Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland
[Appendix - Frameworks to Counter Disinformation]
Design Thinking Workshop Informed New Approaches to Counter Disinformation and FIMI
EU Launches Initiative to Counter Online Polarization and Disinformation
[CRC Glossary]

[Report Highlights]

An article by Bellingcat reports that, after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein, users on X began asking Grok to “unblur” images in which the faces of children and women had been redacted for privacy.

As published by RAND, controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks.

NewsGuard’s Reality Check reports that a Kremlin-linked influence operation known as Matryoshka has circulated at least 28 fabricated news reports targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy.

A report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence concludes that coordinated, state-aligned Russian actors conducted structured, cross-platform disinformation campaigns that employed forged content, narrative laundering, and synchronized amplification to undermine Ukraine and advance the Kremlin's geopolitical objectives.
The latest Institute for Strategic Dialogue (ISD) publication finds that Russia, alongside China and Iran, is conducting coordinated hybrid campaigns across the EU that combine disinformation, cyberattacks, political subversion, and kinetic actions to undermine democratic institutions, inflame societal divisions, and weaken European unity.

The 2026 Munich Security Index, as covered by Infosecurity Magazine, shows that G7 nations rank cyberattacks as their top security threat for the second consecutive year, with disinformation campaigns also identified as a leading concern, underscoring heightened awareness of digital and information-based hostile influence risks.

[Social Media Platforms]

Facebook

Meta's Community Notes System Showed Limits in Countering Misleading Content

An article by Politico reports that, in the wake of the fatal shootings of Alex Pretti and Nicole Renee Good in Minneapolis, Meta’s new volunteer-driven fact-checking system faced a major test as misleading and AI-generated posts flooded Facebook, Instagram, and Threads. The company adopted this model after its CEO, Mark Zuckerberg, ended partnerships with professional U.S. fact-checkers, citing perceived political bias. Inspired by X’s Community Notes system, Meta decided to rely on volunteers to propose and rate notes that add context to questionable posts. Only notes that receive broad support across differing viewpoints are published, meaning users typically see comments that reflect cross-partisan agreement.

However, behind the scenes, the process is often marked by partisan conflict. A review of Meta’s closed rollout found that many proposed notes were themselves politically charged, echoing concerns raised by professional fact-checkers that volunteers may be motivated by ideology rather than neutrality. While overtly one-sided notes are usually filtered out by low ratings, critics question whether the system can effectively counter misinformation, especially on platforms such as Facebook, where content spreads within private networks rather than in high-visibility public debates. Meta said the program is expanding rapidly, with tens of thousands of contributors and thousands of published notes. Yet only a small fraction of proposed notes ultimately appear publicly.

Source: Politico, A. Ng, Meta’s answer to ‘politically biased’ fact-checkers: Let them fight. [online] Published 10 February 2026. Available at: https://www.politico.com/newsletters/digital-future-daily/2026/02/10/metas-answer-to-politically-biased-fact-checkers-let-them-fight-00773633

X

X Users Asked Grok to Unblur Photos of Children from Epstein Files

In coverage by Bellingcat, users on X were reported to have asked Grok to “unblur” images after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein, in which the faces of children and women had been redacted for privacy. The documents had already been criticized by survivors, 18 of whom issued a joint statement condemning the release for exposing identifying details. Thousands of documents were later removed due to flawed redactions that reportedly compromised the identities of nearly 100 survivors. Between the 30th of January and the 5th of February 2026, researchers reviewed 31 requests in which Grok was asked to remove redactions or identify individuals in protected images.
Although Grok often acknowledged that minors’ faces were blurred for legal and ethical reasons, it still generated AI images in response to most requests, producing fabricated faces that ranged from realistic to clearly distorted. Some of these posts attracted millions of views, amplified by X’s monetization system. Following media inquiries, stricter guardrails appeared to have been implemented: in subsequent days, Grok largely refused such requests or instead generated unrelated images. X did not respond to requests for comment. Grok has also been used to create manipulated images tied to other incidents, sometimes falsely identifying individuals. Investigations have been launched in countries including the UK and France over the chatbot’s role in generating non-consensual sexual deepfakes, including child abuse imagery.

Source: Bellingcat, K. Koltai, Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children. [online] Published 10 February 2026. Available at: https://www.bellingcat.com/news/2026/02/10/grok-epstein-photos/

Grok Raised Regulatory Concerns About Misinformation and Platform Risk

As published by RAND, controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks. The case emerged at a critical moment, as UK and EU digital regulations shifted from voluntary compliance to active enforcement. Regulators increasingly rejected claims that harmful outputs were isolated glitches, focusing instead on whether platforms had proactively identified and mitigated foreseeable systemic risks, including misinformation, abuse, and radicalization.

Generative AI poses distinct misinformation risks because it engages users in personalized, iterative conversations that, over time, can reinforce false narratives or extremist views. Unlike standalone AI tools, Grok operates within X, where its conversational responses can be rapidly amplified and repurposed. The interactions occur privately and cumulatively, making harm less visible, harder to evidence, and more difficult to audit using tools designed for posts, shares, or recommendations. The report concluded that Grok raises fundamental questions about whether platforms can credibly self-govern where systemic risks are foreseeable, and whether governments can meaningfully enforce laws designed to protect users, democratic processes, and the integrity of the information environment in a fragmented, cross-border digital ecosystem.

Source: RAND, D. Toro, Grok Isn’t a Glitch—It Is a Regulatory Reckoning. [online] Published 9 February 2026. Available at: https://www.rand.org/pubs/commentary/2026/02/grok-isnt-a-glitch-it-is-a-regulatory-reckoning.html

Iranian Officials Received Premium Services on X During Protest Crackdown

The Tech Transparency Project reports that X provided paid premium subscriptions to dozens of Iranian officials, state agencies, and state-controlled media outlets, including individuals subject to U.S. sanctions. These premium accounts, marked with blue checkmarks, offered enhanced features, including increased visibility, longer posts, and monetization tools. Many of these accounts were used to spread government messaging and propaganda during nationwide protests and internet blackouts in Iran, including threats of harsher repression against demonstrators.
The findings raised concerns that X may have violated US sanctions, which allow free access to communication platforms but prohibit charging sanctioned entities for additional services. Some premium badges were removed after media inquiries, but others remained active. The report also noted that Elon Musk appeared to support the recent anti-government protests in Iran, taunting Khamenei on X, changing the Iranian flag emoji on X to the pre-revolutionary version, and allowing Iranians to access his Starlink satellite service for free amid an internet blackout in the country.

Source: Tech Transparency Project, Iranian Officials Get Premium Treatment on X Amid Protester Crackdown. [online] Published 12 February 2026. Available at: https://www.techtransparencyproject.org/articles/iranian-officials-get-premium-treatment-on-x-amid-protester-crackdown

[State Actors]

Russia

The Russian Federation Used Video Games to Spread State Propaganda

A report by EUvsDisinfo revealed that the Russian Federation has recognized the potential of video games as a tool of ideological influence and used them to spread propaganda. Unlike passive media, games immerse players in interactive worlds in which ideological messages are embedded in narratives and mechanics. With 3.3 billion players globally, many aged 18–34, games can shape perceptions through community dynamics, repetition of symbols, and the illusion of personal choice. The Kremlin has systematically integrated the gaming industry into its state information strategy under a model of “digital authoritarianism.” Through state funding, censorship, and the development of domestic platforms and engines, Russian authorities have promoted “patriotic” content aligned with official narratives. Organizations such as the Institute for Internet Development provide substantial subsidies to ideologically aligned projects, while youth movements such as “Yunarmiya” incorporate military-themed games into their activities. Some titles, reportedly recommended for cadets, recreate battles in Ukraine with overt pro-Kremlin symbolism, presenting war as heroic and justified. Gaming platforms such as Steam further enable the spread of Z-propaganda and extremist content within echo chambers, normalizing militarism and dehumanizing narratives under the guise of humor or community culture.

Source: EUvsDisinfo, A. Vyshniakova, New weapon in the shadows: how the Kremlin uses video games for war propaganda. [online] Published 11 February 2026. Available at: https://euvsdisinfo.eu/new-weapon-in-the-shadows-how-the-kremlin-uses-video-games-for-war-propaganda/

Russian Influence Operation Targeted Ukrainians During the Winter Olympics

Findings by NewsGuard’s Reality Check show that a Kremlin-linked influence operation known as Matryoshka circulated at least 28 fabricated news reports between the 30th of January and the 10th of February 2026, targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy. The fake materials, shared mainly on Telegram and viewed approximately two million times, mimicked respected outlets such as Reuters, Euronews, CBC, and Italian institutions. Among the allegations were claims that Ukrainian staff spat in food, that anti-doping rules were relaxed for Ukrainian athletes, and that Ukrainians were segregated in the Olympic Village due to aggressive behavior—assertions denied by official sources.
Other viral claims accused Ukrainians of vandalizing Rome’s Colosseum, staging obscene protests, creating fraudulent hotel booking websites, or participating in the Olympics to avoid military service. As the Games approached, the disinformation gained broader traction, spreading beyond pro-Kremlin Telegram channels to multilingual websites in the Pravda network and even being repeated by mainstream Russian media.

Source: NewsGuard, A. Lee, Russia Targets Ukraine at the Winter Olympics with Flood of Fabricated News Reports. [online] Published 10 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russias-olympian-disinformation-game

CheckFirst Unveiled GRU's Information Operations Troops with OSINT and Phaleristics

Building on a previous investigation into the FSB’s 16th Centre, CheckFirst's report applied OSINT and phaleristics, the study of military decorations, to uncover the structure of the GRU’s Information Operations Troops (VIO). By analyzing 118 photographs of insignia, patches, and pennants linked to the GRU, researchers reconstructed key elements of the VIO’s organization, chain of command, and likely geographic footprint. Although the VIO has been publicly acknowledged since 2014, its internal structure and missions have remained opaque.

The study suggested that the VIO was formalized around 2014, likely under Military Unit 55111, consolidating pre-existing GRU units into a unified command. The VIO comprises at least fifteen units spanning three core specializations: encryption and cryptanalysis, computer network operations, and psychological operations. Some units, such as 26165 and 74455, previously linked to the APT28 and Sandworm cyber groups, are well documented, while others, including Units 20766, 48707, and 20978, have received little or no prior public attention. While identifying operational locations proved challenging, the researchers triangulated data from investigative journalism, leaked materials, Russian public records, and social media platforms to assess probable sites. The report concluded that combining phaleristics with OSINT offers a replicable and effective method for illuminating clandestine Russian intelligence structures, helping to clarify the scale and evolution of the GRU’s information and cyber warfare capabilities amid ongoing conflict with Ukraine and increased operations targeting Europe.

Sources:
CheckFirst, Unveiling FSB’s 16th Center SIGINT Capabilities. [online] Published July 2025. Available at: https://checkfirst.network/wp-content/uploads/2025/07/OSINT_Phaleristics_Unveiling_FSB_16th_Center_SIGINT_Capabilities.pdf
CheckFirst, OSINT & Phaleristics: Unveiling GRU’s Information Operations Troops (VIO). [online] Published February 2026. Available at: https://checkfirst.network/2026/02/OSINT_Phaleristics_Unveiling_GRUs_Information_Operations_Troops.pdf

France's Municipal Elections and the Risk of Russian Influence

An article published by Le Monde reports that a network of more than 80 fake French-language news websites, identified by Reporters Without Borders and reportedly linked to the Russian group Storm-1516, has raised concerns about potential foreign interference ahead of France’s March 2026 municipal elections. These sites imitate local media and promote pro-Russian narratives. Similar tactics have been observed in countries such as Moldova, where pro-Russian disinformation targeted legislative elections, though pro-European candidates ultimately prevailed.
The political choices of major metropolitan areas are of interest to foreign powers, sometimes for primarily economic reasons. However, these elections are also particularly difficult for foreign actors to grasp. Russia’s past information campaigns in France have focused more on national themes, such as criticizing French leadership or promoting narratives of Western decline, rather than attempting to shape specific local races, including high-profile contests in Paris involving allies and opponents of Emmanuel Macron. In practice, most manipulation in municipal elections has come from domestic actors rather than foreign states. Cases in multiple countries have involved candidates spreading false information, using doctored media, or engaging in questionable campaign funding.

Source: Le Monde, D. Leloup, France's municipal elections at risk of foreign interference. [online] Published 10 February 2026. Available at: https://www.lemonde.fr/en/opinion/article/2026/02/10/france-s-municipal-elections-at-risk-of-foreign-interference_6750325_23.html

Disinformation Continued After the New START Treaty Expired

EUvsDisinfo reports that the New START Treaty expired on 6 February 2026, bringing to an end the last remaining nuclear arms control agreement between Russia and the United States. In response, pro-Kremlin information campaigns sought to deflect blame from Moscow by portraying Russia as a responsible actor whose extension efforts were ignored by Washington, while warning of nuclear catastrophe and promoting the idea of a new multipolar nuclear order. These narratives aimed to frame the treaty’s collapse as inevitable and as the result of Western actions, despite the Kremlin’s own role in weakening the agreement over several years. The treaty, which entered into force in 2011, limited strategic nuclear arsenals and included transparency measures such as on-site inspections. However, Russia suspended inspections in 2022, citing Western sanctions imposed after it invaded Ukraine, and formally halted its participation in 2023. At the same time, pro-Kremlin outlets intensified messaging accusing Western countries and NATO of provoking nuclear confrontation, spreading false claims about Ukrainian nuclear threats, biological weapons, and Western sabotage.

Source: EUvsDisinfo, As New START ends, disinformation about it continues. [online] Published 13 February 2026. Available at: https://euvsdisinfo.eu/as-new-start-ends-disinformation-about-it-continues/

Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors

The joint report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence develops and tests a structured Information Influence Attribution Framework (IIAF) to assess Russian information influence operations targeting Ukraine and neighbouring regions. Focusing on campaigns linked to Russian state and state-aligned actors, including RT, Sputnik, pro-Kremlin Telegram networks, and proxy outlets, the report documents coordinated disinformation efforts built on falsified content, impersonation of trusted media brands, source laundering, and synchronized cross-platform amplification. Case studies demonstrate how Russian actors deploy forged documents, deceptively edited images and videos, bot networks, and multilingual content tailored to specific audiences to advance narratives portraying Ukraine as corrupt, unstable, or manipulated by the West.
The analysis shows that these campaigns rely on layered tactics: technical infrastructure reuse (shared domains, hosting, analytics IDs), behavioural coordination (near-simultaneous reposting, template-based comment flooding, and tasking via channels such as the “Digital Army of Russia”), and contextual alignment with Kremlin geopolitical objectives. Disinformation narratives are strategically timed to exploit political events, social tensions, and policy debates, while narrative laundering techniques obscure their origins by staging dissemination across both inauthentic and overtly state-linked media ecosystems. Applying a spectrum of state responsibility, the report concludes that several examined operations fall between “state-shaped” and “state-coordinated,” reflecting high-confidence assessments of structured, state-aligned influence activity rather than organic or rogue behaviour.

Source: The Ukrainian Centre for Strategic Communications & NATO Strategic Communications Centre of Excellence, V. Smith, J. Pamment, S. Dikhtiarenko, B. Heap, D. Tsurtsumia & A. Maunder, Attributing Russian Information Influence Operations. [online] Published January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Attribution_Russian_Information_Influence.pdf?zoom=page-fit

[General Reports]

False Allegations Labeled Epstein an Israeli Spy

In its latest debunking, NewsGuard’s Reality Check examines false claims that emerged after the January 2026 release of millions of Justice Department documents related to Jeffrey Epstein, alleging that the files proved he was an Israeli spy. The allegation centered on an October 2020 FBI document included in the release, which summarized statements from an unnamed source who described Epstein as a “co-opted Mossad agent.” Anti-Israel accounts on X amplified screenshots of the document, asserting it confirmed that Epstein had trained as a spy under former Israeli Prime Minister Ehud Barak. Posts spreading the claim gained millions of views and tens of thousands of engagements.

However, the document in question is an FD-1023 form, which records raw, unverified information from confidential sources and does not constitute confirmed findings or investigative conclusions. The FBI has explicitly stated that such forms do not validate or establish the credibility of the claims they contain. No credible evidence has emerged showing that Epstein was affiliated with Israeli intelligence. In response to similar allegations made previously, former Israeli Prime Minister Naftali Bennett publicly denied that Epstein had ever worked for the Mossad.

Source: NewsGuard, J. Schofield, Debunk: Unverified Allegations Used to Label Epstein an Israeli Spy. [online] Published 9 February 2026. Available at: https://www.newsguardrealitycheck.com/p/was-epstein-an-israeli-spy-newsguards

Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy

Following the Justice Department’s January 2026 release of documents related to Jeffrey Epstein, NewsGuard’s Reality Check reports that the long-debunked “Pizzagate” conspiracy theory, falsely claiming that top Democrats ran a child sex-trafficking ring out of Comet Ping Pong, has resurfaced. Online users cited hundreds of references to the word “pizza” in the files, alleging that it was coded language for abuse.
Social media posts promoting the claim surged dramatically, and the commentator Tucker Carlson amplified the theory in a YouTube episode, suggesting it might be real despite having previously rejected it. However, a review of the documents found no evidence supporting the conspiracy. References to “pizza” were largely mundane, such as casual emails discussing restaurants or kitchen equipment. The pizzeria and its owner were mentioned only briefly, including in an article about the conspiracy itself and an unverified tip. Authorities have repeatedly debunked allegations that figures such as Hillary Clinton were involved. Although “pizza” appears in 842 documents, the files contain more references to other foods, such as “salad” and “soup.” The conspiracy has previously had dangerous real-world consequences, including a 2016 incident in which an armed man entered the restaurant believing he was rescuing children.

Source: NewsGuard, M. Wollen & S. Howard, Epstein Document Dump Breathes New Life into Long-Debunked Pizzagate Conspiracy — Pushed Now By Tucker Carlson. [online] Published 11 February 2026. Available at: https://www.newsguardrealitycheck.com/p/epstein-files-revive-pizzagate-with

Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes

An article by The Jamestown Foundation reports that, following alleged Pakistani airstrikes near Kabul in October 2025 targeting leaders of Tehreek-e-Taliban Pakistan (TTP), the Taliban launched a coordinated digital campaign to counter Pakistan’s accusations and reshape regional narratives. Using social media, affiliated news outlets, militant songs, and propaganda videos, the Taliban sought to portray Pakistan as the aggressor while denying claims that it sheltered militant groups. A widely circulated video featuring TTP leader Mufti Noor Wali Mehsud claimed he was not in Afghanistan, aiming to undermine Pakistan’s justification for the strikes and weaken its credibility. The campaign also promoted territorial narratives rejecting the Durand Line border and circulated maps depicting parts of Pakistan as belonging to a so-called Greater Afghanistan, reinforcing nationalist and revisionist claims.

Source: The Jamestown Foundation, R. Nasar, The Afghan Taliban’s ‘Digital War’ Against Pakistan. [online] Published 12 February 2026. Available at: https://jamestown.org/the-afghan-talibans-digital-war-against-pakistan//

India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation

According to a TechXplore article, India has amended its 2021 information technology regulations to tighten oversight of artificial intelligence-generated content, requiring social media platforms to clearly and permanently label synthetic media and comply with government takedown requests within three hours, replacing the previous 36-hour window. The updated rules, effective February 20, apply broadly to content “created, generated, modified, or altered” through computer resources, excluding routine or good-faith editing. Synthetic content is defined as material that appears real or is likely to be perceived as indistinguishable from an actual person or event; the rules directly target the growing spread of AI-generated misinformation among India’s more than one billion internet users.
The regulations mandate that platforms such as X, Instagram, and Facebook deploy automated tools to prevent the dissemination of illegal content, including false documents and sexually abusive material, and ensure that AI-manipulated content carries indelible markings that cannot be removed or suppressed. The government has also operationalized enforcement through the Sahyog portal, which automates and streamlines the submission of official takedown notices. The announcement precedes a major global AI summit in New Delhi, underscoring India’s intent to assert regulatory control over synthetic media and to hold platforms accountable amid rapid AI expansion and rising disinformation risks.

Source: TechXplore, A. Zinin, India ramps up AI rules for social media platforms. [online] Published 10 February 2026. Available at: https://techxplore.com/news/2026-02-india-ramps-ai-social-media.html

Foreign Hybrid Campaigns Intensify Across the European Union

The latest Institute for Strategic Dialogue (ISD) publication documents a sustained campaign of hybrid activity targeting all 27 EU member states since Russia’s full-scale invasion of Ukraine in February 2022. Russia is identified as the primary threat actor in most cases, with China and Iran also implicated. The report categorizes hybrid threats into information operations, cyber operations, kinetic actions, political and civil society subversion, and malign finance. Across the cases, adversarial states deploy coordinated disinformation campaigns, inauthentic news sites, bot amplification networks, deceptive branding, and impersonation tactics to manipulate public opinion, inflame ethnic and political tensions, interfere in elections, and erode trust in democratic institutions. These efforts are often synchronized with cyberattacks, espionage, and covert financing of political actors to maximize destabilizing impact.

The analysis highlights a notable escalation in kinetic and disruptive tactics accompanying information manipulation, including sabotage of critical infrastructure, arson, drone incursions, assassination plots, and the recruitment of local proxies through encrypted platforms. Many incidents combine physical acts, such as vandalism or migrant weaponization, with coordinated narrative amplification through state media, Telegram channels, and bot networks to magnify psychological and political effects. The report concludes that hybrid operations increasingly blur the line between foreign and domestic actors, leveraging criminal networks, diaspora communities, and local intermediaries to obscure attribution while advancing strategic goals of undermining European unity, weakening institutional trust, and challenging liberal democratic governance.

Source: Institute for Strategic Dialogue, K. Sikora, L. Savoia & B. Schafer, Europe’s Other Battlefields: Foreign Hybrid Threats in the EU. [online] Published 12 February 2026. Available at: https://www.isdglobal.org/publication/europes-other-battlefields-foreign-hybrid-threats-in-the-eu/

TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar

A comparative study by Doublethink Lab examines TikTok’s impact on adolescents in Taiwan and Myanmar, highlighting its role as both a vector for disinformation and a platform vulnerable to algorithmically amplified influence operations.
In Myanmar, following the 2021 military coup and restrictions on Facebook and VPNs, propaganda and disinformation campaigns shifted toward Telegram and TikTok, which were perceived as having weaker moderation. Youth reported exposure to political propaganda, hate speech, and misinformation, often responding by skipping or reporting content but facing structural challenges in verification. In Taiwan, concerns extend beyond harmful content to the platform’s potential role in amplifying narratives aligned with the People’s Republic of China (PRC). During the 2024 presidential election, adolescents reported increased exposure to short, decontextualized political videos that criticized Taiwan’s democratic institutions and subtly promoted favorable perceptions of PRC governance. A 2025 nationwide study cited in the report found that active TikTok users showed higher agreement with conspiracy-like and PRC-aligned narratives, suggesting a measurable relationship between platform engagement and increased populist skepticism toward democratic institutions.

The report highlights TikTok’s highly personalized recommendation algorithm as a central driver of user exposure. Adolescents in both contexts described the platform as “precise” and immersive, with rapid adaptation to user behavior reinforcing content themes and potentially narrowing informational diversity. In Taiwan, researchers documented algorithmic promotion of borderline sexually suggestive content involving minors, alongside accounts linked to potential exploitation and scam networks, while in Myanmar, the primary risks centered on political manipulation and harassment. Across both contexts, the study identifies algorithmic amplification, subtle embedding of political messaging within entertainment content, and the use of emotionally charged, fragmented narratives as key tactics. The findings underscore how platform design, limited transparency, and weak regulatory oversight can facilitate information manipulation, normalize harmful behaviors, and erode digital literacy, particularly among adolescents in politically sensitive environments.

Source: Doublethink Lab, E. Hsu, Iris & P. S. S. Thant, Algorithms, Digital literacy, and Social influence: TikTok use among youths of Taiwan and Myanmar. [online] Published 13 February 2026. Available at: https://medium.com/doublethinklab/algorithms-digital-literacy-and-social-influence-tiktok-use-among-youths-of-taiwan-and-myanmar-5ac4ee947b8c

G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index

As reported by Infosecurity Magazine, the Munich Security Index (MSI) 2026, released at the Munich Security Conference, identifies cyberattacks as the top perceived national security risk among G7 countries for the second consecutive year, followed by economic or financial crises and disinformation campaigns from adversaries. The ranking reflects a sustained rise of cyber threats in G7 risk assessments, with concern particularly pronounced in Germany, the UK, and Japan. Disinformation campaigns are widely recognized as a major hostile tactic and a core security concern alongside cyber operations, indicating growing awareness of information manipulation as a strategic threat vector. By contrast, among Brazil, India, China, and South Africa (BICS), cyber threats have declined in perceived urgency, while disinformation ranks seventh, suggesting differing threat prioritization across geopolitical blocs.
The report is based on a November 2025 survey of more than 11,000 respondents across G7 and BICS countries, employing representative sampling and multiple evaluative criteria to assess perceived risk severity and imminence. While most countries reported declining threat perceptions overall, concerns have intensified in the UK, the US, and India, particularly regarding economic instability, democratic fragility, civil unrest, and political violence, factors that can intersect with cyber and disinformation campaigns in destabilizing democratic systems. The authors caution that polling in autocratic contexts, particularly in China, may limit respondents’ willingness to express views freely, highlighting methodological constraints in assessing perceptions of information and cyber threats across political systems.

Source: Infosecurity Magazine, K. Poireault, Munich Security Conference: Cyber Threats Lead G7 Risk Index, Disinformation Ranks Third. [online] Published 13 February 2026. Available at: https://www.infosecurity-magazine.com/news/munich-security-index-cyberattacks/

Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland

An investigation by the DFRLab found that citizen patrols organized along the Polish-German border in 2025 were accompanied by a coordinated online disinformation campaign that amplified anti-migration rhetoric, legitimized vigilante activity, and portrayed the Polish government as weak. Inauthentic Facebook pages, AI-generated images, recycled protest photos, and copy-paste narratives were used to stoke fear and frame migrants as a violent “invasion.” Right-wing actors, including members of the Confederation Party and Law and Justice (PiS), boosted this content, at times sharing AI-manipulated visuals and unverified claims. Political posts amplified incidents, such as a soldier’s injury later clarified as accidental, to fuel calls for violence against migrants and accusations that border officials were “traitors.”

The investigation identified coordinated networks of Facebook pages posting identical content within seconds or minutes of one another, often with large follower counts but minimal engagement, suggesting likely inauthentic amplification (a minimal sketch of this timing signal appears in the technical note at the end of this report). The PATRIOCI Facebook group emerged as a key amplification node, administered by pages impersonating media outlets and linked to previously dormant or disinformation-associated entities, including accounts that had shared content from Russian state-controlled RT. These networks promoted articles from outlets such as Tysol[.]pl in synchronized bursts and targeted Polish border guards with harassment and doxxing-style identification efforts. AI-generated banners depicting aggressive “foreign” figures and violent imagery were used to mobilize offline protests, while the absence of platform labeling for synthetic media enabled emotionally manipulative visuals to circulate unchecked.

The report concludes that coordinated digital manipulation, leveraging algorithmic amplification, opaque platform governance, and political endorsement, distorted Poland’s migration debate, normalized extremist rhetoric, and blurred the line between grassroots activism and orchestrated influence operations.

Source: DFRLab, G. Gigitashvili & S. Gelava, Inauthentic networks fuel Poland’s border vigilante movement. [online] Published 11 February 2026.
Available at: https://dfrlab.org/2026/02/11/inauthentic-networks-fuel-polands-border-vigilante-movement/

[Appendix - Frameworks to Counter Disinformation]

Design Thinking Workshop Informed New Approaches to Counter Disinformation and FIMI

Based on a design thinking workshop, a paper published by Doublethink Lab argued that participatory design methods can play a constructive, bounded role in counter-FIMI policy development. The workshop, which involved civil society, defense practitioners, and journalists, noted that traditional top-down, reactive policies often fail to keep pace with fast-moving influence operations. Instead, participatory and human-centered approaches can help policymakers better understand the emotional, institutional, and operational pressures caused by FIMI, including declining trust, decision-making under uncertainty, and professional burnout. The workshop reframed FIMI not only as a problem of false content but also as a broader human and systems problem. Several tensions, such as speed versus due process, control versus trust, and short-term response versus long-term sustainability, appeared across all groups.

Based on these insights, the paper proposed exploratory policy recommendations using a four-part framework: Education, Engineering, Enforcement, and Ethics (4E). These include improving media and crisis literacy, investing in shared verification tools and secure communication systems, clarifying institutional roles, strengthening legal protections, and ensuring responses respect rights and wellbeing. The paper concluded that design thinking can play a valuable but limited role in counter-FIMI policymaking. While it does not measure effectiveness or replace legal and evidence-based analysis, it helps policymakers surface hidden assumptions, understand trade-offs, and design more adaptable and human-centered responses. The recommendations were presented as starting points that require further testing, legal review, and empirical validation.

Source: Doublethink Lab, D. V. Ligot, Crafting Policy Recommendations for Countering Disinformation and FIMI. [online] Published 11 February 2026. Available at: https://medium.com/doublethinklab/crafting-policy-recommendations-for-countering-disinformation-and-fimi-952825b9aa2b

EU Launches Initiative to Counter Online Polarization and Disinformation

The European Commission has launched a €785,000 pilot call for proposals to strengthen social cohesion and improve understanding of online political polarisation, recognizing the growing threat that digitally amplified division poses to democratic resilience in the European Union. The initiative highlights how algorithm-driven platforms, while central to civic engagement, can also amplify extreme, divisive, or manipulative content, creating conditions in which coordinated malicious influence operations and disinformation campaigns can thrive. Such dynamics risk eroding trust in democratic institutions and distorting public understanding of societal issues. The project seeks to build cross-sector and cross-border partnerships to design, test, and scale mitigation strategies that counter harmful polarization without restricting freedom of expression. Proposed activities may include multilingual data collection; the development of AI- and IT-based tools; practitioner toolkits; policy and technical interventions; and the establishment of durable stakeholder networks.
By engaging civil society, researchers, media actors, and technology companies, the initiative aims to produce actionable responses that align with the European Democracy Shield and reinforce long-term democratic resilience while safeguarding open and pluralistic debate.

Source: European Commission, New call for proposals to advance social cohesion amid online polarisation. [online] Published 12 February 2026. Available at: https://digital-strategy.ec.europa.eu/en/news/new-call-proposals-advance-social-cohesion-face-online-polarisation

[CRC Glossary]

The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
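[Technical Note - Surfacing Near-Simultaneous Reposting]

Two items in this report describe the same coordination signal: the IIAF's "behavioural coordination (near-simultaneous reposting)" and the DFRLab's networks of Facebook pages "posting identical content within seconds or minutes of one another." As a minimal illustration of how such a timing signal can be surfaced, the Python sketch below groups posts by identical text and flags any text published by several distinct pages within a short window. The input records, the 60-second threshold, and the exact-match criterion are assumptions made for this example, not the methodology of either report.

from collections import defaultdict
from datetime import datetime

# Hypothetical input records: (page, ISO timestamp, post text).
# Real data would come from platform exports or monitoring tools.
posts = [
    ("PageA", "2026-02-10T12:00:05", "Migrants storm the border!"),
    ("PageB", "2026-02-10T12:00:41", "Migrants storm the border!"),
    ("PageC", "2026-02-10T12:00:52", "Migrants storm the border!"),
    ("PageD", "2026-02-11T09:15:00", "Local weather update"),
]

WINDOW_SECONDS = 60  # assumed threshold for "near-simultaneous"

def coordination_clusters(posts, window=WINDOW_SECONDS):
    # Group posts by identical text, then flag any text published by
    # two or more distinct pages within `window` seconds of each other.
    by_text = defaultdict(list)
    for page, ts, text in posts:
        by_text[text].append((datetime.fromisoformat(ts), page))
    flagged = []
    for text, items in by_text.items():
        items.sort()
        pages = {page for _, page in items}
        spread = (items[-1][0] - items[0][0]).total_seconds()
        if len(pages) >= 2 and spread <= window:
            flagged.append((text, sorted(pages), spread))
    return flagged

for text, pages, spread in coordination_clusters(posts):
    print(f"{pages} posted identical text within {spread:.0f}s: {text!r}")

A timing cluster alone is weak evidence. The reports cited above treat it as one indicator among several, alongside shared analytics IDs, domain reuse, and follower-to-engagement anomalies, before attributing coordination.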

  • Cyber-based hostile influence campaigns 2nd February - 8th February 2026

[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]

[Introduction]
[Report Highlights]
[State Actors]
Russia
Pro-Kremlin Network Promoted Alberta Separatism to U.S. Audiences
Italy Accused Russia-Linked Hackers of Cyberattacks Ahead of 2026 Winter Olympics
Russia Intensified Disinformation as the EU Tightened Scrutiny of Its Shadow Fleet
[General Reports]
France Caught Between Russian and U.S.-Linked Disinformation Campaigns
Nearly Half of Americans Believed Top False Claims in 2025
Social Media Users Circulated Fake Images Linking Politicians to Epstein Files
Pro-IS Propaganda Circulated Online Following Syrian Detention Camp Takeover
Digital Risks for the 2026 Winter Olympics
Gabbard’s Seizure of Voting Machines Raised Fears of Election Interference
The Rise of Violent Rhetoric Targeting US Public Officials
Foreign State Disinformation and Cyber Influence Threaten Norway’s Information Space in 2026
[CRC Glossary]

[Report Highlights]

An article by EUvsDisinfo showed that Russia has intensified disinformation efforts as the EU tightened scrutiny of its so-called "shadow fleet", hundreds of ageing oil tankers that operate under false flags to evade sanctions on Russian oil exports.

According to an article by Le Monde, France has increasingly become a target of overlapping disinformation efforts from pro-Russian networks and Trump-aligned American accounts, creating a complex and unprecedented challenge.

Ahead of the Milano Cortina 2026 Winter Olympics, Graphika analyzed online conversations from the past 60 days to assess potential digital risks surrounding the Games and highlighted how major sporting events attract influence operations and hacktivist activity.

New analysis by the Institute for Strategic Dialogue (ISD) found that violent online rhetoric targeting US public officials more than tripled between 2021 and 2025.

The Norwegian Police Security Service’s (PST) “National Threat Assessment 2026” warns that Russia, China, and Iran are intensifying cyber-enabled disinformation, influence operations, and proxy activities to undermine public trust, suppress dissent, and exploit extremist and digital ecosystems within the Norwegian information space.

[State Actors]

Russia

Pro-Kremlin Network Promoted Alberta Separatism to U.S. Audiences

As reported by DisinfoWatch, a pro-Kremlin outlet in the Pravda News Network amplified claims that Alberta was moving toward independence and that its secession would deliver "two red senators and electoral votes" to the United States. Identified by France’s Viginum agency as part of the Russian “Portal Kombat” disinformation ecosystem, the network repackaged real but limited developments, including meetings between Alberta separatist figures and U.S. officials, into a misleading narrative. While such meetings did occur, U.S. officials have denied offering any support or commitments. Alberta is a Canadian province, not a U.S. state, and there is no legal or political pathway for it to gain U.S. senators or electoral votes.
Alberta is also not institutionally “moving toward independence.” Public support remains a minority position, and current activity is limited to the early stages of a referendum petition process. Even a hypothetical referendum would not result in immediate independence, as Canadian law requires a clear majority, a clear question, and negotiations.

Source: DisinfoWatch, Moscow to MAGA: Pro-Kremlin Pravda Network Amplifying Alberta Separatism. [online] Published 2 February 2026. Available at: https://disinfowatch.org/disinfo/moscow-to-maga-pro-kremlin-pravda-network-amplifying-alberta-separatism/

Italy Accused Russia-Linked Hackers of Cyberattacks Ahead of 2026 Winter Olympics

An article by The Record reports that Foreign Minister Antonio Tajani said Italy has blocked a wave of cyberattacks of “Russian origin” targeting its diplomatic missions abroad and infrastructure linked to the 2026 Winter Olympics. The attempted attacks began with Italy’s embassy in Washington and extended to consulates in Sydney, Toronto, and Paris, as well as to hotels associated with the Games in Milan and the Alpine resort of Cortina d’Ampezzo. Around 120 targets were hit in total, but authorities reported no significant disruption. The pro-Russian hacker group NoName057(16) claimed responsibility, describing the campaign as retaliation for Italy’s support for Ukraine. Active since Russia’s full-scale invasion of Ukraine in 2022, the group has repeatedly targeted European countries backing Kyiv using relatively simple but disruptive distributed denial-of-service (DDoS) attacks. Italian officials noted that Russia has been barred from competing as a nation in the 2026 Winter Olympics, a context that echoes previous incidents in which Russia-linked hackers targeted countries hosting major sporting events. Russian state officials have not publicly commented on the allegations.

Source: The Record, D. Antoniuk, Italy blames Russia-linked hackers for cyberattacks ahead of Winter Olympics. [online] Published 5 February 2026. Available at: https://therecord.media/italy-blames-russia-linked-hackers-winter-games-cyberattack

Russia Intensified Disinformation as the EU Tightened Scrutiny of Its Shadow Fleet

EUvsDisinfo reports that Russia has intensified disinformation efforts as the EU tightened scrutiny of its so-called “shadow fleet”, hundreds of ageing oil tankers that operate under false flags to evade sanctions on Russian oil exports. These vessels, often with obscured ownership and inadequate insurance, transport oil and other goods to countries such as China and India, and are also suspected of carrying stolen Ukrainian property or military cargo. EU member states expanded sanctions, now covering nearly 600 vessels, and debated an even tougher 20th sanctions package. Recent interceptions of suspected sanctions-busting tankers in the North Sea and Mediterranean, including a January seizure by the French navy, have triggered a wave of false Kremlin-backed narratives. Pro-Kremlin outlets have portrayed lawful inspections and seizures as “piracy” or a covert naval blockade, falsely claiming Europe is provoking open confrontation. In parallel, deterrence messaging has become increasingly explicit, hinting at retaliation against Western shipping or undersea infrastructure and warning of instability in the Baltic Sea—claims that have no basis in reality, as neither the EU nor NATO is preparing a blockade or military escalation.

Source: EUvsDisinfo, Sailing under false flag: Moscow’s ‘shadow fleet’ meets Europe’s resolve.
[online] Published 6 February 2026. Available at: https://euvsdisinfo.eu/sailing-under-false-flag-moscows-shadow-fleet-meets-europes-resolve/

[General Reports]

France Caught Between Russian and U.S.-Linked Disinformation Campaigns

France has increasingly become a target of overlapping disinformation efforts from pro-Russian networks and Trump-aligned American accounts, as detailed by Le Monde, creating a complex and unprecedented challenge. Since early 2026, Russian-linked actors have continued to circulate fabricated stories aimed at discrediting Ukraine and France, including a false claim that French and Ukrainian officials embezzled billions of euros intended for Rafale fighter jets. Investigations by French and international fact-checkers confirmed the story was entirely fabricated and likely originated from “Storm-1516”, a Russian propaganda group active in France since 2025. At the same time, French authorities have had to counter misleading narratives spreading from U.S. sources, including false claims about President Emmanuel Macron’s role in drug pricing and France’s military record in Afghanistan.

This dual threat poses a new challenge for France, as Russia and the United States rely on radically different methods. Russian disinformation operates as a long-term, organised background campaign, often using fake news sites and AI-generated content to subtly influence public opinion. By contrast, Trump-aligned messaging relies on high-visibility, immediate amplification through personal social media accounts, normalising distortions of fact through repetition and provocation. Adding to the challenge is a convergence between pro-Russian and MAGA-aligned narratives, which share anti-elite, anti-EU, anti-Macron, and anti-Ukraine themes and often relay each other’s claims. French authorities have responded by more actively debunking falsehoods on social media, including through the Foreign Ministry’s French Response account and direct interventions from the Élysée.

Source: Le Monde, W. Audureau, France faces a crossfire of Russian and American disinformation. [online] Published 2 February 2026. Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/02/02/france-faces-a-crossfire-of-russian-and-american-disinformation_6750034_8.html

Nearly Half of Americans Believed Top False Claims in 2025

NewsGuard’s Reality Gap Index found that, from June to December 2025, an average of 46 percent of Americans believed at least one major false claim circulating in the news. In the first six months of tracking, belief levels averaged 50 percent before dipping slightly in December. Fluctuations largely reflected the virality and sensationalism of particular hoaxes, with peaks in midsummer driven by widely shared false claims, such as reports that President Donald Trump had declared martial law in Washington, D.C., or that a Florida detention center was surrounded by an alligator-filled moat. The index, based on monthly YouGov surveys, measured the share of Americans who believed at least one of the top three false claims identified each month by NewsGuard. The surveys also revealed widespread confusion around AI-generated and manipulated media. Large shares of respondents either believed in or were unsure about the authenticity of fake images, videos, and audio, including AI-generated visuals falsely linking Trump to Jeffrey Epstein and fabricated audio clips purporting to capture him berating Cabinet members.
Differences by age and political affiliation were present but modest. Younger adults were more likely to express uncertainty, while older respondents showed greater confidence but were not consistently better at identifying false claims. Across all age groups, only about 8–9 percent correctly identified all false claims in a given month. Politically, Republicans were slightly more likely than Democrats and Independents to believe at least one false claim, but the overall difficulty in consistently spotting misinformation cut across party lines.

Source: NewsGuard Reality Check, S. Tanner, Nearly Half of Americans in 2025 Believed False Claims Across Seven Months of Surveys. [online] Published 2 February 2026. Available at: https://www.newsguardrealitycheck.com/p/nearly-half-of-americans-believed

Social Media Users Circulated Fake Images Linking Politicians to Epstein Files

Following the U.S. Department of Justice’s release of millions of documents related to Jeffrey Epstein in January 2026, NewsGuard’s Reality Check reported that social media users began circulating AI-generated and digitally manipulated images falsely linking prominent politicians to the convicted sex offender. NewsGuard identified at least seven such images, which together amassed more than 21 million views on X. The images targeted figures across the political spectrum, including New York City Mayor Zohran Mamdani, former South Carolina governor Nikki Haley, and Venezuelan opposition leader Maria Corina Machado, with users claiming the materials were part of the newly released Epstein files.

In one widely shared case, fabricated images purported to show Epstein posing with Mamdani as a child and with his mother, filmmaker Mira Nair. Although DOJ files confirm Nair once attended an event hosted by Epstein associate Ghislaine Maxwell, NewsGuard found the images themselves were AI-generated, bearing digital watermarks from Google’s image-generation tools. Other posts falsely claimed Haley had emailed Epstein to arrange flights with her children, but no such message exists in the DOJ database, and the alleged screenshot contained clear inconsistencies. Similarly, an image linking Machado to Epstein was shown to be a manipulated version of an authentic photo that did not include her.

As described in a related article, a “red-teaming” audit by NewsGuard of three leading AI image generators found that X’s Grok Imagine consistently produced false images depicting prominent politicians socializing with convicted sex offender Jeffrey Epstein. Google Gemini’s Nano Banana Pro also produced convincing visuals, but only after some prodding, while OpenAI’s ChatGPT flatly declined all prompts seeking to place Epstein with public figures.

Sources:
NewsGuard Reality Check, L. Marchl, J. Schofield & I. Chomnalez, AI and Manipulated Images Falsely Implicate Politicians in the Epstein Files. [online] Published 3 February 2026. Available at: https://www.newsguardrealitycheck.com/p/these-shocking-images-were-not-in
NewsGuard Reality Check, I. Chomnalez, Prompted to Create Photos of Politicians with Jeffrey Epstein, X’s ‘Grok Imagine’ Produced Convincing Fakes in Seconds, Google’s Gemini Hesitated but Then Complied, while OpenAI’s ChatGPT Refused. [online] Published 5 February 2026.
Available at: https://www.newsguardrealitycheck.com/p/ai-tools-willingly-generate-fake

Pro-IS Propaganda Circulated Online Following Syrian Detention Camp Takeover

An investigation by the Centre for Information Resilience (CIR) found a sharp increase in pro-Islamic State (IS) activity on Facebook following the Syrian government’s takeover of the Al-Hol detention camp in late January 2026. After Kurdish-led forces withdrew from north-eastern Syria, including Al-Hol, IS supporters used social media to spread propaganda and coordinate logistical and financial assistance for detainees attempting to escape. The activity emerged within days of the transfer of control and coincided with President Ahmed al-Sharaa’s efforts to reassert state authority one year after the fall of the Assad regime. CIR documented dozens of Facebook accounts openly calling for violence against Syrian government forces and urging armed resistance. Some posts explicitly encouraged supporters to attack security personnel, while others focused on practical support, such as requests for vehicles to transport detainees’ families out of the camp. Financial appeals were also widespread, with pro-IS users soliciting donations from supporters abroad and directing them to private Telegram channels for coordination.

Source: Centre for Information Resilience (CIR), Pro-IS propaganda circulates online following Syrian detention camp takeover. [online] Published 5 February 2026. Available at: https://www.info-res.org/cir/articles/pro-is-propaganda-circulates-online-following-syrian-detention-camp-takeover/

Digital Risks for the 2026 Winter Olympics

Ahead of the Milano Cortina 2026 Winter Olympics, the Graphika team analyzed online conversations from the past 60 days in a blog post assessing potential digital risks surrounding the Games, highlighting how major sporting events attract influence operations and hacktivist activity. While online discussion remained largely neutral or supportive of the Games, the shift toward controversy-focused narratives created openings for coordinated influence campaigns. Influence operation (IO) actors use high-profile events to further their ideological messages or attract engagement. Graphika has not yet seen any known IOs weighing in on the upcoming Games but will continue to monitor signals related to this threat; its monitoring and public reporting during the 2024 Paris Games lead it to expect that IOs will exploit the Milano Cortina Games. Potential threats include the spread of false or misleading claims about political, security, and even accommodation-related developments, which may be supported by AI-generated content.

Additionally, no major hacktivist campaigns have yet targeted the 2026 Winter Olympics, but past events indicate a strong likelihood of cyber disruptions, data leaks, doxing, and DDoS attacks. During the Paris 2024 Olympics, “pro-Russia” and “anti-Zionist” hacktivist groups leaked sensitive athlete data and attacked national websites. Politico reported that Italian authorities have already thwarted hacking attempts against several embassies and Olympic venues and attributed a series of DDoS attacks to a pro-Kremlin gang that claimed responsibility on Telegram. Security groups tracking these threats have also warned that hackers may be preparing to spread disinformation online to discredit the Games or to launch further cyberattacks on Olympic sites ahead of the Opening Ceremony.
The Italian National Cybersecurity Agency (ACN) has embedded experts inside the Milano-Cortina security center, with additional staff at ACN HQ, while coordinating with the IOC, CISA, and other international partners to share real-time threat intelligence. Source: Graphika Blog, The Graphika Team. Slippery Slopes: Online Risk and the 2026 Winter Olympics. [online] Published 3 February 2026. Available at: https://graphika.com/posts/slippery-slopes-online-risk-and-the-2026-winter-olympics Politico, M. Miller & D. Nickel. ‘Irresistible stage’: Olympic authorities on high alert for cyberattacks as winter games kick off. [online] Published 6 February 2026. Available at: https://www.politico.com/news/2026/02/06/olympics-high-alert-cyberattacks-00767434 Top Of Page Gabbard’s Seizure of Voting Machines Raised Fears of Election Interference As reported by Politico, as the U.S. midterm elections approached, President Donald Trump intensified his questioning of election integrity, reviving baseless claims about a "rigged" 2020 vote and urging federal involvement in election administration. These concerns deepened following revelations that Director of National Intelligence Tulsi Gabbard oversaw the seizure of voting machines in Puerto Rico and appeared at an FBI raid tied to past election fraud allegations. Democratic lawmakers argued that these actions blur the line between foreign intelligence oversight and domestic election matters, raising concerns that the administration may be laying the groundwork to challenge future election outcomes. Gabbard’s office claimed the investigation uncovered serious cybersecurity vulnerabilities, including cellular modems that could connect machines to foreign networks. However, election security experts and bipartisan lawmakers noted that such vulnerabilities have been publicly documented for years and are extremely difficult to exploit without physical access. They emphasized there is no evidence these issues have ever altered U.S. election results. Critics also pointed to recent cuts to federal programs designed to protect election infrastructure and counter foreign threats to U.S. democracy. Source: Politico, M. Miller & D. Nickel. Gabbard’s seizure of voting machines heightens concerns of midterm meddling. [online] Published 6 February 2026. Available at: https://www.politico.com/news/2026/02/06/gabbard-trump-election-midterms-voting-integrity-00769768 Top Of Page The Rise of Violent Rhetoric Targeting US Public Officials New analysis by the Institute for Strategic Dialogue (ISD) found that violent online rhetoric targeting US public officials more than tripled between 2021 and 2025, rising by 241 percent overall with a median monthly increase of 5 percent. Threats intensified around major political events and continued to rise after the 2024 election and presidential transition. Republicans were disproportionately targeted, driven largely by threats against President Donald Trump, who accounted for 47 percent of all violent rhetoric in the dataset. While threats against Democrats also rose, the growth rate for Republicans was far steeper, especially following the July 2024 assassination attempt on Trump. The study found that most violent rhetoric did not originate from organized extremist groups but from partisan individuals reacting to high-profile news and political controversy. Only a small fraction of cases were linked to extremist organizations, pointing instead to a shift toward decentralized, post-organizational political violence.
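To make the growth figures in the ISD study concrete, the sketch below shows how the two headline statistics, overall percentage growth and the median month-over-month change, are conventionally computed. The monthly counts are invented for illustration and are not ISD's data:

```python
from statistics import median

# Hypothetical monthly counts of violent posts (not ISD's data); the point
# is the arithmetic behind "overall growth" and "median monthly increase".
monthly_counts = [120, 130, 128, 140, 155, 150, 170, 190, 210, 240, 300, 410]

# Overall growth: change from the first month to the last, in percent.
overall_growth = (monthly_counts[-1] / monthly_counts[0] - 1) * 100

# Median month-over-month change, in percent.
monthly_changes = [
    (curr / prev - 1) * 100
    for prev, curr in zip(monthly_counts, monthly_counts[1:])
]
median_monthly = median(monthly_changes)

print(f"overall growth: {overall_growth:.0f}%")         # 242% for this series
print(f"median monthly change: {median_monthly:.1f}%")  # 10.7% for this series
```

The two figures measure different things: a series can rise steeply overall while its typical (median) monthly change stays modest, which is the pattern the study reports.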
Inflammatory statements by public officials themselves often acted as catalysts, with supporters directing threats at perceived opponents and fueling cycles of escalation across party lines. Real-world violence further intensified this dynamic, triggering waves of online threats, celebration of attacks, or calls for retaliation. Violent rhetoric remained widespread across major platforms, much of it still accessible despite clear policy violations. Platform dynamics shaped who was targeted: left-leaning platforms such as Reddit and Bluesky hosted more threats against right-wing officials, while X contained a disproportionate share of threats against left-wing figures. Source: ISD, N. Doctor & K. Keneally & C. Zoschak. ‘Tick tock traitor:’ The rise of violent rhetoric targeting US public officials. [online] Published 3 February 2026. Available at: https://www.isdglobal.org/digital-dispatch/tick-tock-traitor-the-rise-of-violent-rhetoric-targeting-us-public-officials/ Top Of Page Foreign State Disinformation and Cyber Influence Threaten Norway’s Information Space in 2026 The Norwegian Police Security Service’s (PST) National Threat Assessment 2026 outlines a deteriorating security and information environment in Norway, driven largely by hostile state actors employing influence operations, cyber activity, and disinformation as core tools. Russia, China, and Iran are identified as the primary actors, with Russia expected to intensify cyber and influence operations aimed at weakening Norwegian public support for Ukraine, NATO, and Western cooperation, including through targeted disinformation, proxy sabotage, and recruitment via digital platforms. China’s activities are assessed as increasingly cyber-centric, combining cyber espionage, covert influence, and transnational repression to silence critics of the Chinese Communist Party, including through malware disguised as legitimate apps and pressure on diaspora communities. Iran is expected to continue intelligence and influence operations using proxy actors and criminal networks, alongside cyber intrusions and intimidation campaigns targeting dissidents, journalists, and Western- or Israeli-linked interests in Norway. The report emphasizes that disinformation and influence operations increasingly intersect with extremist ecosystems and digital radicalisation dynamics. Foreign state actors exploit online platforms, fake accounts, AI-generated content, and proxy networks to spread polarising narratives, undermine trust in democratic institutions, and amplify conspiracy theories, sometimes leveraging existing right-wing or anti-government extremist forums as dissemination channels. These activities contribute to a more diffuse and unpredictable threat landscape, where state-driven influence operations, cyber-enabled repression, and extremist propaganda reinforce one another. The assessment highlights that such tactics risk eroding public trust, increasing societal polarisation, and complicating attribution, thereby strengthening hostile actors’ ability to manipulate the information space while remaining below the threshold of overt conflict. Source: Norwegian Police Security Service (PST). National Threat Assessment 2026. [online] Published February 2026. Available at: https://www.pst.no/wp-content/uploads/2026/02/National-Threat-Assessment-2026.pdf [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity.
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 26th January - 1st February 2026

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network Russian Pravda Network Amplified Alberta Secessionist Content Kremlin Claimed Russia and the US Won’t Talk to EU Politicians EU Sanctioned Six Individuals Over Russian Information Manipulation Activities Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High The War in Ukraine Fake Videos Targeted Ukrainian Refugees in France China China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania Pro-China AI Videos Falsely Claim Taiwanese Support for Unification [General Reports] Disinformation Surrounding the Shooting of Alex Pretti Disinformation As a Systemic Threat to Democratic Resilience Disinformation Vortex Around Minnesota ICE Protests ICE Surveillance Sparks Online Counterattacks [Appendix - Frameworks to Counter Disinformation] UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence Commercialized Social Media Manipulation and Disinformation Amplification EU-Supported Media Literacy Effort Against Disinformation in Kosovo [CRC Glossary] [ Report Highlights] The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting the EU, its member states, and partners. In its annual report, Latvia’s Constitution Protection Bureau (SAB) said 2025 marked an all-time high in registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. A report by the NATO Strategic Communications Centre of Excellence examined how Beijing has sought to shape the Nordic-Baltic information environment, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. According to an EU vs Disinfo article, global risk assessments increasingly identify foreign information manipulation, disinformation, and misinformation as structural threats that undermine democracy, human rights, economic stability, and crisis response, making investment in trustworthy public-interest media and stronger policy safeguards essential to protecting information integrity. A Politico article describes a growing digital struggle in which expanded federal surveillance and data use for deportations is met by activist and hacker efforts to track, leak, and disrupt ICE operations, fueling an increasingly contested and misinformation-prone information environment. An article from the NATO Strategic Communications Centre of Excellence explains how coordinated actors exploit social media platforms through fake accounts, automated amplification, and targeted narrative tactics to manipulate public opinion, spread disinformation, and undermine trust in democratic information environments.
[State Actors] Russia Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos NewsGuard’s Reality Check   reveals how a pro-Kremlin influence operation sought to undermine the European Union during and immediately after the World Economic Forum in Davos by circulating fabricated videos that impersonated trusted outlets. The fake content included a fabricated Reuters video alleging that France and Germany were preparing to leave the EU, a bogus Gallup report claiming that most young Europeans supported exiting the union, and a falsified Economist video attributing anti-EU remarks to former German Chancellor Angela Merkel. All of these claims were demonstrably false, and none appeared on the outlets’ official platforms. The individuals cited consistently expressed strong pro-EU positions. Reuters explicitly confirmed that the video attributed to it was fake, while independent polling showed strong EU support among European youth. The campaign also promoted additional fabricated quotes, anti-EU statements attributed to global leaders, claims about fabricated anti-EU protests, and reports of escalating tensions among EU member states. Although individual videos achieved only modest reach, collectively they garnered thousands of views across platforms such as Telegram, where audiences have limited tools for verifying authenticity. Source:  NewsGuard, A. Lee. Post Davos, Russian Influence Operation Uses Phony Videos Impersonating Reuters, Gallup and the Economist to Foment EU Discord . [online] Published 28 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russians-seed-fake-reports-to-sow Top Of Page NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network Yevgeny Shevchenko was designated by NewsGuard’s Reality Check  as its 2025 Disinformer of the Year for his role in building the Pravda network, one of the most prolific pro-Kremlin disinformation operations globally. The network comprises hundreds of automated, news-style websites published in 49 languages, collectively producing approximately 6.3 million articles in 2025 alone. These sites repeatedly amplified false claims aligned with Kremlin narratives, targeting topics such as the war in Ukraine, European and U.S. elections, public health, and geopolitics, while using domain names designed to appear legitimate and local. A key impact of the Pravda network was its success in polluting search results and influencing generative AI systems. NewsGuard audits found that while some AI systems successfully debunked false claims sourced directly from Pravda articles, others reproduced them at significant rates. Shevchenko, a Crimea-based web entrepreneur and founder of the company TigerWeb, has kept a low public profile despite the network’s reach. The Pravda network expanded rapidly after Russia’s full-scale invasion of Ukraine in 2022 and was sanctioned by the European Union in July 2025 for coordinated information manipulation. Source:  NewsGuard, A. Lee & E. Maitland. NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network . [online] Published 27 January 2026. 
Available at: https://www.newsguardrealitycheck.com/p/newsguards-2025-disinformer-of-the Top Of Page Russian Pravda Network Amplified Alberta Secessionist Content As reported by DisinfoWatch, Russia’s Pravda News Network published what appeared to be an AI-generated video promoting an Alberta secessionist rally scheduled for Monday, January 26, on the Russian social media platform VK. The Pravda network, also known as Portal Kombat, was first identified by France’s VIGINUM agency as a coordinated pro-Kremlin disinformation ecosystem that aggregates and republishes content from Russian state media, official channels, and aligned online sources rather than producing original reporting. The network operates a Canada-focused site that republishes material daily, drawing heavily from outlets such as RT, TASS, and the Russian Embassy in Canada. Canadian civil society group Cyber Alberta has warned that Pravda is targeting Canadian interests. Source: DisinfoWatch, Russian Pravda News Platform Amplifying Alberta Secessionist Events. [online] Published 25 January 2026. Available at: https://disinfowatch.org/disinfo/russian-pravda-news-platform-amplifying-alberta-secessionist-events/ Top Of Page Kremlin Claimed Russia and the US Won’t Talk to EU Politicians DisinfoWatch documents how Russian state outlet RT reported claims by Kremlin spokesperson Dmitry Peskov asserting that Russia would not engage with EU foreign policy chief Kaja Kallas and that it was “obvious” the United States would also refuse to engage with her. The claim sought to widen EU–US fractures amid UAE-hosted talks. Kallas is the EU’s High Representative for Foreign Affairs and Security Policy and a Commission Vice-President, a role appointed through formal EU processes and documented in EU institutional records. The report relied on delegitimizing language and provided no evidence to support the assertion that Washington shared Moscow’s position. Source: DisinfoWatch, Kremlin Claims Russia and US won’t talk to EU. [online] Published 26 January 2026. Available at: https://disinfowatch.org/disinfo/kremlin-claims-russia-and-us-wont-talk-to-eu/ Top Of Page EU Sanctioned Six Individuals Over Russian Information Manipulation Activities The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting the EU, its member states, and partners. Those sanctioned include prominent Russian television presenters Dmitry Guberniev, Ekaterina Andreeva, and Maria Sittel; propagandist Pavel Zarubin; actor Roman Chumakov; and ballet dancer Sergey Polunin. The Council said these figures have actively promoted pro-Kremlin disinformation, anti-Ukraine and anti-Western narratives, and, in some cases, helped raise funds for the Russian armed forces, directly contributing to Russia’s war effort against Ukraine. With this decision, EU restrictive measures now apply to 65 individuals and 17 entities. Sanctions include asset freezes, bans on EU citizens and companies providing funds or economic resources to those listed, and travel restrictions preventing entry into or transit through EU territory. The legal acts formalizing the measures have been published in the Official Journal of the European Union. Source: EEAS, Russian hybrid threats: Council sanctions six individuals over information manipulation activities. [online] Published 30 January 2026.
Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-council-sanctions-six-individuals-over-information-manipulation-activities_en Top Of Page Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High In its annual report, Latvia’s Constitution Protection Bureau (SAB) said that 2025 marked an all-time high in the number of registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. While most incidents involved cybercrime and digital fraud, state-linked threats remained elevated, particularly from Russia. From a national security perspective, the most significant risks included intrusion attempts, malware, system compromises, and distributed denial-of-service (DDoS) attacks. Latvian authorities noted that effective defensive measures, particularly by CERT.LV, were in place and helped limit the impact of many attacks, including during politically sensitive events such as municipal elections. A key concern highlighted in the report was the growing role of Russian hacktivists, who have demonstrated both intent and capability to target critical and industrial systems across Latvia and other Western countries. These actors aim to disrupt essential services, intimidate populations, punish support for Ukraine, and deter further assistance. Examples cited included hacktivist attacks on operational technologies, such as dams and power plants, in Norway and Poland, where weak security controls enabled attackers to manipulate industrial control systems and, in one case, shut down a hydroelectric facility. Although Latvia has so far avoided major incidents affecting critical infrastructure, vulnerabilities in operational technologies remain a significant risk. Russian DDoS campaigns continued to target Latvian government institutions, municipalities, and critical infrastructure, often timed to coincide with political decisions or symbolic events. In most cases, DDoS attacks had little or no effect on service availability. To counter this threat, Latvia has invested in centralized, state-funded DDoS protection for public institutions and strengthened oversight of ICT critical infrastructure through new cybersecurity laws and regulations. Source: SAB, Annual Report 2025. [online] Published January 2026. Available at: https://www.sab.gov.lv/files/uploads/2026/01/SABs-annual-report_2025_ENG.pdf Top Of Page War in Ukraine Fake Videos Targeted Ukrainian Refugees in France StopFake reports that a series of fake videos on Telegram falsely alleged that Ukrainian refugees in France committed mass crimes, including murder, theft, drug distribution, and even terrorism. These videos, which mimicked the logos and formats of prominent French and international media outlets, including Le Parisien, Le Figaro, L’Équipe, Reuters, Le Point, and Euronews, were part of a coordinated disinformation campaign. Analysis showed that the videos were released over a short period, from January 12 to 16, 2026, via at least three anonymous Telegram channels, and were then widely amplified across pro-Russian networks. Fact-checks confirmed that none of the alleged crimes or news stories were real. Official websites, social media accounts, and publications from the cited media outlets contain no reports that match the videos’ claims. France also does not maintain official statistics on crime specifically among Ukrainian refugees, and available data suggest that Ukrainians do not pose a higher criminal threat than other migrant groups.
Source: StopFake, Фейк: Мировые СМИ сообщили о массовых преступлениях украинских беженцев во Франции [Fake: World media reported mass crimes by Ukrainian refugees in France]. [online] Published 28 January 2026. Available at: https://www.stopfake.org/ru/fejk-mirovye-smi-soobshhili-o-massovyh-prestupleniyah-ukrainskih-bezhentsev-vo-frantsii/ Top Of Page China China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania In a report on China’s influence in the Nordic–Baltic information environment, the NATO Strategic Communications Centre of Excellence examined how Beijing has sought to shape the region’s information space, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. It mapped China’s objectives (protecting core interests, acquiring technology, and improving perceptions), described eight avenues of influence, and analysed official PRC frames and their resonance in local media using country case studies of Lithuania and Denmark. In Lithuania, relations with China deteriorated sharply after 2019, culminating in Vilnius's withdrawal from the China-CEEC format and the authorization of Taiwan to open a representative office under its own name. China responded with economic pressure and a coordinated diplomatic and information campaign. However, the study found that China’s influence in Lithuania’s media space remained limited. Chinese narratives had little resonance, partly due to the absence of strong Chinese media channels and partly because Beijing’s coercive tactics proved counterproductive, reinforcing public skepticism rather than shaping opinion. Media debates largely reflected domestic political dissatisfaction and broader geopolitical shocks, particularly Russia’s invasion of Ukraine, rather than successful Chinese messaging. In Denmark, the report identified a different pattern, described as “Shadow Wolf Warrior” diplomacy. Rather than relying on aggressive public messaging, China relied more on backstage influence through elite networks, business ties, and United Front activities, while maintaining a low public profile. Although Danish public discourse was generally skeptical of Chinese frames, and official messaging failed to gain broad traction, the report warned that China’s covert channels and long-standing institutional ties create durable influence. Source: NATO Strategic Communications Centre of Excellence, M. Lanteigne & L. Stünkel & K. Andrijauskas & A. K. Jakobsson. China’s Influence in the Nordic – Baltic Information Environment: Denmark and Lithuania. [online] Published 28 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Chinas-Influence-in-the-Nordic-Baltic---Denmark-Lithuania-FINAL-FILE.pdf?zoom=page-fit Top Of Page Pro-China AI Videos Falsely Claim Taiwanese Support for Unification NewsGuard’s Reality Check has tracked how, since December 2025, pro-China sources have circulated AI-generated videos showing people purportedly from Taiwan speaking Mandarin with authentic Taiwanese accents and calling for unification with China. One account on the Chinese platform RedNote, “Taiwanese come home,” posted 35 such videos featuring teachers, doctors, police officers, firefighters, and students, garnering over 21,000 likes. NewsGuard confirmed that the videos were AI-generated using OpenAI’s Sora 2 tool, and that the individuals depicted do not exist or were misrepresented.
For example, Taipei Municipal Chien Kuo High School, shown in one video, has never admitted female students, and the National Taiwan University professors depicted in another video were entirely fabricated. These AI-generated videos are part of a broader cognitive warfare effort by China, which has commissioned companies like Magic Data and iFlytek to create voice databases of native Taiwanese speakers in Mandarin, Hokkien, and Hakka. These databases are intended to lend authenticity to fabricated pro-China messaging. Source: NewsGuard, C. Lin. Pro-China AI-Generated Videos Use Databanks of Taiwanese Accents to Fake Calls for Taiwan-China Unification. [online] Published 29 January 2026. Available at: https://www.newsguardrealitycheck.com/p/pro-china-ai-fakes-a-taiwanese-accent Top Of Page [General Reports] Disinformation Surrounding the Shooting of Alex Pretti NewsGuard’s Reality Check designated the claim that Alex Pretti pulled a gun on federal agents before being fatally shot in Minneapolis in January 2026 as its “False Claim of the Week,” citing its rapid spread, high engagement, and promotion by high-profile figures. Following the January 2026 shooting, Trump administration officials and conservative commentators alleged that Pretti brandished a firearm and posed an imminent threat, framing the killing as justified. Statements from the Department of Homeland Security, Homeland Security Secretary Kristi Noem, and White House aide Stephen Miller were widely echoed across social media and partisan websites, drawing millions of views. However, a detailed review of eyewitness video footage from five angles by NewsGuard, alongside reporting from major outlets including Reuters, CNN, The New York Times, and ABC News, found no evidence that Pretti pulled or reached for a gun before he was shot. The footage showed Pretti holding a phone, with no weapon visible, as officers confronted him. He was pepper-sprayed, tackled, and pinned to the ground before an officer removed a concealed handgun from his waistband. Authorities later confirmed that Pretti was legally carrying a concealed firearm with a permit, but video analysis indicated it remained holstered and hidden throughout the initial encounter. Additionally, NewsGuard’s Reality Check reported that shortly after the shooting, an AI-manipulated image circulated widely on social media, falsely claiming to show Pretti holding a gun at the moment he was shot, with posts reaching millions of views within hours. Investigators and journalists confirmed the image was fabricated. Sources: NewsGuard, C. Vercellone, Reality Check. Debunk: Pretti Didn’t Pull Out a Firearm, Contrary to the Trump Administration’s Claims. [online] Published 30 January 2026. Available at: https://www.newsguardrealitycheck.com/p/did-alex-pretti-brandish-a-gun-newsguards NewsGuard, M. Calamaio, Reality Check. AI-Manipulated Image Cited as False Evidence that Victim in Latest ICE Shooting Was Brandishing a Gun. [online] Published 26 January 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-manipulated-image-shows-gun-not Top Of Page Disinformation As a Systemic Threat to Democratic Resilience EUvsDisinfo argues that recent global risk assessments characterize foreign information manipulation and interference (FIMI), disinformation, and misinformation as systemic threats that undermine democratic resilience worldwide.
Reports from the World Economic Forum, the United Nations, and the European External Action Service highlight how these campaigns deepen societal divides, erode trust in institutions, and weaken crisis response by undermining the shared evidence base required for collective decision-making. The Human Rights Council further warns that FIMI increasingly targets marginalized communities, independent media, and human rights defenders, demonstrating that information disorder is not a marginal issue but a global risk multiplier affecting governance, security, and social cohesion. The article also emphasizes the economic and structural damage caused by disinformation, noting that misleading narratives can destabilize markets, distort financial expectations, and erode long-term investment and policy stability. Conflict-related and climate-focused manipulation campaigns can incite hatred, obstruct humanitarian efforts, and delay sustainable development by casting doubt on scientific consensus or promoting false solutions. As safeguards, the article points to robust public-interest media ecosystems, independent journalism, and media literacy, alongside initiatives such as the European Democracy Shield, which seeks to strengthen election integrity and counter FIMI through detection, cooperation, and proactive investment in trustworthy information spaces. Source: EUvsDisinfo, FIMI and disinformation as global threats. [online] Published 30 January 2026. Available at: https://euvsdisinfo.eu/fimi-and-disinformation-as-global-threats/ Top Of Page Disinformation Vortex Around Minnesota ICE Protests A podcast episode of Uncanny Valley by Wired describes a fragmented, high-tempo information environment surrounding intensified ICE activity in Minnesota, in which far-right and pro-administration messaging rapidly shaped and distorted public understanding of events. The hosts discuss how right-wing influencer Nick Shirley promoted an unproven claim that Somali-run daycare centers in Minneapolis had misappropriated millions in Medicaid funds; they link this amplification to subsequent harassment and violence, including an attack on Rep. Ilhan Omar and attempts to frame the incident as staged. They also describe a rolling cycle of narrative shifts and reputational smears after the killing of protester Alex Pretti: claims of an assassination attempt and “terrorist” labeling gave way to alternative insinuations (e.g., about immigration status) and finally to blame-shifting arguments about protest behavior. The hosts characterize this as a “spin-to-fit” approach that prioritizes ideological utility over verifiable facts, one repeated by figures at the highest levels of the administration. The episode then broadens to platform governance and credibility crises as accelerants for misinformation and perceived censorship. It highlights how users interpreted TikTok outages and content-performance changes as politically motivated suppression following a U.S. ownership restructuring, noting that distrust is compounded by opaque, personalized recommendation systems that are difficult to audit externally. The hosts suggest that even subtle algorithmic tweaks can influence which narratives gain traction without leaving clear evidence.
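The hosts' point about subtle algorithmic tweaks is easy to demonstrate in miniature. The toy model below (a hedged sketch, not a description of TikTok's actual ranking system; all scores and names are invented) samples impressions from a softmax over relevance scores and shows how a small additive boost to one item visibly shifts its share of exposure:

```python
import math
import random

# Toy feed model: items compete for a single slot; the probability of
# being shown follows a softmax over relevance scores. All values are
# invented; no real platform's ranking is modeled here.

random.seed(7)

def impression_share(scores: dict[str, float], trials: int = 100_000) -> dict[str, float]:
    names = list(scores)
    weights = [math.exp(scores[n]) for n in names]
    counts = dict.fromkeys(names, 0)
    for _ in range(trials):
        counts[random.choices(names, weights=weights)[0]] += 1
    return {n: round(counts[n] / trials, 3) for n in names}

baseline = {"narrative_a": 1.0, "narrative_b": 1.0}
tweaked = {"narrative_a": 1.0, "narrative_b": 1.2}  # a "subtle" +0.2 boost

print(impression_share(baseline))  # ~ {'narrative_a': 0.50, 'narrative_b': 0.50}
print(impression_share(tweaked))   # ~ {'narrative_a': 0.45, 'narrative_b': 0.55}
```

Compounded over millions of sessions and engagement feedback loops, even a five-point swing in exposure can change which narratives take off, while remaining invisible to outside auditors.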
They further note that TikTok’s updated terms request more granular location permissions and enable the broader collection of user input for AI features, raising concerns about surveillance, targeting, and the erosion of user trust at a moment when many already suspect political capture of major information channels. Source:  WIRED, B. Barrett & Z. Schiffer, & T. Marchman. Uncanny Valley: Minneapolis Misinformation, TikTok’s New Owners, and Moltbot Hype . [online] Published 29 January 2026. Available at: https://www.wired.com/story/uncanny-valley-podcast-ice-minneapolis-tiktok-moltbot/ Top Of Page ICE Surveillance Sparks Online Counterattacks Politico   reports an escalating digital information conflict surrounding the Trump administration’s mass deportation agenda, in which federal agencies have expanded domestic surveillance capabilities while online activists and hacker groups deploy countermeasures to track and expose immigration enforcement operations. ICE has reportedly increased its use of advanced surveillance tools and data access, including contracts with firms such as Paragon and Palantir, forensic phone-cracking technologies, facial recognition systems, and data brokers collecting sensitive personal information. The administration has also granted ICE access to large federal datasets from agencies like the IRS, Medicaid, and Social Security. In response, activists have used encrypted messaging platforms, social media, and community-built tools to report raid locations, map surveillance infrastructure, and identify agents, while cybercriminal collectives have escalated tactics by leaking names and personal details of ICE and DHS officials online. The article highlights how these developments create fertile ground for disinformation, coercive influence, and contested narratives over legitimacy and safety. Digital tools intended to document or resist enforcement actions have prompted aggressive efforts by the government and major technology companies to suppress information sharing, including app removals, Meta’s content restrictions, and federal investigations into encrypted communications. Officials have framed ICE-tracking platforms as threats to agent security, while critics argue these actions represent intimidation and censorship aimed at silencing opposition. The environment is characterized by breaches, doxxing, surveillance expansion, and attempts to control online discourse, illustrating how both state and non-state actors use digital tactics to influence public perception, disrupt organizing, and shape the information space around immigration enforcement. Source : Politico, D. Nickel & A. Ng. ICE has expanded its mass surveillance efforts. Online activists are fighting back. [online] Published 29 January 2026. Available at: https://www.politico.com/news/2026/01/29/ice-tracking-tools-protesters-00755703 Top Of Page [Appendix - Frameworks to Counter Disinformation] UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence Warnings from UK security leaders, reported by The Record , highlighted that Britain risks exposing itself to cyberattacks, sabotage, and disinformation campaigns unless it develops credible offensive deterrence alongside defensive resilience. Former national security adviser Lord Sedwill told a parliamentary hearing that resilience measures alone would not discourage hostile states. 
The warnings came as ministers defended plans agreed at last year’s NATO summit to raise total security spending to 5 percent of GDP within a decade, including 1.5 percent for indirect defense and resilience such as cybersecurity. Committee members questioned whether this resilience funding would deliver new capabilities or merely repackage existing spending, given the lack of clear NATO definitions. Sedwill expressed concern that creative accounting could undermine the effort, urging ministers to clarify the additional capacity that would be delivered in the coming years. Ministers acknowledged that cyber incidents and hybrid attacks below the threshold of armed conflict are already having serious strategic effects. The government plans to publish a revised National Cyber Action Plan, shifting from a strategic framework to an operational plan focused on countering threats, strengthening resilience, and supporting economic growth. Source: The Record, A. Martin. UK leaders warned country risks 'absorbing' cyber and hybrid attacks without offensive deterrence. [online] Published 28 January 2026. Available at: https://therecord.media/uk-government-warned-cyber-hybrid-threats-offensive-operations Top Of Page Commercialized Social Media Manipulation and Disinformation Amplification A 2025 experiment by the NATO Strategic Communications Centre of Excellence examined how major social media platforms detect and counter commercially purchased inauthentic engagement. Despite regulatory advances such as the EU Digital Services Act, manipulation services remain widely accessible and inexpensive, allowing actors to buy fake likes, comments, shares, and followers at scale. The study found that more than 30,000 inauthentic accounts generated more than 100,000 units of engagement, with enforcement varying significantly across platforms: X and YouTube removed a larger share of fake activity, whereas Instagram, TikTok, and others left most purchased engagement intact. The experiment also showed that manipulation is not confined to organic posts, as paid advertising systems can be exploited to distribute inauthentic narratives to targeted audiences. The report highlights a shift toward more sophisticated influence tactics, including AI-enabled bot networks designed to blend into authentic conversations rather than relying on overt spam. These bots increasingly amplify politically sensitive and military-related narratives, including pro-Kremlin and pro-China themes, while commercial providers use cryptocurrency payments to obscure traceability and sustain a resilient ecosystem of manipulation. Overall, the findings underscore how hostile actors can exploit low-cost, automated tools to shape discourse, erode trust, and embed disinformation within legitimate online communities, emphasizing the need for behavioural detection, financial disruption, and stronger cross-platform accountability. Source: NATO Strategic Communications Centre of Excellence, Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement. [online] Published 30 January 2026.
Available at: https://stratcomcoe.org/publications/social-media-manipulation-for-sale-2025-experiment-on-platform-capabilities-to-detect-and-counter-inauthentic-social-media-engagement/338 Top Of Page EU-Supported Media Literacy Effort Against Disinformation in Kosovo As published by the EEAS, digital and AI-driven technologies have increased vulnerability to misinformation and information manipulation across the Western Balkans, prompting the European Union and its local European Houses to support public resilience initiatives. At the launch of the exhibition The Glass Room: Misinformation Edition in Pristina, EU Ambassador Aivo Orav emphasized the shared challenge of safeguarding societies from disinformation and the EU’s commitment to equipping citizens with tools to recognize and counter misleading narratives. With EU support, the exhibition toured multiple towns in Kosovo between October and December 2025, using posters, interactive applications, and animations to explain how misinformation spreads, why it is persuasive, and how everyday online behaviors such as clicks, likes, and shares amplify false content. The tour also included capacity-building workshops led by experts Kreshnik Gashi and Darko Dimitrijević, which addressed risks such as deepfakes, algorithmic bias, and the influence of digital design on public opinion, while promoting source verification and privacy awareness as key defenses against hostile information dynamics. Source: EEAS, Press and information team of the EU Office/EU Special Representative in Kosovo. The Kosovo Journey of The Glass Room – Misinformation Exhibition Edition. [online] Published 29 January 2026. Available at: https://www.eeas.europa.eu/delegations/kosovo/kosovo-journey-glass-room-%E2%80%93-misinformation-exhibition-edition_en [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 19th - 25th January 2026

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] X Grok’s Deepfake Scandal and Its Regulatory Conclusions [State Actors] Russia Pro-Russian Monuments Were Used to Manipulate History and Justify War Russian Hacktivists Increased Disruptive Cyber Pressure on UK Organizations Disinformation in Lavrov’s January Press Conference Kremlin Disinformation Mechanism and Debunk The War in Ukraine Disinformation Targeting Greenland and Ukraine Aid Disinformation Around Russia’s Winter Attacks on Ukraine Iran Iranian State TV Feed Reportedly Hijacked to Air Anti-Regime Messages [AI Related Articles] The Silicon Gaze and Geographic Bias in AI Systems AI Realistic Deepfake Videos New Influence Risks Follow OpenAI’s Move to Ads [Cyfluence Attacks] Sandworm-Linked Cyberattack Targets Poland’s Energy Infrastructure [General Reports] Anti-Israel Hoaxes Targeted Israeli Citizens Abroad Old Military Exercise Footage Misrepresented as US Seizure of Russia-Linked Tanker False Claims Misrepresented European Response to U.S. Remarks on Greenland Psychological Defence Against Hostile Disinformation and Influence [Appendix - Frameworks to Counter Disinformation] Countering Information Influence Operations in the Nordic-Baltic Region [CRC Glossary] [ Report Highlights] A report by EU vs Disinfo examined how Russia used cultural heritage as a tool of FIMI, a strategy described as Cultural Heritage Exploitation. Infosecurity Magazine reports that UK authorities warned of increased disruptive cyber activity by Russian-aligned hacktivist groups against UK organizations. Oxford and Kentucky researchers introduced the concept of the “silicon gaze” to explain how large language models, such as ChatGPT, systematically reproduce and amplify long-standing spatial inequalities. According to Infosecurity Magazine, ESET attributes a late-December 2025 cyberattack on Poland’s energy infrastructure to the GRU-linked Sandworm group. The attack involved DynoWiper malware but caused no confirmed disruption. Poland is moving ahead with stricter cybersecurity regulations in response. The Psychological Defence Agency (PDA) has developed an approach to counter disinformation and hostile influence in democratic societies, threats that are often amplified through social media and AI. It focuses on a whole-of-society psychological defence that strengthens citizen resilience, media and information literacy, and institutional readiness, favoring transparency, fact-based communication, and attribution over censorship. A report by NATO's Strategic Communications Centre of Excellence examined how the eight Nordic-Baltic countries (NB8) have developed strategies to counter information influence operations amid a deteriorating security environment.
[Social Media Platforms] X Grok’s Deepfake Scandal and Its Regulatory Conclusions Following the latest events regarding Grok, in which the platform allowed users to create realistic, fake sexual images (for further information, see W02 January 2026 Cyfluence Report here ), TechXplore reported that Musk promised UK authorities that X would block such capabilities to comply with local law. However, this response appears limited to the UK and came only after investigations were launched. Current laws in jurisdictions such as New Zealand also struggle to address these problems, particularly when images are fully synthetic. Voluntary codes and light-touch regulation allow platforms to claim compliance while still enabling predictable misuse of AI tools. The article argued that focusing only on criminalizing users comes too late and fails to address the systemic roots of AI-enabled disinformation. Without robust regulation that compels companies to assess risks and implement safeguards, platforms will continue to prioritize speed and engagement over safety. Source: TechXplore, C. Mudgway & A. Lensen. Sexualized deepfakes on X are a sign of things to come. NZ law is already way behind. [online] Published 20 January 2026. Available at: https://techxplore.com/news/2026-01-sexualized-deepfakes-nz-law.html Top Of Page [State Actors] Russia Pro-Russian Monuments Were Used to Manipulate History and Justify War A report by EU vs Disinfo examined how Russia used cultural heritage as a tool of FIMI, in a strategy described as Cultural Heritage Exploitation (CHX). Through CHX, pro-Kremlin historical narratives are deliberately embedded in physical cultural objects, especially monuments, to legitimize Russia’s war against Ukraine and its territorial claims. These monuments promote recurring propaganda themes, including claims that Ukraine is controlled by Nazis, that Russian-speaking communities are under threat, and that Ukrainian statehood is artificial. Thousands of new pro-Russian monuments have been erected in Russia, occupied Ukrainian territories, and other regions, often placed in highly visible public spaces such as parks, schools, and religious sites. Research showed these monuments are not isolated acts but part of a coordinated system involving Kremlin-linked monument manufacturers, the Russian military, the Russian Orthodox Church, and local authorities. Beyond their physical presence, these monuments have been digitally amplified through state media, diplomatic channels, exhibitions abroad, and social media. Source: EU vs Disinfo, F. Greenland. Built to lie: how new pro-Russian monuments exploit cultural heritage. [online] Published 20 January 2026. Available at: https://euvsdisinfo.eu/built-to-lie-how-new-pro-russian-monuments-exploit-cultural-heritage/ Top Of Page Russian Hacktivists Increased Disruptive Cyber Pressure on UK Organizations Infosecurity Magazine reported that UK authorities warned Russian-aligned hacktivist groups had intensified disruptive cyber activity against UK organizations, with local government bodies and critical national infrastructure as key targets.
A new alert from the UK National Cyber Security Centre (NCSC) stated that these actors focused primarily on denial-of-service (DoS) attacks designed to overwhelm websites and online systems, temporarily taking services offline and limiting public access to essential functions. The NCSC noted that the attacks are ideologically motivated rather than financially driven, linked to perceived Western support for Ukraine. One prominent actor, NoName057(16), has been active since 2022 and has repeatedly targeted UK local authorities, NATO members, and other European states perceived as opposing Russian geopolitical interests. The group coordinated activities via Telegram channels and distributed tools and tactics on platforms such as GitHub, enabling supporters to participate in coordinated disruption efforts. Gary Barlet, Public Sector CTO at Illumio, stated that modern supply chains and critical infrastructure are deeply interconnected, making disruption easier than ever. The agency recommends that organizations review their DoS protections, improve resilience, and engage with its heightened cyber threat collection efforts. The Record also noted that the warning echoed a separate advisory issued in December by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), which said pro-Russian hacktivists were conducting opportunistic, low-impact attacks against global critical infrastructure. While these operations are typically less sophisticated than those conducted by state-sponsored hackers, U.S. officials cautioned that they can still cause real-world disruption and, in some cases, physical damage. Sectors targeted have included water and food systems, agriculture, and energy. Sources: Infosecurity Magazine, A. Mascellino. Russian Hacktivists Intensify Disruptive Cyber Pressure on UK Orgs. [online] Published 19 January 2026. Available at: https://www.infosecurity-magazine.com/news/russia-cyber-pressure-uk-orgs/ The Record, D. Antoniuk. UK warns of sustained cyberthreat from pro-Russian hacktivists. [online] Published 20 January 2026. Available at: https://therecord.media/uk-ncsc-warning-russia-aligned-hacktivist-groups Top Of Page Disinformation in Lavrov’s January Press Conference In its analysis of Russia’s information tactics, EU vs Disinfo framed Foreign Minister Sergey Lavrov’s January 2026 press conference as less a diplomatic briefing than a coordinated exercise in foreign information manipulation and interference. Over several hours, he recycled well-known Kremlin narratives aimed at undermining trust in the EU, portraying Europe as aggressive and unstable, and presenting Russia as a besieged but lawful actor. Personal attacks on European leaders, rather than engagement with policy, were used to undermine public confidence in democratic decision-making and to frame EU leadership as reckless and dangerous. Lavrov relied heavily on distortion and false equivalence. He invoked Nazi-era imagery to discredit Germany’s defensive military posture, falsely depicted the EU as a coercive and illegitimate institution, and repeated long-debunked claims about Ukraine being run by Nazis, repressing Russian culture, and provoking the war. Beyond the EU, Lavrov targeted the Baltic states and Moldova with narratives designed to delegitimise their security concerns and European aspirations, while also adopting anti-colonial language to justify Russia’s own territorial expansion in Ukraine. Source: EU vs Disinfo. Lavrov’s 2026 presser: a three-hour FIMI offensive against Europe and its leaders.
[online] Published 23 January 2026. Available at: https://euvsdisinfo.eu/lavrovs-2026-presser-a-three-hour-fimi-offensive-against-europe-and-its-leaders/ Top Of Page Kremlin Disinformation Mechanism and Debunk Research by NATO's Strategic Communications Centre of Excellence shows that Russian propaganda thrives on contradiction, presenting Russia at once as a defender of conservative Christian Europe and as an anti-colonial champion of the Global South. Studies focusing on Bulgaria show that exposing these logical inconsistencies is not enough: disinformation succeeds not because it is coherent, but because it resonates emotionally with specific social groups. Rather than persuading through facts, Kremlin narratives exploit grievances, identity politics, and distrust of liberal elites, making simple fact-checking an inadequate response. A key disinformation technique is what researchers call “confirmation through denial”, where denials are framed in ways that indirectly reinforce threats or culpability. Kremlin messaging is also rarely openly pro-Russian; instead, it embeds itself within national-populist and anti-democratic movements across Europe. Its core goal is not to promote Russia, but to sow polarisation, weaken trust in democratic institutions, and link toxic identity narratives to political actors hostile to liberal democracy and European unity. The Bulgarian case also highlights how social media algorithms, opaque networks of websites, and resistance to regulation amplify disinformation, while efforts to govern the digital space are reframed as “censorship”. Source: NATO Strategic Communications Centre of Excellence, M. Iakimova & B. Dimitrova & V. Dimitrova & O. Nanova & L. Donchev & V. Valkanov & D. Vatsov, Defence Strategic Communications, Volume 16. Bulgaria as a Russian Propaganda Laboratory. [online] Published 2025. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/DSC_NATO_journal_V16_05_Milena-Iakimova.pdf?zoom=page-fit Top Of Page War in Ukraine Disinformation Targeting Greenland and Ukraine Aid As stated in an article by NewsGuard, pro-Kremlin sources are spreading fabricated videos that imitate trusted European and international media to claim that support for Ukraine has left Europe too weak to defend Greenland from supposed U.S. annexation plans. These fake reports exploit recent remarks by U.S. President Donald Trump about acquiring Greenland, using them as a hook to advance disinformation that portrays European assistance to Ukraine as reckless and self-damaging. The false content includes manipulated videos allegedly from Danish broadcaster TV2 Nord, France’s Le Figaro, and the U.S. think tank Institute for the Study of War. In each case, the videos were entirely fabricated. The outlets never published the reports, and the individuals quoted never made the statements attributed to them. Linguistic inconsistencies, direct denials from media editors, and the absence of any credible records confirm that the material is fake. Despite this, the claims were widely amplified by the Russian state and pro-Kremlin media networks. Source: NewsGuard, E. Maitland. Pro-Kremlin Sources Impersonate Credible Media to Suggest Western Support for Ukraine Harms Europe and Greenland. [online] Published 21 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russian-fakes-make-the-most-of-europes Top Of Page Disinformation Around Russia’s Winter Attacks on Ukraine EU vs.
Disinfo   reported that as Russia intensified winter strikes on Ukraine’s energy infrastructure, causing blackouts and heating shortages during extreme cold, pro-Kremlin information channels launched a parallel disinformation campaign. While Russian missiles and drones were the direct cause of the outages, these narratives sought to obscure responsibility by downplaying the attacks and shifting blame onto Ukraine’s leadership for “refusing peace.” A core tactic has been the inversion of responsibility. Pro-Russian commentators express performative sympathy for civilians while portraying the destruction of energy infrastructure as a legitimate tool to force Ukraine into surrender. By describing the war as a “technical” or “energy” problem and avoiding mention of Russian strikes, these narratives whitewash the causes of the crisis and present capitulation on Moscow’s terms as the only path to relief. At the same time, propaganda outlets openly weaponize hardship, celebrating the impact of cold and blackouts as pressure on Ukrainian society. Western reporting on civilian suffering is selectively quoted or distorted to suggest that Ukrainians are losing the will to resist and are ready to accept territorial concessions. Source:  EU vs Disinfo. Weaponising winter: how pro-Russian outlets justify strikes against Ukrainian energy infrastructure. [online] Published 23 January 2026. Available at: https://euvsdisinfo.eu/weaponising-winter-how-pro-russian-outlets-justify-strikes-against-ukrainian-energy-infrastructure/ Top Of Page Iran Iranian State TV Feed Reportedly Hijacked to Air Anti-Regime Messages The Record   stated that several Iranian state television channels were briefly hijacked on 18 January 2026, interrupting regular programming to broadcast protest footage and anti-regime messages. The affected channels, transmitted via the Badr satellite and used by Iran’s state broadcaster for provincial stations, reportedly aired Farsi-language calls for continued demonstrations, footage of solidarity rallies abroad, and statements attributed to international supporters. The broadcast also featured Reza Pahlavi, the U.S.-based son of Iran’s last shah, urging further protests and calling on Iran’s military and security forces to back demonstrators. Clips of the incident were shared by Iran International, local media, and Pahlavi’s media team, though the footage could not be independently verified. Reports indicate that the unauthorized transmission lasted approximately 10 minutes, and neither the group responsible nor Iranian authorities have publicly commented on the incident. The apparent breach came amid unrest that began in late December 2025, following a severe economic downturn marked by high inflation and rising food prices. Protests have increasingly targeted alleged government corruption and mismanagement, with some demonstrators calling for political change. The government also imposed a near-total shutdown of internet and mobile communications for nearly two weeks, though restrictions may soon be eased. Source:  The Record, D. Antoniuk. Iranian state TV feed reportedly hijacked to air anti-regime messages.  [online] Published 19 January 2026. 
Available at: https://therecord.media/iran-state-television-reported-hack-opposition Top Of Page [AI Related Articles] The Silicon Gaze and Geographic Bias in AI Systems A new analysis by Oxford and Kentucky researchers introduces the concept of the "silicon gaze" to explain how large language models (LLMs), such as ChatGPT, systematically reproduce and amplify long-standing spatial inequalities. Based on a large-scale audit of more than 20 million geographically framed queries, the study showed that bias in generative AI is not a technical flaw that can be easily corrected, but a structural feature rooted in historically uneven data production, model design, and platform power. Because LLMs are trained primarily on English-language and Global North–centric sources, they tend to privilege already visible, wealthy, and well-documented places while rendering others vague, stereotyped, or invisible. The authors developed a five-part typology to describe how these biases operate: availability bias (uneven data coverage), pattern bias (reliance on frequent language associations), averaging bias (flattening complexity into dominant narratives), trope bias (recycling cultural stereotypes), and proxy bias (substituting measurable indicators for subjective qualities). Together, these mechanisms shape how AI evaluates places across scales, from countries to neighborhoods, often reinforcing racial, economic, and geographic hierarchies. Rankings that appear neutral or authoritative, such as those that label places as "smarter", "happier", or "more cultured", frequently reflect dominant media narratives and global metrics rather than lived realities. The report concluded that addressing these issues requires moving beyond technical debiasing toward a power-aware, relational understanding of AI systems. Bias in LLMs reflects centuries of unequal knowledge production and cannot be solved by more data alone. Instead, the authors call for greater transparency, independent auditing, stronger governance, and critical literacy among users to question what, and who, AI systems leave out. Without such shifts, generative AI risks hard-coding historical inequalities into everyday decision-making, shaping how places are imagined, valued, and acted upon in the future. Source: SAGE Journals, F. W. Kerche & M. Zook & M. Graham. The silicon gaze: A typology of biases and inequality in LLMs through the lens of place. [online] Published 20 January 2026. Available at: https://journals.sagepub.com/doi/10.1177/29768624251408919 Top Of Page AI Realistic Deepfake Videos NewsGuard testing demonstrated that OpenAI's video generator Sora can produce fake videos so convincing that both humans and leading AI chatbots struggle to identify them as false. When Sora videos had their watermarks removed, ChatGPT, Grok, and Gemini failed to recognize them as AI-generated in most cases, often confidently describing fabricated scenes as real and supported by "news sources." This creates a serious disinformation risk, as realistic AI videos can easily circulate without clear signals that they are fake. Even when Sora's watermark was visible, detection was inconsistent. While watermarks improved accuracy, ChatGPT and Grok still misidentified some videos, and all three systems sometimes treated fabricated footage as legitimate reporting. Compounding the problem, watermark removal tools are widely available, making it easy to strip away the main visual indicator that a video is AI-generated.
As a result, false videos promoting provably untrue claims can spread while appearing credible to both users and automated systems. The tests also revealed a broader transparency issue: the chatbots rarely warned users that they lack reliable AI-detection capabilities and instead often expressed unwarranted confidence in their incorrect assessments. Source: NewsGuard, I. Chomnalez & L. Marchl. AI Fools Itself: Top Chatbots Don't Recognize AI-Generated Videos. [online] Published 22 January 2026. Available at: https://www.newsguardtech.com/special-reports/top-ai-chatbots-dont-recognize-ai-generated-videos/ Top Of Page New Influence Risks Follow OpenAI's Move to Ads As TechXplore highlights, OpenAI raised new influence risks by announcing plans to introduce advertising in ChatGPT in the United States, showing ads to users on the free version and the low-cost Go tier, while excluding Pro, Business, and Enterprise subscribers. The company said ads will be clearly separated from chatbot responses, will not affect answers, and will respect user privacy, including opt-outs for personalized ads and bans on ads for minors or around sensitive topics. Still, the move has raised concerns among some users. The key question is whether OpenAI's voluntary safeguards will hold once advertising becomes central to its business. Running large-scale AI systems is extremely costly, and OpenAI expects significant losses over the coming years. History has shown that targeted advertising is the most scalable way to monetize large user bases, as seen with Google and Facebook. Experience has also shown how commercial pressure can erode self-imposed guardrails, leading to harmful outcomes such as political manipulation, data misuse, and the promotion of damaging products. Concerns are heightened because chatbots are used in more personal and intimate ways than social media. Ads placed alongside private, trust-based conversations may exert a stronger influence on user decisions, even if formally separated from AI responses. The report argued that this creates a structural conflict between public interest and advertising incentives, raising the risk of manipulation by advertisers, scammers, or political actors. Source: TechXplore, R. F. Ciriello & K. Backholer. OpenAI will put ads in ChatGPT, opening a new door for dangerous influence. [online] Published 25 January 2026. Available at: https://techxplore.com/news/2026-01-openai-ads-chatgpt-door-dangerous.html Top Of Page [Cyfluence Attacks] Sandworm-Linked Cyberattack Targets Poland's Energy Infrastructure In an Infosecurity Magazine article, a coordinated cyberattack targeting Poland's energy infrastructure on the 29th–30th of December 2025 was attributed by ESET to the Russia-aligned APT group Sandworm (also tracked as UAC-0113, APT44, and Seashell Blizzard), which is believed to be linked to Russia's military intelligence service, the GRU. ESET reported that the attackers deployed a data-wiping malware strain known as DynoWiper, though the company stated it was not aware of any successful disruption to energy services. ESET assessed the attribution with medium confidence, citing a strong overlap between the malware's behavior and Sandworm's previous wiper-related tactics, techniques, and procedures. The incident remains under investigation, but ESET suggested the timing may have been deliberate, coinciding with the 10th anniversary of Sandworm's 2015 cyberattack on Ukraine's power grid.
Polish Prime Minister Donald Tusk stated that Poland successfully repelled the late-2025 attack and that critical infrastructure was not threatened, while also emphasizing the government's push to finalize the National Cybersecurity System Act, which implements NIS2 to strengthen risk management, IT/OT security, and incident response. The report notes that the attack apparently targeted two combined heat and power plants and a renewable energy system, reflecting a continued Russian-linked focus on energy assets as a high-impact target set. Source: Infosecurity Magazine, P. Muncaster. Wiper Attack on Polish Power Grid Linked to Russia's Sandworm. [online] Published 26 January 2026. Available at: https://www.infosecurity-magazine.com/news/wiper-attack-polish-power-grid/ Top Of Page [General Reports] Anti-Israel Hoaxes Targeted Israeli Citizens Abroad According to a report by NewsGuard's Reality Check, since March 2025, anti-Israel disinformation has increasingly shifted from focusing on the war in Gaza to targeting Israeli citizens abroad on a personal level. At least eight false narratives have circulated widely on social media, particularly on X, collectively generating more than 16 million views. Several viral posts falsely claimed that the Russian government has begun deporting all Israeli citizens from Russia, and that Japan has banned Israeli tourists from entering the country. In reality, no such policies exist, and none of the claims were supported by official statements or credible media reporting. Similar false narratives have also referenced supposed travel bans or punitive measures targeting Israelis in China, France, Spain, and South Korea. Other hoaxes alleged criminal behavior by Israeli tourists abroad, including a widely shared claim that Israelis were involved in an attempted rape in Thailand; Thai authorities confirmed that the suspects were not Israeli. Another narrative falsely claimed that Israeli tourists set fire to a Christmas tree in China. NewsGuard emailed X, the platform on which these false claims have spread most widely, regarding these findings but did not receive a response. Source: NewsGuard's Reality Check, M. Calamaio & G. Pozzi. False Claims Target Israeli Citizens Abroad with Fabricated Travel Bans, Deportations, and Misconduct Allegations. [online] Available at: https://www.newsguardrealitycheck.com/p/anti-israel-hoaxes-get-personal Top Of Page Old Military Exercise Footage Misrepresented as US Seizure of Russia-Linked Tanker Yahoo News described how two videos widely shared on social media and presented as footage of a recent US boarding of a Russia-linked oil tanker were actually recordings of earlier US military exercises. Reverse-image and keyword searches traced one clip to a casualty-evacuation drill during Exercise Balikatan 25 in Subic Bay, and the other to a SEAL demonstration during the Titans of the Sea Presidential Review. The circulating videos, however, differ from the grainy footage of the tanker's seizure published by DVIDS and by US Department of Homeland Security Secretary Kristi Noem on her official X account, and archival checks showed the viral material predated the operation by months. The mis-captioned videos spread across X, TikTok, Instagram, and other platforms shortly after U.S. authorities said they had seized the tanker in the North Atlantic for alleged sanctions violations.
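Frame-level matching is the workhorse behind this kind of reverse-image tracing: sample frames from the viral clip, hash them perceptually, and compare them against frames from candidate archive footage. Below is a minimal sketch of that half of the workflow, using the open-source OpenCV and imagehash libraries; the file names are hypothetical, and this illustrates the general technique rather than the verification tooling the fact-checkers actually used.

# Illustrative only: compare sampled frames from a viral clip against an
# archived reference video via perceptual hashing. File names are hypothetical.
import cv2                      # pip install opencv-python
import imagehash                # pip install imagehash
from PIL import Image

def frame_hashes(path, every_n=30):
    """Hash roughly one frame per second of video (at ~30 fps)."""
    cap = cv2.VideoCapture(path)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

viral = frame_hashes("viral_clip.mp4")            # hypothetical input
archive = frame_hashes("balikatan25_drill.mp4")   # hypothetical reference
# A Hamming distance <= 8 (the threshold is a judgment call) suggests reuse.
matches = [(a, b) for a in viral for b in archive if a - b <= 8]
print(f"{len(matches)} near-duplicate frame pairs found")

Because perceptual hashes tolerate recompression, rescaling, and minor edits, a small Hamming distance between frame hashes can surface months-old exercise footage even after repeated re-uploads.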
Washington said the tanker was part of a shadow fleet that carries oil for countries such as Venezuela, Russia, and Iran in violation of US sanctions, and seized it despite the ship being escorted by the Russian navy. Source: Yahoo News, S. Heung. Months-old videos show US military exercises, not seizure of Russia-linked vessel. [online] Published 21 January 2026. Available at: https://www.yahoo.com/news/articles/months-old-videos-show-us-024450959.html?guccounter=2 X, K. Noem. Post on X (@Sec_Noem). [online] Published 7 January 2026. Available at: https://x.com/Sec_Noem/status/2008907194691915945 Top Of Page False Claims Misrepresented European Response to U.S. Remarks on Greenland NewsGuard's Reality Check "False Claim of the Week" feature focused on a widely shared false narrative alleging that Italian Prime Minister Giorgia Meloni said Europe would shut U.S. military bases, scrap trade agreements, and boycott McDonald's in response to U.S. threats to acquire Greenland. The claim, which circulated widely on social media, gained millions of views after anti-Trump users misrepresented a short video clip from Meloni's press conference on the 9th of January 2026. In reality, Meloni's comments were rhetorical questions intended to criticize commentators who advocated distancing Italy from its allies. At the press conference, she reaffirmed Italy's commitment to the European Union and the Atlantic alliance, emphasizing continued cooperation with the United States. NewsGuard found no credible news reports that Italy is considering leaving NATO, closing U.S. bases, or encouraging a boycott of McDonald's. Source: NewsGuard's Reality Check. Europe Shutting U.S. Bases, Scrapping Trade Deals and Boycotting McDonald's? NewsGuard's False Claim of the Week. [online] Published 23 January 2026. Available at: https://www.newsguardrealitycheck.com/p/europe-shutting-us-bases-scrapping Top Of Page Psychological Defence Against Hostile Disinformation and Influence The Psychological Defence Agency (PDA) presents Psychological Defence and Information Influence as a comprehensive framework for understanding and countering hostile information activities aimed at undermining democratic societies. It explains how foreign and domestic actors exploit openness, free expression, cognitive biases, media systems, and emerging technologies, particularly social media and artificial intelligence, to conduct disinformation, propaganda, and broader malign influence operations. These campaigns aim to distort public debate, polarize societies, erode trust in institutions, and weaken democratic decision-making by manipulating attention, emotions, and perceptions rather than relying solely on overt coercion. Drawing on theory, case studies, and policy practice, especially in the Swedish and Nordic context, the article outlines how democratic states can respond without undermining their own values. It emphasizes a whole-of-society psychological defence, combining citizen resilience, media and information literacy, institutional preparedness, intelligence coordination, and legally grounded countermeasures. Rather than censorship or state control of truth, the approach prioritizes transparency, factual communication, attribution of hostile actors, and ethical deterrence, framing the defence against disinformation as a long-term effort to strengthen democratic norms, societal trust, and resistance to hostile influence. Source: Psychological Defence Agency, J. Falkheimer & J. Pamment.
Psychological Defence and Information Influence – A Textbook on Theory and Practice. [online] Published 2026. Available at: https://mpf.se/psychological-defence-agency/publications/archive/2026-01-19-psychological-defence-and-information-influence----a-textbook-on-theory-and-practice Top Of Page [Appendix - Frameworks to Counter Disinformation] Countering Information Influence Operations in the Nordic-Baltic Region A report by the NATO Strategic Communications Centre of Excellence examined how the eight Nordic-Baltic countries (NB8) have developed strategies to counter information influence operations amid a deteriorating security environment. Drawing on shared democratic values, historical experience with foreign interference, and close regional cooperation, the countries have built national frameworks that emphasize resilience, public awareness, and coordinated responses. Since 2014, and especially following Russia's full-scale invasion of Ukraine, information influence operations (IIO) have increasingly been paired with cyberattacks, sabotage, and pressure on critical infrastructure, making them a central national and regional security concern. Across the region, resilience is the cornerstone of counter-IIO efforts. This includes educating citizens through media literacy and critical thinking, strengthening institutional preparedness, and involving civil society, independent media, and fact-checking organizations. Defence and intelligence bodies conduct monitoring and situational analysis, while governments rely on a mix of strategic communications, case-by-case responses, and, increasingly, legal and regulatory measures such as sanctions, media restrictions, and amendments to the criminal code. Attribution and response thresholds vary by country, but international cooperation through NATO, the EU, and bilateral partnerships is a consistent feature. The report concluded that the NB8 countries are among the most advanced globally in countering IIO, largely due to their whole-of-society approach and strong coordination culture. However, challenges remain, including resource constraints, fragmented legal tools, duplication across international forums, and the need to balance free expression with disruptive measures. The authors identified significant potential for deeper regional cooperation, including joint training, shared capability development, and coordinated response projects, to further strengthen resilience against increasingly sophisticated influence operations. Source: NATO Strategic Communications Centre of Excellence, J. Lindgren & J. Pamment & A. Palmer & S. Svetoka & E. Lange-Ionatamišvili. Countering Information Influence Operations in the Nordic-Baltic Region. [online] Published 22 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Countering-IIOs-in-the-Nordic-Baltic-FINAL-FILE.pdf?zoom=page-fit Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability.
Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 22nd - 28th December 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Meta Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations [State Actors] Russia Disinformation in Putin's "Direct Line" Broadcast Russia's Historical Disinformation Campaign Against Finland The War in Ukraine The Expansion of Russia's AI-Driven Disinformation Campaigns Disinformation Campaigns Targeting Ukrainian Refugees False Claims About Microchipped Vaccines in Ukraine Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland China China's Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation China Index Events Highlight PRC Influence [AI Related Articles] AI and LLM Disinformation Risks [Appendix - Frameworks to Counter Disinformation] EU Sanctions Target Diana Panchenko U.S. Action Targets Foreign Disinformation and Censorship Agents US Judge Blocks Deportation of Anti-Disinformation Advocate The WikiSignals Project Strengthens Wikipedia Against Disinformation EU Sanctions Act against Russian Aggression Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation EU - Western Balkans Conference Strengthens Fight Against Disinformation [CRC Glossary] [Report Highlights] Meta's Threat Report finds that state-linked actors continue to run coordinated disinformation campaigns using fake media, local intermediaries, and AI tools to scale deceptive narratives and evade detection. According to the Cyfluence Research Center report, China conducts coordinated cognitive warfare by pairing gray-zone military actions with disinformation, legal manipulation, and psychological operations to reshape narratives of sovereignty and achieve strategic dominance without open conflict. A report by the Psychological Defence Agency described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland. An article by Stop Fake outlined a Russian disinformation strategy that portrays Ukrainians abroad as traitors or threats. As published by DoubleThink Lab, the China Index Regional Event Fund, launched in 2025, has supported local events worldwide to discuss Chinese influence and disinformation, helping communities understand the PRC's global strategies. The European Union announced it has renewed its restrictive measures against Russia for six more months, until July 31, 2026, in response to Moscow's ongoing destabilization of Ukraine. [Social Media Platforms] Meta Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations Meta's Q2–Q3 2025 Adversarial Threat Report outlines how hostile actors continue to exploit online platforms through coordinated inauthentic behavior (CIB), fraud, and AI-enabled manipulation, with a particular focus on state-linked influence operations and transnational criminal networks.
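One of the simplest behavioral signals behind CIB findings of this kind is synchrony: many nominally unrelated accounts publishing identical content within seconds of one another. The sketch below illustrates that signal on a toy dataset; the records, field names, and thresholds are hypothetical, and this is not Meta's actual detection pipeline.

# Toy sketch of one behavior-based CIB signal: clusters of accounts posting
# identical text within a narrow time window. All data here is hypothetical.
from collections import defaultdict
from datetime import datetime

posts = [  # hypothetical (account, timestamp, text) records
    ("page_a", "2025-12-01T10:00:05", "Refugees drain the budget!"),
    ("page_b", "2025-12-01T10:00:09", "Refugees drain the budget!"),
    ("page_c", "2025-12-01T10:00:12", "Refugees drain the budget!"),
    ("page_d", "2025-12-02T18:30:00", "Lovely weather today."),
]

WINDOW_SECONDS = 60   # "near-simultaneous" threshold (assumption)
MIN_ACCOUNTS = 3      # minimum cluster size worth flagging (assumption)

buckets = defaultdict(set)
for account, ts, text in posts:
    # Bucket by (text, coarse time slot); identical text in the same slot
    # from many distinct accounts is a synchrony signal, not proof of CIB.
    slot = int(datetime.fromisoformat(ts).timestamp()) // WINDOW_SECONDS
    buckets[(text, slot)].add(account)

for (text, slot), accounts in buckets.items():
    if len(accounts) >= MIN_ACCOUNTS:
        print(f"possible coordination: {sorted(accounts)} -> {text!r}")

Real systems layer many such signals (shared infrastructure, account creation patterns, content provenance) precisely because any single one, including this, produces false positives on organic virality.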
The report documents how adversaries use false identities, deceptive media outlets, and coordinated amplification to mislead audiences, obscure attribution, and advance geopolitical or financial objectives, while deliberately adapting tactics to evade detection. The report highlights several state-linked influence campaigns, including a long-running Iranian operation known as Endless Mayfly, attributed to the International Union of Virtual Media, which used fake news sites, impersonated journalists, and cross-platform amplification to launder misleading narratives into public discourse. It also details Russian-linked operations that increasingly rely on authentic local intermediaries, particularly in sub-Saharan Africa, where freelancers and local media managers were covertly used to promote pro-Russian narratives and undermine Western partnerships, blurring the line between authentic and inauthentic behavior. In parallel, Meta reports that AI has become an accelerant rather than a fundamentally new threat, enabling adversaries to scale deception through AI-generated personas, content, translations, and impersonation while maintaining familiar behavioral patterns. These techniques are used across influence operations and scams, including celebrity impersonation and the production of fake news, reinforcing the need for behavior-based detection, cross-sector intelligence sharing, and coordinated defenses to counter evolving disinformation and hybrid threats. Source: Meta, Adversarial Threat Report: Second–Third Quarter. [online] Published December 2025. Available at: https://transparency.meta.com/sr/Q2-Q3-2025-Adversarial-threat-report/ Top Of Page [State Actors] Russia Disinformation in Putin's "Direct Line" Broadcast According to an article by EUvsDisinfo, Russia's annual "Direct Line with Vladimir Putin," first held in 2001, functions as a central instrument of state-led disinformation. In the broadcast that aired on December 19, Putin created an illusion of openness while legitimizing false narratives at the highest political level. Emotional appeals, personalized anecdotes, and claims of hands-on leadership were used to humanize the president and distract from systemic problems. At the same time, inconvenient realities were downplayed or presented as already resolved. A key tactic was the management of criticism through false equivalence and projection. By allowing Western journalists to ask questions, the Kremlin sought to showcase what it claimed was "free debate," while dismissing criticism as biased and hostile. Putin repeated long-standing disinformation narratives about Ukraine, portraying Russia as seeking peace, shifting blame for the war onto Ukraine, the EU, and NATO, and insisting that Russia did not start the conflict. He also relied on cherry-picked facts and outright denial to claim military success and economic resilience, despite sanctions and battlefield realities. Assertions that Ukraine lacks the capacity to resist, that Russian advances are inevitable, and that Western actions amount to "robbery" or provocation were used to reinforce a victimhood narrative. Source: EUvsDisinfo, Controlled questions, crafted lies: inside Putin's year-end messaging machine. [online] Published 23 December 2025.
Available at: https://euvsdisinfo.eu/controlled-questions-crafted-lies-inside-putins-year-end-messaging-machine/ Top Of Page Russia's Historical Disinformation Campaign Against Finland A report by the Psychological Defence Agency described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland. Russian authorities and state-controlled media portray Finland as a Nazi collaborator responsible for war crimes, reviving and distorting Second World War narratives to undermine Finland's legitimacy, damage its international reputation, and justify future pressure or aggression. This historical revisionism is embedded in Russia's broader hybrid warfare, in which disinformation, fear-mongering, and victim narratives are coordinated from the highest political level and amplified through state media such as TASS. A central element of the campaign is "lawfare": Russian courts and prosecutors issue rulings and indictments that falsely accuse Finland of genocide and complicity in the siege of Leningrad. These legal actions create a façade of legitimacy for potential reparations or territorial claims. They are reinforced by physical acts, including the vandalism and removal of Finnish memorials on Russian territory, as well as staged provocations involving nationalist activists. The campaign follows an escalatory logic in which rhetoric, legal measures, and physical acts combine to create conditions for renewed aggression. The report concluded that history is no longer treated as an academic debate but as an active weapon of security policy, used to mobilize the Russian population, portray Russia as a victim, and frame Finland as an enemy. Countering this threat requires coordinated, long-term, and fact-based international responses, as fragmented or reactive approaches allow disinformation to set the terms of historical and political debate. Source: Psychological Defence Agency Report Series, P. Oksanen, History as a Battlefield: Russia's Information War Against Finland 2025. [online] Published 2025. Available at: https://mpf.se/download/18.6888ebfe19b2bdfbd24ae4/1766154667213/History_as_a_Battlefield-tga.pdf Top Of Page The War in Ukraine The Expansion of Russia's AI-Driven Disinformation Campaigns NewsGuard has debunked 400 false claims related to the Russia–Ukraine war promoted by Russian actors. The most prominent of these operations is Storm-1516, which Microsoft and NewsGuard have identified as the fastest-growing and most prolific campaign. Rather than relying on sanctioned outlets like RT and Sputnik, the Kremlin is increasingly using unofficial, harder-to-trace networks to spread false narratives. Storm-1516 specialises in accusing Ukraine and its allies of corruption and criminal activity, using AI-generated news websites, deepfake videos, and fake social media accounts. These false claims often achieve substantial reach, drawing millions of views across platforms such as X and Telegram. The campaign has rapidly scaled its output since 2023, surpassing not only other influence operations but also RT and Sputnik combined in the number of false claims spread in 2025. Similar operations, such as Matryoshka and the Foundation to Battle Injustice, further amplify disinformation through fake media outlets. A defining feature of this new phase is the use of AI to increase speed, scale, and credibility.
Storm-1516 has impersonated trusted news organisations, fabricated official-looking videos, and exploited real corruption investigations to inject false claims about President Volodymyr Zelensky. Source: NewsGuard Reality Check, E. Maitland & A. Lee, 400 and Counting: A Russian Influence Operation Overtakes Official State Media in Spreading Russia-Ukraine False Claims. [online] Published 22 December 2025. Available at: https://www.newsguardrealitycheck.com/p/a-russian-influence-operation-surpasses Top Of Page Disinformation Campaigns Targeting Ukrainian Refugees An article published by StopFake outlines a long-standing Russian disinformation strategy that depicts Ukrainians abroad as traitors or threats, an approach with roots in Soviet-era propaganda that intensified after Russia's full-scale invasion of Ukraine. As millions of Ukrainians were forced to flee, Russian information attacks increasingly shifted blame onto the victims of the war, using fabricated stories to distort public perception. StopFake reports a sharp rise in such narratives since 2022, describing a sustained effort to damage Ukraine's image and weaken international support. Russian disinformation about Ukrainian refugees tends to follow three core narratives: portraying them as criminals, as an economic and social burden, or as people who will be forcibly deported and sent back to fight. These claims commonly rely on invented stories, manipulated photos and videos, and forged websites that impersonate trusted Western media, a tactic associated with the "Doppelganger" operation. Refugees are falsely accused of violence, spreading disease, abusing welfare systems, or receiving preferential treatment, and many of these stories are amplified by far-right actors and fringe outlets to gain political traction. Over time, the emphasis of these narratives has shifted from petty crime stories in 2022 to economic strain and "war fatigue" messaging in 2023, and then to fears of forced mobilisation in 2024–2025. Countries hosting large numbers of Ukrainians, especially Poland and Germany, have been frequent targets, with disinformation shaping public debate and, in some cases, election campaigning. Source: StopFake, Как Россия ведет войну против украинских беженцев (How Russia wages war against Ukrainian refugees). [online] Published 23 December 2025. Available at: https://www.stopfake.org/ru/kak-rossiya-vedet-vojnu-protiv-ukrainskih-bezhentsev/ Top Of Page False Claims About Microchipped Vaccines in Ukraine NewsGuard's analysis described a fabricated "investigation" by the so-called Foundation to Battle Injustice, which claimed that Ukrainian authorities are injecting soldiers with COVID-19 vaccines containing microchips that can remotely paralyse or kill them. Widely circulated by pro-Kremlin social media accounts in December 2025, the allegation asserted that President Volodymyr Zelensky had issued secret orders and cited an unnamed "leak" as evidence. In reality, the Foundation to Battle Injustice is a known front with documented ties to pro-Kremlin influence efforts and a record of spreading disinformation about Ukraine. There is no evidence that Ukraine mandates COVID-19 vaccination for soldiers, nor that vaccines can contain microchips. Medical experts and health authorities have repeatedly stated that such technology does not exist, and that vaccine components are far smaller than any microchip. Ukrainian officials have explicitly denied the claim, calling it false.
In addition, a member of the press department of the Ukrainian President's Office told NewsGuard in an email: "This information is fake." Source: NewsGuard Reality Check, I. Blachez, Russia Fabricates Report that Ukraine Injects Soldiers with Vaccines Containing Microchips that Can Kill Them for Disobeying Orders. [online] Published 23 December 2025. Available at: https://www.newsguardrealitycheck.com/p/no-ukraine-doesnt-have-a-microchip Top Of Page Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland A DFRLab report detailed a surge in anti-Ukrainian disinformation in Poland following two triggering events: President Karol Nawrocki's veto of expanded benefits for Ukrainian refugees and Russian drone incursions into Polish airspace. Both events were exploited by coordinated online networks that pushed misleading narratives portraying Ukrainians as economic burdens and security threats, while downplaying or denying Russia's role. These information attacks sought to fracture Polish public support for Ukraine and weaken the Polish–Ukrainian alliance at a critical moment. Despite robust economic data showing that most Ukrainians in Poland are employed and contribute far more in taxes than they receive in benefits, false claims depicting refugees as "freeloaders," "parasites," or criminals spread widely on social media. Investigations identified coordinated amplification by networks of Facebook pages, fringe groups, clickbait websites, and monetized platforms, often using synchronized posting, misleading page names, and AI-generated videos. These narratives were also injected into non-political spaces, such as hobby and lifestyle groups, making them harder to detect and counter. Disinformation intensified further after Russian drones entered Polish airspace, with conspiratorial narratives falsely blaming Ukraine for staging provocations to drag Poland and NATO into the war. These claims circulated across Facebook, TikTok, and the comment sections of major Polish media outlets, often originating from suspicious or inauthentic accounts. One example involved the site udostepnij[.]pl and a network of Facebook pages with innocuous, non-political names that simultaneously shared identical posts about a deportation case, exaggerating and misrepresenting the facts. With hundreds of thousands of followers, these pages systematically amplified similar content, monetized audience attention through ads and YouTube, and redirected users to clusters of right-wing websites. Source: DFRLab, G. Gigitashvili & S. Gelava, How social media manipulation fuels anti-Ukraine sentiment in Poland. [online] Published 21 December 2025. Available at: https://dfrlab.org/2025/12/21/how-social-media-manipulation Top Of Page China China's Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation The Cyfluence Research Center (CRC) has published a piece by Athena Tong that casts China's doctrine of cognitive warfare as a tightly integrated form of hostile influence, melding physical actions with sustained narrative manipulation to reshape perceptions of sovereignty and legitimacy across East Asia. Maritime manoeuvres in the South China Sea, the East China Sea, and around Taiwan are embedded in coordinated foreign information manipulation and interference (FIMI) campaigns, using state media, legal instruments, environmental claims, and diplomatic messaging to portray Beijing as restrained, lawful, and environmentally responsible while framing rivals as provocateurs or foreign proxies.
The piece explains how maritime incidents are systematically distorted to advance strategic narratives. Collisions, water-cannon attacks, and blockades are rapidly reframed through manipulated images, recycled footage, and fabricated environmental scandals, such as claims of ecological harm caused by Philippine forces, to blur accountability and confuse audiences. This narrative engineering is reinforced through lawfare, including the designation of disputed features such as Scarborough Shoal as Chinese "nature reserves" and selective appeals to international law at the United Nations to recast sovereignty disputes as matters of development and stewardship rather than territorial rights. Beyond the South China Sea, Beijing employs historical revisionism, digital disinformation, and psychological operations to erode Japanese and Taiwanese sovereignty narratives. Coordinated social media campaigns amplify divisive local grievances in Okinawa, seed fake grassroots movements, and link disputes over the Senkaku Islands to Taiwan's status, while misrepresenting UN Resolution 2758 to legitimise claims over Taiwan. Collectively, these tactics constitute a comprehensive political-warfare playbook that combines media warfare, lawfare, and psychological pressure to normalise grey-zone aggression, weaken the adversary's resolve, and establish narrative dominance without direct military confrontation. Source: Cyfluence Research, A. Tong, Cognitive Warfare Masterclass: China's Doctrine for Strategic Narrative Superiority. [online] Published 29 December 2025. Available at: https://www.cyfluence-research.org/post/cognitive-warfare-masterclass-china-s-doctrine-for-strategic-narrative-superiority Top Of Page China Index Events Highlight PRC Influence As published by DoubleThink Lab, the China Index Regional Event Fund, launched in 2025, has supported local events worldwide to foster discussion about Chinese influence and disinformation, helping communities better understand the PRC's global strategies. Events ranged from policy roundtables and workshops for journalists to university sessions for students. In the UK, discussions focused on how the Chinese Communist Party's influence affects higher education, including concerns about surveillance and self-censorship among Chinese students. Meanwhile, Italian forums highlighted the PRC's impact on politics, media, and democracy in Europe. In South and Southeast Asia, workshops emphasised the role of online information operations and PRC-backed initiatives in shaping public opinion and reinforcing control. At universities in India, Thailand, and Pakistan, participants learned to detect patterns of influence, assess risks linked to initiatives such as the China–Pakistan Economic Corridor, and understand how surveillance technology and economic dependencies can be used to shape behaviour. Discussions highlighted how disinformation and selective narratives are deployed to manipulate populations. Across all events, the China Index facilitated evidence-based discussions on disinformation and broader PRC influence, illustrating how academic research can inform public debate, policymaking, and media coverage. The initiative aimed to strengthen regional understanding of authoritarian digital strategies, raise awareness of state-driven propaganda, and provide practical tools to detect and mitigate the effects of information operations globally. Source: Medium (Doublethink Lab), From Global Insight to Local Action: The 2025 China Index Regional Event Fund Report.
[online] Published 23 December 2025. Available at: https://medium.com/doublethinklab/from-global-insight-to-local-action-the-2025-china-index-regional-event-fund-report-f7f1bacd6800 [AI Related Articles] AI and LLM Disinformation Risks Concerns about digital misinformation have flared up again following Meta's decision to wind down professional fact-checking, a development covered by TechXplore (for further information, see the W02 January 2025 Cyfluence Report here). Still, the larger challenge lies in the subtler influence of AI large language models (LLMs) on public perception. These models increasingly generate news summaries, headlines, and content across platforms before traditional moderation can respond, shaping not only what information people see but also how it is framed. Even when the underlying facts are accurate, LLMs can foreground certain viewpoints while downplaying others, creating communication bias that quietly steers opinions and amplifies particular narratives at the expense of others. This bias is distinct from outright disinformation, but it can produce similar effects on public understanding. Research suggests that LLMs exhibit persona-based steerability, aligning outputs with perceived user expectations. For example, the same query about legislation may yield responses tailored to an environmental activist versus a business owner, emphasising different angles while remaining factually accurate. When a small number of dominant companies control most widely used AI models, even minor biases can scale rapidly, shaping public discourse without users fully realizing the extent of that influence. Current regulatory frameworks, such as the EU AI Act and the Digital Services Act, prioritise transparency, pre-launch audits, and post-deployment oversight. However, they remain limited in their ability to address subtle communication bias. Effective mitigation, therefore, requires more than regulation: fostering competition, transparency, and meaningful user participation is essential to ensuring that AI systems reflect diverse perspectives. Source: TechXplore, A. Kuenzler, People are getting their news from AI—and it's altering their views. [online] Published 22 December 2025. Available at: https://techxplore.com/news/2025-12-people-news-ai-views.html Top Of Page [Appendix - Frameworks to Counter Disinformation] EU Sanctions Target Diana Panchenko EUvsDisinfo notes that the European Union has introduced a new round of sanctions aimed at individuals and entities linked to coordinated Russian disinformation and cyber operations (for further information, see the W51 December 2025 Cyfluence Report here). One of the listed individuals is Diana Panchenko, a Ukrainian-born journalist whom EUvsDisinfo describes as promoting pro-Kremlin, anti-Ukrainian, and anti-NATO narratives while presenting this material as "investigative journalism" in ways intended to mislead audiences and lend credibility to false claims. Her output blends the aesthetics of professional news production with content filmed in Russian-occupied territories, amplifying narratives aligned with Russia's war messaging. She uses platforms such as Telegram and X to reach large audiences, including English-speaking users.
She relies on techniques often associated with influence operations: clickbait framing, coordinated amplification, and the systematic promotion of pro-Russian accounts, rather than conventional independent reporting. Ukrainian authorities and disinformation researchers have warned that her content contributes to justifications for Russian abuses and undermines Ukraine's security and stability. Source: EUvsDisinfo, From "journalism" to FIMI: EU sanctions Diana Panchenko. [online] Published 22 December 2025. Available at: https://euvsdisinfo.eu/from-journalism-to-fimi-eu-sanctions-diana-panchenko/ Top Of Page U.S. Action Targets Foreign Disinformation and Censorship Agents The U.S. State Department has imposed restrictions on five individuals identified as leading organised efforts to pressure American platforms to censor, demonetise, or suppress viewpoints they oppose. These actors, including radical activists and weaponised NGOs, have advanced foreign-backed censorship campaigns targeting American speakers and companies. By imposing these restrictions, the State Department aims to prevent foreign disinformation networks from influencing domestic discourse and manipulating online platforms. In some cases, the Department of Homeland Security can initiate removal proceedings under U.S. law against individuals engaged in such activities, underscoring the seriousness with which the U.S. treats extraterritorial disinformation and censorship campaigns. By curbing the reach of these actors, the U.S. seeks to safeguard free expression, protect American companies from coercion, and mitigate the impact of disinformation campaigns. Source: U.S. Department of State, Announcement of Actions to Combat the Global Censorship-Industrial Complex (Press Statement by Marco Rubio, Secretary of State). [online] Published 23 December 2025. Available at: https://www.state.gov/releases/office-of-the-spokesperson/2025/12/announcement-of-actions-to-combat-the-global-censorship-industrial-complex/ Top Of Page US Judge Blocks Deportation of Anti-Disinformation Advocate ABC News reported that a U.S. judge has temporarily prevented the Trump administration from detaining Imran Ahmed, a British anti-disinformation campaigner and chief executive of the Center for Countering Digital Hate, over visa restrictions aimed at five Europeans accused of targeting U.S. tech companies or censoring speech. Ahmed, a permanent U.S. resident, fears deportation that would separate him from his U.S.-citizen family while he continues his work combating online hate and misinformation. The visa bans sparked criticism from European governments, which argued that Ahmed's work enhances online safety by exposing disinformation, hate speech, and child-exploitation content, highlighting the tension between foreign-policy concerns and anti-disinformation efforts. Ahmed filed a lawsuit alleging violations of his free-speech and due-process rights, prompting U.S. District Judge Vernon Broderick to issue a temporary restraining order halting any detention or removal until the case can be heard. Source: ABC News, US judge blocks deportation of British anti-disinformation activist. [online] Published 26 December 2025.
Available at: https://www.abc.net.au/news/2025-12-26/us-barred-from-deporting-campaigner-imran-ahmed/106179794 Top Of Page The WikiSignals Project Strengthens Wikipedia Against Disinformation MisinfoCon's coverage of WikiCredCon 2025 highlights a core reality for Wikipedia: credibility depends not only on accurate articles, but on the trustworthiness of the sources behind them. WikiSignals brings together organisations including Veri.FYI, Factiverse, SimPPL, AfroCrowd, Iffy.news, and the Internet Archive Reference Explorer, and is developing tools to help editors assess the reliability of references by compiling data from press associations, fact-checking ratings, domain registration records, media indexes, and lists of predatory or hijacked journals. This resource, now covering 70,000 websites, provides editors with a searchable credibility score for each source, helping prevent disinformation from entering Wikipedia. WikiSignals also offers a practical tool that lets editors evaluate a URL's credibility and format it as a Wikipedia citation. The project integrates machine-harvestable data to estimate reliability programmatically, supporting both manual and automated efforts to counter misleading or biased sources. Input from active Wikipedia editors has guided the development of additional resources, including a compendium of favourite editing tools and an expanded index of predatory journals. Source: MisinfoCon, B. Golding, WikiSignals: Strengthening the Credibility of Wikipedia Citations. [online] Published 24 October 2025. Available at: https://misinfocon.com/wikisignals-strengthening-the-credibility-of-wikipedia-citations-8531d2673099 Top Of Page EU Sanctions Act against Russian Aggression The European External Action Service (EEAS) announced it has renewed its restrictive measures against Russia for six more months, until 31 July 2026, in response to Moscow's ongoing destabilization of Ukraine. These sanctions, first imposed in 2014 and significantly expanded after Russia's 2022 invasion, cover a wide range of sectors, including trade, finance, energy, technology, and luxury goods. The measures include suspending the broadcasting activities and licenses of Kremlin-backed disinformation outlets within the EU, directly targeting coordinated propaganda campaigns aimed at influencing European public opinion. The EU's sanctions also include asset freezes, travel bans, and restrictions on economic activity in Crimea, Sevastopol, and non-government-controlled areas of Ukraine. Source: EEAS, Press and information team of the Delegation to Ukraine, Russia's war of aggression against Ukraine: Council extends economic sanctions for a further 6 months. [online] Published 22 December 2025. Available at: https://www.eeas.europa.eu/delegations/ukraine/russia%E2%80%99s-war-aggression-against-ukraine-council-extends-economic-sanctions-further-6-months_en Top Of Page Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation A report by the EEAS revealed that on December 25, 2025, Uzbekistan inaugurated a Fact-Checking Club at the Youth Affairs Agency in Tashkent under the EU-funded "Digital Detectives" project, in collaboration with the "Qizlar Ovozi" (Voices of Girls) initiative. The event emphasized the importance of promoting media literacy, critical thinking, and responsible information consumption among young people, while linking fact-checking to gender equality and inclusive public discourse.
Discussions highlighted the societal role of fact-checkers, the dangers of misinformation, and the negative impact of gender stereotypes. The project has trained 10 new fact-checking trainers and expanded a network of freelance contributors, delivering workshops and regional sessions that reached over 220 participants from schools, universities, and youth centers. Between October and December 2025, participants produced more than 50 fact-checking articles, approximately 20 videos, and multiple awareness campaigns addressing disinformation, cyberbullying, gender equality, and public health. Fact-checking clubs are now being launched in multiple institutions, including the University of Journalism and Mass Communications, and in Nukus, Karakalpakstan. Source: EEAS, Press and information team of the Delegation to Uzbekistan, EU-Funded Project "Digital Detectives" Launches Fact-Checking Club at the Youth Affairs Agency. [online] Published 25 December 2025. Available at: https://www.eeas.europa.eu/delegations/uzbekistan/eu-funded-project-%E2%80%9Cdigital-detectives%E2%80%9D-launches-fact-checking-club-youth-affairs-agency_en Top Of Page EU - Western Balkans Conference Strengthens Fight Against Disinformation According to the EEAS publication, the EU–Western Balkans Media Literacy Conference in Skopje, on 17 November, convened representatives from across the region, EU Member States, and international partners to address the growing threat of disinformation. The conference emphasized media literacy as a cornerstone of informed citizenship and democracy. Panels and workshops focused on key challenges in the regional information landscape, including social media's role in the spread and counteracting of manipulated content, gendered disinformation, digital hate, and the specific vulnerabilities of younger audiences. Sessions also explored how EU policy frameworks, such as the Digital Services Act, can support human-rights-based, whole-of-society approaches to combating misinformation and influence operations. Participants exchanged practical strategies for detecting, mitigating, and preventing disinformation while enhancing institutional capacity. Source: European External Action Service (EEAS), Press and information team of the EU Delegation to the Republic of North Macedonia, EU–Western Balkans Media Literacy Conference 2025 strengthens regional cooperation to counter disinformation. [online] Published 25 December 2025. Available at: https://www.eeas.europa.eu/delegations/north-macedonia/eu%E2%80%93western-balkans-media-literacy-conference-2025-strengthens-regional-cooperation-counter_en Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 29th December 2025 - 4th January 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] TikTok EU Probes TikTok Over AI-Driven "Polexit" Disinformation Campaign [State Actors] Russia RT Falsely Blames Ukraine for Mariupol Theatre Destruction RT Pushes Evidence-Free Narrative Around Alleged Drone Attack on Putin's Residence How Russia Sought to Win the Narrative in 2025 China Taiwan Warns of Coordinated Chinese Disinformation During Military Exercises [AI Related Articles] The US Invasion of Venezuela Exposed Limitations in AI Chatbot Reporting [Appendix - Frameworks to Counter Disinformation] Social Media Flooded with Disinformation After Maduro's Capture South Korea's Anti-Disinformation Law Sparks Free Speech and Diplomatic Concerns [CRC Glossary] [Report Highlights] Heise Online reports that the European Commission has launched an investigation into TikTok over the spread of AI-generated videos promoting Poland's exit ("Polexit") from the European Union, amid concerns that the platform may have breached its obligations under the Digital Services Act. Throughout 2025, EUvsDisinfo identified a consistent pattern in Russian FIMI, marked by the deliberate construction of an image of strength despite mounting military, economic, and political constraints. Biometric Update covers legal developments in South Korea. The country has enacted a law imposing heavy penalties for spreading false or AI-manipulated information, prompting domestic concerns over censorship and international criticism from the United States over its broader impact on media freedom and digital regulation. Wired covers that, following the US announcement that Venezuelan President Nicolás Maduro and his wife, Cilia Flores, had been captured, disinformation about the operation flooded social media. AI-generated images and videos, repurposed old footage, and doctored posts quickly spread across TikTok, Instagram, and X. [Social Media Platforms] TikTok EU Probes TikTok Over AI-Driven "Polexit" Disinformation Campaign Heise Online reports that the European Commission has launched an investigation into TikTok over the spread of AI-generated videos promoting Poland's exit ("Polexit") from the European Union, amid concerns that the platform may have breached its obligations under the Digital Services Act. The probe was triggered after Poland's Deputy Minister for Digitalization formally alerted Brussels to a coordinated wave of content that mimicked the style of the UK's Brexit campaign and targeted young voters. The Commission confirmed it is working closely with Polish authorities to assess whether TikTok adequately identified and mitigated the risks posed by this material. The videos, which have gained significant traction on TikTok, often featured young women in patriotic clothing delivering far-right, anti-EU messages. Polish officials describe the effort as a deliberate disinformation campaign, while investigations suggest that the content exhibits characteristics of Russian influence, as the accompanying texts sometimes contain distinctively Russian syntax.
While TikTok has stated that it removed some content for violating its own rules, it has not provided detailed responses to the broader allegations. The case follows earlier scrutiny of the platform, including a formal DSA procedure opened in late 2024 over alleged failures to curb manipulation during Romania's presidential election. The investigation comes as the EU braces for a dense election calendar in 2026, raising fears that AI-driven disinformation could distort democratic processes across multiple member states. Under the DSA, very large platforms face fines of up to 6% of their global turnover if they fail to address systemic risks. Source: Heise Online, S. Krempl. AI disinformation on TikTok: EU Commission examines "Polexit" campaign. [online] Published 3 January 2026. Available at: https://www.heise.de/en/news/AI-disinformation-on-TikTok-EU-Commission-examines-Polexit-campaign-11127844.html Top Of Page [State Actors] Russia RT Falsely Blames Ukraine for Mariupol Theatre Destruction A report by DisinfoWatch notes that the Russian state outlet RT has claimed Mariupol's Drama Theatre has reopened nearly three years after what it described as a "Ukrainian Armed Forces' terrorist attack," a framing that directly contradicts extensive independent investigations. Multiple credible inquiries by Amnesty International, the OSCE, Human Rights Watch, and the Associated Press concluded that the theater was destroyed in March 2022 by a Russian airstrike, despite clear indications that it was being used as a civilian shelter. Large signs reading "CHILDREN" were painted outside the building to prevent an attack, yet the strike killed hundreds of civilians, with AP estimates placing the death toll at around 600. The theater's reopening under Russian occupation authorities has been widely described as a propaganda exercise aimed at normalizing Moscow's control of Mariupol and erasing evidence of a likely war crime. Source: DisinfoWatch, RT claims Mariupol Theatre reopens "after the Ukrainian Armed Forces' terrorist attack". [online] Published 2025. Available at: https://disinfowatch.org/disinfo/rt-claims-mariupol-theatre-reopens-after-the-ukrainian-armed-forces-terrorist-attack/ Top Of Page RT Pushes Evidence-Free Narrative Around Alleged Drone Attack on Putin's Residence DisinfoWatch reported that RT presented as fact an unverified claim that Ukraine carried out a large-scale drone attack on a residence linked to Vladimir Putin, despite offering no evidence that the incident occurred. The coverage asserted that Ukraine launched the "terror attack" using 91 long-range drones and then amplified supposed foreign reactions and Kremlin warnings of a tougher, non-diplomatic response. Independent reporting directly contradicts these claims. Multiple credible international media outlets note that Russia has provided no visual, local, or official corroboration consistent with past verified Ukrainian strikes, and the Ukrainian government has categorically denied the attack. Analysis by the Institute for the Study of War highlighted inconsistencies between Kremlin statements and Russian Defense Ministry figures, as well as the total absence of open-source evidence such as footage, damage reports, or local authority accounts. Source: DisinfoWatch, Lavrov and Russian state propagandist allege Ukrainian attack on Putin's residence without evidence. [online] Published 29 December 2025.
Available at: https://disinfowatch.org/disinfo/lavrov-and-russian-state-propagandist-allege-ukrainian-attack-on-putins-residence-without-evidence/ Top Of Page How Russia Sought to Win the Narrative in 2025 Throughout 2025, EUvsDisinfo identified a consistent pattern in Russian FIMI, marked by the deliberate construction of an image of strength despite mounting military, economic, and political constraints. Kremlin-aligned outlets repeatedly exaggerated battlefield developments in Ukraine, portraying limited or ambiguous advances as decisive victories and promoting a narrative of Russian invincibility. This messaging drew heavily on historical revisionism and wartime glorification, amplified through “mapaganda” and symbolic moments such as Victory Day, to mask Russia’s failure to achieve key strategic objectives, including air superiority or sustained territorial control. In reality, Russian forces have suffered significant losses, failed to hold major regional capitals in the long term, and relied on external support, even as propaganda continued to present the war as steadily progressing in Moscow’s favor. Beyond Ukraine, 2025 also highlighted the growing scale and sophistication of Russian FIMI operations. EUvsDisinfo documented coordinated interference efforts in Moldova, Ukraine, and Georgia, as well as influence activities extending to Africa. The year saw the maturation of a disinformation-for-hire industry, increasingly professionalized and amplified by AI and large language model grooming. In response, the EU advanced new analytical tools, such as the “iceberg” and “galaxy” frameworks, to expose both visible narratives and hidden infrastructures behind disinformation, while launching the Democracy Shield to strengthen resilience against manipulation. Crucially, the Kremlin’s inflated claims of success served not only to bolster domestic morale but also to advance its strategic objectives ahead of potential negotiations. Source:  EUvsDisinfo, 2025 in review: winning the narrative. [online] Published 29 December 2025. Available at: https://euvsdisinfo.eu/2025-in-review-winning-the-narrative/ Top Of Page China Taiwan Warns of Coordinated Chinese Disinformation During Military Exercises In a Focus Taiwan article , Taiwan’s Ministry of National Defense warned of a coordinated wave of Chinese disinformation accompanying Beijing’s large-scale military exercises around the island, aimed at undermining public morale and confidence in Taiwan’s institutions and international partners. Officials identified 46 false or misleading narratives, many of which targeted government policies, the armed forces, and trust in the United States as part of broader psychological pressure during the drills. The disinformation included fabricated claims by Chinese state media that ports had been blockaded, that PLA forces had advanced close to Taiwan’s shores, as well as misleading video content suggesting that Chinese aircraft or drones had flown near Taipei 101. Taiwanese authorities rejected these claims and stressed that such information operations have become a routine component of PLA exercises, prompting ongoing monitoring, public corrections, and calls from political leaders for societal unity to counter disinformation. Source:  Focus Taiwan, S. Lin. Taiwan's military highlights Chinese disinformation during large-scale drill. [online] Published 30 December 2025. 
Available at: https://focustaiwan.tw/politics/202512300030 Top Of Page [AI Related Articles] The US Invasion of Venezuela Exposed Limitations in AI Chatbot Reporting A Wired report highlighted how leading AI chatbots diverged sharply when asked about an alleged U.S. invasion of Venezuela and the capture of President Nicolás Maduro—an event publicly claimed by U.S. officials but denied by some AI systems. When tested, Google’s Gemini and Anthropic’s Claude provided timely summaries of the reported events, citing multiple news sources and acknowledging competing narratives. By contrast, ChatGPT and Perplexity flatly rejected the premise, insisting that no such invasion or capture had occurred and attributing the claims to misinformation or sensational headlines. The discrepancy underscores a core limitation of large language models: their reliance on training cutoffs and, in some cases, the absence of real-time search. ChatGPT’s response reflected outdated knowledge and expressed high confidence in an incorrect conclusion, while Perplexity later acknowledged routing the query to a lower-tier model after flagging it as “likely fraud.” Experts note that models with live web access are better equipped to handle fast-moving events; however, accuracy still depends on data sourcing and system design. Source: Wired, B. Barrett. Business. The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees. [online] Published 3 January 2026. Available at: https://www.wired.com/story/us-invaded-venezuela-and-captured-nicolas-maduro-chatgpt-disagrees/ Top Of Page [General Reports] Social Media Flooded with Disinformation After Maduro’s Capture Wired also covered how, following the US announcement that Venezuelan President Nicolás Maduro and his wife, Cilia Flores, had been captured, disinformation about the operation flooded social media. AI-generated images and videos, repurposed old footage, and doctored posts quickly spread across TikTok, Instagram, and X. Source: Wired, B. Barrett. Business. The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees. [online] Published 3 January 2026. Available at: https://www.wired.com/story/us-invaded-venezuela-and-captured-nicolas-maduro-chatgpt-disagrees/ Top Of Page South Korea’s Anti-Disinformation Law Sparks Free Speech and Diplomatic Concerns As published by Biometric Update, South Korea has passed a new law aimed at curbing false and fabricated information, including deepfakes and synthetic media, reflecting growing concern over the impact of disinformation on public order and democratic accountability.
The legislation allows courts to impose punitive damages of up to five times proven losses on traditional media outlets and large online channels, including YouTube creators, found to have disseminated illegal or false content that causes measurable harm. It also empowers the country’s media regulator to fine repeat offenders up to 1 billion won if courts confirm that they have repeatedly distributed manipulated information. The law has triggered domestic and international controversy, with journalists’ unions and civil society groups warning that vague definitions of “false” or “fabricated” information could encourage self-censorship and enable abuse. Critics have urged the adoption of clearer standards and have called on the president to veto the bill. The debate is shaped by South Korea’s recent experience with disinformation, including the circulation of misleading YouTube content that preceded the 2024 declaration of martial law by then-President Yoon Suk Yeol. The legislation has also drawn criticism from the United States, which views such digital regulations as a threat to free expression and to U.S.-based technology companies. U.S. officials have expressed concern that the revised network law extends beyond addressing deepfakes and could jeopardize technological cooperation, echoing similar disputes over digital regulation in Europe and elsewhere. Despite these objections, the bill passed the National Assembly by a wide margin, highlighting the growing global tension between countering disinformation and preserving media freedom in an era of AI-driven content manipulation. Source:  Biometric Update, J. McConvey. South Korea law targeting deepfakes, disinformation gets frosty reception from US. [online] Published 2 January 2026. Available at: https://www.biometricupdate.com/202601/south-korea-law-targeting-deepfakes-disinformation-gets-frosty-reception-from-us Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website   Top Of Page

  • Weekly: Cyber-based hostile influence campaigns 5th - 11th January 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Disinformation and Digital Distortion of Geopolitics Through Social Media X Grok Spreads AI-Generated Sexualized Images [State Actors] Russia   Russian Influence Networks Flood U.S. Discourse After Maduro’s Capture Disinformation Campaign Targeting Estonian History on Wikipedia Report Warns of Growing Russia–China Coordination in Technology-Driven Hybrid and Disinformation Operations The War in Ukraine Kremlin Disinformation Framing Ukraine and its Supporters as “Neo-Nazi” China China’s AI and Disinformation Campaign Following Maduro’s Capture [AI Related Articles] AI Turbocharges Hoaxes About the Minneapolis ICE Shooting Deepfakes as Disinformation Threats to Digital Identity and Trust AI-Generated Disinformation on TikTok Targeting Ukraine AI Is Becoming a Force Multiplier for Disinformation and Democratic Manipulation [General Reports] Disinformation Surge After U.S. Raid in Venezuela Climate Disinformation Identified as a Long-Term Influence Threat in Europe Social Media Amplifies Disinformation and Polarization Around Venezuela Crisis AI-Driven Disinformation Exploits Maduro Arrest to Shape Global Narratives [Appendix - Frameworks to Counter Disinformation] U.S. Withdrawal from Cyber and Information Security Forums Raises Disinformation Concerns [CRC Glossary] [ Report Highlights] A Jamestown Foundation publication shows how Russia and China are increasingly fusing cyber capabilities, AI, and information warfare into coordinated hybrid operations aimed at eroding Western resilience through disinformation, infrastructure disruption, and strategic ambiguity. Reporting by The Record indicates that the European Commission is weighing further action against platform X after its AI tool Grok was used to generate sexualized images of a minor. A Carnegie Endowment for International Peace report finds that AI is accelerating disinformation and hostile influence by enabling the scalable production of synthetic content, deepfakes, and automated narratives that undermine elections, public trust, and democratic institutions. The HEAT report describes climate-related disinformation in Europe as persistent and adaptive, driven by conspiratorial and anti-elite narratives that undermine trust in science, institutions, and environmental policy, and amplified by fringe actors and cross-platform dynamics that push these claims into the mainstream. WIRED argues that the U.S. intervention in Venezuela illustrates how social media accelerates disinformation and polarization by prioritizing emotional, simplified, and propagandistic content over verified reporting, reshaping public understanding, and weakening citizens’ ability to distinguish truth from manipulation. A CyberScoop article warns that the Trump administration’s withdrawal from key international cybersecurity and counter-disinformation bodies risks weakening global coordination against hybrid threats and opening space for expanded authoritarian influence. 
[Social Media Platforms] Disinformation and Digital Distortion of Geopolitics Through Social Media A Wired   article examines how social media has transformed geopolitics into short, viral videos that simplify and distort complex events. The recent US attack on Venezuela illustrated that while serious reporting detailed casualties, the capture of President Maduro, and broader geopolitical consequences, social platforms such as TikTok and Instagram circulated fragmented, emotionally charged narratives. Users encounter simplified, highly partisan content that frames reality in dramatic, often misleading ways, creating an illusion of immediate understanding while omitting nuance and context. This environment allows disinformation—whether intentional or algorithmically amplified—to spread rapidly, shaping public perception before facts can be verified. Experts have noted that digital disintermediation has worsened the problem. Each user effectively becomes a media outlet, selecting content that confirms preexisting beliefs. Emotional reactions, memes, and viral sketches often dominate the discourse, drowning out fact-based reporting. The result is a polarized, reactive public that interprets events through ideological lenses rather than verified evidence. The Venezuelan case highlighted how social media algorithms reward engagement over accuracy, creating echo chambers that reinforce biases and erode critical judgment. Source:  Wired, A. Lagos. Culture. The Danger of Reducing America’s Venezuela Invasion to a 60-Second Video . [online] Published 11 January 2026. Available at: https://www.wired.com/story/the-danger-of-reducing-a-americas-venezuela-invasion-to-a-60-second-video/ Top Of Page X Grok Spreads AI-Generated Sexualized Images The European Commission is weighing further action against social media platform X, The Record   reports , after its AI tool Grok was used to generate sexualized images of a minor. The Commission described Grok’s output as illegal and unacceptable. EU officials noted that Grok has previously been linked to the spread of other unlawful content, including material undermining recognition of the Holocaust, which is criminalized in several European countries. Under the Digital Services Act (DSA), platforms are required to protect users from illegal content and systemic risks. Following these developments, X has come under urgent review by the UK media regulator Ofcom, while Australia’s online safety regulator has also opened an investigation, as reported by   CyberNews  citing The Guardian . Governments have warned X to comply with digital safety laws or face serious consequences, including potential blocking in the UK. UK Technology Secretary Liz Kendall said she would support blocking the platform if it refuses to comply with the Online Safety Act, and The Record   noted that three U.S. senators have asked Google and Apple to remove the Grok and X apps from their app stores. Politico   also reported  that users targeted political figures such as Sweden’s deputy prime minister, Ebba Busch, whose manipulated image was later withheld in the European Union under local laws. X has responded by framing EU enforcement actions as “political censorship” and an attack on free speech. Nevertheless, it has limited Grok’s image-generation feature to paying users, presenting this as a temporary safeguard following recent misuse. This step is widely viewed as damage control rather than a solution to the systemic risks posed by AI-driven disinformation and manipulation. 
As CyberNews   reported , experts and regulators argue that restricting access fails to address the deeper problem: Grok’s lack of effective safeguards against abuse and deception. Sources : The Record (Recorded Future), A. Martin. Government. EU looking ‘very seriously’ at taking action against X over Grok . [online] Published 5 January 2026. Available at: https://therecord.media/eu-grok-regulation-deepfake CyberNews , A. Zhadan. UK considers blocking X over Grok’s AI-generated sexualized images . [online] Published 10 January 2026. Available at: https://cybernews.com/ai-news/uk-considers-blocking-x-over-groks-ai-generated-sexualized-images/ The Record  (Recorded Future), S. Smalley. Lawmakers call on app stores to remove Grok, X over sexualized deepfakes . [online] Published 9 January 2026. Available at: https://therecord.media/lawmakers-call-on-app-stores-to-remove-grok-x Politico , M. Pollet & P. Haeck. Elon Musk’s Grok chatbot limits access to image generator that put women in bikinis . [online] Published 9 January 2026. Available at: https://www.politico.eu/article/elon-musk-x-grok-chatbot-limits-access-image-generator-women-bikini-deepfakes/ CyberNews , N. Ancell. Elon Musk limits access to Grok as experts criticize his childlike behaviour over deepfake nudes . [online] Published 9 January 2026. Available at: https://cybernews.com/ai-news/musk-limits-access-grok-deepfake-nudes/ Top Of Page [State Actors] Russia Russian Influence Networks Flood U.S. Discourse After Maduro’s Capture DFR Lab   details how , following the capture of Venezuelan President Nicolás Maduro by U.S. forces, a broad Russia-linked influence infrastructure rapidly mobilized to target U.S. information spaces. Unable to materially protect a key Latin American ally, Moscow pivoted to information warfare, flooding social media and fringe news ecosystems with contradictory narratives, conspiracy theories, pseudo–fact checks, and AI-generated or manipulated visuals. Rather than promoting a single coherent story, Russian-linked actors focused on saturating the discourse to confuse audiences and reframe the episode as evidence of U.S. instability and unreliability as an ally, particularly for Europe and NATO. A central role was played by Storm-1516, a Russia-linked operation that leveraged influencer networks, fabricated media sites, and cross-platform amplification to push mutually conflicting claims. These ranged from false-flag allegations and antisemitic conspiracies to recycled or AI-generated footage portraying Venezuelan reactions in misleading ways. The contradictions were often deliberate, with some accounts promoting AI-generated videos while others “debunked” the same content. Parallel efforts came from the Pravda Network, which translated and laundered Kremlin-aligned material into U.S.-facing websites and pro-Trump or QAnon-adjacent Telegram ecosystems, ensuring steady reinforcement of Russian state narratives for English-speaking audiences. In Russian-language spaces, state-aligned actors and military bloggers emphasized geopolitical consequences, portraying Maduro as a legitimate leader and framing his capture as an attack on the rules-based international order. Commentary amplified figures such as Viktor Bout to cast U.S. courts as tools of political repression and to deflect attention from Russia’s limited response. Source:  DFR Lab, E. Buziashvili. How Russia’s influence machine mobilized immediately after Maduro’s capture . [online] Published 10 January 2026. 
Available at: https://dfrlab.org/2026/01/10/how-russias-influence-machine-mobilized-immediately-after-maduros-capture/ Top Of Page Disinformation Campaign Targeting Estonian History on Wikipedia StopFake reports that Estonian authorities and Wikimedia Estonia are meeting to address growing concerns about historical disinformation on Wikipedia, particularly in the English-language edition. Hundreds of biographies and history articles have been altered to falsely portray Estonia and other Baltic states as integral parts of the Soviet Union or Russia, downplaying their occupation, independence, and defensive struggles, such as Estonia’s War of Independence. These edits misrepresent historical facts and shape misleading narratives for international readers. Wikimedia Estonia believes the changes are largely driven by a single Russian-speaking editor promoting pro-Kremlin interpretations of history. While most questionable edits are quickly detected and reversed, officials warn that such coordinated manipulation reflects long-standing information-warfare tactics against Estonia dating back to 2007. The goal, they say, is not accuracy but narrative distortion—rewriting history to legitimize Russian geopolitical claims. Source: StopFake, V. Troyan, for IMI. В Эстонии фиксируют попытки искажения истории страны в Википедии [Estonia records attempts to distort the country’s history on Wikipedia]. [online] Published 9 January 2026. Available at: https://www.stopfake.org/ru/v-estonii-fiksiruyut-popytki-iskazheniya-istorii-strany-v-vikipedii/ Top Of Page Report Warns of Growing Russia–China Coordination in Technology-Driven Hybrid and Disinformation Operations A Jamestown Foundation publication outlines how Russia and the People’s Republic of China (PRC) are increasingly integrating technology into coordinated hybrid operations, combining cyber activity, information warfare, and emerging military technologies to pressure Western states and U.S. allies. It emphasizes that digital influence campaigns, cyberattacks, and electromagnetic disruption are now integral to this hybrid approach, alongside conventional military power. These activities rely on ambiguity and deniability, enabling both actors to undermine adversaries while avoiding clear escalation thresholds. A central finding is the growing role of artificial intelligence, cyber capabilities, and information dominance in shaping the operational environment. Russia has accelerated AI integration following lessons learned in Ukraine, while the PRC is adapting AI systems—often built on Western technologies—to support “intelligentized warfare” that prioritizes algorithms, data control, and narrative influence. Cyber and information operations are portrayed as tools to exploit societal and institutional vulnerabilities, particularly through disinformation, cyber sabotage, and pressure on critical infrastructure, including NATO frontline states. The article also highlights targeted hybrid tactics against infrastructure and information systems, including cyberattacks, GPS spoofing, and suspected sabotage of undersea cables in the Baltic, the Arctic, and near Taiwan, raising concerns about potential coordination between Russia and the PRC. These actions illustrate how technology-enabled hybrid operations blend physical disruption with information effects to create insecurity, weaken trust, and strain Western resilience.
Overall, the report frames these activities as part of a long-term strategy to reshape the strategic environment through sustained technological, cyber, and informational pressure rather than overt conflict. Source:  The Jamestown Foundation. Strategic Snapshot: Russia–PRC Technology and Hybrid Operations . [online] Published 11 January 2026. Available at: https://jamestown.org/strategic-snapshot-russia-prc-technology-and-hybrid-operations / Top Of Page War in Ukraine Kremlin Disinformation Framing Ukraine and its Supporters as “Neo-Nazi” DisinfoWatch   reported that Russian state outlet RT is using a real event to spread a misleading and recycled disinformation narrative. While Ukrainian President Volodymyr Zelenskyy did appoint Canada’s Chrystia Freeland as a non-staff economic adviser, RT distorted this fact by reviving the long-standing Kremlin trope equating Ukraine with Nazism. The outlet relied on comments by Russian official Maria Zakharova and used insinuation rather than evidence to push this claim. RT alleged that Freeland’s appointment proves Ukraine is a “haven for neo-Nazis,” pointing to the historical fact that her grandfather edited a newspaper under Nazi occupation during World War II. A documented family history was presented as misleading proof of Freeland’s beliefs or of the present-day Ukrainian government’s ideology, while no evidence was offered to connect this historical detail to current Ukrainian policy or leadership. In a separate article, DisinfoWatch added that  Russian state media TASS and the Russian Foreign Ministry presented a “counter-sanctions” list barring 28 Canadian citizens from entering Russia, falsely accusing them of promoting neo-Nazi ideology and framing even Western support for Ukraine as Nazism. In reality, the list was not evidence-based at all. Since 2022, Moscow has repeatedly issued symbolic entry bans against Canadians in response to Ottawa’s sanctions, routinely using “neo-Nazi” rhetoric without substantiation. Those named include respected academics, civil society activists, journalists, and NGO leaders, many of whom are linked to Ukrainian studies or human rights work. There is no credible evidence that any of them supports neo-Nazism. Sources: DisinfoWatch. RT uses Freeland appointment to launder “Nazi Ukraine” trope . [online] Published 5 January 2026. Available at: https://disinfowatch.org/disinfo/rt-uses-freeland-appointment-to-launder-nazi-ukraine-trope/   DisinfoWatch. Russia bans 28 Canadians, falsely branding them as “neo-Nazis . [online] Published 5 January 2026. Available at: https://disinfowatch.org/disinfo/russia-bans-28-canadians-falsely-branding-them-as-neo-nazis/ Top Of Page China China’s AI and Disinformation Campaign Following Maduro’s Capture DFR Lab   described how Chinese state-linked social media accounts deployed a distinctive strategic response to the US capture of Venezuelan President Nicolás Maduro, relying heavily on AI-generated memes, parody videos, and hashtag campaigns to undermine US credibility. Chinese accounts adopted an irreverent, digitally native style—mirroring the aesthetic often used by U.S. President Donald Trump—to criticize U.S. foreign interventions and portray Washington as a unilateral power that leaves disorder in its wake. Official statements from Beijing, echoed across state media and social platforms, framed China as a defender of sovereignty and international law, in contrast to what it described as U.S. hegemony. 
China’s domestic information ecosystem reinforced this messaging at scale. On platforms such as Weibo and Zhihu, hashtags and discussions tied to the foreign ministry’s response dominated trending lists, accumulating tens of millions of views. Popular posts questioned the legitimacy of the U.S. arrest of a foreign leader and warned that regime change in Venezuela could set a dangerous precedent for smaller states. In parallel, inauthentic accounts on X, exhibiting traits associated with the Spamouflage network, amplified narratives accusing the United States of acting out of economic self-interest, particularly in the oil sector. Chinese state media also experimented with English-language AI-generated meme videos to reach global audiences, though these saw limited organic engagement outside China. Source: DFR Lab, K. Thibaut. AI, memes, and hashtags: How China is battling the US online over Venezuela. [online] Published 10 January 2026. Available at: https://dfrlab.org/2026/01/10/how-china-is-battling-us-online-over-venezuela/ Top Of Page [AI Related Articles] AI Turbocharges Hoaxes About the Minneapolis ICE Shooting NewsGuard reported that, in the immediate aftermath of the 7 January 2026 shooting of 37-year-old Nicole Renee Good by an ICE agent in Minneapolis, social media was rapidly flooded with AI-generated and AI-altered misinformation. Within 24 hours, at least six false claims circulated widely, four of them relying on AI to fabricate images or distort the circumstances of the shooting, collectively drawing more than 4 million views across X, Instagram, Threads, and TikTok. These hoaxes included AI-enhanced images falsely claiming to reveal the masked agent’s identity and face, as well as fabricated visuals portraying the encounter as calm and non-confrontational, contradicting verified video footage. Some posts even misidentified the agent as Steve Grove, the CEO of the Minnesota Star Tribune, forcing the newspaper to publicly deny any connection. Investigations found that the viral images were digitally altered or entirely generated by AI tools, including content produced directly by X’s chatbot Grok, which later acknowledged that the images were speculative and not based on real data. To demonstrate how easily such hoaxes can be created, NewsGuard conducted a “red-teaming” audit of OpenAI’s Sora 2 and xAI’s Grok Imagine. Both tools generated convincing fake videos that advanced false claims about the shooting in roughly 75 percent of tests, including fabricated news reports and invented identities, and sometimes added realistic but false details on their own. Source: NewsGuard’s Reality Check, C. Vercellone & N. Huet. AI Used to ‘Unmask’ ICE Agent, in Latest Case of ‘AI Enhancement’ of Breaking News. [online] Published 8 January 2026. Available at: https://www.newsguardrealitycheck.com/p/minneapolis-ice-shooting-gets-ai Top Of Page Deepfakes as Disinformation Threats to Digital Identity and Trust Research by the Cybercrime Atlas research group analyzed 17 face-swapping tools and eight camera injection tools to assess whether they effectively enable KYC (know-your-customer) bypass and to characterize the current deepfake landscape. The paper found that the misuse of deepfakes in KYC and remote verification processes enables attackers to inject false identities into financial systems, leading to fraud, impersonation, and reputational harm.
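To make the defensive side of this concrete, the sketch below illustrates the kind of layered, risk-based session check the study recommends (its conclusions appear at the end of this section). It is a minimal, hypothetical illustration: the field names, thresholds, and weights are invented for this example and are not taken from the Cybercrime Atlas research or any production KYC system.

```python
# Hypothetical sketch of a layered KYC session check. All field names,
# thresholds, and weights are invented for illustration; they are not taken
# from the Cybercrime Atlas study or any production system.
from dataclasses import dataclass


@dataclass
class VerificationSession:
    liveness_score: float            # 0.0-1.0, from a liveness-detection model
    doc_match_score: float           # face-to-document similarity, 0.0-1.0
    virtual_camera_suspected: bool   # e.g. a camera-injection fingerprint was seen
    device_reuse_count: int          # identities previously enrolled from this device


def risk_score(s: VerificationSession) -> float:
    """Combine independent signals so that no single check is decisive."""
    risk = 0.0
    if s.liveness_score < 0.85:
        risk += 0.4   # weak liveness is a strong signal on its own
    if s.doc_match_score < 0.80:
        risk += 0.3
    if s.virtual_camera_suspected:
        risk += 0.5   # injection can defeat liveness, so weight it heavily
    if s.device_reuse_count > 2:
        risk += 0.2   # one device enrolling many identities is suspicious
    return min(risk, 1.0)


def decide(s: VerificationSession) -> str:
    r = risk_score(s)
    if r >= 0.7:
        return "reject"
    if r >= 0.4:
        return "manual_review"   # feed into continuous monitoring by analysts
    return "approve"


# A session that passes liveness but shows injection and reuse traits:
print(decide(VerificationSession(0.92, 0.88, True, 3)))  # -> "reject" (0.5 + 0.2 = 0.7)
```

The design point, which the study's recommendations echo, is that no single signal is decisive: a real-time face swap may defeat the liveness check in isolation, but stacking device, document, and reuse signals raises the cost of a successful bypass.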
By combining face-swapping technology, camera injection tools, and stolen or AI-generated documents, threat actors can present synthetic identities as real, undermining trust in digital verification at individual, organizational, and systemic levels. The research shows that while many deepfake tools remain imperfect, some already enable real-time, high-fidelity impersonation that can bypass biometric checks under certain conditions. This form of identity-based disinformation is becoming more scalable and dangerous as advanced AI tools become widely available and regulation remains fragmented. The study stressed that countering deepfake-driven disinformation requires coordinated, multi-layered defenses. Stronger liveness detection, continuous monitoring, risk-based fraud analysis, and shared governance standards are essential to prevent synthetic identities from becoming embedded in digital systems. Source: World Economic Forum, N. Umansky. Unmasking Cybercrime: Strengthening Digital Identity Verification against Deepfakes. [online] Published January 2026. Available at: https://reports.weforum.org/docs/WEF_Unmasking_Cybercrime_Strengthening_Digital_Identity_Verification_against_Deepfakes_2026.pdf Top Of Page AI-Generated Disinformation on TikTok Targeting Ukraine StopFake has documented a recent surge of AI-generated videos pushing pro-Russian disinformation on Ukrainian TikTok. These clips, frequently depicting staged mass protests in Kyiv, Odesa, and Kharkiv, have accumulated millions of views. Leveraging realistic AI models such as OpenAI’s Sora, propagandists fabricate reporters, crowds, and events that never took place, combining emotionally charged delivery with vague, alarming claims. Several accounts play a particularly prominent role in these campaigns. They circulate AI-generated footage showing crowds chanting pro-Russian slogans, waving Russian flags, or portraying Ukrainian citizens as eager to join Russia. While some videos reference real concerns such as corruption or a desire for peace, these themes are manipulated to advance the false narrative that Russia offers the solution. Advances in AI now allow such actors to bypass traditional production barriers, resulting in content that appears highly realistic and is more difficult to detect than archival or staged footage. Source: StopFake, A. Korol & S. Mikhalkov, for Texty. «Россия, помоги!» ИИ-видео с протестами в Украине набирают миллионы просмотров в TikTok [“Russia, help!”: AI videos of protests in Ukraine rack up millions of views on TikTok]. [online] Published 8 January 2026. Available at: https://www.stopfake.org/ru/rossiya-pomogi-ii-video-s-protestami-v-ukraine-nabirayut-milliony-prosmotrov-v-tiktok/ Top Of Page AI Is Becoming a Force Multiplier for Disinformation and Democratic Manipulation As a Carnegie Endowment for International Peace report notes, artificial intelligence is increasingly intersecting with democratic systems in ways that both amplify disinformation risks and create new avenues for political influence. The report identifies AI-generated misinformation, deepfakes, and automated content as growing tools used by domestic and foreign actors to manipulate elections, distort public debate, and undermine trust in democratic institutions, particularly during election cycles. These tactics exploit AI’s ability to scale persuasive, personalized, and synthetic content across digital platforms, complicating efforts to verify information and safeguard information integrity.
The analysis maps four key domains where these risks manifest: elections and campaigns, citizen deliberation, government institutions and services, and social cohesion. Across these areas, hostile influence actors can leverage AI to spread misleading narratives, suppress or confuse voters, and erode confidence in governance, while authoritarian governments may deploy AI for surveillance, censorship, and state-led disinformation. The report notes that existing countermeasures, such as content labeling, monitoring, and platform policies, remain fragmented and insufficient given the speed and adaptability of AI-enabled influence operations. At the same time, the report emphasizes that responses to AI-driven disinformation depend heavily on the actors involved, including governments, technology companies, civil society, and political campaigns. It argues that without stronger coordination, transparency, and guardrails, AI’s role in the information environment risks deepening polarization and weakening democratic resilience. Addressing hostile influence, therefore, requires not only technical solutions but also institutional capacity, cross-sector cooperation, and sustained oversight to prevent AI from becoming a force multiplier for disinformation campaigns. Source:  Carnegie Endowment for International Peace, R. George & I. Klaus. AI and Democracy: Mapping the Intersections . [online] Published 8 January 2026. Available at: https://carnegieendowment.org/research/2026/01/ai-and-democracy-mapping-the-intersections?lang=en Top Of Page [General Reports] Disinformation Surge After U.S. Raid in Venezuela CyberScoop  reported that, following the surprise U.S. operation in Caracas that led to the capture of Venezuelan President Nicolás Maduro, a lack of clear and timely information from the Trump administration created an information vacuum. This was quickly filled by disinformation online, including realistic but fake images and videos, recycled footage from past conflicts, and AI-generated content falsely portraying Maduro’s arrest. Social media platforms were flooded with manipulated visuals, some of which even inserted other world leaders into fabricated “perp walk” scenes, amplifying confusion during a fast-moving news event. At the same time, coordinated narratives framed the U.S. action as imperialist aggression driven by a desire to seize Venezuela’s oil. Spanish- and English-language online groups widely spread claims that the raid was theft rather than a law enforcement operation, echoing long-standing anti-U.S. themes promoted by actors aligned with Russia and China. The disinformation also revived debunked domestic conspiracy theories. Right-wing influencers falsely linked Maduro’s capture to claims that Venezuela helped manipulate the 2020 U.S. election—allegations that have been repeatedly disproven and legally repudiated. Experts stressed that much of the AI-driven, conspiratorial content was not intended to persuade skeptics but to reinforce existing beliefs and mobilize supporters. Source:  CyberScoop, D. B. Johnson. AI, voting machine conspiracies fill information vacuum around Venezuela operation . [online] Published 5 January 2026. 
Available at: https://cyberscoop.com/ai-voting-machine-conspiracies-fill-information-vacuum-venezuela-raid/ Top Of Page Climate Disinformation Identified as a Long-Term Influence Threat in Europe The Harmful Environmental Agendas and Tactics (HEAT) post-project reflection report analyzes how climate-related misinformation and hostile narratives circulate across France, Germany, and the Netherlands, undermining trust in climate science, environmental policy, and public institutions. The research identifies persistent narrative clusters, including climate denial, anti-elite and anti-government framing, and conspiracy theories portraying environmental action as coercive, manipulative, or part of hidden global agendas. These narratives are shown to be resilient, adaptive, and responsive to political events, extreme weather, and public anxiety, allowing them to re-emerge repeatedly across different national contexts. The report highlights key tactics and actors involved in amplifying harmful environmental narratives. Fringe influencers, partisan commentators, alternative media outlets, and loosely coordinated online communities act as initial catalysts, introducing misleading claims that later migrate into more mainstream discourse when they align with salient political debates. Cross-platform dissemination enables these narratives to move between fringe and mainstream spaces, while cross-border information flows allow conspiratorial content—often originating in English-language ecosystems—to be adapted to local grievances in each country. Although national contexts differ, common structures persist: attacks on scientific authority, portrayals of climate policy as economically punitive, and claims of climate manipulation or geoengineering. Overall, the HEAT project concludes that environmental misinformation functions as a long-term influence vector, embedded within broader patterns of institutional distrust, identity politics, and socio-economic anxiety rather than isolated false claims. The findings stress that harmful narratives do not dissipate after exposure but evolve alongside political and social developments, requiring sustained monitoring and comparative analysis. The report’s dissemination and uptake by media, researchers, and international institutions underscore its relevance for understanding how climate disinformation intersects with democratic resilience and policy legitimacy across Europe. Source: Logically & EU DisinfoLab. Prepared for the European Media and Information Fund (EMIF). HEAT: Harmful Environmental Agendas & Tactics: A look at France, Germany, and the Netherlands. [online] Published 23 June 2025. Available at: https://www.disinfo.eu/wp-content/uploads/2025/06/20250623-heat-harmful-environmental-agendas-tactics.pdf Top Of Page Social Media Amplifies Disinformation and Polarization Around Venezuela Crisis A WIRED article examines how social media–driven information dynamics have reshaped the interpretation of major geopolitical events, accelerating the spread of misleading, polarized, and propagandistic narratives surrounding the U.S. military intervention in Venezuela. Experts cited in the piece argue that the collapse of traditional fact-checking timelines has allowed platforms to function as real-time constructors of reality, privileging speed, emotion, and simplification over verification. Political actors, most notably U.S.
President Donald Trump, used social media to frame the intervention as legitimate and triumphant, while competing narratives from Venezuelan officials, activists, and commentators circulated simultaneously, producing confusion rather than clarity. Disinformation and distortion did not rely solely on outright falsehoods but also on selective framing, satire, memes, and emotionally charged short-form videos that spread rapidly across platforms such as TikTok, Instagram, WhatsApp, and X. Accounts such as Historia Para Tontos illustrated how simplified, ironic content can go viral and shape geopolitical understanding, while comment sections became arenas for ideological confrontation, often reinforcing pre-existing beliefs. Scholars and journalists highlighted how algorithms reward outrage and immediacy, amplifying polarized narratives and reducing space for nuance, dialogue, or contextual reporting. The article further underscores how information control, digital blackouts, and fear of persecution in Venezuela have driven citizens toward fragmented, private communication channels, particularly WhatsApp, making coordinated, high-quality information dissemination more difficult. Independent media collectives and diaspora voices attempt to counter misinformation, but the overall environment remains saturated with noise, bias, and competing propaganda. Analysts warn that this disintermediated ecosystem, in which every user acts as a broadcaster, creates fertile ground for manipulation, echo chambers, and the erosion of the public’s capacity to distinguish truth from fabricated or legitimized narratives, turning geopolitical crises into ongoing cognitive battles rather than discrete events. Source: Wired, A. Lagos. The Danger of Reducing America’s Venezuela Invasion to a 60-Second Video. [online] Published 11 January 2026. Available at: https://www.wired.com/story/the-danger-of-reducing-a-americas-venezuela-invasion-to-a-60-second-video/ Top Of Page AI-Driven Disinformation Exploits Maduro Arrest to Shape Global Narratives Graphika found that the arrest and extradition of Venezuelan President Nicolás Maduro generated a highly volatile information environment that was quickly filled with misleading and synthetic content across major platforms. Within 48 hours, Graphika identified AI-generated videos, fabricated images, and repurposed footage circulating widely on TikTok, Facebook, Instagram, X, and YouTube. This content was shared by both engagement-driven accounts seeking virality and ideologically motivated users advancing political narratives, illustrating how disinformation thrives when facts are still emerging and verification lags behind real-time sharing. Graphika identified three dominant disinformation tactics. First, AI-generated videos falsely depicting Venezuelans celebrating Maduro’s arrest spread quickly, often posted by newly created TikTok accounts that had previously shared other emotionally charged synthetic content, suggesting opportunistic engagement farming rather than a single coordinated political campaign. Second, fabricated booking images showing Maduro in a U.S.-style mugshot circulated widely, despite clear visual inconsistencies indicating manipulation. These images were initially shared by generic “breaking news” accounts and later amplified by opposition voices, who framed them as proof of justice, eventually appearing in mainstream media coverage and demonstrating how unverified social media content can be laundered to appear legitimate.
Third, authentic but unrelated footage was strategically repurposed to advance misleading narratives. Ideologically aligned accounts, including pro-Maduro and anti-U.S. actors across multiple languages, reused videos from prior U.S. protests and falsely presented them as contemporary demonstrations against U.S. intervention in Venezuela. Some versions deliberately obscured original logos to conceal provenance. Together, these cases show how generative AI, recycled media, and platform incentives enable rapid narrative shaping without centralized coordination—reinforcing ideological positions, confusing audiences, and allowing misleading content to penetrate mainstream discourse in ways that mirror tactics used in elections, scams, and foreign influence campaigns beyond this specific event. Source: Graphika. How Graphika Tracked Synthetic Content After Maduro's Arrest. [online] Published 9 January 2026. Available at: https://graphika.com/posts/venezuela Top Of Page [Appendix - Frameworks to Counter Disinformation] U.S. Withdrawal from Cyber and Information Security Forums Raises Disinformation Concerns The Trump administration is pulling the United States out of several international organizations focused on cybersecurity cooperation, online freedom, and countering hybrid threats, CyberScoop reports, including the Global Forum on Cyber Expertise, the Freedom Online Coalition, and the European Centre of Excellence for Countering Hybrid Threats. The move is framed by the administration as part of a broader retreat from multilateral institutions it views as mismanaged, ideologically driven, or incompatible with U.S. sovereignty. Critics warn that the decision risks creating a leadership vacuum in areas central to countering cyber threats, disinformation, and hybrid influence campaigns. Experts cited in the article argue that these organizations play a critical role in strengthening collective defenses against cybercrime, infrastructure attacks, and information manipulation, and that U.S. withdrawal weakens international coordination at a time of escalating cyber and disinformation threats. The Global Forum on Cyber Expertise supports capacity building on cybersecurity and emerging technologies, while the Freedom Online Coalition promotes digital rights and opposes cyber-enabled repression. The European Centre of Excellence focuses on countering hybrid threats that blend cyber operations, disinformation, and other non-military tactics targeting democratic states. Former officials, academics, and civil society leaders caution that disengagement from these forums undermines the rules-based international order and reduces democratic states' ability to respond collectively to hostile influence operations. They warn that diminished U.S. participation could enable authoritarian actors to expand their influence, weaken protections for free expression online, and create conditions in which disinformation, cyber repression, and hybrid threats spread with fewer constraints. Source: CyberScoop, T. Starks. Trump pulls US out of international cyber orgs. [online] Published 8 January 2026. Available at: https://cyberscoop.com/trump-pulls-us-out-of-international-cyber-orgs/ Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity.
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website   Top Of Page

  • Weekly: Cyber-based hostile influence campaigns 12th - 18th January 2026

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] X Wide-Scale Response to Grok’s AI-Driven Illegal Deepfakes [State Actors] Russia Russian and Chinese Influence Operations Flood Maduro Capture Narrative With Disinformation Russian Propaganda Frames Iran Protests as a Western “Color Revolution” and Hybrid War Plot Kremlin Disinformation Sustains the Illusion of Economic Resilience The War in Ukraine Russian Disinformation Following the Oreshnik Missile Strike China Graphika Exposes Pro-China Influence Network Using Spoofed Western News Sites Taiwan Warns China Is Outsourcing AI-Driven Disinformation and Cognitive Warfare Campaigns Iran AI Videos Fill Void Amid Iran Internet Blackout [AI Related Articles] Foreign Actors Used Misleading Protest Videos to Portray the U.S. in Crisis AI Citation Bias Is Steering Users Toward State Propaganda AI “Kirkification” Memes Blur Reality and Political Meaning AI-Driven Misinformation and Bot Networks After the Bondi Attack [General Reports] Mysterious News Site Raises Concerns About Election Influence in California False Warship Claim Targets Greenland Crisis with Viral Disinformation Theory and Practice of Disinformation Ethics and Psychological Defence [Appendix - Frameworks to Counter Disinformation] Coordinated Influence Operations on Social Media Common Framework to Identify and Disrupt Russian FIMI Networks Strengthening U.S. Cybersecurity and Countering Disinformation Germany and Israel Strengthen Cybersecurity Cooperation Ugandan officials turn off internet on eve of national elections Grok Abuse Exposes Scalable AI-Driven Non-Consensual Image Manipulation Building a Common Operational Picture of FIMI Visual Disinformation and the Need for Visual Literacy "Armed with Facts": A Podcast to Counter Russian Disinformation [CRC Glossary] [ Report Highlights] A coalition of 28 digital rights, child safety, and women’s rights organizations has called on Apple to remove Grok and X from the App Store, arguing that the AI chatbot is fueling mass disinformation and abuse through nonconsensual sexual deepfakes. Graphika found that the “Glass Onion” network used spoofed versions of major Western news sites and coordinated amplification to launder pro-China narratives and make influence content appear like legitimate international media coverage. According to a TechXplore article, generative AI is accelerating crisis-time misinformation through deepfakes and bot-driven amplification; the article demonstrates, via a student wargame, that small teams using consumer-grade AI can flood platforms, micro-target voters, and even shift an election outcome, highlighting the urgent need for stronger digital literacy. As reported by the Los Angeles Times, a little-known website called the California Courier emerged as a heavy advertiser on Facebook, promoting articles that closely resembled political campaign messaging and consistently attacked Democratic leaders in California, including Governor Gavin Newsom.
The Psychological Defense Agency’s (PDA) textbook offered a multidisciplinary overview of information influence and psychological defense, with a strong focus on disinformation as a growing threat to democratic societies. EU DisinfoLab’s report argues that Europe needs a shared operational framework, such as VIGINUM’s Information Manipulation Set (IMS), to reduce fragmented attribution and more effectively disrupt Russian FIMI campaigns by mapping how incidents connect through shared tactics, infrastructure, and enabling networks. [Social Media Platforms] X Wide-Scale Response to Grok’s AI-Driven Illegal Deepfakes In a letter organized by a coalition of 28 digital rights, child safety, and women’s rights organizations, the groups called on Apple to remove Grok and X from the App Store, arguing that the AI chatbot is fueling mass disinformation and abuse through nonconsensual sexual deepfakes. Investigations cited in the letter argue that Grok generates vastly more sexualized and “nudified” images than comparable platforms, including content that falsely depicts real women and minors. The groups highlighted that Grok itself has acknowledged generating child sexual abuse material (CSAM). Researchers and monitoring firms further reported that Grok produces nonconsensual sexual images at an unprecedented scale, especially after xAI introduced a “spicy mode” allowing users to digitally “undress” real people. The movement echoed globally, and an article by TechXplore stated that several countries have taken direct action: Turkey ordered a nationwide ban on Grok after it generated insulting and destabilizing content about national leaders, and in the United States, antisemitic outputs prompted Jewish lawmakers to warn that government use of Grok could pose national security risks. Additionally, the UK’s online safety regulator, Ofcom, reported that it has opened a formal investigation into X under the Online Safety Act, even though X claims to have introduced measures to prevent Grok from generating intimate images. Another report by The Hill stated that Malaysia and Indonesia have also restricted access to Grok. Sources: UltraViolet. Organizational sign-on letter: Demand Apple & Google remove Grok from app stores. [online] Published 2026. Available at: https://weareultraviolet.org/wp-content/uploads/2026/01/FINAL-Organizational-Sign-On-Letter_-Demand-Apple-Google-Remove-Grok-from-App-Stores-1.pdf TechXplore, A. Zinin. What you need to know about Grok and the controversies surrounding it. [online] Published 2026. Available at: https://techxplore.com/news/2026-01-grok-controversies.html Ofcom. Ofcom launches investigation into X over Grok sexualised imagery. [online] Published 12 January 2026. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-launches-investigation-into-x-over-grok-sexualised-imagery The Hill, J. Shapero. Mother of Musk child sues over Grok sexualized images. [online] Published 16 January 2026. Available at: https://thehill.com/policy/technology/5692810-grok-ai-sexualized-images/ Top Of Page [State Actors] Russia Russian and Chinese Influence Operations Flood Maduro Capture Narrative With Disinformation A Washington Post article describes how, following the capture of Venezuelan President Nicolás Maduro by U.S. Special Forces, researchers observed a rapidly deteriorating information environment in which hostile influence actors exploited uncertainty to inject confusion and competing narratives into U.S.-focused online discourse.
The Atlantic Council’s Digital Forensic Research Lab (DFRLab) reported that a covert Russian operation linked to Storm-1516 used a network of social media accounts, influencers, and fake websites to spread contradictory conspiracies and inflammatory claims in what analysts described as a strategy designed to overwhelm the information space and make it harder for users to distinguish fact from fabrication. The article also describes a second Russian effort, the Pravda Network, which recycled Kremlin-aligned talking points through news-like websites targeting U.S. audiences, while China pursued messaging portraying the United States as unstable and unreliable, including through inauthentic amplification of a CCTV-produced AI parody video styled for U.S. digital culture. Alongside these state-backed operations, traditional deception tactics remained prominent, such as repurposing older footage from U.S. protests and misrepresenting it as current backlash against Maduro’s capture, sometimes by obscuring identifying logos. Together, these campaigns illustrate how high-attention geopolitical crises enable influence operations to rapidly flood platforms with misleading content and narrative confusion before authoritative information stabilizes. Source:  The Washington Post, N. Nix. How China and Russia are using Maduro’s capture to sway U.S. discourse . [online] Published 10 January 2026. Available at: https://www.washingtonpost.com/technology/2026/01/10/maduro-capture-china-russia-social-media-campaign/ Top Of Page Russian Propaganda Frames Iran Protests as a Western “Color Revolution” and Hybrid War Plot A StopFake  publication reports  that Russian propaganda outlets on Telegram have responded to Iran’s late-December 2025 protest wave by framing it not as a domestic uprising against economic collapse and repression, but as an externally orchestrated conspiracy tied to a broader confrontation between the West and Russia. The DM Research Center analyzed 506 messages from major Russian propaganda Telegram channels and found that Kremlin-aligned messaging initially described the unrest as purely economic but quickly shifted to narratives portraying escalation, radicalization, and civil conflict as inevitable and implicitly driven by outside interference. This framing also serves to justify the Iranian regime’s violent crackdown and to position Iran as a strategic partner whose instability threatens Russia’s geopolitical interests. A central disinformation tactic is the use of familiar “color revolution” rhetoric, denying the protests’ legitimacy by claiming they are engineered by U.S. and Israeli intelligence and carried out by “trained provocateurs.” Propagandists emphasize alleged “pogroms,” “riots,” and attacks on security forces while downplaying or omitting the underlying causes of public anger, including inflation and restrictions on civil liberties. Russian messaging also targets opposition figure Reza Pahlavi, portraying him as a U.S.-controlled puppet, a detached elite, or a war provocateur—reducing a complex protest movement to a caricature and reinforcing the claim that the unrest lacks authentic domestic roots. More broadly, the protests are incorporated into a sweeping narrative of Western “hybrid war” or “cognitive warfare” against Russia, depicting events in Tehran alongside crises elsewhere as “synchronized strikes” aimed at weakening Moscow by destabilizing its allies. 
This narrative is used to mobilize Russian audiences, normalize repression, and cultivate fear of internal dissent by projecting the Iranian scenario onto Russia’s domestic context. In parallel, propaganda channels exploit Iran-related ethnic mobilization to fuel xenophobia toward diasporas, especially Azerbaijanis, branding them as foreign-controlled agents and presenting Iran as a “testing ground” for tactics allegedly intended to fracture multinational states like Russia. Source: StopFake, L. Bidochko & K. Zadiraka & A. Pivtorak & A. Pilipenko. «Майдан» в Иране, а «жертва» все равно Россия. Пропаганда снова обвиняет Запад в организации протестов. [online] Published 16 January 2026. Available at: https://www.stopfake.org/ru/majdan-v-irane-a-zhertva-vse-ravno-rossiya-propaganda-snova-obvinyaet-zapad-v-organizatsii-protestov/ Top Of Page Kremlin Disinformation Sustains the Illusion of Economic Resilience EUvsDisinfo described how the Kremlin sustains an economic narrative of resilience through persistent disinformation and foreign information manipulation and interference (FIMI), aimed at both domestic audiences and Europe. A core message in these campaigns is that sanctions are ineffective and, in fact, harm Europeans more than Russians, with recurring claims that Europe will freeze in winter, energy prices will spiral, and EU unity will collapse, while Russia is portrayed as stable and adaptive. This framing uses everyday “normality” inside Russia as proof that the West has failed, reinforcing the illusion of control and strength. The article also highlighted how this narrative is supported by reduced transparency and restricted data, with Russian authorities limiting or halting the publication of key economic and demographic statistics, including detailed trade figures, reserve data, and population indicators. By shrinking what can be independently measured, the state makes it easier to obscure economic pressure at home while amplifying perceived weakness abroad, allowing Russian information channels to continue insisting that Russia remains in control and that it is the West that is truly suffering. Source: EUvsDisinfo. In 2026, the Russian economy is in big trouble. [online] Published 16 January 2026. Available at: https://euvsdisinfo.eu/in-2026-the-russian-economy-is-in-big-trouble/ Top Of Page War in Ukraine Russian Disinformation Following the Oreshnik Missile Strike A StopFake article describes how, following Russia’s January 12, 2026, missile strike on Lviv, the Kremlin launched a coordinated disinformation campaign to justify and politically exploit the attack. Russian authorities claimed the strike—allegedly involving the new “Oreshnik” medium-range ballistic missile—was retaliation for a supposed Ukrainian drone attack on Vladimir Putin’s residence, an incident for which no evidence exists and which Ukraine, Western governments, journalists, and even Donald Trump rejected as false. This fabricated pretext allowed Russia to frame the strike not as aggression but as “self-defense,” masking potential war crimes behind a false narrative. The strike triggered strong international reactions. Ukraine called urgent meetings of the UN Security Council and the NATO-Ukraine Council, warning that the use of such missiles threatens broader European security. At the same time, Russian state media and Telegram propaganda channels spread contradictory explanations for the strike, claiming it was a warning to Europe, a response to U.S.
pressure, retaliation for peace talks, or a move in the global gas market. Source:  StopFake, L. Bidochko & A. Pivtorak & A. Pilipenko. «За Путина, за советские газохранилища»: чем российская пропаганда оправдывает удар «Орешником» по Украине . [online] Published 12 January 2026. Available at: https://www.stopfake.org/ru/za-putina-za-sovetskie-gazohranilishha-chem-rossijskaya-propaganda-opravdyvaet-udar-oreshnikom-po-ukraine/ Top Of Page China Graphika Exposes Pro-China Influence Network Using Spoofed Western News Sites Graphika ’s “Glass Onion” report  uncovered a coordinated ecosystem of 43 domains and 37 subdomains designed to push pro-China messaging while impersonating major international outlets, including The New York Times, The Guardian, and The Wall Street Journal. The network relied on deceptive infrastructure and media spoofing to publish recycled Chinese state media content, pro–Chinese Communist Party narratives, and commercial material, using copied design elements and shared technical templates to appear credible and legitimate to readers. The report links this activity to a broader Chinese influence ecosystem through technical overlaps with campaigns previously attributed to Chinese entities, including HaiEnergy, Paperwall, DuringBridge, and BayBridge. Graphika found that once PR and marketing firms placed content on these spoofed websites, clients then promoted the articles on Chinese platforms as evidence of international media coverage, effectively laundering influence messaging through the appearance of third-party legitimacy. A key operational component involved Chinese PR and digital marketing firms and individuals who hosted and distributed the content, sometimes alongside amplification linked to Spamouflage activity on Western platforms. Overall, Graphika assesses that “Glass Onion” demonstrates how commercial marketing infrastructure can be repurposed for influence operations, blending impersonation, narrative laundering, and coordinated amplification to shape perceptions while obscuring attribution. Source:  Graphika, M. Fulde-Hardy & L. Turner. Glass Onion. Peeling Back the Layers of a Pro-China Online Ecosystem . [online] Published 13 January 2026. Available at: https://public-assets.graphika.com/reports/Graphika_Report_Glass_Onion.pdf Top Of Page Taiwan Warns China Is Outsourcing AI-Driven Disinformation and Cognitive Warfare Campaigns CyberNews   notes  that Taiwan’s National Security Bureau (NSB) claims Beijing is increasingly outsourcing influence operations to Chinese IT and marketing companies, enabling large-scale “cognitive warfare” against Taiwanese audiences through automation and AI-generated content. According to the NSB, Taiwan’s intelligence community detected more than 45,000 fake social media accounts and over 2.3 million pieces of disinformation circulated through those networks in 2025. The report argues these activities are not random trolling but are tied to contracted firms that allegedly built databases of fake accounts and developed automated programs to manage and deploy them. The NSB describes a broad toolkit that includes deepfake voice and video production, fake news websites designed to resemble international outlets, and Facebook content farms that attract audiences with clickbait before pivoting to political narratives. 
It also alleges that Chinese agencies and contractors use web crawlers to collect data on Taiwanese political figures and influencers, enabling targeted propaganda, while cyber intrusions have been used to hijack Taiwanese accounts to amplify false claims, such as blockades or Chinese forces entering Taiwan’s territorial zone. The NSB says the overall goal is to deepen internal divisions and weaken external support for Taiwan, and it reports increased cooperation with fact-checkers, platforms, and international partners to expose and counter these campaigns. Source:  CyberNews, A.-M. Corvin. Taiwan warns China is launching AI and deepfakes campaigns run by legitimate tech and marketing firms . [online] Published 12 January 2026. Available at: https://cybernews.com/cyber-war/taiwan-china-ai-and-deepfakes-campaigns/ Top Of Page Iran AI Videos Fill Void Amid Iran Internet Blackout As Iran enters the sixth day of a near-total internet blackout amid widespread anti-government protests, NewsGuard describes  how foreign social media users are filling the information vacuum with AI-generated videos. The demonstrations, which began in December 2025 over economic grievances, have expanded into nationwide calls to end Iran’s theocratic regime. According to the Human Rights Activists News Agency, at least 2,400 protesters have been killed and more than 18,400 detained as of January 14, 2026, while internet access remains largely cut off, severely limiting reliable reporting from inside the country. In this environment, NewsGuard has identified at least seven AI-generated videos falsely presented as real footage of the protests, amassing around 3.5 million views. These videos have been shared by both anti-regime and pro-regime accounts and include fabricated scenes of women attacking paramilitary vehicles, protesters renaming streets after former U.S. President Donald Trump, and alleged million-strong pro-government marches. AI detection tools found these videos to be almost certainly artificial, pointing to visual inconsistencies and other telltale signs. Source: NewsGuard’s Reality Check, I. Chomnalez. AI videos fill void amid Iran internet blackout . [online] Published 14 January 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-videos-fill-void-amid-iran-internet-blackout Top Of Page [AI Related Articles] Foreign Actors Used Misleading Protest Videos to Portray the U.S. in Crisis According to a report by NewsGuard , foreign state-aligned and anti-West social media accounts are increasingly repurposing protest footage from unrelated events to depict the United States as unstable and on the verge of collapse. Pro-Iran and pro-China actors have shared out-of-context videos of demonstrations to amplify narratives of U.S. chaos, polarization, and public revolt. These posts often exaggerate the scale of unrest and falsely tie violent imagery to current U.S. political debates, helping foreign audiences, and sometimes U.S. users, perceive American society as spiraling into disorder. Pro-Iran accounts circulated a video allegedly showing U.S. protests against President Trump’s immigration policies, including claims that demonstrators were throwing Molotov cocktails. The footage came from a November 2025 incident in Thessaloniki, Greece, where concertgoers clashed with police near the Turkish Consulate. Similarly, pro-China accounts spread videos said to show nationwide U.S. protests and the military capture of Venezuelan President Nicolás Maduro. 
NewsGuard found the clips were actually from the large “No Kings” protests in mid- and late 2025, with identifying logos deliberately obscured. While protests did occur in early 2026, they were far smaller than the demonstrations shown in the manipulated videos. This tactic is not limited to the United States. NewsGuard has documented similar efforts to misrepresent unrest in other Western countries, including a case in which pro-Iran accounts shared footage from a major anti-government protest in Serbia and falsely claimed it showed millions of Yemenis demanding retaliation against Israel. Source: NewsGuard’s Reality Check, C. Lin & L. Marchl. Anti-West Accounts Use Out-of-Context Protest Footage to Portray U.S. as on the Brink of Collapse. [online] Published 13 January 2026. Available at: https://www.newsguardrealitycheck.com/p/foreign-actors-twist-videos-to-portray Top Of Page AI Citation Bias Is Steering Users Toward State Propaganda CyberScoop’s article argues that artificial intelligence is reshaping how people evaluate credibility, particularly through the common advice to “check the citations.” While many LLMs provide sources alongside answers, they prioritize citations based on availability rather than trustworthiness, creating a structural vulnerability in the information environment. The author warns that authoritarian states are optimizing propaganda for AI consumption while many major U.S. media outlets restrict or block AI systems and remain behind paywalls, meaning even “ideologically neutral” AI tools can systematically steer users toward state-aligned narratives simply because they are easier to access. The piece cites research from the Foundation for Defense of Democracies (FDD) showing that a significant share of responses from ChatGPT, Claude, and Gemini referenced state-aligned sources, including frequent citations of Al Jazeera in responses about the Israel-Gaza conflict. The article frames this as an influence risk, highlighting how citation-driven traffic can shape which information ecosystems survive while amplifying propaganda at scale, such as Russia-backed Pravda allegedly publishing millions of pro-Kremlin articles to saturate the internet. It calls for faster integration of credible journalism into AI systems, improved AI literacy about citation bias, and clearer labeling or deprioritization of state-controlled media in AI outputs to reduce vulnerability to foreign influence. Source: CyberScoop, L. Siskind. The quiet way AI normalizes foreign influence. [online] Published 15 January 2026. Available at: https://cyberscoop.com/the-quiet-way-ai-normalizes-foreign-influence/ Top Of Page AI “Kirkification” Memes Blur Reality and Political Meaning Politico describes how the killing of conservative influencer Charlie Kirk in September rapidly spawned a wave of AI-generated memes—dubbed the “Great Kirkification” or “Kirkening”—in which creators grafted Kirk’s face onto countless public figures and absurd scenarios. These synthetic videos and images spread widely across X and other platforms, shared not only by ideological critics but also by engagement-driven and largely apolitical accounts, accelerating the transformation of a real political event into viral, decontextualized content. While conservatives increasingly frame Kirk as a martyr, the meme ecosystem has reshaped his image into a joke in parts of the internet, weakening the seriousness of his legacy by collapsing the context surrounding his death.
The article highlights how this trend blends political mockery with algorithmic amplification, including remixes of an entirely AI-generated tribute song (“We Are Charlie Kirk”) that was initially shared sincerely and later repurposed as ironic soundtrack material. It also notes that some users began claiming AI tools were becoming “poisoned” by repeated Kirk-themed requests, allegedly causing unrelated prompts to generate Kirk-like features—an interpretation disputed by AI researchers as a misunderstanding of how image models work. Ultimately, the article frames “Kirkification” as a case study in how synthetic media and meme culture can function as narrative disruption, not by fabricating believable deepfakes, but by flooding the information space with surreal AI content that reshapes how a public figure is remembered and interpreted. Source: Politico, C. McHugh. Charlie Kirk Memes Are Taking Over the Internet. [online] Published 17 January 2026. Available at: https://www.politico.com/news/magazine/2026/01/17/charlie-kirk-memes-ai-00722839/ Top Of Page AI-Driven Misinformation and Bot Networks After the Bondi Attack According to a TechXplore article, the December 14, 2025, terrorist attack at Sydney’s Bondi Beach was rapidly followed by a surge of generative AI–enabled misinformation across social media. Examples included a manipulated video falsely claiming one attacker was an Indian national, widespread celebration of a supposed hero figure, “Edward Crabtree,” on X, and a deepfake image portraying human rights lawyer Arsen Ostrovsky as a “crisis actor.” The piece argues that this pattern is increasingly common in major crises, as AI enables the production of misinformation at scale and reinforces it through realistic-looking bot profiles that create the illusion of widespread public agreement and amplify false narratives. To examine how easily such manipulation can occur, the article describes running Capture the Narrative, a student-led social media wargame in which teams built AI bots to influence a fictional election. Across four weeks, 108 teams from 18 universities generated more than 7 million bot-driven posts, accounting for over 60% of all content on the platform, using emotional, increasingly toxic messaging as a key engagement strategy and, in some cases, identifying “undecided voters” for micro-targeting. The simulation showed measurable impact: the bot campaign shifted the election outcome compared to a rerun without interference, demonstrating how small teams using inexpensive, consumer-grade AI can flood an online environment, distort debate, and meaningfully influence political decision-making, reinforcing the article’s conclusion that stronger digital literacy is urgently needed to help users recognize manipulated content. Source: TechXplore, H. Pearce & A. Vassar & R. Masood, The Conversation. World-first social media wargame reveals how AI bots can swing elections. [online] Published 16 January 2026. Available at: https://techxplore.com/news/2026-01-world-social-media-wargame-reveals.html Top Of Page [General Reports] Mysterious News Site Raises Concerns About Election Influence in California A little-known website called the California Courier, described by the Los Angeles Times, emerged as a heavy advertiser on Facebook, promoting articles that closely resembled political campaign messaging and consistently attacked Democratic leaders in California, including Governor Gavin Newsom.
While presenting itself as a neutral news outlet, the site has largely published conservative-leaning stories and has spent more than $80,000 on Facebook ads since 2021, potentially reaching tens of thousands of users weekly. Notably, the site provided no clear information about its ownership or funding, raising concerns about transparency and intent as California headed toward the 2026 election cycle. Investigations linked the Courier to a broader ecosystem of so-called “pink slime” news sites—outlets that mimic local journalism but lack basic editorial standards and disclosures. Researchers found apparent connections between the Courier and conservative organizations tied to the Lincoln Club of Orange County and the Lincoln Media Foundation, groups previously associated with similar sites in other states. Some Courier contributors had backgrounds in conservative advocacy, and the outlet appeared to amplify content produced by aligned organizations without attribution. Critics said the California outlet is part of a growing, nationwide ecosystem of innocuous-looking, cheaply produced news publications that publish and advertise biased articles to surreptitiously influence elections. They worry the practice could mislead voters and corrode trust in nonpartisan news providers. Source: Los Angeles Times. Could this mysterious news site influence California’s 2026 election? [online] Published 11 January 2026. Available at: https://www.latimes.com/california/story/2026-01-11/will-mysterious-california-news-site-influence-2026-election Top Of Page False Warship Claim Targets Greenland Crisis with Viral Disinformation Viral graphics on X fueled a false claim, identified by NewsGuard, that Germany had deployed two heavily armed guided-missile warships to Greenland to defend it from potential U.S. aggression following renewed statements by U.S. President Donald Trump about annexing the Danish territory. The narrative spread through posts asserting that Germany was the first country to offer military assistance and that one ship carried an “attack jet squadron,” using an image of a German naval vessel to lend the claim visual credibility and amplify engagement. There were no credible reports supporting the deployment of the warships, and German authorities confirmed only a small reconnaissance mission: a 15-member team flown to Greenland from January 15th to January 17th, 2026, as part of a Danish-led effort focused on maritime surveillance and Arctic security. The fabricated claim also relied on technically implausible details, as Germany does not operate aircraft carriers or ships capable of launching fixed-wing combat aircraft. The German Navy reaffirmed this point in an email response, stating that it has no aircraft carriers. Source: NewsGuard’s Reality Check, M. Wolf. Debunk: Germany Has Not Deployed Warships to Attack the U.S. and Defend Greenland. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/is-germany-deploying-warships-against Top Of Page Theory and Practice of Disinformation Ethics and Psychological Defence PDA’s textbook offered a multidisciplinary overview of information influence and psychological defence, with a strong focus on disinformation as a growing threat to democratic societies. Drawing on research from Sweden and the Nordic region, it explained how disinformation exploits media systems, cognitive biases, polarization, and new technologies such as artificial intelligence.
While Nordic countries show relatively high resilience due to strong institutions and social trust, the book highlighted increasing risks from geopolitical tensions, AI-driven manipulation, and the vulnerability of certain social groups, especially children and youth. A central theme of the book was that disinformation is not only false or misleading information but also a morally condemnable form of communication. Assessing disinformation is complex: determining whether a message is false, intentionally harmful, sarcastic, or merely an expression of opinion is often difficult in practice. The book emphasizes that responses to disinformation, such as ignoring, correcting, or removing content, are inherently moral choices that require balancing the need to protect democratic discourse with the obligation to uphold freedom of expression. The textbook also explored how to counter disinformation strategically and ethically. It argues that countermeasures must be justified, proportionate, and transparent to maintain public trust. Rather than relying solely on censorship or technical solutions, the book highlighted open dialogue, media literacy, crisis communication, and carefully designed countermeasures as key tools. Source:  Psychological Defence Agency, J. Falkheimer & J. Pamment, eds. Manual. Psychological defence and information influence – a textbook on theory and practice . [online] Published January 2026. Available at: https://mpf.se/psychological-defence-agency/publications/archive/2026-01-19-psychological-defence-and-information-influence----a-textbook-on-theory-and-practice Top Of Page [Appendix - Frameworks to Counter Disinformation] Coordinated Influence Operations on Social Media According to Google ’s TAG Bulletin , in Q3 2025, platforms like YouTube and Google News took action against large-scale, coordinated influence operations originating from multiple countries, including Russia, China, Turkey, Iran, and others. These operations involved thousands of channels and domains spreading content supportive of the governments behind them while criticizing other states, international organizations, or political opponents. Much of this content was deliberately designed to mislead viewers, manipulate public opinion, and polarize audiences. Russia was the most frequent source, with thousands of YouTube channels terminated for sharing content in multiple languages, supportive of Russian interests and critical of Ukraine, Moldova, NATO, and Western nations. China’s coordinated inauthentic networks also generated massive amounts of content in Chinese and English focused on China-US relations. Other states, including Turkey, Iran, Azerbaijan, and the Philippines, conducted smaller campaigns to promote their governments’ narratives and discredit critics. These takedowns highlighted the global scale of disinformation and the challenge of countering it. Coordinated campaigns often involve multiple languages, platforms, and accounts, making detection complex. Source:  Google, B. Leonard. Threat Analysis Group TAG Bulletin: Q3 2025 . [online] Published 2025. Available at: https://blog.google/threat-analysis-group/tag-bulletin-q3-2025/ Top Of Page Common Framework to Identify and Disrupt Russian FIMI Networks Russian FIMI operations are highly organised and persistent, yet efforts to counter them remain fragmented due to inconsistent terminology and uncoordinated reporting. 
To address this, EU DisinfoLab , together with its partners the European External Action Service (EEAS), Viginum, DFRLab, CheckFirst, Cassini, and the Auswärtiges Amt, published the report "Building a Common Operational Picture of FIMI", which proposes a shared framework for identifying and analysing disinformation. The report introduced clear definitions and called for moving beyond isolated incident reporting toward a coordinated understanding of how operations function over time. At the core of the report is the Information Manipulation Set (IMS) model, developed by France’s Viginum agency. An IMS captures a disinformation actor’s distinctive pattern of behavior, its tools, tactics, platforms, and methods, effectively serving as a “digital fingerprint.” Applying this model to known operations such as Doppelganger and Storm-1516 revealed important differences that are often obscured by broad labels. By focusing on how operations work rather than just their messaging, analysts can more accurately attribute activity to specific networks, including sanctioned Russian entities, and better understand the full disinformation supply chain from content creation to amplification. The report concluded that the IMS model enables more credible attribution, exposes vulnerabilities in disinformation networks, and strengthens the design and enforcement of sanctions. However, its success depends on sustained cooperation between public authorities, researchers, and private platforms. Despite existing sanctions, Russian-linked networks continue to rely on intermediaries and infrastructure within the EU, highlighting enforcement gaps rather than a lack of evidence. Source:  EU DisinfoLab. Building a common operational picture of FIMI: Using IMS to strengthen technical attribution and disruption . [online] Published January 2026. Available at: https://euvsdisinfo.eu/uploads/2026/01/20260115_building-a-common-operational-picture-of-FIMI_01.pdf Top Of Page Strengthening U.S. Cybersecurity and Countering Disinformation As published by CSIS , Emily Harding’s testimony before the House Homeland Security Subcommittee highlighted critical gaps in the United States’ cyber defenses and the nation’s vulnerability to foreign influence operations and disinformation. Cyberattacks from actors such as China, Russia, Iran, and North Korea are often part of coordinated campaigns that blur the line between crime and warfare. The testimony emphasized that these attacks should be recognized as hostile acts, not isolated technical incidents, and that failing to respond effectively cedes the initiative to adversaries. The testimony argued that current U.S. policies treat cyber largely as a technical afterthought, resulting in weak deterrence and inadequate integration of offensive and defensive cyber capabilities. This approach enables disinformation campaigns to accompany cyberattacks, spreading false narratives and shaping public understanding of security events in ways favorable to the attacker. To counter this, the testimony recommended a comprehensive “cyber first” strategy, including a declaratory policy treating cyberattacks on critical infrastructure as acts of war, integrating cyber tools into early-stage policy planning, and increasing coordination with private-sector partners. Congress should fund a dedicated Cyber Force, strengthen domestic infrastructure, and establish clear international norms to deter adversaries. Source:  CSIS, E. Harding. Congressional Testimony. U.S. 
Cyber Capabilities to Deter and Disrupt Malign Foreign Activity Targeting the Homeland. [online] Published 13 January 2026. Available at: https://www.csis.org/analysis/us-cyber-capabilities-deter-and-disrupt-malign-foreign-activity-targeting-homeland Top Of Page Germany and Israel Strengthen Cybersecurity Cooperation The Bundesministerium des Innern (BMI), Germany’s Federal Ministry of the Interior, announced that Germany and Israel are deepening their security partnership, with a focus on cybersecurity. During a recent visit, German Interior Minister Alexander Dobrindt signed a cybersecurity pact with Israeli Prime Minister Benjamin Netanyahu, building on an already strong relationship. Israel is considered Germany’s most important security partner outside NATO and the EU, particularly in defending against digital threats that can spread false information and manipulate public perception. A central element of this cooperation is the planned German Cyber Dome, a semi-automated system for detecting, analyzing, and responding to cyberattacks. The Cyber Dome is designed not only to protect critical infrastructure but also to reduce the risk of coordinated campaigns that use cyber intrusions to spread disinformation. In addition to technology cooperation, Germany is deploying Federal Police experts to support civilian security authorities in Israel and the Palestinian territories, reinforcing stability and reducing the operational space for disinformation-driven manipulation. Source: Bundesministerium des Innern und für Heimat. Zusammenarbeit zur Cybersicherheit mit Israel wird ausgebaut. [online] Published 12 January 2026. Available at: https://www.bmi.bund.de/SharedDocs/kurzmeldungen/DE/2026/01/israel-dobrindt.html?gcp_21273932=1 Top Of Page Ugandan Officials Turn Off Internet on Eve of National Elections The Uganda Communications Commission (UCC) announced it had imposed an internet blackout and suspended some mobile phone services on the eve of a general election scheduled for January 15. The authorities started turning off internet and mobile phone services on January 13. According to the agency, the measure was taken to mitigate the spread of disinformation and curb the risk of election fraud; accordingly, all non-critical internet-based services were to be suspended for the entire period. On January 14, internet monitoring firm NetBlocks confirmed a “widespread internet shutdown”. The move drew strong objections. In addition to suspending internet and mobile phone services, authorities halted the sale of new SIM and phone cards and cut off outbound data roaming services, Amnesty International said in a press release. The United Nations human rights office also condemned the action, saying that access to communication and information is crucial to ensuring free elections. On January 18, the agency announced that it had restored public internet access, reiterating that the temporary restriction was a measured action aimed at limiting misinformation and electoral interference during a critical period while ensuring essential services continued uninterrupted. Sources: Uganda Communications Commission. UCC. [online] Published 14 January 2026. Available at: https://x.com/UCC_Official/status/2011409612176507095 NetBlocks. NetBlocks. [online] Published 14 January 2026. Available at: https://mastodon.social/@netblocks/115892400097081262 Amnesty International. Uganda: Authorities must urgently lift internet restrictions ahead of crucial election day. [online] Published 14 January 2026.
Available at: https://www.amnesty.org/en/latest/news/2026/01/uganda-internet-shutdown/ UN Human Rights. UN Human Rights. [online] Published 14 January 2026. Available at: https://x.com/UNHumanRights/status/2011436800292573693 Uganda Communications Commission. Public update: 18th January 2026, press briefing to update the country. [online] Published 18 January 2026. Available at: https://www.ucc.co.ug/public-update-18th-january-2026-press-briefing-to-update-the-country/ Top Of Page Grok Abuse Exposes Scalable AI-Driven Non-Consensual Image Manipulation A CIR article presents a rapid analysis of 1,625 Grok image-generation requests collected on January 7, 2026, identifying large-scale abuse driven by highly repetitive prompts designed to create non-consensual and degrading imagery. The data indicate that 72% of requests targeted women, with 98% of these explicitly sexualized and focused on “nudification,” sexual poses, and exaggerated body parts, while requests targeting men were less consistently sexualized and more often aimed at humiliation, infantilization, feminization, or racial and ethnic caricatures. CIR argues that this pattern reflects structured misuse rather than isolated incidents and that it is amplified by formulaic prompt language, making abusive intent relatively easy to detect and moderate through targeted filtering without broadly restricting legitimate use. CIR outlines five policy and platform lessons centered on disrupting abuse at scale: blocking repeated sexualized prompt patterns, monitoring high-volume offenders, and ensuring safeguards exist across all access points, including Grok’s website, where generation remains available without meaningful identity or age verification. The article also highlights accountability gaps and enforcement opportunities, noting that location signals were identifiable for some repeat offenders and that subscription-based access can improve traceability, though it risks monetizing harm if not paired with strict safeguards. It warns that workarounds to generate child sexual abuse material and evade deletions can be found quickly, requiring agile enforcement, and argues that regulatory frameworks such as the UK Online Safety Act and EU mechanisms can support accountability, while emphasizing that inconsistent application across platforms versus standalone websites creates exploitable loopholes. Source: Center for Information Resilience. “Grok’d”: Five emerging lessons on limiting abuse of AI image generation. [online] Published 14 January 2026. Available at: https://www.info-res.org/cir/articles/grokd-five-emerging-lessons-on-limiting-abuse-of-ai-image-generation/ Top Of Page
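CIR’s observation that abusive requests follow formulaic prompt language suggests a straightforward detection primitive. The sketch below is purely illustrative and hypothetical, not CIR’s or any platform’s actual tooling: it collapses each prompt into a normalized template so that near-identical abusive requests cluster together, then flags templates and submitters that repeat at abnormal volume, mirroring the report’s recommendations on pattern blocking and high-volume-offender monitoring.

```python
import re
from collections import Counter

def to_template(prompt: str) -> str:
    """Collapse a prompt into a normalized template so near-duplicates cluster."""
    t = prompt.lower()
    t = re.sub(r"@\w+", "<target>", t)  # mask targeted usernames
    t = re.sub(r"\d+", "<num>", t)      # mask numbers
    return re.sub(r"\s+", " ", t).strip()

# Hypothetical (user_id, prompt) log entries, for illustration only.
log = [
    ("u1", "undress @alice"),
    ("u2", "undress @carol"),
    ("u1", "undress   @dana"),
    ("u3", "draw a cat wearing a hat"),
]

template_counts = Counter(to_template(p) for _, p in log)
user_volume = Counter(u for u, _ in log)

TEMPLATE_THRESHOLD = 3  # would need tuning against real traffic
for template, hits in template_counts.items():
    if hits >= TEMPLATE_THRESHOLD:
        print(f"review/block template ({hits} hits): {template}")

USER_THRESHOLD = 2
print("high-volume submitters:",
      [u for u, n in user_volume.items() if n >= USER_THRESHOLD])
```

Because the masking step removes exactly the parts of a prompt that vary between victims, a filter of this kind targets the repetition itself, consistent with CIR’s point that structured abuse can be disrupted without broadly restricting legitimate use.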
Building a Common Operational Picture of FIMI EU DisinfoLab’s report argues that European efforts to counter foreign information manipulation and interference (FIMI) are being weakened by fragmented reporting and inconsistent attribution, which can inflate the perceived scale of disinformation operations while obscuring how they actually function. It highlights how focusing only on isolated incidents blurs accountability across a wider “supply chain” of actors—from sponsors and producers to intermediaries and amplifiers—and notes that Russian operations often rely on outsourced or semi-autonomous proxies that do not always fit a purely state-controlled model. To address these gaps, the report recommends using VIGINUM’s “Information Manipulation Set” (IMS) framework to map threats at tactical, operational, and strategic levels, linking visible incidents to shared tactics, infrastructure, and campaign resources. It presents case studies of five Russian-linked IMS clusters—Doppelganger, Media Brands/RRN, Undercut, Storm-1516, and Overload/Matryoshka—showing how the framework can distinguish between different operational models even when they appear connected or share contractors, thereby improving the targeting of countermeasures. Finally, the report assesses EU sanctions as an important but often weakly enforced tool against FIMI, arguing that restrictions targeting individuals or entities have limited operational impact when campaigns continue to use EU-based services such as hosting, cloaking, and advertising via proxies. It calls for stronger public-private cooperation, improved platform transparency, consistent IMS tagging for takedowns, and more systematic enforcement mechanisms to disrupt the infrastructure enabling disinformation and raise the costs for perpetrators, rather than merely causing reputational damage. Source: EU DisinfoLab. Building a common operational picture of FIMI. [online] Published 15 January 2026. Available at: https://www.disinfo.eu/building-a-common-operational-picture-of-fimi Top Of Page
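The IMS idea of a behavioral “digital fingerprint” lends itself to a simple data model. The sketch below is a hypothetical illustration of the concept, not VIGINUM’s implementation, and every indicator value in it is invented: each set is described by its observable traits, and overlap between two clusters is measured per dimension, so an analyst can see, for example, distinct tactics but shared infrastructure, the pattern the report uses to separate operations that merely share contractors.

```python
from dataclasses import dataclass, field

@dataclass
class IMS:
    """Hypothetical record of an Information Manipulation Set's observable traits."""
    name: str
    tactics: set = field(default_factory=set)         # behavioral patterns
    infrastructure: set = field(default_factory=set)  # domains, hosts, ad accounts

def jaccard(a: set, b: set) -> float:
    """Set overlap: 0.0 = disjoint, 1.0 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Illustrative, invented indicator values (placeholders, not real IOCs).
doppelganger = IMS(
    "Doppelganger",
    tactics={"media-spoofing", "paid-ads", "cloned-articles"},
    infrastructure={"cloaking-service-A", "registrar-B"},
)
storm_1516 = IMS(
    "Storm-1516",
    tactics={"fake-whistleblower-videos", "influencer-seeding"},
    infrastructure={"registrar-B"},
)

print(jaccard(doppelganger.tactics, storm_1516.tactics))               # 0.0
print(jaccard(doppelganger.infrastructure, storm_1516.infrastructure)) # 0.5
# Distinct tactics but shared infrastructure: consistent with separate
# operations drawing on common contractors, as the report describes.
```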
Visual Disinformation and the Need for Visual Literacy A publication by the Psychological Defence Agency argues that visual disinformation is becoming a central vulnerability in modern information environments, as images and video increasingly dominate social media and are processed faster and remembered more strongly than text. It warns that generative AI has sharply expanded the ability to produce realistic synthetic visuals, making misleading content easier to create and harder to verify, while increasing uncertainty about authenticity and potentially undermining trust in established media. The authors emphasize that malign influence campaigns exploit this environment by using visual media to manipulate attention, emotions, and perceptions, including through deepfakes, recontextualized footage, and entertainment-style formats that lower audience vigilance. To strengthen resilience, the report frames visual literacy as a core democratic and psychological defence capability, defined not only as verifying images but also interpreting, decoding, and producing visual messages. It highlights that traditional fact-checking approaches have limits when it comes to AI-generated visuals, and that effective resistance requires a broader skill set: understanding how visual “grammar” (composition, cropping, editing, context) shapes meaning and developing “visual agency” so citizens can critically engage with image-based narratives rather than passively absorb them. The report ultimately calls for stronger, more systematic education and interdisciplinary research to build scalable methods for visual literacy that can counter the evolving tactics of visual disinformation. Source: Psychological Defence Agency, I. Karlsson & T. Karlsson Häikiö & M. Malm & H. Ördén. Report. Visual disinformation and visual literacy. [online] Published January 2026. Available at: https://mpf.se/psychological-defence-agency/publications/archive/2026-01-14-visual-disinformation-and-visual-literacy Top Of Page “Armed with Facts”: A Podcast to Counter Russian Disinformation Hromadske Radio and the fact-checking organization StopFake have launched a new podcast project, “Armed with Facts: When Truth Protects,” aimed at countering disinformation affecting communities in frontline and temporarily occupied territories. Hosted by Alena Nesterenko and Elena Churanova, the podcast will explain how disinformation works, investigate sensitive issues, and help audiences access accurate information without panic or exposure to fabricated narratives. The hosts frame the initiative as a response to hostile influence efforts, particularly from Russia, and broader “post-truth” messaging that seeks to erode confidence in objective facts. The project positions verified information as a protective tool against foreign propaganda and coordinated online manipulation, emphasizing that truth grounded in evidence is essential for resilience under information pressure. Nesterenko highlights the podcast’s goal of making complex topics like propaganda, disinformation, and post-truth accessible and engaging, while Churanova describes the collaboration as a joint mission to push back against fake content circulating on social media. The first episode examines how Russia allegedly uses holidays to spread propaganda and false claims. Source: Hromadske Radio/StopFake. «Озброєні фактами»: новый совместный проект «Громадського Радіо» и StopFake. [online] Published 16 January 2026. Available at: https://www.stopfake.org/ru/ozbroyeni-faktami-novyj-sovmestnyj-proekt-gromadskogo-radio-i-stopfake/ Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative; contributions are welcome via the CRC website. Top Of Page

  • Once More Unto the Breach: Cyfluence Operations Hijack Iran’s State Media Amid Internet Shutdowns

What Happened On 18 January 2026, at approximately 21:30 Tehran time, Iranian state television was briefly hijacked by anti-regime actors. A coordinated cyber-enabled attack compromised the Badr satellite network, allowing attackers to inject footage and subversive messaging into state broadcasts. The illicit transmission featured content supporting exiled Crown Prince Reza Pahlavi, urging Iran’s security forces to cease repression and join the ongoing uprising. Figure 1 - Report on the 18 January broadcast hijack (X/Twitter) [i] Multiple channels operated by the Islamic Republic of Iran Broadcasting (IRIB) were affected before normal programming was restored. This incident reflects a recurring pattern of cyfluence operations that exploit digital broadcast infrastructure to penetrate Iran’s otherwise tightly controlled information environment. [ii] According to international reporting, the disruption involved content injection affecting IRIB’s satellite transmission, including: video footage of exiled Crown Prince Reza Pahlavi; messaging urging domestic security forces to realign with opposition interests; and graphics and imagery supportive of protest movements inside and outside Iran. Operational Execution Available evidence suggests the attack exploited vulnerabilities in satellite broadcast distribution, rather than IRIB’s internal IT or production networks. Current assessments indicate a satellite uplink override as the most plausible attack vector. In many DVB-S (Digital Video Broadcasting – Satellite) architectures, satellites rebroadcast signals without authenticating their source. An adversary with sufficient transmission power and access to the correct frequency can overwhelm a legitimate uplink, causing the satellite to relay the stronger rogue signal instead. The apparent confinement of the disruption to satellite viewers, while terrestrial and cable feeds remained intact, further supports this assessment. Actor Attribution Public attribution remains inconclusive. Iranian authorities were quick to blame Israel, consistent with Tehran’s established narrative framing such incidents as foreign-led cognitive warfare. However, various opposition groups or aligned hacktivist networks remain likely candidates. On 19 January, an X account named “Anonymous TV” claimed responsibility for the operation under the hashtag #OpIran. [iii] Figure 2 - January 19 post by Anonymous TV claiming responsibility for the cyber operation (X/Twitter) [iv] Based on the observed modus operandi, duration of the transmission, and messaging, collaboration between opposition-aligned groups and state-level actors remains a credible scenario. From an operational standpoint, the cyber-influence attack demonstrated offensive capabilities comparable to those typically associated with state-sponsored actors. The attack’s relatively extended duration not only enabled delivery of targeted messages directly via regime-controlled media channels – effectively bypassing censorship – but also suggests simultaneous interruption of real-time remediation efforts. This, in turn, indicates a level of tactical sophistication higher than that of most hacktivist groups.
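To make the uplink-override vector described above concrete, the toy calculation below illustrates the capture logic with entirely hypothetical numbers (the EIRP values, path loss, and capture margin are illustrative assumptions, not measurements from this incident). A transparent “bent-pipe” transponder simply relays what it receives, so once a rogue carrier on the same frequency and polarization arrives a few dB stronger than the legitimate one, downstream receivers lock onto the rogue signal.

```python
# Simplified uplink-capture comparison; all figures are hypothetical.

def received_power_dbw(eirp_dbw: float, path_loss_db: float) -> float:
    """Carrier power arriving at the satellite receiver, in dBW."""
    return eirp_dbw - path_loss_db

# Free-space path loss to a GEO satellite at Ku-band, roughly equal for
# two ground stations in the same region (assumed value).
FSPL_DB = 207.0

legit = received_power_dbw(eirp_dbw=76.0, path_loss_db=FSPL_DB)  # legitimate uplink
rogue = received_power_dbw(eirp_dbw=82.0, path_loss_db=FSPL_DB)  # higher-power intruder

CAPTURE_MARGIN_DB = 3.0  # assumed margin at which the stronger carrier dominates
advantage = rogue - legit
if advantage >= CAPTURE_MARGIN_DB:
    print(f"Rogue carrier dominates by {advantage:.1f} dB -> transponder relays it")
else:
    print("Legitimate uplink retains the transponder")
```

The point of the sketch is the asymmetry it exposes: because the transponder performs no source authentication, the defender’s main levers are transmit power, geolocating the interfering uplink, and shifting viewers to feeds the attacker cannot reach, which matches the observation that terrestrial and cable distribution stayed intact.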
Information Flow and Online Amplification At the time of the broadcast hijack, Iran was still subject to a nationally imposed internet blackout. Despite severe connectivity restrictions, reports of the incident spread rapidly. Early indicators point to a familiar dissemination pattern: within minutes, individuals inside Iran managed to leak footage and documentation of the hijack to journalists abroad. These materials were subsequently posted to social media platforms (primarily Telegram, Instagram, and X), with visible watermarks and links enabling source tracing. Figure 3 - Examples of content posted shortly after the hijack by Telegram and Instagram accounts associated with journalist Amin Fardin. [v] While the number of Iranians directly reached by the illicit broadcast remains difficult to assess, secondary amplification on social media proved substantial. Notably, in the aftermath of the operation, a sequence of posts by the X account Reza Pahlavi Communications focused heavily on the IRIB breach, generating over one million impressions and engagements across related posts. [vi] Internet Blackouts and the Information War For over three weeks, Iran has experienced severe information suppression amid widespread protests and violent crackdowns. Authorities have implemented one of the most extensive internet shutdowns in recent history, with reports warning of plans to transition Iran toward a national intranet model designed to isolate citizens from global information ecosystems. [vii] As mentioned in our latest blog, such blackouts create environments in which state-sanctioned media becomes the dominant – and often only – information source. These conditions drive adversaries to adapt their influence strategies, making satellite television infrastructure and its inherent vulnerabilities high-value targets for hybrid information operations (IOs) seeking to breach censorship barriers. Recent Precedents: Iranian Media Under Attack The latest January 2026 attack fits within an existing pattern of broadcast signal hacking targeting Iranian state media in recent years, especially during periods of heightened political tension and restricted internet connectivity. Two previous cyfluence operations are noteworthy: June 2025: During the Iran-Israel war, IRIB’s satellite feed was compromised to broadcast footage from the 2022 “Woman, Life, Freedom” protests alongside calls for street demonstrations. Iranian officials attributed the disruption to hostile interference with satellite signals, while security analysts suggested links to foreign actors amid broader cyber and kinetic hostilities. [viii] October 2022: During the Mahsa Amini protests, Iranian state television was briefly interrupted with protest slogans and imagery. The opposition-linked hacker group Edaalat-e Ali claimed responsibility. [ix] Figure 4 – Left: Imagery from the October 2022 IRIB broadcast hijack, Courtesy of CNN [x]; Right: X post showing imagery from the June 2025 IRIB broadcast hijack [xi] In addition to cyber-enabled broadcast disruption, Iranian media infrastructure has also been targeted kinetically. On 16 June 2025, Israeli forces struck IRIB’s headquarters in Tehran during a live broadcast. [xii] The intended impact extended beyond physical damage to Iran’s strategic communication (StratCom) capabilities. The devastating strike carried significant symbolic weight by targeting a central propaganda apparatus of the Islamic Republic’s regime. [xiii] Conclusion The January 2026 breach of Iranian state television forms part of a persistent, multi-actor campaign targeting Iranian media outlets, shaped in part by the unique constraints imposed by the regime on alternative sources of information.
It demonstrates the growing importance of agile, cyber-enabled information operations that exploit the full spectrum of digital and broadcast communication vectors. As authoritarian regimes tighten control over internet access, domestic and external rivals adapt. By targeting critical information delivery systems, attackers can still challenge narrative dominance and influence target audiences. For counter-cyfluence practitioners and stakeholders, this incident reinforces a key lesson: the struggle for information dominance is not fought solely through digital influence networks, synthetic propaganda, or inauthentic narrative amplification, but also through the defense and control of strategic communication assets. This operation constitutes a prime example of a strategically deployed cognitive threat, intended to shape public perception, erode regime resilience, and sustain internal dissent at a critical moment. Conducted under defined operational and strategic constraints, it highlights the dynamic and diversified nature of today’s cognitive battlespace. [Footnotes:] [i] X, Iran International English (@IranIntl_En), Reporting that several Iranian state TV channels carried on the Badr satellite were hacked on Sunday, broadcasting footage of protests. [online] Published 18 January 2026. Available at: https://x.com/IranIntl_En/status/2012971768202301549 [ii] Associated Press (AP), J. Gambrell, Hackers disrupt Iran state TV to support exiled crown prince as deaths from crackdown exceed 4,000. [online] Published 20 January 2026. Available at: https://apnews.com/article/iran-protests-hack-us-aircraft-carrier-crackdown-65a7a3ee84748531387b72fa15b61456 [iii] X, Anonymous TV (@YourAnonTV), Message to the brutal regime of Iran: We hijacked IRIB’s satellite broadcast feed and ripped your propaganda mask off in front of the people. [online] Published 19 January 2026. Available at: https://x.com/YourAnonTV/status/2013191538184954221 [iv] Ibid. [v] Instagram, behindpersiantv (@behindpersiantv), Post dated 18 January 2026. [online] Published 18 January 2026. Available at: https://www.instagram.com/p/DTqfI_YgHBr [vi] X, Pahlavi Communications (@PahlaviComms), Post reporting 428.8K views, was followed by numerous reposts. [online] Published 18 January 2026. Available at: https://x.com/PahlaviComms/status/2012971682034585739 [vii] The Guardian, A. Down, Iran plans permanent break from global internet, say activists. [online] Published 17 January 2026. Available at: https://www.theguardian.com/world/2026/jan/17/iran-plans-permanent-break-from-global-internet-say-activists [viii] i24NEWS, Iranian TV hacked, protest footage aired; Israel blamed for cyber offensive. [online] Published 19 June 2025. Available at: https://www.i24news.tv/en/news/israel/defense/artc-iranian-tv-hacked-protest-footage-aired-israel-blamed-for-cyber-offensive [ix] CNN, A. Moshtaghian & R. Razek & J. Deaton, Iran’s state broadcaster hacked during nightly news program. [online] Published 10 October 2022. Available at: https://edition.cnn.com/2022/10/09/middleeast/iran-state-broadcaster-hacked-intl-hnk [x] Ibid.; Source: Telegram, edalate_ali (account page). [online] Published 2026. Available at: https://t.me/edalate_ali [xi] X, Jonathan Haroun (@JonathanHaroun1), Post showing imagery from the June 2025 IRIB broadcast hijack. [online] Published 18 June 2025. Available at: https://x.com/JonathanHaroun1/status/1935400866032304536 [xii] BBC News, D. Gritten, Israel bombs Iran state TV during live broadcast. [online] Published 17 June 2025. Available at: https://www.bbc.com/news/articles/c3w46pw2xn0o [xiii] Middle East Forum, G. Roman, MEF Iran War Monitor Report: June 16, 2025. [online] Published June 2025. Available at: https://cdn-mef.meforum.org/ac/15/a5d8be144b48bcae8c0f99aff93e/mef-iran-war-monitor-report-june-16-2025.pdf

  • The Sound of Silence: Detecting Influence Operations Through Internet Blackouts

Background In August 2025, we published a CRC Spotlight report on a suspected Iranian influence network. The malign activity cluster consisted primarily of English- and Spanish-language X (formerly Twitter) accounts, each addressing distinct political narratives aligned with specific geopolitical objectives. The English-language cluster focused heavily on Scottish and Irish independence, persistent anti-England rhetoric, and sustained narrative pressure against British political institutions. In contrast, the Spanish-language cluster has been consistently promoting pro-Maduro content, anti-U.S. framing, and broader Latin American anti-Western narratives. This thematic separation, coupled with observed cross-account amplification and unique biographical features, pointed to a coordinated hostile influence campaign (HIC), rather than organic political discourse. From the outset, many of these profiles exhibited non-authentic behavioural signals: unusually high posting frequency, limited personal interaction, narrative rigidity, and synchronised activity patterns tied to external events rather than domestic UK or European news cycles. Moreover, in November 2025, the launch of X’s new location transparency feature provided damning evidence by showing that most of the network was in fact being operated from Iran using proxy services (for more information, read the CRC Spotlight Article: From Rollout to Fallout: The Impact of X’s Location Transparency Feature). Internet Blackouts as an Attribution Indicator A crucial indicator of foreign coordinated inauthentic behavior (CIB) was the apparent correlation between Iranian internet shutdowns and the sudden inactivity of identified sockpuppet clusters. This pattern was first observed during the Iran–Israel escalation in June 2025, when Iranian internet connectivity collapsed following Israeli and U.S. strikes. Figure 1: Iran's internet shutdown during the Iran-Israel war (June 2025), Courtesy of Netblocks.org At that time, many pro-Scottish Independence assets fell silent almost simultaneously, despite having posted intensively in the preceding days. Disinformation watchdogs reported that the network went dark immediately after Iran’s internet infrastructure was disrupted on 13 June 2025. [i] The simultaneity of the silence significantly strengthened confidence that the network depended operationally on Iranian internet connectivity. The findings were widely published by media outlets and narrative intelligence vendors, resulting in a limited number of deactivations of inauthentic accounts.
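The correlation logic described above is simple enough to express directly. The sketch below is a minimal, hypothetical illustration of the approach (the schema, dates, and account names are invented; real inputs would come from platform data and connectivity monitors such as NetBlocks): an account that posted regularly before a documented national blackout but went silent for its entire duration is consistent with being operated from inside the affected country.

```python
from datetime import datetime

# Hypothetical blackout window, modeled on a documented nationwide shutdown.
BLACKOUT_START = datetime(2026, 1, 8)
BLACKOUT_END = datetime(2026, 1, 18)

# Invented per-account post timestamps standing in for collected platform data.
accounts = {
    "persona_a": [datetime(2026, 1, 5), datetime(2026, 1, 7)],
    "persona_b": [datetime(2026, 1, 6), datetime(2026, 1, 12)],  # posted during blackout
}

def silent_during_blackout(posts, start, end):
    """True if the account was active before the window but posted nothing inside it."""
    posted_before = any(t < start for t in posts)
    posted_inside = any(start <= t <= end for t in posts)
    return posted_before and not posted_inside

flagged = [name for name, posts in accounts.items()
           if silent_during_blackout(posts, BLACKOUT_START, BLACKOUT_END)]
print(flagged)  # ['persona_a'] -> consistent with dependence on in-country connectivity
```

On its own, such a signal proves nothing, since any account can simply go quiet; this is why the analysis treats it as a confidence booster to be combined with behavioural and location indicators rather than as standalone attribution.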
A Repeated Pattern Between 5 and 7 January 2026, localised internet disruptions were reported across parts of Iran, particularly in urban areas associated with rising protest activity. These disruptions escalated on 8 January 2026, when Iranian authorities cut off global internet access nationwide, citing national security concerns. According to network monitoring organization NetBlocks, the imposed internet blackout was part of a series of censorship measures that “restricted the public’s right to communicate at a critical moment”. [ii] Figure 2: Iran's internet shutdown during the December 2025-January 2026 protests, Courtesy of Netblocks.org Similarly, Cloudflare reported a near-total collapse in Iranian web traffic, with a reduction approaching 100% after earlier declines of approximately 90%. Limited access reportedly remained available to elements of the Iranian government, military, and security apparatus. [iii] According to CRC analysts, every operational asset within the monitored dataset ceased posting between 5 and 8 January. This second instance of coordinated silence aligns precisely with the documented nationwide internet blackout in Iran: Figure 3: Cumulative Account Activity - Total posts from the Iranian HIC operational assets cluster, Courtesy of XPOZ. Figure 4: Individual Account Activity - Daily posting patterns across individual accounts associated with the Iranian HIC, Courtesy of XPOZ. The Show Must Go On Recent reporting, including publications by multiple investigative outlets, was followed by a new wave of account deactivations. As of January 14, 2026, several identified inauthentic accounts were either deleted or suspended. Among these are prominent personas that disappeared entirely, reinforcing the assessment that mounting asset exposure triggered mitigation or shutdown by the threat actor. Figure 5: Examples of recently suspended X sockpuppet accounts associated with the Iranian hostile influence campaign, Courtesy of X However, at present, the network has not been dismantled in its entirety, and some inauthentic accounts remain technically operational, albeit temporarily inactive due to the internet shutdown. If not suspended or removed by X, these accounts are likely to resume content proliferation. Figure 6: X sockpuppet accounts linked to the Iranian hostile influence campaign that remain unsuspended, Courtesy of X Implications for Counter Influence Operations The repeated synchronization between Iranian connectivity loss and the sudden inactivity of suspected sockpuppet accounts contributed to the attribution of these operational assets to Iranian threat actors. It enhanced the understanding of their technical practices, infrastructure dependencies, and overall operational limitations. Although the investigation – like Iranian hostile influence efforts themselves – remains ongoing, several methodological and practical conclusions can already be drawn for threat researchers and counter-HIC practitioners: Internet connectivity disruptions as detection signals – National or regional internet access fluctuations serve as a valuable confidence booster, enabling both correlation and attribution of suspected activity clusters. Bottom-up detection approaches – Documented, large-scale internet connectivity shutdowns (an exceptionally rare occurrence) should be automatically ingested and incorporated into agnostic detection frameworks. Condemnation does not equate to remediation – Even compelling, well-documented exposure of hostile influence campaigns does not result in immediate platform takedowns. The gradual remediation observed in this case highlights the persistent challenge of countering HICs, even after networks have been mapped and verified. Sustained mitigation requirements – Effective mitigation requires continuous monitoring, proactive reporting, and community-driven engagement. Prolonged remediation demands consistent defender investment and substantial resource allocation, while allowing threat actors continued opportunities to disseminate adversarial narratives. The CRC continues to monitor the reported threat clusters and will publish further findings as additional data becomes available. [Footnotes:] [i] Cyabra. Uncovering Iran’s Online Manipulation Network. [online] Published 9 July 2025. Available at: https://cyabra.com/reports/uncovering-irans-online-manipulation-network/ ; UK Defence Journal, B. Davis.
Figures claim 26% of recent Scot independence posts by Iran.  [online] Published 23 August 2025. Available at: https://ukdefencejournal.org.uk/figures-claim-26-of-recent-scot-independence-posts-by-iran/ ;  Scottish Daily Express, D. Walker, Politics. Iranian pro-Scottish independence accounts go silent after Israel attacks.  [online] Published 25 June 2025. Available at: https://www.scottishdailyexpress.co.uk/news/politics/iranian-pro-scottish-independence-accounts-35450209 ;  MSN (via Forward), P. Cramer. When Iran’s internet went down during its war with Israel, so did bot networks spreading disinformation: Report.  [online] Published 14 July 2025. Available at: https://www.msn.com/en-us/news/world/when-iran-s-internet-went-down-during-its-war-with-israel-so-did-bot-networks-spreading-disinformation-report/ar-AA1IAx29 [ii]  NetBlocks. Update: #Iran has now been offline for 48 hours, as telemetry shows the nationwide internet blackout remains firmly in place.  [online] Published January 2026. Available at: https://mastodon.social/@netblocks/115880269709982943 [iii]   The New York Times , F. Fassihi & P. Baskar & S. Mahoozi. Iran Is Cut Off From Internet as Protests Calling for Regime Change Intensify.  [online] Published 8 January 2026. Available at: https://www.nytimes.com/2026/01/08/world/middleeast/iran-protests-internet-shutdown.html ; Deutsche Welle , K. Sexton & W. Dockery & S. Sobhan. Iran: Internet cut off as protests spread nationwide.  [online] Published 9 January 2026. Available at: https://www.dw.com/en/iran-internet-cut-off-as-protests-spread-nationwide/a-75440793
