Cyber-based hostile influence campaigns 9th - 15th February 2026
- CRC


[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.
During the last week, we observed, collected, and analyzed items of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[Social Media Platforms]
Meta's Community Notes System Showed Limits in Countering Misleading Content
X
X Users Asked Grok to Unblur Photos of Children from Epstein Files
Grok Raised Regulatory Concerns About Misinformation and Platform Risk
Iranian Officials Received Premium Services on X During Protest Crackdown
[State Actors]
Russia
The Russian Federation Used Video Games to Spread State Propaganda
Russian Influence Operation Targeted Ukrainians During the Winter Olympics
CheckFirst Unveiled GRU's Information Operations Troops with OSINT and Phaleristics
France's Municipal Elections and the Risk of Russian Influence
Disinformation Continued After the New START Treaty Expired
Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors
[General Reports]
False Allegations Labeled Epstein an Israeli Spy
Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy
Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes
India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation
Foreign Hybrid Campaigns Intensify Across the European Union
TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar
G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index
Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
An article by Bellingcat reports that, after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein, users on X began asking Grok to “unblur” images in which the faces of children and women had been redacted for privacy.
As published by RAND, controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks.
NewsGuard’s Reality Check reports that a Kremlin-linked influence operation known as Matryoshka has circulated at least 28 fabricated news reports targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy.
A report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence concludes that coordinated, state-aligned Russian actors conducted structured, cross-platform disinformation campaigns that employed forged content, narrative laundering, and synchronized amplification to undermine Ukraine and advance the Kremlin's geopolitical objectives.
The latest Institute for Strategic Dialogue (ISD) publication finds that Russia, alongside China and Iran, is conducting coordinated hybrid campaigns across the EU that combine disinformation, cyberattacks, political subversion, and kinetic actions to undermine democratic institutions, inflame societal divisions, and weaken European unity.
The 2026 Munich Security Index, as covered by Infosecurity Magazine, shows that G7 nations rank cyberattacks as their top security threat for the second consecutive year, with disinformation campaigns also identified as a leading concern, underscoring heightened awareness of digital and information-based hostile influence risks.
[Social Media Platforms]
Meta's Community Notes System Showed Limits in Countering Misleading Content
An article by Politico reports that, in the wake of the fatal shootings of Alex Pretti and Nicole Renee Good in Minneapolis, Meta’s new volunteer-driven fact-checking system faced a major test as misleading and AI-generated posts flooded Facebook, Instagram, and Threads. The company adopted this model after its CEO, Mark Zuckerberg, ended partnerships with professional U.S. fact-checkers, citing perceived political bias. Inspired by X’s Community Notes system, Meta decided to rely on volunteers to propose and rate notes that add context to questionable posts. Only notes that receive broad support across differing viewpoints are published, meaning users typically see comments that reflect cross-partisan agreement.
However, behind the scenes, the process is often marked by partisan conflict. A review of Meta’s closed rollout found that many proposed notes were themselves politically charged, echoing concerns raised by professional fact-checkers that volunteers may be motivated by ideology rather than neutrality. While overtly one-sided notes are usually filtered out by low ratings, critics question whether the system can effectively counter misinformation, especially on platforms such as Facebook, where content spreads within private networks rather than in high-visibility public debates. Meta said the program is expanding rapidly, with tens of thousands of contributors and thousands of published notes. Yet only a small fraction of proposed notes ultimately appear publicly.
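Politico's reporting does not detail Meta's scoring mechanics. As a rough illustration of the cross-viewpoint agreement idea described above, the Python sketch below publishes a note only when raters from at least two differing viewpoint clusters each rate it helpful; the cluster labels, ratings, and thresholds are hypothetical, and real Community Notes-style systems use more sophisticated bridging-based scoring.

```python
# Toy sketch of a "cross-viewpoint agreement" publishing rule.
# All data, cluster labels, and thresholds are hypothetical; actual
# Community Notes-style systems use bridging-based scoring models.
from collections import defaultdict

def note_is_published(ratings, min_raters=5, helpful_share=0.7):
    """ratings: list of (viewpoint_cluster, rated_helpful) tuples."""
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    total = sum(len(v) for v in by_cluster.values())
    if len(by_cluster) < 2 or total < min_raters:
        return False  # require enough raters from differing viewpoints
    # Publish only if every viewpoint cluster independently finds it helpful.
    return all(sum(v) / len(v) >= helpful_share for v in by_cluster.values())

sample = [("A", True), ("A", True), ("A", True), ("A", False),
          ("B", True), ("B", True)]
print(note_is_published(sample))  # True: both clusters mostly rate it helpful
```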
Source: Politico, A. Ng., Meta’s answer to ‘politically biased’ fact-checkers: Let them fight. [online] Published 10 February 2026. Available at: https://www.politico.com/newsletters/digital-future-daily/2026/02/10/metas-answer-to-politically-biased-fact-checkers-let-them-fight-00773633
X
X Users Asked Grok to Unblur Photos of Children from Epstein Files
In coverage by Bellingcat, users on X were reported to have asked Grok to “unblur” images in which the faces of children and women had been redacted for privacy, after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein. The release was criticized by survivors, 18 of whom issued a joint statement condemning it for exposing identifying details. Thousands of documents were later removed due to flawed redactions that reportedly compromised the identities of nearly 100 survivors.
Between the 30th of January and the 5th of February 2026, researchers reviewed 31 requests in which Grok was asked to remove redactions or identify individuals in protected images. Although Grok often acknowledged that minors’ faces were blurred for legal and ethical reasons, it still generated AI images in response to most requests, producing fabricated faces that ranged from realistic to clearly distorted. Some of these posts attracted millions of views, amplified by X’s monetization system. Following media inquiries, stricter guardrails appeared to be implemented. In subsequent days, Grok largely refused such requests or instead generated unrelated images. X did not respond to requests for comment. Grok has also been used to create manipulated images tied to other incidents, sometimes falsely identifying individuals. Investigations have been launched in countries including the UK and France over the chatbot’s role in generating non-consensual sexual deepfakes, including child abuse imagery.
Source: Bellingcat, K. Koltai, Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children. [online] Published 10 February 2026. Available at: https://www.bellingcat.com/news/2026/02/10/grok-epstein-photos/
Grok Raised Regulatory Concerns About Misinformation and Platform Risk
As published by Rand, controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks. The case emerged at a critical moment when UK and EU digital regulations shifted from voluntary compliance to active enforcement. Regulators increasingly rejected claims that harmful outputs were isolated glitches, instead focusing on whether platforms had proactively identified and mitigated foreseeable systemic risks, including misinformation, abuse, and radicalization.
Generative AI poses distinct risks of misinformation because it engages users in personalized, iterative conversations that, over time, can reinforce false narratives or extremist views. Unlike standalone AI tools, Grok operates within X, where its conversational responses can be rapidly amplified and repurposed. The interactions occur privately and cumulatively, making harm less visible, harder to evidence, and more difficult to audit using tools designed for posts, shares, or recommendations.
The report concluded that Grok raises fundamental questions about whether platforms can credibly self-govern where systemic risks are foreseeable, and whether governments can meaningfully enforce laws designed to protect users, democratic processes, and the integrity of the information environment in a fragmented, cross-border digital ecosystem.
Source: RAND, D. Toro, Grok Isn’t a Glitch—It Is a Regulatory Reckoning. [online] Published 9 February 2026. Available at: https://www.rand.org/pubs/commentary/2026/02/grok-isnt-a-glitch-it-is-a-regulatory-reckoning.html
Iranian Officials Received Premium Services on X During Protest Crackdown
The Tech Transparency Project reports that X provided paid premium subscriptions to dozens of Iranian officials, state agencies, and state-controlled media outlets, including individuals subject to U.S. sanctions. These premium accounts, marked with blue checkmarks, offered enhanced features, including increased visibility, longer posts, and monetization tools. Many of these accounts were used to spread government messaging and propaganda during nationwide protests and internet blackouts in Iran, including threats of harsher repression against demonstrators.
The findings raised concerns that X may have violated US sanctions, which allow free access to communication platforms but prohibit charging sanctioned entities for additional services. Some premium badges were removed after media inquiries, but others remained active. The report highlighted that Elon Musk appeared to show support for the recent anti-government protests in Iran, taunting Khamenei on X, changing the Iranian flag emoji on X to the pre-revolutionary version, and allowing Iranians to access his Starlink satellite service for free amid an internet blackout in the country.
Source: Tech Transparency Project, Iranian Officials Get Premium Treatment on X Amid Protester Crackdown. [online] Published 12 February 2026. Available at: https://www.techtransparencyproject.org/articles/iranian-officials-get-premium-treatment-on-x-amid-protester-crackdown
[State Actors]
Russia
The Russian Federation Used Video Games to Spread State Propaganda
A report by EU VS Disinfo revealed that the Russian Federation recognized the potential of video games as a tool of ideological influence and used them to spread propaganda. Unlike passive media, games immerse players in interactive worlds in which ideological messages are embedded in narratives and mechanics. With 3.3 billion players globally, many aged 18–34, games can shape perceptions through community dynamics, repetition of symbols, and the illusion of personal choice.
The Kremlin has systematically integrated the gaming industry into its state information strategy under a model of “digital authoritarianism.” Through state funding, censorship, and the development of domestic platforms and engines, Russian authorities have promoted “patriotic” content aligned with official narratives. Organisations such as the Institute for Internet Development provide substantial subsidies to ideologically aligned projects, while youth movements such as “Yunarmiya” incorporate military-themed games into their activities. Some titles, reportedly recommended for cadets, recreate battles in Ukraine with overt pro-Kremlin symbolism, presenting war as heroic and justified. Gaming platforms such as Steam further enable the spread of Z-propaganda and extremist content within echo chambers, normalizing militarism and dehumanizing narratives under the guise of humor or community culture.
Source: EU VS Disinfo, A. Vyshniakova, New weapon in the shadows: how the Kremlin uses video games for war propaganda. [online] Published 11 February 2026. Available at: https://euvsdisinfo.eu/new-weapon-in-the-shadows-how-the-kremlin-uses-video-games-for-war-propaganda/
Russian Influence Operation Targeted Ukrainians During the Winter Olympics
Findings by NewsGuard’s Reality Check show that a Kremlin-linked influence operation known as Matryoshka circulated at least 28 fabricated news reports between the 30th of January and the 10th of February 2026, targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy. The fake materials, shared mainly on Telegram and viewed approximately two million times, mimicked respected outlets such as Reuters, Euronews, CBC, and Italian institutions. Among the allegations were claims that Ukrainian staff spat in food, that anti-doping rules were relaxed for Ukrainian athletes, and that Ukrainians were segregated in the Olympic Village due to aggressive behavior, assertions denied by official sources. Other viral claims accused Ukrainians of vandalizing Rome’s Colosseum, staging obscene protests, creating fraudulent hotel booking websites, or participating in the Olympics to avoid military service. As the Games approached, the disinformation gained broader traction, spreading beyond pro-Kremlin Telegram channels to multilingual websites in the Pravda network and even being repeated by mainstream Russian media.
Source: NewsGuard, A. Lee. Russia Targets Ukraine at the Winter Olympics with Flood of Fabricated News Reports. [online] Published 10 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russias-olympian-disinformation-game
CheckFirst Unveiled GRU's Information Operations Troops with OSINT and Phaleristics
Building on a previous investigation into the FSB’s 16th Centre, CheckFirst's report applied OSINT and phaleristics, the study of military awards and insignia, to uncover the structure of the GRU’s Information Operations Troops (VIO). By analyzing 118 photographs of insignia, patches, and pennants linked to the GRU, researchers reconstructed key elements of the VIO’s organization, chain of command, and likely geographic footprint.
Although the VIO has been publicly acknowledged since 2014, its internal structure and missions have remained opaque. The study suggested that the VIO was formalized around 2014, likely under Military Unit 55111, consolidating pre-existing GRU units into a unified command. The VIO comprises at least fifteen units spanning three core specializations: encryption and cryptanalysis, computer network operations, and psychological operations. Some units, such as 26165 and 74455, previously linked to the APT28 and Sandworm cyber groups, are well documented, while others, including Units 20766, 48707, and 20978, have received little or no prior public attention.
While identifying operational locations proved challenging, the researchers triangulated data from investigative journalism, leaked materials, Russian public records, and social media platforms to assess probable sites. The report concluded that combining phaleristics with OSINT offers a replicable and effective method for illuminating clandestine Russian intelligence structures, helping to clarify the scale and evolution of the GRU’s information and cyber warfare capabilities amid ongoing conflict with Ukraine and increased operations targeting Europe.
Sources:
CheckFirst, Unveiling FSB’s 16th Center SIGINT Capabilities. [online] Published July 2025.
Available at: https://checkfirst.network/wp-content/uploads/2025/07/OSINT_Phaleristics_Unveiling_FSB_16th_Center_SIGINT_Capabilities.pdf
CheckFirst, OSINT & Phaleristics: Unveiling GRU’s Information Operations Troops (VIO). [online] Published February 2026.
Available at: https://checkfirst.network/2026/02/OSINT_Phaleristics_Unveiling_GRUs_Information_Operations_Troops.pdf
France's Municipal Elections and the Risk of Russian Influence
An article published by Le Monde reports that a network of more than 80 fake French-language news websites, identified by Reporters Without Borders and reportedly linked to the Russian group Storm-1516, has raised concerns about potential foreign interference ahead of France’s March 2026 municipal elections. These sites imitate local media and promote pro-Russian narratives. Similar tactics have been observed in countries such as Moldova, where pro-Russian disinformation targeted legislative elections, though pro-European candidates ultimately prevailed.
The political choices of major metropolitan areas are of interest to foreign powers, sometimes for primarily economic reasons, yet municipal elections are also particularly difficult for foreign actors to grasp. Russia’s past information campaigns in France have focused more on national themes, such as criticizing French leadership or promoting narratives of Western decline, rather than attempting to shape specific local races, including high-profile contests in Paris involving allies and opponents of Emmanuel Macron. In practice, most manipulation in municipal elections has come from domestic actors rather than foreign states. Cases in multiple countries have involved candidates spreading false information, using doctored media, or engaging in questionable campaign funding.
Source: Le Monde, D. Leloup, France's municipal elections at risk of foreign interference. [online] Published 10 February 2026.
Disinformation Continued After the New START Treaty Expired
EU VS Disinfo reports that the New START Treaty expired on 6 February 2026, bringing to an end the last remaining nuclear arms control agreement between Russia and the United States. In response, pro-Kremlin information campaigns sought to deflect blame from Moscow by portraying Russia as a responsible actor whose extension efforts were ignored by Washington, while warning of nuclear catastrophe and promoting the idea of a new multipolar nuclear order. These narratives aimed to frame the treaty’s collapse as inevitable and as the result of Western actions, despite the Kremlin’s own role in weakening the agreement over several years.
The treaty, which entered into force in 2011, limited strategic nuclear arsenals and included transparency measures such as on-site inspections. However, Russia suspended inspections in 2022, citing Western sanctions after it invaded Ukraine, and formally halted its participation in 2023. At the same time, pro-Kremlin outlets intensified messaging accusing Western countries and NATO of provoking nuclear confrontation, spreading false claims about Ukrainian nuclear threats, biological weapons, and Western sabotage.
Source: EU VS Disinfo, As New START ends, disinformation about it continues. [online] Published 13 February 2026.
Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors
The joint report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence develops and tests a structured Information Influence Attribution Framework (IIAF) to assess Russian information influence operations targeting Ukraine and neighbouring regions. Focusing on campaigns linked to Russian state and state-aligned actors, including RT, Sputnik, pro-Kremlin Telegram networks, and proxy outlets, the report documents coordinated disinformation efforts built on falsified content, impersonation of trusted media brands, source laundering, and synchronized cross-platform amplification. Case studies demonstrate how Russian actors deploy forged documents, deceptively edited images and videos, bot networks, and multilingual content tailored to specific audiences to advance narratives portraying Ukraine as corrupt, unstable, or manipulated by the West.
The analysis shows that these campaigns rely on layered tactics: technical infrastructure reuse (shared domains, hosting, analytics IDs), behavioural coordination (near-simultaneous reposting, template-based comment flooding, and tasking via channels such as the “Digital Army of Russia”), and contextual alignment with Kremlin geopolitical objectives. Disinformation narratives are strategically timed to exploit political events, social tensions, and policy debates, while narrative laundering techniques obscure their origins by staging dissemination across both inauthentic and overtly state-linked media ecosystems. Applying a spectrum of state responsibility, the report concludes that several examined operations fall between “state-shaped” and “state-coordinated,” reflecting high-confidence assessments of structured, state-aligned influence activity rather than organic or rogue behaviour.
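As a concrete illustration of the infrastructure-reuse signal, the minimal sketch below groups websites that share a web-analytics identifier. The domain names, analytics IDs, and hosting details are invented for illustration and are not taken from the report; a real IIAF-style assessment would weigh many such signals together.

```python
# Minimal sketch of one attribution signal: technical infrastructure reuse.
# Domains, analytics IDs, and ASNs below are invented for illustration.
from collections import defaultdict

sites = {
    "example-news-one.example": {"analytics_id": "UA-111", "host_asn": "AS100"},
    "example-news-two.example": {"analytics_id": "UA-111", "host_asn": "AS200"},
    "unrelated-blog.example":   {"analytics_id": "UA-999", "host_asn": "AS300"},
}

# Group domains by shared analytics ID; shared trackers across ostensibly
# unrelated outlets can indicate a common operator.
clusters = defaultdict(set)
for domain, meta in sites.items():
    clusters[meta["analytics_id"]].add(domain)

for analytics_id, domains in clusters.items():
    if len(domains) > 1:
        print(f"Possible shared operator ({analytics_id}): {sorted(domains)}")
```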
Source: The Ukrainian Centre for Strategic Communications & NATO Strategic Communications Centre of Excellence, V. Smith & J. Pamment & S. Dikhtiarenko & B. Heap & D. Tsurtsumia & A. Maunder, Attributing Russian Information Influence Operations. [online] Published January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Attribution_Russian_Information_Influence.pdf?zoom=page-fit
[General Reports]
False Allegations Labeled Epstein an Israeli Spy
In its latest debunking, NewsGuard’s Reality Check examines false claims that emerged after the January 2026 release of millions of Justice Department documents related to Jeffrey Epstein, alleging that the files proved he was an Israeli spy. The allegation centered on an October 2020 FBI document included in the release, which summarized statements from an unnamed source who described Epstein as a “co-opted Mossad agent.” Anti-Israel accounts on X amplified screenshots of the document, asserting it confirmed that Epstein had trained as a spy under former Israeli Prime Minister Ehud Barak. Posts spreading the claim gained millions of views and tens of thousands of engagements. However, the document in question is an FD-1023 form, which records raw, unverified information from confidential sources and does not constitute confirmed findings or investigative conclusions. The FBI has explicitly stated that such forms do not validate or establish the credibility of the claims they contain. No credible evidence has emerged showing that Epstein was affiliated with Israeli intelligence. In response to similar allegations made previously, former Israeli Prime Minister Naftali Bennett publicly denied that Epstein had ever worked for the Mossad.
Source: NewsGuard, J. Schofield, Debunk: Unverified Allegations Used to Label Epstein an Israeli Spy. [online] Published 9 February 2026. Available at: https://www.newsguardrealitycheck.com/p/was-epstein-an-israeli-spy-newsguards
Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy
Following the Justice Department’s January 2026 release of documents related to Jeffrey Epstein, NewsGuard’s Reality Check reports that the long-debunked “Pizzagate” conspiracy theory, falsely claiming that top Democrats ran a child sex-trafficking ring out of the Washington, D.C. pizzeria Comet Ping Pong, has resurfaced. Online users cited hundreds of references to the word “pizza” in the files, alleging that it was coded language for abuse. Social media posts promoting the claim surged dramatically, and the commentator Tucker Carlson amplified the theory in a YouTube episode, suggesting it might be real despite having previously rejected it.
However, a review of the documents found no evidence supporting the conspiracy. References to “pizza” were largely mundane, such as casual emails discussing restaurants or kitchen equipment. The pizzeria and its owner were mentioned only briefly, including in an article about the conspiracy itself and an unverified tip. Authorities have repeatedly debunked allegations that figures such as Hillary Clinton were involved. Although “pizza” appears in 842 documents, the files contain more references to other foods, such as “salad” and “soup.” The conspiracy has previously had dangerous real-world consequences, including a 2016 incident in which an armed man entered the restaurant believing he was rescuing children.
Source: NewsGuard, M. Wollen & S. Howard, Epstein Document Dump Breathes New Life into Long-Debunked Pizzagate Conspiracy — Pushed Now By Tucker Carlson. [online] Published 11 February 2026. Available at: https://www.newsguardrealitycheck.com/p/epstein-files-revive-pizzagate-with
Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes
An article by The Jamestown Foundation reports that, following alleged Pakistani airstrikes near Kabul in October 2025 targeting leaders of Tehreek-e-Taliban Pakistan (TTP), the Taliban launched a coordinated digital campaign to counter Pakistan’s accusations and reshape regional narratives. Using social media, affiliated news outlets, militant songs, and propaganda videos, the Taliban sought to portray Pakistan as the aggressor while denying claims that it sheltered militant groups. A widely circulated video featuring TTP leader Mufti Noor Wali Mehsud claimed he was not in Afghanistan, aiming to undermine Pakistan’s justification for the strikes and weaken its credibility. The campaign also promoted territorial narratives rejecting the Durand Line border and circulated maps depicting parts of Pakistan as belonging to a so-called Greater Afghanistan, reinforcing nationalist and revisionist claims.
Source: The Jamestown Foundation, R. Nasar, The Afghan Taliban’s ‘Digital War’ Against Pakistan. [online] Published 12 February 2026. Available at: https://jamestown.org/the-afghan-talibans-digital-war-against-pakistan//
India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation
According to a TechXplore article, India has amended its 2021 information technology regulations to tighten oversight of artificial intelligence–generated content, requiring social media platforms to clearly and permanently label synthetic media and comply with government takedown requests within three hours, replacing the previous 36-hour window. The updated rules, effective February 20, apply broadly to content “created, generated, modified, or altered” through computer resources, excluding routine or good-faith editing. Synthetic content is defined as material that appears real or is likely to be perceived as indistinguishable from an actual person or event, and the rules directly target the growing spread of AI-generated misinformation among India’s more than one billion internet users.
The regulations mandate that platforms such as X, Instagram, and Facebook deploy automated tools to prevent the dissemination of illegal content, including false documents and sexually abusive material, and ensure that AI-manipulated content carries indelible markings that cannot be removed or suppressed. The government has also operationalized enforcement through the Sahyog portal, which automates and streamlines the submission of official takedown notices. The announcement precedes a major global AI summit in New Delhi, underscoring India’s intent to assert regulatory control over synthetic media and to hold platforms accountable amid rapid AI expansion and rising disinformation risks.
Source: TechXplore, A. Zinin, India ramps up AI rules for social media platforms. [online] Published 10 February 2026. Available at: https://techxplore.com/news/2026-02-india-ramps-ai-social-media.html
Foreign Hybrid Campaigns Intensify Across the European Union
The latest Institute for Strategic Dialogue (ISD) publication documents a sustained campaign of hybrid activity targeting all 27 EU member states since Russia’s full-scale invasion of Ukraine in February 2022. Russia is identified as the primary threat actor in most cases, with China and Iran also implicated. The report categorizes hybrid threats into information operations, cyber operations, kinetic actions, political and civil society subversion, and malign finance. Across the cases, adversarial states deploy coordinated disinformation campaigns, inauthentic news sites, bot amplification networks, deceptive branding, and impersonation tactics to manipulate public opinion, inflame ethnic and political tensions, interfere in elections, and erode trust in democratic institutions. These efforts are often synchronized with cyberattacks, espionage, and covert financing of political actors to maximize destabilizing impact.
The analysis highlights a notable escalation in kinetic and disruptive tactics accompanying information manipulation, including sabotage of critical infrastructure, arson, drone incursions, assassination plots, and the recruitment of local proxies through encrypted platforms. Many incidents combine physical acts, such as vandalism or migrant weaponization, with coordinated narrative amplification through state media, Telegram channels, and bot networks to magnify psychological and political effects. The report concludes that hybrid operations increasingly blur the line between foreign and domestic actors, leveraging criminal networks, diaspora communities, and local intermediaries to obscure attribution while advancing strategic goals of undermining European unity, weakening institutional trust, and challenging liberal democratic governance.
Source: Institute for Strategic Dialogue, K. Sikora & L. Savoia & B. Schafer, Europe’s Other Battlefields: Foreign Hybrid Threats in the EU. [online] Published 12 February 2026. Available at: https://www.isdglobal.org/publication/europes-other-battlefields-foreign-hybrid-threats-in-the-eu/
TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar
A comparative study by Doublethink Lab examines TikTok’s impact on adolescents in Taiwan and Myanmar, highlighting its role as both a vector for disinformation and a platform vulnerable to algorithmically amplified influence operations. In Myanmar, following the 2021 military coup and restrictions on Facebook and VPNs, propaganda and disinformation campaigns shifted toward Telegram and TikTok, which were perceived as having weaker moderation. Youth reported exposure to political propaganda, hate speech, and misinformation, often responding by skipping or reporting content but facing structural challenges in verification. In Taiwan, concerns extend beyond harmful content to the platform’s potential role in amplifying narratives aligned with the People’s Republic of China (PRC). During the 2024 presidential election, adolescents reported increased exposure to short, decontextualized political videos that criticized Taiwan’s democratic institutions and subtly promoted favorable perceptions of PRC governance. A 2025 nationwide study cited in the report found that active TikTok users showed higher agreement with conspiracy-like and PRC-aligned narratives, suggesting a measurable relationship between platform engagement and increased populist skepticism toward democratic institutions.
The report highlights TikTok’s highly personalized recommendation algorithm as a central driver of user exposure. Adolescents in both contexts described the platform as “precise” and immersive, with rapid adaptation to user behavior reinforcing content themes and potentially narrowing informational diversity. In Taiwan, researchers documented algorithmic promotion of borderline sexually suggestive content involving minors, alongside accounts linked to potential exploitation and scam networks, while in Myanmar, the primary risks centered on political manipulation and harassment. Across both contexts, the study identifies algorithmic amplification, subtle embedding of political messaging within entertainment content, and the use of emotionally charged, fragmented narratives as key tactics. The findings underscore how platform design, limited transparency, and weak regulatory oversight can facilitate information manipulation, normalize harmful behaviors, and erode digital literacy, particularly among adolescents in politically sensitive environments.
Source: Doublethink Lab, E. Hsu & Iris & P. S. S. Thant, Algorithms, Digital literacy, and Social influence: TikTok use among youths of Taiwan and Myanmar. [online] Published 13 February 2026. Available at: https://medium.com/doublethinklab/algorithms-digital-literacy-and-social-influence-tiktok-use-among-youths-of-taiwan-and-myanmar-5ac4ee947b8c
G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index
As reported by Infosecurity Magazine, the Munich Security Index (MSI) 2026, released at the Munich Security Conference, identifies cyberattacks as the top perceived national security risk among G7 countries for the second consecutive year, followed by economic or financial crises and disinformation campaigns from adversaries. The ranking reflects a sustained rise of cyber threats in G7 risk assessments, with concern especially pronounced in Germany, the UK, and Japan. Disinformation campaigns are widely recognized as a core security concern alongside cyber operations, indicating growing awareness of information manipulation as a strategic threat vector. In contrast, among Brazil, India, China, and South Africa (BICS), cyber threats have declined in perceived urgency, while disinformation ranks seventh, suggesting differing threat prioritization across geopolitical blocs.
The report is based on a November 2025 survey of more than 11,000 respondents across G7 and BICS countries, employing representative sampling and multiple evaluative criteria to assess perceived risk severity and imminence. While most countries reported declining threat perceptions overall, concerns have intensified in the UK, the US, and India, particularly regarding economic instability, democratic fragility, civil unrest, and political violence, factors that can intersect with cyber and disinformation campaigns in destabilizing democratic systems. The authors caution that polling in autocratic contexts, particularly in China, may limit respondents’ willingness to express views freely, highlighting methodological constraints in assessing perceptions of information and cyber threats across political systems.
Source: Infosecurity Magazine, K. Poireault, Munich Security Conference: Cyber Threats Lead G7 Risk Index, Disinformation Ranks Third. [online] Published 13 February 2026. Available at: https://www.infosecurity-magazine.com/news/munich-security-index-cyberattacks/
Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland
An investigation by the DFRLab found that citizen patrols organized along the Polish-German border in 2025 were accompanied by a coordinated online disinformation campaign that amplified anti-migration rhetoric, legitimized vigilante activity, and portrayed the Polish government as weak. Inauthentic Facebook pages, AI-generated images, recycled protest photos, and copy-paste narratives were used to stoke fear and frame migrants as a violent “invasion.” Right-wing actors, including members of the Confederation Party and Law and Justice (PiS), boosted this content, at times sharing AI-manipulated visuals and unverified claims. Political posts amplified incidents, such as a soldier’s injury later clarified as accidental, to fuel calls for violence against migrants and accusations that border officials were “traitors.” The investigation identified coordinated networks of Facebook pages posting identical content within seconds or minutes of one another, often with large follower counts but minimal engagement, suggesting likely inauthentic amplification.
The PATRIOCI Facebook group emerged as a key amplification node, administered by pages impersonating media outlets and linked to previously dormant or disinformation-associated entities, including accounts that had shared content from Russian state-controlled RT. These networks promoted articles from outlets such as Tysol[.]pl in synchronized bursts and targeted Polish border guards with harassment and doxxing-style identification efforts. AI-generated banners depicting aggressive “foreign” figures and violent imagery were used to mobilize offline protests, while the absence of platform labeling for synthetic media enabled emotionally manipulative visuals to circulate unchecked. The report concludes that coordinated digital manipulation, leveraging algorithmic amplification, opaque platform governance, and political endorsement, distorted Poland’s migration debate, normalized extremist rhetoric, and blurred the line between grassroots activism and orchestrated influence operations.
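The near-simultaneous posting pattern described above can be illustrated with a simple timing check: the sketch below flags identical text posted by different pages within a short window. Page names, timestamps, and the 60-second threshold are hypothetical and are not drawn from the DFRLab data.

```python
# Rough sketch of one behavioural indicator: identical text posted by
# different pages within seconds of each other. All values are hypothetical.
from collections import defaultdict
from datetime import datetime

posts = [
    ("Page A", "2025-07-01T12:00:05", "Border alert: join the patrol"),
    ("Page B", "2025-07-01T12:00:31", "Border alert: join the patrol"),
    ("Page C", "2025-07-01T15:40:00", "Border alert: join the patrol"),
]

by_text = defaultdict(list)
for page, timestamp, text in posts:
    by_text[text].append((datetime.fromisoformat(timestamp), page))

WINDOW_SECONDS = 60
for text, items in by_text.items():
    items.sort()  # order each identical message by posting time
    for (t1, p1), (t2, p2) in zip(items, items[1:]):
        if p1 != p2 and (t2 - t1).total_seconds() <= WINDOW_SECONDS:
            print(f"Near-simultaneous identical post by {p1} and {p2}: {text!r}")
```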
Source: DFRLab, G. Gigitashvili & S. Gelava, Inauthentic networks fuel Poland’s border vigilante movement. [online] Published 11 February 2026. Available at: https://dfrlab.org/2026/02/11/inauthentic-networks-fuel-polands-border-vigilante-movement/
[Appendix - Frameworks to Counter Disinformation]
Design Thinking Workshop Informed New Approaches to Counter Disinformation and FIMI
Based on a design thinking workshop, a paper published by Doublethink Lab argued that participatory design methods can play a constructive, bounded role in counter-FIMI policy development. The workshop, which involved civil society, defense practitioners, and journalists, noted that traditional top-down, reactive policies often fail to keep pace with fast-moving influence operations. Instead, participatory and human-centered approaches can help policymakers better understand the emotional, institutional, and operational pressures caused by FIMI, including declining trust, decision-making under uncertainty, and professional burnout.
The workshop reframed FIMI not only as a problem of false content but also as a broader human and systems problem. Several tensions, such as speed versus due process, control versus trust, and short-term response versus long-term sustainability, appeared across all groups. Based on these insights, the paper proposed exploratory policy recommendations using a four-part framework: Education, Engineering, Enforcement, and Ethics (4E). These include improving media and crisis literacy, investing in shared verification tools and secure communication systems, clarifying institutional roles, strengthening legal protections, and ensuring responses respect rights and wellbeing.
The paper concluded that design thinking can play a valuable but limited role in counter-FIMI policymaking. While it does not measure effectiveness or replace legal and evidence-based analysis, it helps policymakers surface hidden assumptions, understand trade-offs, and design more adaptable and human-centered responses. The recommendations were presented as starting points that require further testing, legal review, and empirical validation.
Source: Doublethink Lab, D. V. Ligot, Crafting Policy Recommendations for Countering Disinformation and FIMI. [online] Published 11 February 2026. Available at: https://medium.com/doublethinklab/crafting-policy-recommendations-for-countering-disinformation-and-fimi-952825b9aa2b
EU Launches Initiative to Counter Online Polarization and Disinformation
The European Commission has launched a €785,000 pilot call for proposals to strengthen social cohesion and improve understanding of online political polarisation, recognizing the growing threat that digitally amplified division poses to democratic resilience in the European Union. The initiative highlights how algorithm-driven platforms, while central to civic engagement, can also amplify extreme, divisive, or manipulative content, creating conditions in which coordinated malicious influence operations and disinformation campaigns can thrive. Such dynamics risk eroding trust in democratic institutions and distorting public understanding of societal issues.
The project seeks to build cross-sector and cross-border partnerships to design, test, and scale mitigation strategies that counter harmful polarization without restricting freedom of expression. Proposed activities may include multilingual data collection; the development of AI- and IT-based tools; practitioner toolkits; policy and technical interventions; and the establishment of durable stakeholder networks. By engaging civil society, researchers, media actors, and technology companies, the initiative aims to produce actionable responses that align with the European Democracy Shield and reinforce long-term democratic resilience while safeguarding open and pluralistic debate.
Source: European Commission, New call for proposals to advance social cohesion amid online polarisation. [online] Published 12 February 2026. Available at: https://digital-strategy.ec.europa.eu/en/news/new-call-proposals-advance-social-cohesion-face-online-polarisation
[CRC Glossary]
The modern Information Environment is projected to keep growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.