
Weekly: Cyber-based hostile influence campaigns 12th - 18th January 2026

  • Writer: CRC
Cover image: Weekly Media Update – Information Operations


[Introduction]


Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This report summarizes what we regard as the main events.



[Contents]



[Social Media Platforms]


X


[State Actors]


Russia 


The War in Ukraine


China


Iran


[AI Related Articles]


[General Reports]


[Appendix - Frameworks to Counter Disinformation]




[Report Highlights]




[Social Media Platforms]


X

Wide-Scale Response to Grok’s AI-Driven Illegal Deepfakes

A coalition of 28 digital rights, child safety, and women’s rights organizations has sent a letter calling on Apple to remove Grok and X from the App Store, arguing that the AI chatbot is fueling mass disinformation and abuse through nonconsensual sexual deepfakes. Investigations cited in the letter argue that Grok generates vastly more sexualized and “nudified” images than comparable platforms, including content that falsely depicts real women and minors. The groups highlighted that Grok itself has acknowledged generating child sexual abuse material (CSAM). Researchers and monitoring firms further reported that Grok produces nonconsensual sexual images at an unprecedented scale, especially after xAI introduced a “spicy mode” allowing users to digitally “undress” real people.


The backlash has echoed globally, and an article by TechXplore stated that several countries have taken direct action: Turkey ordered a nationwide ban on Grok after it generated insulting and destabilizing content about national leaders, and in the United States, antisemitic outputs prompted Jewish lawmakers to warn that government use of Grok could pose national security risks. Additionally, the UK’s online safety regulator, Ofcom, announced that it has opened a formal investigation into X under the Online Safety Act, even though X claims to have introduced measures to prevent Grok from generating intimate images. Another report by The Hill stated that Malaysia and Indonesia have also restricted access to Grok.


Sources:





[State Actors]


Russia

Russian Influence Networks Flood U.S. Discourse With Disinformation After Maduro’s Capture

A Washington Post article describes how, following the capture of Venezuelan President Nicolás Maduro by U.S. Special Forces, researchers observed a rapidly deteriorating information environment in which hostile influence actors exploited uncertainty to inject confusion and competing narratives into U.S.-focused online discourse. The Atlantic Council’s Digital Forensic Research Lab (DFRLab) reported that a covert Russian operation linked to Storm-1516 used a network of social media accounts, influencers, and fake websites to spread contradictory conspiracies and inflammatory claims in what analysts described as a strategy designed to overwhelm the information space and make it harder for users to distinguish fact from fabrication.


The article also describes a second Russian effort, the Pravda Network, which recycled Kremlin-aligned talking points through news-like websites targeting U.S. audiences, while China pursued messaging portraying the United States as unstable and unreliable, including through inauthentic amplification of a CCTV-produced AI parody video styled for U.S. digital culture. Alongside these state-backed operations, traditional deception tactics remained prominent, such as repurposing older footage from U.S. protests and misrepresenting it as current backlash against Maduro’s capture, sometimes by obscuring identifying logos. Together, these campaigns illustrate how high-attention geopolitical crises enable influence operations to rapidly flood platforms with misleading content and narrative confusion before authoritative information stabilizes.


Source: The Washington Post, N. Nix. How China and Russia are using Maduro’s capture to sway U.S. discourse. [online] Published 10 January 2026. Available at: https://www.washingtonpost.com/technology/2026/01/10/maduro-capture-china-russia-social-media-campaign/


Russian Propaganda Frames Iran Protests as a Western “Color Revolution” and Hybrid War Plot

A StopFake publication reports that Russian propaganda outlets on Telegram have responded to Iran’s late-December 2025 protest wave by framing it not as a domestic uprising against economic collapse and repression, but as an externally orchestrated conspiracy tied to a broader confrontation between the West and Russia. The DM Research Center analyzed 506 messages from major Russian propaganda Telegram channels and found that Kremlin-aligned messaging initially described the unrest as purely economic but quickly shifted to narratives portraying escalation, radicalization, and civil conflict as inevitable and implicitly driven by outside interference. This framing also serves to justify the Iranian regime’s violent crackdown and to position Iran as a strategic partner whose instability threatens Russia’s geopolitical interests.


A central disinformation tactic is the use of familiar “color revolution” rhetoric, denying the protests’ legitimacy by claiming they are engineered by U.S. and Israeli intelligence and carried out by “trained provocateurs.” Propagandists emphasize alleged “pogroms,” “riots,” and attacks on security forces while downplaying or omitting the underlying causes of public anger, including inflation and restrictions on civil liberties. Russian messaging also targets opposition figure Reza Pahlavi, portraying him as a U.S.-controlled puppet, a detached elite, or a war provocateur—reducing a complex protest movement to a caricature and reinforcing the claim that the unrest lacks authentic domestic roots.


More broadly, the protests are incorporated into a sweeping narrative of Western “hybrid war” or “cognitive warfare” against Russia, depicting events in Tehran alongside crises elsewhere as “synchronized strikes” aimed at weakening Moscow by destabilizing its allies. This narrative is used to mobilize Russian audiences, normalize repression, and cultivate fear of internal dissent by projecting the Iranian scenario onto Russia’s domestic context. In parallel, propaganda channels exploit Iran-related ethnic mobilization to fuel xenophobia toward diasporas, especially Azerbaijanis, branding them as foreign-controlled agents and presenting Iran as a “testing ground” for tactics allegedly intended to fracture multinational states like Russia.


Source: StopFake, L. Bidochko & K. Zadiraka & A. Pivtorak & A. Pilipenko. «Майдан» в Иране, а «жертва» все равно Россия. Пропаганда снова обвиняет Запад в организации протестов [A “Maidan” in Iran, yet Russia is still the “victim”: propaganda again accuses the West of organizing the protests]. [online] Published 16 January 2026.


Kremlin Disinformation Sustains the Illusion of Economic Resilience

EUvsDisinfo described how the Kremlin sustains an economic narrative of resilience through persistent disinformation and foreign information manipulation and interference (FIMI), aimed at both domestic audiences and Europe. A core message in these campaigns is that sanctions are ineffective and, in fact, harm Europeans more than Russians, with recurring claims that Europe will freeze in winter, energy prices will spiral, and EU unity will collapse, while Russia is portrayed as stable and adaptive. This framing uses everyday “normality” inside Russia as proof that the West has failed, reinforcing the illusion of control and strength.


The article also highlighted how this narrative is supported by reduced transparency and restricted data, with Russian authorities limiting or halting the publication of key economic and demographic statistics, including detailed trade figures, reserve data, and population indicators. By shrinking what can be independently measured, the state makes it easier to obscure economic pressure at home while amplifying perceived weakness abroad, allowing Russian information channels to continue insisting that Russia remains in control and that it is the West that is truly suffering.


Source: EUvsDisinfo. In 2026, the Russian economy is in big trouble. [online] Published 16 January 2026.


The War in Ukraine

Russian Disinformation Following the Oreshnik Missile Strike

A StopFake article describes how, following Russia’s January 12, 2026, missile strike on Lviv, the Kremlin launched a coordinated disinformation campaign to justify and politically exploit the attack. Russian authorities claimed the strike—allegedly involving the new “Oreshnik” medium-range ballistic missile—was retaliation for a supposed Ukrainian drone attack on Vladimir Putin’s residence, an incident for which no evidence exists and which Ukraine, Western governments, journalists, and even Donald Trump rejected as false. This fabricated pretext allowed Russia to frame the strike not as aggression but as “self-defense,” masking potential war crimes behind a false narrative.


The strike triggered strong international reactions. Ukraine called urgent meetings of the UN Security Council and the NATO-Ukraine Council, warning that the use of such missiles threatens broader European security. At the same time, Russian state media and Telegram propaganda channels spread contradictory explanations for the strike, claiming it was a warning to Europe, a response to U.S. pressure, retaliation for peace talks, or a move in the global gas market.


Source: StopFake, L. Bidochko & A. Pivtorak & A. Pilipenko. «За Путина, за советские газохранилища»: чем российская пропаганда оправдывает удар «Орешником» по Украине [“For Putin, for the Soviet gas storage facilities”: how Russian propaganda justifies the Oreshnik strike on Ukraine]. [online] Published 12 January 2026.


China

Graphika Exposes Pro-China Influence Network Using Spoofed Western News Sites

Graphika’s “Glass Onion” report uncovered a coordinated ecosystem of 43 domains and 37 subdomains designed to push pro-China messaging while impersonating major international outlets, including The New York Times, The Guardian, and The Wall Street Journal. The network relied on deceptive infrastructure and media spoofing to publish recycled Chinese state media content, pro–Chinese Communist Party narratives, and commercial material, using copied design elements and shared technical templates to appear credible and legitimate to readers.


The report links this activity to a broader Chinese influence ecosystem through technical overlaps with campaigns previously attributed to Chinese entities, including HaiEnergy, Paperwall, DuringBridge, and BayBridge. Graphika found that once PR and marketing firms placed content on these spoofed websites, clients then promoted the articles on Chinese platforms as evidence of international media coverage, effectively laundering influence messaging through the appearance of third-party legitimacy.


A key operational component involved Chinese PR and digital marketing firms and individuals who hosted and distributed the content, sometimes alongside amplification linked to Spamouflage activity on Western platforms. Overall, Graphika assesses that “Glass Onion” demonstrates how commercial marketing infrastructure can be repurposed for influence operations, blending impersonation, narrative laundering, and coordinated amplification to shape perceptions while obscuring attribution.


Source: Graphika, M. Fulde-Hardy & L. Turner. Glass Onion. Peeling Back the Layers of a Pro-China Online Ecosystem. [online] Published 13 January 2026. Available at: https://public-assets.graphika.com/reports/Graphika_Report_Glass_Onion.pdf


Taiwan Warns China Is Outsourcing AI-Driven Disinformation and Cognitive Warfare Campaigns

CyberNews notes that Taiwan’s National Security Bureau (NSB) claims Beijing is increasingly outsourcing influence operations to Chinese IT and marketing companies, enabling large-scale “cognitive warfare” against Taiwanese audiences through automation and AI-generated content. According to the NSB, Taiwan’s intelligence community detected more than 45,000 fake social media accounts and over 2.3 million pieces of disinformation circulated through those networks in 2025. The report argues these activities are not random trolling but are tied to contracted firms that allegedly built databases of fake accounts and developed automated programs to manage and deploy them.


The NSB describes a broad toolkit that includes deepfake voice and video production, fake news websites designed to resemble international outlets, and Facebook content farms that attract audiences with clickbait before pivoting to political narratives. It also alleges that Chinese agencies and contractors use web crawlers to collect data on Taiwanese political figures and influencers, enabling targeted propaganda, while cyber intrusions have been used to hijack Taiwanese accounts to amplify false claims, such as blockades or Chinese forces entering Taiwan’s territorial zone. The NSB says the overall goal is to deepen internal divisions and weaken external support for Taiwan, and it reports increased cooperation with fact-checkers, platforms, and international partners to expose and counter these campaigns.


Source: CyberNews, A.-M. Corvin. Taiwan warns China is launching AI and deepfakes campaigns run by legitimate tech and marketing firms. [online] Published 12 January 2026. Available at: https://cybernews.com/cyber-war/taiwan-china-ai-and-deepfakes-campaigns/


Iran

AI Videos Fill Void Amid Iran Internet Blackout

As Iran enters the sixth day of a near-total internet blackout amid widespread anti-government protests, NewsGuard describes how foreign social media users are filling the information vacuum with AI-generated videos. The demonstrations, which began in December 2025 over economic grievances, have expanded into nationwide calls to end Iran’s theocratic regime. According to the Human Rights Activists News Agency, at least 2,400 protesters have been killed and more than 18,400 detained as of January 14, 2026, while internet access remains largely cut off, severely limiting reliable reporting from inside the country.

In this environment, NewsGuard has identified at least seven AI-generated videos falsely presented as real footage of the protests, amassing around 3.5 million views. These videos have been shared by both anti-regime and pro-regime accounts and include fabricated scenes of women attacking paramilitary vehicles, protesters renaming streets after former U.S. President Donald Trump, and alleged million-strong pro-government marches. AI detection tools found these videos to be almost certainly artificial, pointing to visual inconsistencies and other telltale signs.

Source: NewsGuard’s Reality Check, I. Chomnalez. AI videos fill void amid Iran internet blackout. [online] Published 14 January 2026.


[AI Related Articles]


Foreign Actors Used Misleading Protest Videos to Portray the U.S. in Crisis

According to a report by NewsGuard, foreign state-aligned and anti-West social media accounts are increasingly repurposing protest footage from unrelated events to depict the United States as unstable and on the verge of collapse. Pro-Iran and pro-China actors have shared out-of-context videos of demonstrations to amplify narratives of U.S. chaos, polarization, and public revolt. These posts often exaggerate the scale of unrest and falsely tie violent imagery to current U.S. political debates, helping foreign audiences, and sometimes U.S. users, perceive American society as spiraling into disorder.


Pro-Iran accounts circulated a video allegedly showing U.S. protests against President Trump’s immigration policies, including claims that demonstrators were throwing Molotov cocktails. The footage came from a November 2025 incident in Thessaloniki, Greece, where concertgoers clashed with police near the Turkish Consulate. Similarly, pro-China accounts spread videos said to show nationwide U.S. protests against the military capture of Venezuelan President Nicolás Maduro. NewsGuard found the clips were actually from the large “No Kings” protests in mid- and late 2025, with identifying logos deliberately obscured. While small protests did occur in early 2026, they were far smaller than the demonstrations shown in the manipulated videos.


This tactic is not limited to the United States. NewsGuard has documented similar efforts to misrepresent unrest in other Western countries, including a case in which pro-Iran accounts shared footage from a major anti-government protest in Serbia and falsely claimed it showed millions of Yemenis demanding retaliation against Israel.


Source: NewsGuard’s Reality Check, C. Lin & L. Marchl. Anti-West Accounts Use Out-of-Context Protest Footage to Portray U.S. as on the Brink of Collapse. [online] Published 13 January 2026.



AI Citation Bias Is Steering Users Toward State Propaganda

CyberScoop’s article argues that artificial intelligence is reshaping how people evaluate credibility, particularly through the common advice to “check the citations.” While many LLMs provide sources alongside answers, they prioritize citations based on availability rather than trustworthiness, creating a structural vulnerability in the information environment. The author warns that authoritarian states are optimizing propaganda for AI consumption while many major U.S. media outlets restrict or block AI systems and remain behind paywalls, meaning even “ideologically neutral” AI tools can systematically steer users toward state-aligned narratives simply because they are easier to access.


The piece cites research from the Foundation for Defense of Democracies (FDD) showing that a significant share of responses from ChatGPT, Claude, and Gemini referenced state-aligned sources, including frequent citations of Al Jazeera in responses about the Israel-Gaza conflict. The article frames this as an influence risk, highlighting how citation-driven traffic can shape what information ecosystems survive while amplifying propaganda at scale, such as Russia-backed Pravda allegedly publishing millions of pro-Kremlin articles to saturate the internet. It calls for faster integration of credible journalism into AI systems, improved AI literacy about citation bias, and clearer labeling or deprioritization of state-controlled media in AI outputs to reduce vulnerability to foreign influence.


Source: CyberScoop, L. Siskind. The quiet way AI normalizes foreign influence. [online] Published 15 January 2026.


AI “Kirkification” Memes Blur Reality and Political Meaning 

Politico describes how the killing of conservative influencer Charlie Kirk in September rapidly spawned a wave of AI-generated memes—dubbed the “Great Kirkification” or “Kirkening”—in which creators grafted Kirk’s face onto countless public figures and absurd scenarios. These synthetic videos and images spread widely across X and other platforms, shared not only by ideological critics but also by engagement-driven and largely apolitical accounts, accelerating the transformation of a real political event into viral, decontextualized content. While conservatives increasingly frame Kirk as a martyr, the meme ecosystem has reshaped his image into a joke in parts of the internet, weakening the seriousness of his legacy by collapsing the context surrounding his death.


The article highlights how this trend blends political mockery with algorithmic amplification, including remixes of an entirely AI-generated tribute song (“We Are Charlie Kirk”) that was initially shared sincerely and later repurposed as ironic soundtrack material. It also notes that some users began claiming AI tools were becoming “poisoned” by repeated Kirk-themed requests, allegedly causing unrelated prompts to generate Kirk-like features—an interpretation disputed by AI researchers as a misunderstanding of how image models work. Ultimately, the article frames “Kirkification” as a case study in how synthetic media and meme culture can function as narrative disruption, not by fabricating believable deepfakes, but by flooding the information space with surreal AI content that reshapes how a public figure is remembered and interpreted.


Source: Politico, C. McHugh. Charlie Kirk Memes Are Taking Over the Internet. [online] Published 17 January 2026.


AI-Driven Misinformation and Bot Networks After the Bondi Attack

According to a TechXplore article, the December 14, 2025, terrorist attack at Sydney’s Bondi Beach was rapidly followed by a surge of generative AI–enabled misinformation across social media. Examples included a manipulated video falsely claiming one attacker was an Indian national, widespread celebration of a supposed hero figure, “Edward Crabtree,” on X, and a deepfake image portraying human rights lawyer Arsen Ostrovsky as a “crisis actor.” The piece argues that this pattern is increasingly common in major crises, as AI enables the production of misinformation at scale and reinforces it through realistic-looking bot profiles that create the illusion of widespread public agreement and amplify false narratives.


To examine how easily such manipulation can occur, the article describes running Capture the Narrative, a student-led social media wargame in which teams built AI bots to influence a fictional election. Across four weeks, 108 teams from 18 universities generated more than 7 million bot-driven posts, accounting for over 60% of all content on the platform, using emotional, increasingly toxic messaging as a key engagement strategy and, in some cases, identifying “undecided voters” for micro-targeting. The simulation showed measurable impact: the bot campaign shifted the election outcome compared to a rerun without interference, demonstrating how small teams using inexpensive, consumer-grade AI can flood an online environment, distort debate, and meaningfully influence political decision-making, reinforcing the article’s conclusion that stronger digital literacy is urgently needed to help users recognize manipulated content.


Source: TechXplore, H. Pearce & A. Vassar & R. Masood, The Conversation. World-first social media wargame reveals how AI bots can swing elections. [online] Published 16 January 2026. Available at: https://techxplore.com/news/2026-01-world-social-media-wargame-reveals.html


[General Reports]


Mysterious News Site Raises Concerns About Election Influence in California

A little-known website called the California Courier, described by the Los Angeles Times, emerged as a heavy advertiser on Facebook, promoting articles that closely resembled political campaign messaging and consistently attacked Democratic leaders in California, including Governor Gavin Newsom. While presenting itself as a neutral news outlet, the site largely published conservative-leaning stories and spent more than $80,000 on Facebook ads since 2021, potentially reaching tens of thousands of users weekly. Notably, the site provided no clear information about its ownership or funding, raising concerns about transparency and intent as California headed toward the 2026 election cycle.


Investigations linked the Courier to a broader ecosystem of so-called “pink slime” news sites—outlets that mimic local journalism but lack basic editorial standards and disclosures. Researchers found apparent connections between the Courier and conservative organizations tied to the Lincoln Club of Orange County and the Lincoln Media Foundation, groups previously associated with similar sites in other states. Some Courier contributors had backgrounds in conservative advocacy, and the outlet appeared to amplify content produced by aligned organizations without attribution.


Critics said the California outlet is part of a growing, nationwide ecosystem of innocuous-looking, cheaply produced news publications that publish and advertise biased articles to surreptitiously influence elections. They worry the practice could mislead voters and corrode trust in nonpartisan news providers.


Source: Los Angeles Times. Could this mysterious news site influence California’s 2026 election? [online] Published 11 January 2026. Available at: https://www.latimes.com/california/story/2026-01-11/will-mysterious-california-news-site-influence-2026-election


False Warship Claim Targets Greenland Crisis with Viral Disinformation

Viral graphics on X fueled a false claim, identified by NewsGuard, that Germany had deployed two heavily armed guided-missile warships to Greenland to defend it from potential U.S. aggression following renewed statements by U.S. President Donald Trump about annexing the Danish territory. The posts asserted that Germany was the first country to offer military assistance and that one ship carried an “attack jet squadron,” using an image of a German naval vessel to lend the claim visual credibility and amplify engagement.


There were no credible reports supporting the claimed deployment, and German authorities confirmed only a small reconnaissance mission: a 15-member team flown to Greenland from January 15 to January 17, 2026, as part of a Danish-led effort focused on maritime surveillance and Arctic security. The fabricated claim also relied on technically implausible details, as Germany does not operate aircraft carriers or ships capable of launching fixed-wing combat aircraft. The German Navy reaffirmed this point in an email response, stating that it has no aircraft carriers.


Source: NewsGuard’s Reality Check, M. Wolf. Debunk: Germany Has Not Deployed Warships to Attack the U.S. and Defend Greenland. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/is-germany-deploying-warships-against


Theory and Practice of Disinformation Ethics and Psychological Defence

A textbook from Sweden’s Psychological Defence Agency (PDA) offers a multidisciplinary overview of information influence and psychological defence, with a strong focus on disinformation as a growing threat to democratic societies. Drawing on research from Sweden and the Nordic region, it explains how disinformation exploits media systems, cognitive biases, polarization, and new technologies such as artificial intelligence. While Nordic countries show relatively high resilience due to strong institutions and social trust, the book highlights increasing risks from geopolitical tensions, AI-driven manipulation, and the vulnerability of certain social groups, especially children and youth.


A central theme of the book is that disinformation is not only false or misleading information but also a morally condemnable form of communication. Assessing disinformation is complex: determining whether a message is false, intentionally harmful, sarcastic, or merely an expression of opinion is often difficult in practice. The book emphasizes that responses to disinformation, such as ignoring, correcting, or removing content, are inherently moral choices that require balancing the need to protect democratic discourse with the obligation to uphold freedom of expression.


The textbook also explores how to counter disinformation strategically and ethically. It argues that countermeasures must be justified, proportionate, and transparent to maintain public trust. Rather than relying solely on censorship or technical solutions, the book highlights open dialogue, media literacy, crisis communication, and carefully designed countermeasures as key tools.


Source: Psychological Defence Agency, J. Falkheimer & J. Pamment, eds. Manual. Psychological defence and information influence – a textbook on theory and practice. [online] Published January 2026. Available at: https://mpf.se/psychological-defence-agency/publications/archive/2026-01-19-psychological-defence-and-information-influence----a-textbook-on-theory-and-practice


[Appendix - Frameworks to Counter Disinformation]


Coordinated Influence Operations on Social Media

According to Google’s TAG Bulletin, in Q3 2025, platforms like YouTube and Google News took action against large-scale, coordinated influence operations originating from multiple countries, including Russia, China, Turkey, Iran, and others. These operations involved thousands of channels and domains spreading content supportive of the governments behind them while criticizing other states, international organizations, or political opponents. Much of this content was deliberately designed to mislead viewers, manipulate public opinion, and polarize audiences.


Russia was the most frequent source, with thousands of YouTube channels terminated for sharing content in multiple languages that was supportive of Russian interests and critical of Ukraine, Moldova, NATO, and Western nations. China’s coordinated inauthentic networks also generated massive amounts of content in Chinese and English focused on China-US relations. Other states, including Turkey, Iran, Azerbaijan, and the Philippines, conducted smaller campaigns to promote their governments’ narratives and discredit critics. These takedowns highlighted the global scale of disinformation and the challenge of countering it. Coordinated campaigns often involve multiple languages, platforms, and accounts, making detection complex.


Source: Google, B. Leonard. Threat Analysis Group TAG Bulletin: Q3 2025. [online] Published 2025. Available at: https://blog.google/threat-analysis-group/tag-bulletin-q3-2025/


Common Framework to Identify and Disrupt Russian FIMI Networks

Russian FIMI operations are highly organised and persistent, yet efforts to counter them remain fragmented due to inconsistent terminology and uncoordinated reporting. To address this, EU DisinfoLab, together with its partners the European External Action Service (EEAS), Viginum, DFRLab, CheckFirst, Cassini, and the Auswärtiges Amt (the German Federal Foreign Office), published the report "Building a Common Operational Picture of FIMI", which proposes a shared framework for identifying and analysing disinformation. The report introduced clear definitions and called for moving beyond isolated incident reporting toward a coordinated understanding of how operations function over time.


At the core of the report is the Information Manipulation Set (IMS) model, developed by France’s Viginum agency. An IMS captures a disinformation actor’s distinctive pattern of behavior, its tools, tactics, platforms, and methods, effectively serving as a “digital fingerprint.” Applying this model to known operations such as Doppelganger and Storm-1516 revealed important differences that are often obscured by broad labels. By focusing on how operations work rather than just their messaging, analysts can more accurately attribute activity to specific networks, including sanctioned Russian entities, and better understand the full disinformation supply chain from content creation to amplification.


The report concluded that the IMS model enables more credible attribution, exposes vulnerabilities in disinformation networks, and strengthens the design and enforcement of sanctions. However, its success depends on sustained cooperation between public authorities, researchers, and private platforms. Despite existing sanctions, Russian-linked networks continue to rely on intermediaries and infrastructure within the EU, highlighting enforcement gaps rather than a lack of evidence.


Source: EU DisinfoLab. Building a common operational picture of FIMI: Using IMS to strengthen technical attribution and disruption. [online] Published January 2026.


Strengthening U.S. Cybersecurity and Countering Disinformation

As published by CSIS, Emily Harding’s testimony before the House Homeland Security Subcommittee highlighted critical gaps in the United States’ cyber defenses and the nation’s vulnerability to foreign influence operations and disinformation. Cyberattacks from actors such as China, Russia, Iran, and North Korea are often part of coordinated campaigns that blur the line between crime and warfare. The testimony emphasized that these attacks should be recognized as hostile acts, not isolated technical incidents, and that failing to respond effectively cedes the initiative to adversaries.


The testimony argued that current U.S. policies treat cyber largely as a technical afterthought, resulting in weak deterrence and inadequate integration of offensive and defensive cyber capabilities. This approach enables disinformation campaigns to accompany cyberattacks, spreading false narratives and shaping public understanding of security events in ways favorable to the attacker. To counter this, the testimony recommended a comprehensive “cyber first” strategy, including a declaratory policy treating cyberattacks on critical infrastructure as acts of war, integrating cyber tools into early-stage policy planning, and increasing coordination with private-sector partners. It also recommended that Congress fund a dedicated Cyber Force, strengthen domestic infrastructure, and establish clear international norms to deter adversaries.


Source: CSIS, E. Harding. Congressional Testimony. U.S. Cyber Capabilities to Deter and Disrupt Malign Foreign Activity Targeting the Homeland. [online] Published 13 January 2026.



Germany and Israel Strengthen Cybersecurity Cooperation

The Bundesministerium des Innern (BMI), Germany’s Federal Ministry of the Interior, announced that Germany and Israel are deepening their security partnership, with a focus on cybersecurity. During a recent visit, German Interior Minister Alexander Dobrindt signed a cybersecurity pact with Israeli Prime Minister Benjamin Netanyahu, building on an already strong relationship. Israel is considered Germany’s most important security partner outside NATO and the EU, particularly in defending against digital threats that can spread false information and manipulate public perception.


A central element of this cooperation is the planned German Cyber Dome, a semi-automated system for detecting, analyzing, and responding to cyberattacks. The Cyber Dome is designed not only to protect critical infrastructure but also to reduce the risk of coordinated campaigns that use cyber intrusions to spread disinformation. In addition to technology cooperation, Germany is deploying Federal Police experts to support civilian security authorities in Israel and the Palestinian territories, reinforcing stability and reducing the operational space for disinformation-driven manipulation.


Source: Bundesministerium des Innern und für Heimat. Zusammenarbeit zur Cybersicherheit mit Israel wird ausgebaut [Cybersecurity cooperation with Israel to be expanded]. [online] Published 12 January 2026.



Ugandan Officials Turn Off Internet on Eve of National Elections

The Uganda Communications Commission (UCC) announced that it had imposed an internet blackout and suspended some mobile phone services on the eve of the general election scheduled for January 15, with authorities beginning to shut down internet and mobile phone services on January 13. According to the agency, the measure was taken to mitigate the spread of disinformation and curb the risk of election fraud, and all non-critical internet-based services were to be suspended for the duration of the election period.


On January 14, internet monitoring firm NetBlocks confirmed a “widespread internet shutdown”. The move drew strong objections. In addition to suspending internet and mobile phone services, authorities halted the sale of new SIM cards and cut off outbound data roaming services, Amnesty International said in a press release. The United Nations human rights office also condemned the action, saying that access to communication and information is crucial to ensure free elections. On January 18, the UCC announced that it had restored public internet access, reiterating that the temporary restriction was a measured action aimed at limiting misinformation and electoral interference during a critical period while ensuring essential services continued uninterrupted.


Sources:




Grok Abuse Exposes Scalable AI-Driven Non-Consensual Image Manipulation

An article by the Center for Information Resilience (CIR) presents a rapid analysis of 1,625 Grok image-generation requests collected on January 7, 2025, identifying large-scale abuse driven by highly repetitive prompts designed to create non-consensual and degrading imagery. The data indicate that 72% of requests targeted women, with 98% of these explicitly sexualized and focused on “nudification,” sexual poses, and exaggerated body parts, while requests targeting men were less consistently sexualized and more often aimed at humiliation, infantilization, feminization, or racial and ethnic caricatures. CIR argues that this pattern reflects structured misuse rather than isolated incidents and that it is amplified by formulaic prompt language, making abusive intent relatively easy to detect and moderate through targeted filtering without broadly restricting legitimate use.


CIR outlines five policy and platform lessons centered on disrupting abuse at scale: blocking repeated sexualized prompt patterns, monitoring high-volume offenders, and ensuring safeguards exist across all access points, including Grok’s website, where generation remains available without meaningful identity or age verification. The article also highlights accountability gaps and enforcement opportunities, noting that location signals were identifiable for some repeat offenders and that subscription-based access can improve traceability, though it risks monetizing harm if not paired with strict safeguards. It warns that workarounds to generate child sexual abuse material and evade deletions can be found quickly, requiring agile enforcement, and argues that regulatory frameworks such as the UK Online Safety Act and EU mechanisms can support accountability, while emphasizing that inconsistent application across platforms versus standalone websites creates exploitable loopholes.


Source: Center for Information Resilience. “Grok’d”: Five emerging lessons on limiting abuse of AI image generation. [online] Published 14 January 2026. Available at: https://www.info-res.org/cir/articles/grokd-five-emerging-lessons-on-limiting-abuse-of-ai-image-generation/


Building a Common Operational Picture of FIMI

EU DisinfoLab’s report argues that European efforts to counter foreign information manipulation and interference (FIMI) are being weakened by fragmented reporting and inconsistent attribution, which can inflate the perceived scale of disinformation operations while obscuring how they actually function. It highlights how focusing only on isolated incidents blurs accountability across a wider “supply chain” of actors—from sponsors and producers to intermediaries and amplifiers—and notes that Russian operations often rely on outsourced or semi-autonomous proxies that do not always fit a purely state-controlled model.


To address these gaps, the report recommends using VIGINUM’s “Information Manipulation Set” (IMS) framework to map threats at tactical, operational, and strategic levels, linking visible incidents to shared tactics, infrastructure, and campaign resources. It presents case studies of five Russian-linked IMS clusters—Doppelganger, Media Brands/RRN, Undercut, Storm-1516, and Overload/Matryoshka—showing how the framework can distinguish between different operational models even when they appear connected or share contractors, thereby improving the targeting of countermeasures.


Finally, the report assesses EU sanctions as an important but often weakly enforced tool against FIMI, arguing that restrictions targeting individuals or entities have limited operational impact when campaigns continue to use EU-based services such as hosting, cloaking, and advertising via proxies. It calls for stronger public-private cooperation, improved platform transparency, consistent IMS tagging for takedowns, and more systematic enforcement mechanisms to disrupt the infrastructure enabling disinformation and raise the costs for perpetrators, rather than merely causing reputational damage.


Source: EU DisinfoLab. Building a common operational picture of FIMI. [online] Published 15 January 2026. Available at: https://www.disinfo.eu/building-a-common-operational-picture-of-fimi


Visual Disinformation and the Need for Visual Literacy

A publication by the Psychological Defense Agency argues that visual disinformation is becoming a central vulnerability in modern information environments, as images and video increasingly dominate social media and are processed faster and remembered more strongly than text. It warns that generative AI has sharply expanded the ability to produce realistic synthetic visuals, making misleading content easier to create and harder to verify, while increasing uncertainty about authenticity and potentially undermining trust in established media. The authors emphasize that malign influence campaigns exploit this environment by using visual media to manipulate attention, emotions, and perceptions, including through deepfakes, recontextualized footage, and entertainment-style formats that lower audience vigilance.


To strengthen resilience, the report frames visual literacy as a core democratic and psychological defence capability, defined not only as verifying images but also interpreting, decoding, and producing visual messages. It highlights that traditional fact-checking approaches have limits when it comes to AI-generated visuals, and that effective resistance requires a broader skill set: understanding how visual “grammar” (composition, cropping, editing, context) shapes meaning and developing “visual agency” so citizens can critically engage with image-based narratives rather than passively absorb them. The report ultimately calls for stronger, more systematic education and interdisciplinary research to build scalable methods for visual literacy that can counter the evolving tactics of visual disinformation.


Source: Psychological Defence Agency, I. Karlsson & T. Karlsson Häikiö & M. Malm & H. Ördén. Report. Visual disinformation and visual literacy. [online] Published January 2026.



"Armed with Facts": A Podcast to Counter Russian Disinformation

Hromadske Radio and the fact-checking organization StopFake have launched a new podcast project, “Armed with Facts: When Truth Protects,” aimed at countering disinformation affecting communities in frontline and temporarily occupied territories. Hosted by Alena Nesterenko and Elena Churanova, the podcast will explain how disinformation works, investigate sensitive issues, and help audiences access accurate information without panic or exposure to fabricated narratives. The hosts frame the initiative as a response to hostile influence efforts, particularly from Russia, and broader “post-truth” messaging that seeks to erode confidence in objective facts.


The project positions verified information as a protective tool against foreign propaganda and coordinated online manipulation, emphasizing that truth grounded in evidence is essential for resilience under information pressure. Nesterenko highlights the podcast’s goal of making complex topics like propaganda, disinformation, and post-truth accessible and engaging, while Churanova describes the collaboration as a joint mission to push back against fake content circulating on social media. The first episode examines how Russia allegedly uses holidays to spread propaganda and false claims.


Source: Hromadske Radio/StopFake. «Озброєні фактами»: новый совместный проект «Громадського Радіо» и StopFake [“Armed with Facts”: a new joint project by Hromadske Radio and StopFake]. [online] Published 16 January 2026. Available at: https://www.stopfake.org/ru/ozbroyeni-faktami-novyj-sovmestnyj-proekt-gromadskogo-radio-i-stopfake/


[CRC Glossary]


The modern information environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.











 
 