CRC Weekly: Cyber-based hostile influence campaigns 29th December 2025 - 04th January 2026
- CRC


[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.
Over the past week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
[Social Media Platforms]
TikTok
[State Actors]
Russia
China
[AI Related Articles]
[General Reports]
[CRC Glossary]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
Heise Online reports that the European Commission has launched an investigation into TikTok over the spread of AI-generated videos promoting Poland’s exit ("Polexit") from the European Union, amid concerns that the platform may have breached its obligations under the Digital Services Act.
Throughout 2025, EUvsDisinfo identified a consistent pattern in Russian FIMI, marked by the deliberate construction of an image of strength despite mounting military, economic, and political constraints.
Biometric Update covers legal developments in South Korea. The country has enacted a law imposing heavy penalties for spreading false or AI-manipulated information, prompting domestic concerns over censorship and international criticism from the United States over its broader impact on media freedom and digital regulation.
Wired covers how, following the US announcement that Venezuelan President Nicolás Maduro and his wife, Cilia Flores, had been captured, disinformation about the operation flooded social media. AI-generated images and videos, repurposed old footage, and doctored posts quickly spread across TikTok, Instagram, and X.
[Social Media Platforms]
TikTok
EU Probes TikTok Over AI-Driven “Polexit” Disinformation Campaign
Heise Online reports that the European Commission has launched an investigation into TikTok over the spread of AI-generated videos promoting Poland’s exit (“Polexit”) from the European Union, amid concerns that the platform may have breached its obligations under the Digital Services Act. The probe was triggered after Poland’s Deputy Minister for Digitalization formally alerted Brussels to a coordinated wave of content that mimicked the style of the UK’s Brexit campaign and targeted young voters. The Commission confirmed it is working closely with Polish authorities to assess whether TikTok adequately identified and mitigated the risks posed by this material. The videos, which have gained significant traction on TikTok, often featured young women in patriotic clothing delivering far-right, anti-EU messages. Polish officials describe the effort as a deliberate disinformation campaign, while investigations suggest that the content exhibits characteristics of Russian influence, as the texts sometimes contain specific Russian syntax. While TikTok has stated that it removed some content for violating its own rules, it has not provided detailed responses to the broader allegations.
The case follows earlier scrutiny of the platform, including a formal DSA procedure opened in late 2024 over alleged failures to curb manipulation during Romania’s presidential election. The investigation comes as the EU braces for a dense election calendar in 2026, raising fears that AI-driven disinformation could distort democratic processes across multiple member states. Under the DSA, very large platforms face fines of up to 6% of their global turnover if they fail to address systemic risks.
Source: Heise Online, S. Krempl. AI disinformation on TikTok: EU Commission examines "Polexit" campaign. [online] Published 3 January 2026. Available at: https://www.heise.de/en/news/AI-disinformation-on-TikTok-EU-Commission-examines-Polexit-campaign-11127844.html
[State Actors]
Russia
RT Falsely Blames Ukraine for Mariupol Theatre Destruction
A report by DisinfoWatch notes that the Russian state outlet RT has claimed Mariupol’s Drama Theatre has reopened nearly three years after what it described as a “Ukrainian Armed Forces’ terrorist attack,” a framing that directly contradicts extensive independent investigations. Multiple credible inquiries by Amnesty International, the OSCE, Human Rights Watch, and the Associated Press concluded that the theatre was destroyed in March 2022 by a Russian airstrike, despite clear indications that it was being used as a civilian shelter. Large signs reading “CHILDREN” were painted outside the building to prevent an attack, yet the strike killed hundreds of civilians, with AP estimates placing the death toll at around 600. The theatre’s reopening under Russian occupation authorities has been widely described as a propaganda exercise aimed at normalizing Moscow’s control of Mariupol and erasing evidence of a likely war crime.
Source: DisinfoWatch, RT claims Mariupol Theatre reopens “after the Ukrainian Armed Forces’ terrorist attack”. [online] Published 2025. Available at: https://disinfowatch.org/disinfo/rt-claims-mariupol-theatre-reopens-after-the-ukrainian-armed-forces-terrorist-attack/
RT Pushes Evidence-Free Narrative Around Alleged Drone Attack on Putin's Residence
DisinfoWatch reported that RT presented an unverified claim that Ukraine carried out a large-scale drone attack on a residence linked to Vladimir Putin as fact, despite offering no evidence that the incident occurred. The coverage asserted that Ukraine launched the "terror attack" using 91 long-range drones and then amplified supposed foreign reactions and Kremlin warnings of a tougher, non-diplomatic response.
Independent reporting directly contradicts these claims. Multiple credible international media outlets note that Russia has provided no visual, local, or official corroboration consistent with past verified Ukrainian strikes, and the Ukrainian government has categorically denied the attack. Analysis by the Institute for the Study of War highlighted inconsistencies between Kremlin statements and Russian Defense Ministry figures, as well as the total absence of open-source evidence such as footage, damage reports, or local authority accounts.
Source: DisinfoWatch, Lavrov and Russian state propagandist allege Ukrainian attack on Putin’s residence without evidence. [online] Published 29 December 2025. Available at: https://disinfowatch.org/disinfo/lavrov-and-russian-state-propagandist-allege-ukrainian-attack-on-putins-residence-without-evidence/
How Russia Sought to Win the Narrative in 2025
Throughout 2025, EUvsDisinfo identified a consistent pattern in Russian FIMI, marked by the deliberate construction of an image of strength despite mounting military, economic, and political constraints. Kremlin-aligned outlets repeatedly exaggerated battlefield developments in Ukraine, portraying limited or ambiguous advances as decisive victories and promoting a narrative of Russian invincibility. This messaging drew heavily on historical revisionism and wartime glorification, amplified through “mapaganda” and symbolic moments such as Victory Day, to mask Russia’s failure to achieve key strategic objectives, including air superiority or sustained territorial control.
In reality, Russian forces have suffered significant losses, failed to hold major regional capitals in the long term, and relied on external support, even as propaganda continued to present the war as steadily progressing in Moscow’s favor. Beyond Ukraine, 2025 also highlighted the growing scale and sophistication of Russian FIMI operations. EUvsDisinfo documented coordinated interference efforts in Moldova, Ukraine, and Georgia, as well as influence activities extending to Africa. The year saw the maturation of a disinformation-for-hire industry, increasingly professionalized and amplified by AI and large language model grooming.
In response, the EU advanced new analytical tools, such as the “iceberg” and “galaxy” frameworks, to expose both visible narratives and hidden infrastructures behind disinformation, while launching the Democracy Shield to strengthen resilience against manipulation. Crucially, the Kremlin’s inflated claims of success served not only to bolster domestic morale but also to advance its strategic objectives ahead of potential negotiations.
Source: EUvsDisinfo, 2025 in review: winning the narrative. [online] Published 29 December 2025. Available at: https://euvsdisinfo.eu/2025-in-review-winning-the-narrative/
China
Taiwan Warns of Coordinated Chinese Disinformation During Military Exercises
In a Focus Taiwan article, Taiwan’s Ministry of National Defense warned of a coordinated wave of Chinese disinformation accompanying Beijing’s large-scale military exercises around the island, aimed at undermining public morale and confidence in Taiwan’s institutions and international partners. Officials identified 46 false or misleading narratives, many of which targeted government policies, the armed forces, and trust in the United States as part of broader psychological pressure during the drills.
The disinformation included fabricated claims by Chinese state media that ports had been blockaded, that PLA forces had advanced close to Taiwan’s shores, as well as misleading video content suggesting that Chinese aircraft or drones had flown near Taipei 101. Taiwanese authorities rejected these claims and stressed that such information operations have become a routine component of PLA exercises, prompting ongoing monitoring, public corrections, and calls from political leaders for societal unity to counter disinformation.
Source: Focus Taiwan, S. Lin. Taiwan's military highlights Chinese disinformation during large-scale drill. [online] Published 30 December 2025. Available at: https://focustaiwan.tw/politics/202512300030
[AI Related Articles]
The US Invasion of Venezuela Exposed Limitations in AI Chatbot Reporting
A Wired report highlighted how leading AI chatbots diverged sharply when asked about an alleged U.S. invasion of Venezuela and the capture of President Nicolás Maduro—an event publicly claimed by U.S. officials but denied by some AI systems. When tested, Google’s Gemini and Anthropic’s Claude provided timely summaries of the reported events, citing multiple news sources and acknowledging competing narratives. By contrast, ChatGPT and Perplexity flatly rejected the premise, insisting that no such invasion or capture had occurred and attributing the claims to misinformation or sensational headlines.
The discrepancy underscores a core limitation of large language models: their reliance on training cutoffs and, in some cases, the absence of real-time search. ChatGPT’s response reflected outdated knowledge and expressed high confidence in an incorrect conclusion, while Perplexity later acknowledged routing the query to a lower-tier model after flagging it as “likely fraud.” Experts note that models with live web access are better equipped to handle fast-moving events; however, accuracy still depends on data sourcing and system design.
Source: Wired, B. Barrett. Business. The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees. [online] Published 3 January 2026. Available at: https://www.wired.com/story/us-invaded-venezuela-and-captured-nicolas-maduro-chatgpt-disagrees/
[General Reports]
South Korea’s Anti-Disinformation Law Sparks Free Speech and Diplomatic Concerns
As published by Biometric Update, South Korea has passed a new law aimed at curbing false and fabricated information, including deepfakes and synthetic media, reflecting growing concern over the impact of disinformation on public order and democratic accountability. The legislation allows courts to impose punitive damages of up to five times proven losses on traditional media outlets and large online channels, including YouTube creators, found to have disseminated illegal or false content that causes measurable harm. It also empowers the country’s media regulator to fine repeat offenders up to 1 billion won if courts confirm that they have repeatedly distributed manipulated information.
The law has triggered domestic and international controversy, with journalists’ unions and civil society groups warning that vague definitions of “false” or “fabricated” information could encourage self-censorship and enable abuse. Critics have urged the adoption of clearer standards and have called on the president to veto the bill. The debate is shaped by South Korea’s recent experience with disinformation, including the circulation of misleading YouTube content that preceded the 2024 declaration of martial law by then-President Yoon Suk Yeol.
The legislation has also drawn criticism from the United States, which views such digital regulations as a threat to free expression and to U.S.-based technology companies. U.S. officials have expressed concern that the revised network law extends beyond addressing deepfakes and could jeopardize technological cooperation, echoing similar disputes over digital regulation in Europe and elsewhere. Despite these objections, the bill passed the National Assembly by a wide margin, highlighting the growing global tension between countering disinformation and preserving media freedom in an era of AI-driven content manipulation.
Source: Biometric Update, J. McConvey. South Korea law targeting deepfakes, disinformation gets frosty reception from US. [online] Published 2 January 2026. Available at: https://www.biometricupdate.com/202601/south-korea-law-targeting-deepfakes-disinformation-gets-frosty-reception-from-us
[CRC Glossary]
The modern information environment is projected to grow only more complex. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary intended to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.