CRC Weekly: Cyber-based hostile influence campaigns 15th - 21st December 2025
- CRC

- Dec 25, 2025
Updated: Jan 12

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[State Actors]
Russia
The War in Ukraine
Iran
[AI Related Articles]
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
According to a DFRLab report, coordinated networks used platform manipulation, conspiratorial narratives, and AI-generated content to amplify anti-Ukrainian disinformation in Poland, exploiting political decisions and security incidents to undermine public support for Ukraine.
Graphika’s analysis highlighted how Iranian state-aligned actors, including official media, inauthentic social media networks, and pro-Iran hacktivist groups, worked to shape perceptions of the 2025 Israel-Iran conflict.
As published by ABC News, in the aftermath of the Bondi terrorist attack, a large volume of disinformation spread online, including deepfake images and fabricated stories about victims and heroes.
As stated in The Record's report, researchers have uncovered a North Korean cyber operation that used generative AI to spread disinformation and conduct cyberattacks.
A report by EU Disinfo Lab summarized a year-long monitoring effort of four European elections by the “FIMI Defenders for Election Integrity” project, focusing on Foreign Information Manipulation and Interference.
The U.S. Sentencing Commission published a list of proposed amendments to federal sentencing guidelines in response to recent legislation, including the Take It Down Act, which addresses the creation and distribution of nonconsensual deepfake media.
FIMI-ISAC’s new report assessed that the 2025 Moldovan parliamentary elections were heavily targeted by foreign information manipulation and interference (FIMI), primarily driven by Russian-aligned actors.
[State Actors]
Russia
Denmark Accuses Russia of Cyber and Influence Attacks Targeting Elections
The Record reports that Denmark has summoned Russia’s ambassador after accusing Moscow of conducting cyber-enabled influence operations against critical infrastructure and democratic processes. The Danish Defence Intelligence Service attributed a 2024 cyberattack on a water utility and DDoS attacks on political party websites ahead of the November 2025 local elections to the Russia-linked groups Z-Pentest and NoName057(16). The operations disrupted the water supply for households near Køge and temporarily took party websites offline on the eve of the election, timing chosen to maximize visibility and public impact.
Danish intelligence described the attacks as part of Russia’s broader hybrid warfare strategy aimed at creating insecurity and undermining Western support for Ukraine. Russia rejected the accusations as unfounded and countered with unsubstantiated claims about Ukrainian-linked hackers operating from Denmark. The confrontation aligns with similar disputes elsewhere in Europe, reinforcing concerns about coordinated Russian cyber and information operations targeting elections and public trust.
Source: The Record (Recorded Future News), D. Antoniuk, Denmark summons Russian ambassador over alleged cyberattacks on water utility, elections. [online] Published 19 December 2025. Available at: https://therecord.media/denmark-summons-russian-ambassador-cyberattack-elections
Kremlin Controlled Platform Laundered False Claims to Undermine EU-Canadian Freeze on Russian Assets
A report by DisinfoWatch detailed how “Strategic Culture”, a U.S.-sanctioned propaganda outlet run by Russia’s foreign intelligence service (SVR), is amplifying false and misleading claims to undermine the European Union’s decision to indefinitely freeze more than €200 billion in Russian sovereign assets.
In a related example, a Strategic Culture article blended selective facts with manipulative framing, falsely claiming that the EU has permanently seized Russian assets by bypassing democratic processes and engaging in theft. It employed inflammatory rhetoric, including Nazi analogies aimed at EU leaders, conspiracy tropes involving figures such as George Soros and the World Economic Forum, and exaggerated claims about societal radicalization in Europe. The piece also advanced demonstrably false assertions, such as alleging that Romania’s elections were annulled because the EU disapproved of the results.
In reality, the EU has moved to indefinitely immobilize, not expropriate, Russian assets as a lawful response to Russia’s invasion of Ukraine, using emergency legal provisions to prevent repeated vetoes. The report emphasizes that freezing assets under international law is a proportionate measure and not equivalent to seizure, and that Romania’s election annulment was decided independently by its Constitutional Court due to domestic irregularities.
Sources:
DisinfoWatch, Kremlin SVR Controlled Platform Launders False Claims to Undermine EU-Canadian Freeze on Russian Assets. [online] Published 15 December 2025.
Strategic Culture Foundation, Theft Is Just and War Is Peace, According to the EU. [online] Published 15 December 2025.
Russia’s Disinformation Campaigns and Their Impact
An EUvsDisinfo article portrays Russia as the world’s largest spreader of disinformation, using foreign information manipulation and interference (FIMI) to undermine trust in liberal democracies. Since 2022, these operations have focused on justifying Russia’s invasion of Ukraine, challenging Western values, and weakening international support for Kyiv. A key tactic of Russian disinformation is the use of sensational conspiracy theories aimed at discrediting Ukraine and the West. These include false claims about stolen jewellery from the Louvre appearing in Ukraine, fabricated stories of luxury purchases by Ukraine’s First Lady, and conspiracies suggesting that Western powers secretly choose Ukraine’s president.
Although these stories are often absurd, they serve a strategic purpose. By flooding the information space with extreme and shocking claims, disinformation campaigns exhaust audiences and make it harder to distinguish fact from fiction. This distraction allows more subtle, yet false, narratives to spread, influencing public opinion.
Source: EUvsDisinfo, 5 Insidious Disinfo Narratives Spread by the Kremlin. [online] Published 16 December 2025. Available at: https://euvsdisinfo.eu/5-insidious-disinfo-narratives-spread-by-the-kremlin/
Kremlin Disinformation Through Historical Manipulation
EUvsDisinfo outlines Russia’s intensified use of historical revisionism since the start of its full-scale invasion of Ukraine, framing it as a tool to justify aggression and expansionism. The Kremlin presents itself as the sole guardian of “true history,” while accusing neighboring countries of falsification. Through revised textbooks, pseudo-academic publications, and official rhetoric, Moscow promotes myths of Russian superiority and denies the sovereignty of states such as Ukraine and Lithuania, using distorted history as an ideological weapon.
A clear example is the publication of manipulated “histories” of Ukraine and Lithuania by Russia’s MGIMO University, endorsed by senior officials such as Foreign Minister Sergey Lavrov. These works deny Soviet crimes, question national independence, and label current governments as extremist or Western-controlled. Similar disinformation tactics include so-called “mapaganda,” in which altered maps are used to normalize Russia’s illegal territorial claims and mislead both domestic and international audiences.
The Kremlin also uses modern technology to influence and manipulate sensitive historical debates abroad. In 2025, an AI-generated deepfake operation targeted Polish-Ukrainian relations by fabricating documents and videos linked to World War II-era atrocities, aiming to reignite old conflicts and weaken regional unity.
Source: EUvsDisinfo, Rewriting Borders of Truth: How Russian FIMI Falsifies Historical Memory. [online] Published 11 December 2025. Available at: https://euvsdisinfo.eu/rewriting-borders-of-truth-how-russian-fimi-falsifies-historical-memory/
Germany Accuses Russia of Cyberattacks and Disinformation Campaigns
As published by The New York Times, Germany has accused Russia of conducting a coordinated campaign of cyberattacks, espionage, and disinformation aimed at undermining the country’s infrastructure and social cohesion. German authorities say these hybrid actions included a cyberattack on air traffic control in August 2024 and organized efforts to spread false information during Germany’s early 2025 general election. Based on intelligence assessments, Berlin stated it had clearly identified Russian responsibility and formally summoned the Russian ambassador in protest, while Moscow denied the allegations.
Disinformation is a central element of the accusations. German officials said Russian-linked actors attempted to damage public trust by spreading fake stories about prominent politicians who were critical of Russia’s war in Ukraine. These false claims included fabricated allegations about mental health issues, personal misconduct, and private relationships.
An article by SRF News added that German authorities summoned the Russian ambassador and stated that intelligence findings clearly link a cyberattack on German air traffic control in August 2024 to the Russian hacking group Fancy Bear, which they say operates under Russia’s military intelligence agency, the GRU. According to Berlin, the so-called “Storm 1516” campaign, active since 2024 and backed by GRU-linked organizations, sought to influence Western elections by spreading false narratives. During Germany’s 2025 election campaign, fake content targeted leading politicians such as Robert Habeck and Friedrich Merz, including fabricated videos claiming ballot manipulation just days before the vote.
Sources:
The New York Times, Germany Russia Cyberattacks Sabotage Hybrid War. [online] Published 12 December 2025. Available at: https://www.nytimes.com/2025/12/12/world/europe/germany-russia-cyberattacks-sabotage-hybrid-war.html
SRF, Cyberangriff: Berlin bestellt Russlands Botschafter ein. [online] Published 12 December 2025. Available at: https://www.srf.ch/news/international/hybride-bedrohung-cyberangriff-berlin-bestellt-russlands-botschafter-ein
Foreign Disinformation in Romania’s 2024–2025 Presidential Crisis
According to a report by IIEA, Romania’s 2024 presidential election was severely disrupted by large-scale foreign interference and AI-driven disinformation, leading to the unprecedented annulment of the first-round vote. A fringe, pro-Moscow candidate surged unexpectedly after an almost entirely online campaign amplified by bot networks, AI-generated content, and algorithm manipulation, creating a distorted information environment. Investigations uncovered massive cyberattacks on election systems and evidence linking the disinformation campaign to Russian hybrid operations, prompting the Constitutional Court to void the results to protect electoral integrity.
The annulment triggered political unrest and protests, revealing both Romania’s democratic resilience and its deep vulnerabilities. While institutions acted decisively to counter foreign-sponsored manipulation, the crisis exposed widespread public distrust, polarization, and the growing appeal of nationalist and Eurosceptic narratives, trends often reinforced by external disinformation. Fake narratives and digital propaganda played a central role in mobilizing support and undermining confidence in democratic processes. The rerun election in May 2025, won by pro-European reformer Nicușor Dan, restored some legitimacy but underscored the long-term threat posed by disinformation.
Source: The Institute of International and European Affairs (IIEA), Romania’s 2024–2025 Presidential Election Crisis and Its Aftermath. [online] Published 2025. Available at: https://www.iiea.com/blog/romanias-20242025-presidential-election-crisis-and-its-aftermath
The War in Ukraine
Coordinated Disinformation Campaign Fuels Anti-Ukrainian Sentiment in Poland
As published in a DFRLab article, recent political and security developments in Poland triggered a surge of anti-Ukrainian disinformation, exploiting President Karol Nawrocki’s veto on refugee benefits and the incursion of Russian drones into Polish airspace. Online narratives framed Ukrainians as economic and security threats, portraying refugees as a burden and accusing Ukraine of provoking conflict to drag Poland and NATO into war. These messages sought to fracture support for Ukraine and strain Polish-Ukrainian relations by amplifying grievances through misleading, emotionally charged claims.
Analysis by the DFRLab identified coordinated networks manipulating Facebook, YouTube, TikTok, and comment sections to entrench hostile narratives. Tactics included synchronized posting across misleadingly named Facebook pages, near-simultaneous amplification of clickbait articles, and monetizing traffic through advertising. Websites such as udostepnij.pl and affiliated pages amplified dehumanizing stories portraying Ukrainians as “freeloaders” or “parasites,” often distorting incidents and recycling content across networks to inflate reach. These efforts benefited from links among right-wing sites and suspicious spikes in engagement indicative of inauthentic activity.
Additional campaigns leveraged conspiratorial framing and AI-generated content to broaden reach. A fringe spiritual group (Jasna Strona Mocy) injected anti-Ukrainian videos into apolitical hobby and lifestyle groups. At the same time, pages like Sztuczna Polityka circulated AI-generated videos depicting Ukrainians and President Zelenskyy as threats, showing indirect cross-platform amplification. After the drone incursion, small but coordinated clusters promoted false-flag and “not our war” narratives across Facebook and TikTok, including English-language accounts repeating identical captions. Together, these activities show how coordinated networks use inauthentic amplification, platform manipulation, and AI tools to distort discourse, undermine cohesion, and weaken Poland’s support for Ukraine.
Source: DFRLab, G. Gigitashvili & S. Gelava, How social media manipulation fuels anti-Ukraine sentiment in Poland. [online] Published 21 December 2025. Available at: https://dfrlab.org/2025/12/21/how-social-media-manipulation-fuels-anti-ukraine-sentiment-in-poland/
Debunking Kremlin Battlefield Disinformation
According to an article by the Atlantic Council, the Kremlin has promoted disinformation claiming that Russia’s invasion of Ukraine is proceeding exactly as planned, with constant victories and inevitable success. President Vladimir Putin has repeatedly exaggerated Russian battlefield achievements, particularly to project strength, demoralize Ukraine, and persuade international audiences—especially in the United States—that Russian victory is unavoidable and that Ukraine should accept a Kremlin-friendly peace.
A clear example of this disinformation campaign is Putin’s repeated false claims that Russian forces had captured the strategically important city of Kupiansk. Russian officials announced the city’s fall multiple times, using it to threaten Ukraine and its allies. In reality, Kupiansk remained under Ukrainian control amid successful counteroffensives. Ukrainian President Volodymyr Zelenskyy directly challenged these falsehoods by visiting the city himself and releasing a selfie video from the front line, visually disproving Kremlin claims and exposing them as deliberate lies.
Source: Atlantic Council, P. Dickinson, ‘Putin is lying’: Zelenskyy visits front to expose false claims of Russian gains. [online] Published 16 December 2025. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/putin-is-lying-zelenskyy-visits-front-to-expose-false-claims-of-russian-gains/
Iran
Disinformation Campaigns During the 2025 Israel–Iran War
In June 2025, escalating tensions between Israel and Iran led to direct conventional warfare, accompanied by a coordinated information campaign from Iranian state, state-aligned, and proxy actors. Graphika’s analysis highlighted how these actors—including official media, inauthentic social media networks, and pro-Iran hacktivist groups—worked to shape perceptions of the conflict. Despite differences in their formal ties to the Iranian state, they promoted largely unified narratives aimed at influencing both domestic and international audiences.
A key finding was a delayed information response, suggesting that pro-Iran actors were initially unprepared for a large-scale conventional conflict. Once activated, however, they rapidly flooded the information environment with a blend of breaking news alerts, aggressive rhetoric, and unverified or misleading claims. This surge of content appeared designed less to inform than to confuse, overwhelm, and obscure reliable reporting during and after key moments of the war. The report outlined a disinformation playbook focused on muddying the information space rather than establishing facts.
Source: Graphika, Everything Everywhere All at Once. [online] Published December 2025.
Available at: https://public-assets.graphika.com/reports/Graphika_Report_Everything_Everywhere_All_at_Once_(part_1).pdf
[AI Related Articles]
Disinformation Claims After the Sydney Hanukkah Attack
As published by ABC News, in the aftermath of the Bondi terrorist attack, a large volume of disinformation spread online, including deepfake images and fabricated stories about victims and heroes. ABC News Verify found that false claims shared on social media gained millions of views, showing how quickly tragedy can be exploited. One major case involved Arsen Ostrovsky, who was injured in the attack and shared a real photo of himself shortly afterward. This image was later manipulated into an AI-generated deepfake and published by the Russian disinformation site Pravda, falsely accusing him of being part of “Mossad propaganda.”
Technical analysis confirmed the image was fake, with visual errors including unreadable text, distorted backgrounds, and inconsistencies with verified photos and Mr. Ostrovsky's live video appearances. The deepfake was created using Google’s AI tools and falsely presented as authentic through misleading “AI detector” screenshots. This case highlights how deepfakes are used to spread antisemitic narratives and undermine trust in real victims during moments of public shock. Disinformation also targeted acts of heroism and public understanding of the attack, while other conspiracy theories misused Google Trends data to falsely suggest foreign involvement.
In addition, an article by CyberNews revealed that, around the attack, as users turned to X to ask questions about circulating videos and individuals involved, Grok provided inaccurate and misleading answers, including misidentifying real footage of a man tackling a gunman and incorrectly naming the person concerned. These errors were later corrected by X’s Community Notes, highlighting the gap between Grok’s claims and verified facts.
Grok’s misinformation went further by falsely identifying the hero who disarmed one of the attackers, Ahmed Al-Ahmed, as multiple different people, including an unrelated individual and even an Israeli hostage held by Hamas. The chatbot also confused the Bondi Beach attack with other unrelated violent incidents and, in some cases, inserted references to the attack in responses where they were not relevant.
Sources:
ABC News Verify, M. Taouk, M. Workman & J. Hair, Racist and antisemitic false information spreads online following Bondi Beach terrorism attack. [online] Published 16 December 2025. Available at: https://www.abc.net.au/news/2025-12-17/abc-news-verify-misinformation-bondi-terrorist-attack/
Cybernews, N. Ancell, Elon Musk’s Grok spews misinformation about Bondi Beach terror attack. [online] Published 15 December 2025. Available at: https://cybernews.com/ai-news/grok-misinformation-bondi-beach/
The Erosion of Trust in Online and AI-Created Information
As stated in a report by ASPI, AI chatbots, such as Grok on the social media platform X, are increasingly central to online political discourse but are not neutral information tools. Users often weaponize AI responses in debates, and the resulting cycle of performative outrage amplifies polarization and contributes to a broader environment in which AI-generated content becomes a vector for disinformation rather than a source of clarity. Large-scale analysis of hundreds of thousands of public AI interactions reveals consistent patterns of political argument and manipulation amplified for social performance.
The reliability of AI-generated content is also a structural concern. Studies by the European Broadcasting Union and the BBC show that nearly half of AI news answers contain factual errors, with broader issues, including incoherence or outdated data, in over 80 percent of cases. Public surveys reveal widespread skepticism: majorities fear AI will exacerbate misinformation and influence elections.
Source: The Australian Strategic Policy Institute (ASPI), J. Coyne, Protecting Truth in the Era of AI Mediation. [online] Published 12 December 2025. Available at: https://www.aspi.org.au/strategist-posts/protecting-truth-in-the-era-of-ai-mediation/
AI Disinformation’s Threat to the Rule of Law
The ABA AI Task Force’s annual report highlighted how artificial intelligence is reshaping the legal profession while also posing serious risks, particularly in the spread of disinformation. AI systems can rapidly generate and amplify false content, making misinformation easier to produce, more challenging to detect, and more influential. U.S. Supreme Court Chief Justice John Roberts has warned that disinformation, alongside violence and intimidation, directly threatens judicial independence and the rule of law.
A significant concern is the rise of AI-generated deepfakes, which can convincingly depict people saying or doing things that never happened. These tools are increasingly used for propaganda, to manipulate public opinion, and to undermine trust in institutions such as courts, media, and elections. In the legal system, deepfakes complicate the handling of evidence, as judges must now assess whether material is authentic or artificially generated, while also facing claims that objective evidence may itself be fake.
The report stressed the need for strong risk management, transparency, and accountability in AI development and use. Beyond disinformation, AI poses risks related to bias, privacy, cybersecurity, and fraud, but its role in spreading false narratives is among the most dangerous.
Source: American Bar Association (ABA) Task Force on Law and Artificial Intelligence, Addressing the Legal Challenges of AI: Year 2 Report on the Impact of AI on the Practice of Law. [online] Published December 2025.
AI Chatbot Personalization Fuels Climate Disinformation
An investigation by Global Witness looking into AI chatbots shows that personalized responses can significantly increase the spread of climate disinformation, especially for users with conspiratorial worldviews. When presented with different user personas, chatbots reacted differently: while Meta AI largely stuck to consistent information, Grok and, to a lesser extent, ChatGPT adjusted their responses to match perceived user beliefs. In Grok’s case, this meant shifting from science-based explanations to endorsing climate conspiracy tropes, questioning whether a climate crisis exists, and framing climate action as government overreach or elite manipulation.
The study found that some chatbots actively amplified climate denial by recommending known disinformers as “truth-tellers,” repeating debunked claims about net-zero policies, and inventing or exaggerating evidence to undermine climate action. Grok in particular promoted misleading narratives about energy crises, food shortages, and international climate conferences, while also casting doubt on efforts to combat climate misinformation. Most concerning, Grok encouraged users to increase outrage and extremity in social media posts to boost engagement, offering to make content more provocative, inflammatory, or even “violent.” This behavior highlights the risk of “AI sycophancy,” where systems prioritize user agreement and engagement over accuracy.
Source: Global Witness, AI chatbots share climate disinformation and recommend climate denialists to susceptible personas. [online] Published 18 December 2025. Available at: https://globalwitness.org/en/campaigns/digital-threats/ai-chatbots-share-climate-disinformation-to-susceptible-users/
AI-Enabled Disinformation in North Korean Cyber Operations
As stated in The Record’s report, researchers have uncovered a North Korean cyber operation that used generative AI to spread disinformation and conduct cyberattacks. Hackers linked to the Kimsuky group exploited ChatGPT to create deepfake South Korean military and government ID cards, which were then used in phishing emails targeting defense-related institutions. The forged IDs were designed to appear authentic, enabling attackers to deceive recipients and deliver malware to steal data and gain remote access to systems.
Although AI tools typically restrict the creation of official documents, investigators found that the hackers likely bypassed safeguards by requesting the images as “sample” or “mock-up” designs. Metadata analysis confirmed the photos were AI-generated, showing how generative AI can be misused to produce convincing false materials with minimal technical expertise.
Source: The Record, D. Antoniuk, North Korean operation uses ChatGPT to forge military IDs as part of cyberattack. [online] Published 17 September 2025. Available at: https://therecord.media/north-korea-kimsuky-hackers-phishing-fake-military-ids-chatgpt
[General Reports]
Italy’s Disinformation Landscape
As stated in a report by EU DisinfoLab, Italy’s disinformation environment is shaped by deep political polarization and a fragmented media ecosystem, where facts are frequently mixed with exaggeration and ideological framing. Several emblematic cases illustrate how disinformation operates in practice: the Global Sumud Flotilla was targeted by false claims alleging Hamas funding, staged attacks, and violations of Israeli waters, amplified by mainstream media, politicians, fake accounts, and AI-generated images. Similarly, Italy’s 2025 nationality referendum was distorted by misleading statements about immigration, integration, and legal standards, alongside coordinated hate speech and fake accounts aimed at “poisoning” public debate. At the same time, Italy has been a repeated target of Russian-linked networks such as Pravda, which used cloned news sites, AI-generated content, and coordinated amplification to spread pro-Kremlin narratives and undermine support for Ukraine and EU cohesion.
Disinformation has also taken the form of direct foreign interference, exemplified by the Russian prank call to Prime Minister Giorgia Meloni, which functioned as a strategic communication attack rather than harmless satire. Broader narratives exploit domestic politics, migration fears, gender and minority rights, climate change, and health, often amplifying xenophobia, misogyny, and distrust in institutions.
Source: EU DisinfoLab, M. G. Sessa, Disinformation landscape in Italy 2025. [online] Published 15 December 2025. Available at: https://www.disinfo.eu/wp-content/uploads/2025/12/20251215_Disinfo-landscape-in-Italy-V2.pdf
Disinformation and Nationality Claims After the Bondi Beach Attack
A report by NewsGuard revealed that after the deadly attack on the Jewish community in Sydney, disinformation spread rapidly online as pro-India and pro-Pakistan social media users exploited early confusion about the attackers’ nationality. Although Australian authorities later confirmed that one suspect was an Indian citizen and the other Australian-born, pro-India accounts falsely claimed both attackers were Pakistani.
False claims were reinforced by faulty journalism and fabricated evidence. Several Indian news outlets initially reported the attacker as being of “Pakistani origin” and failed to issue corrections, while viral posts cited a fake statement supposedly from Pakistan’s foreign ministry claiming the suspects had recently visited Pakistan. Pakistani authorities confirmed the document was bogus, and no credible evidence supports claims of travel to Pakistan. Verified reporting instead links the attackers’ motivations to ISIS, with reports of travel to regions of the Philippines connected to extremist groups. At the same time, pro-Pakistan accounts spread counter-disinformation, falsely alleging that the attacker had ties to India’s intelligence agency; no credible sources support this claim either.
Source: NewsGuard Reality Check, I. Chomnalez, Pro-India Accounts Falsely Claim Australia’s Hanukkah Massacre Alleged Shooters Were Pakistani Nationals. [online] Published 18 December 2025. Available at: https://www.newsguardrealitycheck.com/p/pro-india-sources-misrepresent-sydney
The November Reality Gap Index in the U.S.
NewsGuard’s November Reality Gap Index showed that nearly half of Americans are susceptible to believing major false claims circulating online, rising slightly to 47 percent from October’s 46 percent. The most widely believed disinformation included an AI-generated audio clip falsely depicting President Donald Trump yelling at his cabinet to block the release of Jeffrey Epstein documents, misleading claims about New York City Mayor-elect Zohran Mamdani’s minimum-wage promises, and a conspiracy regarding Charlie Kirk’s alleged assassin.
The AI-generated Trump audio clip was particularly impactful, garnering 8.6 million online views and convincing 66 percent of respondents to either believe it or remain unsure of its authenticity. Similarly, over half of the respondents were uncertain about claims regarding Mamdani’s statements, indicating widespread uncertainty about politically charged narratives. The Index demonstrated that while some progress has been made (11 percent of respondents correctly identified all three claims as false, up from 6 percent in October), a large portion of Americans (37 percent) could not identify even a single false claim.
Source: NewsGuard Reality Check, S. Tanner, NewsGuard’s “Reality Gap Index” Finds that Nearly Half of Americans Believe Recent Widespread False Claims. [online] Published 17 December 2025. Available at: https://www.newsguardrealitycheck.com/p/nearly-half-of-americans-believe-523
Disinformation Fueled by Misinterpretation of Google Trends
Research by The Conversation revealed that following the Bondi Beach shooting, social media users spread conspiracy theories claiming that pre-attack searches for the shooter’s name, “Naveed Akram,” from locations like Tel Aviv indicated he was an Israeli agent. Similar claims have circulated after other incidents, such as the killing of Charlie Kirk. Experts clarify that these spikes are the result of “statistical noise” in Google Trends, especially for rare search terms or small geographic areas, and do not reflect actual coordinated activity or conspiracies.
Google Trends samples only a portion of total searches and normalizes results on a 0–100 scale, so low-volume searches can appear as extreme spikes. The platform also updates results in real time, causing apparent changes from one view to the next. Many misleading posts exploit these quirks, treating a spike as evidence of coordinated action or hidden connections when it is nothing of the sort. Misinterpretation of the data has repeatedly fueled false narratives about attackers’ identities and motives. Additionally, the search term “Naveed Akram” could refer to multiple people, further complicating interpretation.
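The normalization effect described above can be sketched in a few lines. The toy example below is not Google's actual pipeline; it simply shows how min-max scaling of sparse, sampled counts turns a single stray query into a maximal-looking "spike":

```python
# Illustrative sketch (not Google's real methodology): min-max scaling
# of sampled search counts to a 0-100 range, as Google Trends does.

def normalize(counts):
    """Scale a series of raw counts so the peak value becomes 100."""
    peak = max(counts)
    if peak == 0:
        return [0] * len(counts)
    return [round(100 * c / peak) for c in counts]

# Hourly sampled counts for a rare term in a small region:
# almost always zero, with one sampled query in a single hour.
raw = [0, 0, 0, 1, 0, 0]

print(normalize(raw))  # the lone query is rendered as a spike of 100
```

Because the scale is relative to the series' own peak, one sampled search in a tiny region looks identical on the chart to millions of searches for a popular term, which is exactly the misreading the conspiracy posts rely on.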
Source: The Conversation, J. Raubenheimer, How misreading Google Trends is fuelling Bondi attack conspiracy theories. [online] Published 18 December 2025. Available at: https://theconversation.com/how-misreading-google-trends-is-fuelling-bondi-attack-conspiracy-theories-272251
Foreign Interference in Moldova’s 2025 Elections
FIMI-ISAC’s new report assessed that the 2025 Moldovan parliamentary elections were heavily targeted by foreign information manipulation and interference (FIMI), primarily driven by Russian-aligned actors. Coordinated campaigns exploited Moldova’s pro-European orientation, economic hardships, and societal divisions to spread anti-government, anti-EU, and anti-NATO narratives. High-impact operations, including Storm-1516, Operation Overload, REST Media, and InfoLeader, used AI-generated content, bot networks, pseudo-media outlets, and paid influencers to amplify disinformation. These efforts portrayed the ruling Party of Action and Solidarity (PAS) as illegitimate and Western-controlled, while legitimizing pro-Russian blocs as defenders of neutrality and Moldovan identity.
Domestic actors, including pro-Russian political parties and certain religious institutions, acted as amplifiers of Kremlin-aligned narratives, often evading sanctions or using opaque financing structures. Digital platforms such as Telegram and TikTok were exploited to flood comments, inauthentically engage diaspora communities, and disseminate false election reports, exposing Moldova’s regulatory gaps and platform accountability challenges. Despite these threats, Moldova’s coordinated monitoring efforts and preventive measures prevented any breach of core electoral infrastructure, though the information space remained highly manipulated.
The elections highlighted the importance of institutionalized, strategic responses to disinformation. The creation of mechanisms such as the Centre for Strategic Communication and Combating Disinformation (CSCCD) and cross-agency coordination have improved detection, analysis, and counter-messaging. For sustained resilience, Moldova needs stronger regulatory oversight, enhanced cooperation with global platforms, professionalized crisis communication teams, and continued investment in media literacy and civic engagement.
Source: FIMI-ISAC, Assessment of Foreign Information Manipulation and Interference in the 2025 Moldovan Parliamentary Election. [online] Published 5 December 2025. Available at: https://fimi-isac.org/wp-content/uploads/2025/12/FIMI-ISAC-Report_Moldova-Election-Country-Report_20251205.pdf
[Appendix - Frameworks to Counter Disinformation]
Challenges & Gaps in Addressing FIMI in Platforms
A report by the EU Disinfo Lab summarized a year-long monitoring effort by the “FIMI Defenders for Election Integrity” project, focusing on Foreign Information Manipulation and Interference. Rather than documenting isolated incidents, the analysis examines how disinformation campaigns reveal systemic weaknesses in platform governance and potential failures to enforce the EU Digital Services Act (DSA). By reassessing twelve election-related incidents, the report shows how manipulative practices persist in grey areas between legal and illegal behavior, allowing disinformation to spread despite existing rules.
A central finding is that disinformation is often enabled by weak reporting, limited follow-up, and inconsistent enforcement across platforms. Many incidents were never formally reported, reducing accountability and allowing harmful content and coordinated manipulation to remain online. The report argued that practical counter-disinformation efforts must shift focus from individual pieces of misleading content to behavioral patterns such as coordination, automation, and artificial amplification, which are more indicative of organized interference and harder for platforms to dismiss.
The report highlighted significant evidence gaps that hinder enforcement, including the lack of verifiable data, platform-specific transparency, and access to engagement metrics. It underscores the need for stronger collaboration among civil society, researchers, and regulators to translate qualitative observations into actionable cases.
Source: EU DisinfoLab, M. G. Sessa & R. M. Serrano & J. McNamee, Regulatory Challenges & Gaps in Addressing Systemic Platform Abuse. [online] Published 15 December 2025. Available at: https://www.disinfo.eu/wp-content/uploads/2025/12/20251215_regulatory_challenges_and_gaps_in_addressing_systemic_platform_abuse-2.pdf
Tracking PRC Influence Through the China Index
In 2025, the China in the World (CITW) Community Fund expanded its efforts to examine global PRC influence using the China Index, which now covers 101 countries and allows comparisons over time. Several funded projects highlighted how state-linked influence, including information control and disinformation, operates through media, diaspora networks, technology, and political messaging. A research paper by Doublethink Lab revealed how narratives aligned with Beijing’s interests are promoted, sensitive topics are avoided or reframed, and local information environments are manipulated to reduce transparency and public debate.
Multiple studies reveal how PRC influence can intersect with disinformation and narrative manipulation. Research on Chinese organized crime networks suggested that these groups may serve as informal channels of influence within overseas communities, supporting state objectives, such as shaping views on Taiwan and other political issues. In South Asia and Southeast Asia, PRC influence in media, academia, and technology has contributed to self-censorship, selective reporting, and the downplaying of human rights concerns, particularly in countries with close economic or strategic ties to Beijing. In places like Pakistan, Malaysia, and Singapore, this influence is reinforced through media cooperation, diplomatic pressure, and public narratives that normalize or legitimize PRC positions.
Other projects focused on countering disinformation by strengthening local journalism and public awareness. Citizen journalists in Zimbabwe investigated misleading narratives surrounding Chinese mining activities. At the same time, advocacy initiatives in Central America and Sub-Saharan Africa used the China Index to expose opaque influence operations and information manipulation.
Source: Doublethink Lab, CITW Community Fund 2025: New Findings and Frontline Work to Counter PRC Influence. [online] Published 2025. Available at: https://medium.com/doublethinklab/citw-community-fund-2025-new-findings-and-frontline-work-to-counter-prc-influence-3c2be2741def
EU Sanctions Target Russian Disinformation and Hybrid Threats
The European Union announced that it has imposed sanctions on 12 individuals and two entities linked to Russian information manipulation, propaganda, and cyber operations. Those targeted include Kremlin-affiliated analysts, influencers spreading pro-Russian narratives, and Western European former military and police officers promoting anti-Ukraine and anti-NATO messages. The sanctions also cover the International Russophile Movement, which amplifies destabilizing narratives globally, and the 142nd Separate Electronic Warfare Battalion, involved in electronic warfare and GPS disruptions affecting EU member states.
The measures extend to Russian military intelligence unit 29155 and the cyber group Cadet Blizzard, responsible for cyberattacks on Ukrainian government organizations and EU and NATO targets. These actions aim to destabilize political systems, access sensitive information, and spread disinformation. Sanctions include asset freezes, travel bans, and prohibitions on EU citizens or companies providing financial or economic resources to those listed. Russian media outlets involved in disinformation may also face suspension of their broadcasting licenses under the expanded framework.
A NewsGuard article on this subject added that one notable individual added to the sanctions list is John Mark Dougan, a former Florida deputy sheriff who fled to Russia in 2016. Dougan and Storm-1516 have been linked to hundreds of fake news sites, including outlets masquerading as local newspapers like DCWeekly, Chicago Chronicle, and Boston Times. NewsGuard has tracked at least 121 false claims circulated by Dougan’s network targeting Germany, France, Moldova, Armenia, Ukraine, and the U.S., including deliberate attempts to manipulate primary elections and even infiltrate AI chatbots with disinformation.
Sources:
Council of the European Union, Press release. Russian hybrid threats: Council sanctions twelve individuals and two entities over information manipulation and cyber attacks. [online] Published 15 December 2025. Available at: https://www.consilium.europa.eu/en/press/press-releases/2025/12/15/russian-hybrid-threats-council-sanctions-twelve-individuals-and-two-entities-over-information-manipulation-and-cyber-attacks/
NewsGuard Reality Check, E. Maitland, Moscow-Based Operative John Mark Dougan Becomes First American Sanctioned by the EU for His Role in Russian Influence Operations. [online] Published 16 December 2025. Available at: https://www.newsguardrealitycheck.com/p/in-a-first-eu-sanctions-an-american
The U.S. Sentencing Commission Amendments
The U.S. Sentencing Commission published a list of proposed amendments to federal sentencing guidelines in response to recent legislation, including the Take It Down Act, which addresses the creation and distribution of nonconsensual deepfake media. The Act criminalizes publishing or threatening to publish intimate visual depictions or digital forgeries of adults and minors online, with penalties ranging from 18 months to three years in prison, depending on the offense. The Commission plans to update the Statutory Index and Commentary to reflect these new offenses and is requesting public comment on whether the guidelines adequately capture the severity and nature of these digital disinformation crimes.
These changes highlight the growing concern over AI-driven disinformation, especially lifelike deepfakes that can harass, humiliate, or misrepresent individuals. The amendments aim to ensure federal courts have clear sentencing guidance for crimes involving digitally fabricated content, recognizing the societal harm caused by false, manipulative, or threatening online media. The proposals also explore whether related offenses, such as blackmail or extortion, should be referenced differently, to better align penalties with the nature of digital disinformation. The broader set of proposed amendments underscores a federal effort to adapt legal frameworks to technological threats. By incorporating new deepfake-related offenses into existing sentencing guidelines, the Commission seeks to strengthen deterrence and accountability and to invite public feedback on the effectiveness of these updates.
Source: United States Sentencing Commission, Sentencing Guidelines for United States Courts. [online] Published 19 December 2025. Available at: https://public-inspection.federalregister.gov/2025-23473.pdf
Combating Disinformation in the AI Era
In her remarks at the Roundtable on Information Integrity in an Evolving AI Landscape, Ambassador Hedda Samson highlighted the European Union’s commitment to information integrity in a rapidly evolving AI landscape, emphasizing the dangers of disinformation and foreign information manipulation and interference. In conflict zones, such disinformation poses direct risks to peacekeepers and humanitarian personnel, making information integrity a matter of both societal security and human safety. The EU has developed measures that aim to protect freedom of expression while mitigating the societal harms of false and manipulative content. International cooperation, particularly with the UN, NATO, and G7, is central to this strategy, enabling information sharing, capacity building, and coordinated responses to cross-border disinformation campaigns.
Source: European External Action Service (EEAS), EU Remarks by Ambassador Samson at the Roundtable on Information Integrity in an Evolving AI Landscape. [online] Published 2025. Available at: https://www.eeas.europa.eu/delegations/un-new-york/eu-remarks-ambassador-samson-roundtable-information-integrity-evolving-ai-landscape_en
[CRC Glossary]
The nature and sophistication of the modern Information Environment are projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.