Cyber-based hostile influence campaigns 16th - 22nd February 2026
- CRC

[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
[Social Media Platforms]
[State Actors]
Russia
War in Ukraine
China
[AI Related Articles]
[General Reports]
False Claim that Mamdani Allowed Five Daily Muslim Prayer Broadcasts in NYC
Uganda's 2026 Election Marked by Repression and Internet Shutdown
UN Committee Condemns Disinformation Campaign Targeting Special Rapporteur Francesca Albanese
AI-Generated Disinformation Surges Ahead of Brazil’s Elections
Australian Senate Probes AI-Driven Climate Disinformation and Fossil Fuel Influence Networks
EU Warns of Coordinated Disinformation Targeting West Africa
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
A report by EU vs. Disinfo described how Russia has used disinformation as a long-term political tool to reshape Ukrainian history and identity.
Research by the China Media Project argued that China’s rapid expansion in artificial intelligence is closely linked to its long-standing strategy of information control and global influence.
A study by Harvard Kennedy School’s Misinformation Review examined how Australians identified political misinformation during the 2025 federal election.
In a post on X, the European Union in Ghana warned that coordinated disinformation campaigns, documented across 13 West African countries, including Ghana, pose a destabilizing regional threat and require heightened public awareness.
According to an article by the United Nations Office for Disaster Risk Reduction (UNDRR), following climate-related disinformation in the wake of a major blackout, the EU and Spain advanced regulatory and legal measures, including enforcement of the Digital Services Act and endorsement of the UN Declaration on Climate Information Integrity, to curb algorithmic amplification and hold platforms accountable for spreading false climate narratives.
[State Actors]
Russia
Recurring False Narratives Used by the Kremlin
Since its full-scale invasion of Ukraine in 2022, Russia has intensified its disinformation campaigns, using false narratives as a central tool in its confrontation with Ukraine and the West. In an article, EU vs. Disinfo presented recurring false narratives used by the Kremlin.
One key claim falsely portrayed the European Union as prolonging the war out of "Russophobia" and military ambition, while another framed the conflict as a NATO proxy war against Russia, presenting Moscow as the victim rather than the aggressor. The Kremlin also promoted misleading narratives that EU sanctions have devastated Europe’s economy, despite evidence that sanctions were imposed in response to Russia’s invasion and have also strained Russia’s own economy. Additional disinformation sought to discredit Ukraine by portraying its leadership as corrupt, illegitimate, or even "Nazi", often relying on fabricated stories and fake sources.
Source: EUvsDisinfo, What the Kremlin wants you to believe about its war against Ukraine. [online] Published 20 February 2026. Available at: https://euvsdisinfo.eu/what-the-kremlin-wants-you-to-believe-about-its-war-against-ukraine/
False Claim Suggested U.S. Could Take Canadian Rare Earth Plant in Estonia
As reported by DisinfoWatch, a disinformation report by Pravda News Network falsely claimed that the United States could take control of a rare-earth processing facility in Estonia owned by Neo Performance Materials. Pravda Network, citing an Izvestia article, misquoted Estonia’s Minister of Foreign Affairs and turned a discussion of U.S. investment interest into evidence of a possible asset transfer. However, the original ERR report did not mention any plan or mechanism for the U.S. to seize or reassign the Canadian-owned facility, which operates under Estonian and European Union law. In reality, U.S. policy tools incentivize new capacity rather than "reassign" existing plants.
Source: DisinfoWatch, False insinuation Pravda News: U.S. could “take” rare earth processing from Canada’s Neo Performance Materials in Estonia. [online] Published 12 February 2025. Available at: https://disinfowatch.org/disinfo/false-insinuation-pravda-news-u-s-could-take-rare-earth-processing-from-canadas-neo-performance-materials-in-estonia/
The War in Ukraine
Kremlin Rewrites Ukraine’s Identity
In a detailed analysis, EUvsDisinfo highlighted how Russia has used disinformation as a long-term political tool to reshape Ukrainian history and identity. The Kremlin has repeatedly distorted historical facts, manipulated archives, and reframed narratives to fit an imperial vision. Ukrainian aspirations for freedom and independence are treated in Moscow not as historical facts but as provocations. This strategy continues a pattern from both imperial and Soviet times, when Ukrainian distinctiveness was absorbed into a broader Russian story.
A key example is Putin’s 2025 Decree No. 858, which sets a goal for 95% of residents in occupied Ukrainian regions to identify as Russian by 2036. While written in neutral bureaucratic language, the decree reflects a clear disinformation objective: to redefine identity through policy. After 2014, Russian narratives shifted from describing Ukrainians and Russians as “brotherly peoples” to claiming they are “one people,” effectively denying Ukraine’s separate existence. Any resistance to this narrative is blamed on interference by a hostile “collective West.” In this framework, control over territory is matched by control over memory, education, and public symbols. Disinformation is used not only to justify military actions but also to reshape identity itself.
Source: EUvsDisinfo, Total Recall: How Russia tried to erase the Ukrainian identity. [online] Published 22 February 2026. Available at: https://euvsdisinfo.eu/total-recall-how-russia-tried-to-erase-the-ukrainian-identity/
China
Chinese Globalization of State-Directed Disinformation
Research by the China Media Project argues that China’s rapid expansion in artificial intelligence is closely linked to its long-standing strategy of information control and global influence. The Chinese Communist Party (CCP) views AI not only as a driver of economic and technological power but also as a tool for shaping international narratives. Chinese large language models (LLMs) are increasingly adopted worldwide due to their high performance and low cost, yet testing shows that many embed “information guidance” aligned with official state positions. This goes beyond simple censorship, as models subtly promote pro-government narratives on issues such as human rights, territorial claims, religion, and China’s global reputation.
The spread of these models creates new risks for FIMI. Developers building on Chinese foundation models often struggle to remove embedded narrative biases, leading users to encounter CCP-aligned messaging without realizing its origin. In some cases, governments in countries such as India, Malaysia, and the UAE have adapted Chinese-based chatbots to restrict discussion of domestic human rights issues.
The report also highlighted security concerns. Open-source Chinese models have shown vulnerability to “jailbreaking,” allowing users to extract dangerous information, including instructions related to weapons or toxic substances. At the same time, the geopolitical risks of narrative influence remain underexamined compared to debates over technical AI safety. The authors concluded that without stronger transparency rules and awareness of disinformation tactics embedded in AI systems, the global adoption of Chinese models could normalize state-directed information control and weaken information integrity worldwide.
Source: China Media Project, A. Colville & Y. Mengli & J. Haime & D. Bandurski, Guided Intelligence: China’s AI Strategy and the Global Information Space. [online] Published 2025.
Available at: https://mpf.se/download/18.4626748819c47b558e3716/1770885147750/Guided-Intelligence.pdf
[AI Related Articles]
Audit Found High Disinformation Rates in AI Voice Responses
An audit by NewsGuard found that OpenAI’s ChatGPT Voice and Google’s Gemini Live frequently generated convincing audio content repeating false claims—45 percent of the time for ChatGPT and 50 percent of the time for Gemini Live when prompted to do so. These AI voice tools were especially vulnerable when asked to produce radio-style segments advancing disinformation, including pro-Kremlin narratives and fabricated political stories.
The audit tested the bots using false claims related to health, politics, global news, and foreign disinformation. Both ChatGPT Voice and Gemini Live were more likely to repeat political and foreign disinformation than health falsehoods, sometimes presenting fabricated events as breaking news. In contrast, Amazon’s Alexa+ refused to repeat any false claims and often explained why they were incorrect, achieving a 0 percent failure rate.
Source: NewsGuard, I. Blachez & I. Chomnalez & L. Marchl, Risky AI: ChatGPT and Gemini Readily Produce False Audio Claims, While Alexa+ Declines. [online] Published 19 February 2026. Available at: https://www.newsguardtech.com/special-reports/chatgpt-and-gemini-readily-produce-false-audio-claims-while-alexa-declines/
AI Revives Racist Stereotypes and Misinformation
Highlighting the darker side of technological progress, The Conversation reported that in 2025, advances in artificial intelligence transformed public communication while also accelerating the spread of disinformation rooted in racism. AI-generated content, including lifelike videos produced by tools such as Sora, enabled harmful narratives to circulate faster than fact-checkers could respond. During the U.S. government shutdown, fabricated videos depicting Black women abusing Supplemental Nutrition Assistance Program (SNAP) benefits went viral. Although clearly labeled as AI-generated, many viewers accepted them as truthful because they reinforced the long-standing racist trope of the "Black welfare queen." These clips, described as "digital blackface," demonstrate how AI can modernize and amplify deeply embedded stereotypes.
A similar pattern emerged in the revival of a Minnesota fraud case involving members of the Somali community. Despite prior legal resolutions, political rhetoric reignited the story, and AI-generated videos spread caricatures such as the "Somali pirate" narrative. Rather than encouraging informed debate about fraud or policy, the disinformation campaign fueled anti-Black, anti-immigrant, and Islamophobic sentiments. Even when investigations disproved widespread wrongdoing, the false framing persisted.
The article argued that AI does not create racism but accelerates its reach. Fact-checking alone proves insufficient when audiences are predisposed to believe familiar stereotypes. Disinformation succeeds because it taps into existing racial anxieties and political agendas, transforming old prejudices into viral digital spectacles.
Source: The Conversation, N. N. Ali, How AI resurrects racist stereotypes and disinformation — and why fact-checking isn’t enough. [online] Published 22 February 2026. Available at: https://theconversation.com/how-ai-resurrects-racist-stereotypes-and-disinformation-and-why-fact-checking-isnt-enough-270000
[General Reports]
False Claim that Mamdani Allowed Five Daily Muslim Prayer Broadcasts in NYC
NewsGuard identified a widely shared allegation as its False Claim of the Week: that Zohran Mamdani allowed mosques in New York City to broadcast the Islamic call to prayer five times daily, starting at 5 a.m. Users widely circulated two videos of New York City, with the call to prayer heard in the background, claiming the scenes showed the Adhan being broadcast at 5 a.m., thereby disturbing residents. The claim spread rapidly on social media, driven by anti-Islam activists such as Amy Mek and commentators including Eric Daugherty of Florida’s Voice News, gaining millions of views.
In reality, there was no evidence that Mamdani introduced any such change. The existing policy, introduced in 2023 under former mayor Eric Adams, allows mosques to broadcast the call to prayer without a permit only once on Friday afternoons and at sunset during Ramadan. NewsGuard found no official announcements, media reports, or public complaints indicating any expansion of broadcasts. Even the viral videos appeared inconsistent with winter conditions and sunrise times. Mamdani’s press office did not respond to requests for comment, and a representative of the NYPD declined to comment.
Source: NewsGuard, L. Marchl, Debunk: Mamdani Did Not Permit Mosques to Broadcast Calls to Prayer Five Times a Day. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/mamdani-lets-nyc-mosques-broadcast
Uganda's 2026 Election Marked by Repression and Internet Shutdown
Examining the political climate surrounding Uganda’s January 2026 presidential election, the London School of Economics and Political Science (LSE) reported that the vote was characterized by repression, silence, and widespread disillusionment rather than the energetic opposition seen in previous cycles. Long-time President Yoweri Museveni secured a seventh term with 71.65 percent of the vote, but turnout fell to just 52.5 percent, reflecting deep public distrust. The campaign period saw intensified judicial harassment, detention of opposition figures, and the forced hiding or exile of critics, including opposition leader Bobi Wine.
A key feature of the election was a nationwide internet shutdown from 13 to 18 January 2026, which cut off more than 45 million people. Authorities justified the blackout as a measure against misinformation, but critics said it was intended to suppress dissent and disrupt opposition coordination. The shutdown severely affected political participation, economic activity, and the ability to report human rights abuses, while exemptions for banks and government services ensured economic continuity and reduced backlash from elites. Marginalized groups dependent on mobile money were hit hardest.
Source: London School of Economics and Political Science, M. Daka & K. Rukirabashaija, The 2026 Ugandan election was marked by digital strangulation rather than hope. [online] Published 20 February 2026.
Available at: https://blogs.lse.ac.uk/africaatlse/2026/02/20/the-2026-ugandan-election-was-marked-by-digital-strangulation-rather-than-hope/
Voters Misidentify Political Disinformation
A study published by the Harvard Kennedy School’s (HKS) Misinformation Review examined how Australians identified political misinformation during the 2025 federal election. Using a digital diary method, 38 participants collected 254 examples they believed were misinformation. However, independent fact-checking revealed a significant gap: only 10 percent of the examples were found to be false, while 37 percent were verified as true. Participants most often labeled content as misinformation when it contradicted their existing beliefs (28 percent) or appeared biased (22 percent). Very few (4 percent) relied on external fact-checking. The findings suggested that perceptions of disinformation are strongly shaped by partisan worldviews rather than objective verification.
The research also showed that most people do little when encountering suspected misinformation. The most common response was “doing nothing” (44 percent), followed by discussing it with others (21 percent). Information overload, political burnout, and low media literacy discourage active fact-checking. Psychological factors, such as the “illusory truth effect,” in which repeated claims feel true, further complicate the landscape. Participants were most confident in judging content false when it conflicted with their prior knowledge, even when professional fact-checkers later confirmed the information was accurate. Nearly half of the examples identified as misinformation were found on social media, and many originated from political actors or paid advertisements. The study highlighted a critical problem: what citizens perceive as disinformation often does not align with factual assessments. This mismatch allows misleading claims to circulate freely during elections, as voters rarely verify information and frequently reject accurate reporting that challenges their beliefs.
Source: Harvard Kennedy School, S. Park & J. Y. Lee & K. McGuinness & C. Fisher & J. Fulton, People rely on their existing political beliefs to identify election misinformation. [online] Published 2026. Available at: https://misinforeview.hks.harvard.edu/article/people-rely-on-their-existing-political-beliefs-to-identify-election-misinformation/
UN Committee Condemns Disinformation Campaign Targeting Special Rapporteur Francesca Albanese
The Coordination Committee of the Special Procedures of the UN Human Rights Council (OHCHR) condemned what it described as a coordinated disinformation campaign by several State Ministers targeting Francesca Albanese, the Special Rapporteur on human rights in the Palestinian territories occupied since 1967. The Committee said ministers relied on “manufactured facts,” attributing statements to Albanese that she did not make at the 17th Al Jazeera Forum, as part of a broader effort to discredit her work. It linked these actions to an organized campaign that previously resulted in what it called illegal sanctions against Albanese, with serious personal consequences, and warned of a growing pattern of politically motivated and malicious attacks against independent human rights experts, UN officials, and international judges engaged in accountability efforts.
The Committee emphasized that Albanese’s mandate is to investigate alleged violations of international law in the occupied Palestinian territories and argued that calls for her resignation, alongside persistent intimidation and coordinated personal attacks, undermine independent oversight mechanisms. It urged states to redirect their efforts toward accountability for alleged war crimes and crimes against humanity, rather than targeting investigators, and framed the attacks as contributing to a broader erosion of respect for international law and institutional integrity.
Source: OHCHR, Special Procedures’ Coordination Committee condemns attacks against Francesca Albanese by State representatives: “Promote justice, not disinformation”. [online] Published 17 February 2026. Available at: https://www.ohchr.org/en/press-releases/2026/02/special-procedures-coordination-committee-condemns-attacks-against-francesca
AI-Generated Disinformation Surges Ahead of Brazil’s Elections
LatAm Journalism Review (LJR) reported that Brazil is experiencing a sharp rise in AI-generated disinformation, characterized by hybrid content that blends fabricated elements with authentic media to enhance credibility and evade detection. A recent example involved a TikTok video falsely claiming that a new virus had reached Brazil ahead of Carnaval, created by stitching an AI-generated news anchor onto a legitimate broadcast while omitting contextual details that contradicted the alarmist claim. Fact-checkers identified the manipulation, but the case illustrates how increasingly sophisticated AI tools enable the production of realistic, emotionally charged narratives. According to Observatório Lupa’s February report, AI-generated fake content has increased by 308 percent since 2024, shifting from predominantly scam-related material to politically focused disinformation ahead of Brazil’s October general elections. Researchers note that these productions often combine compelling visuals, fabricated audio, and selective editing to distort public understanding.
Experts warn that the accessibility of generative AI tools, integrated into mainstream platforms and requiring minimal technical expertise, has dramatically lowered the barrier to producing manipulated content, including fake interviews, simulated conversations, and fabricated “selfie” videos attributed to public figures. Detection tools struggle to keep pace with the realism of these outputs, complicating efforts by media organizations and fact-checking teams. Brazil’s Superior Electoral Court is considering financial penalties for AI-generated fake news, reflecting institutional concern about electoral manipulation. However, analysts caution that traditional rebuttals may fail to counter viral falsehoods effectively, as AI-driven disinformation thrives on speed, emotional appeal, and algorithmic amplification that often outpaces corrective communication.
Source: LatAm Journalism Review, L. Coelho, AI-fueled disinformation surges ahead of Brazil’s elections. [online] Published 18 February 2026. Available at: https://latamjournalismreview.org/articles/ai-fueled-disinformation-surges-ahead-of-brazils-elections/
Australian Senate Probes AI-Driven Climate Disinformation and Fossil Fuel Influence Networks
ABC reported that Australia’s Senate Select Committee on Information Integrity heard evidence that coordinated disinformation campaigns, amplified by AI tools, bots, trolls, and engagement-driven social media algorithms, are delaying action on climate and renewable energy. Witnesses described how “AI-slopaganda,” astroturfing, and opaque third-party political campaigning contribute to the pollution of online information channels. Human Rights Commissioner Lorraine Finlay warned that algorithms prioritizing engagement over accuracy elevate extreme or sensational content, while foreign interference, deepfakes, and emerging technologies further erode public trust. Meta representatives stated that the company removes bots and dismantles coordinated inauthentic behavior but does not censor politicians unless they incite violence, prompting concern from senators about the platform’s responsibility in limiting harmful falsehoods.
The hearings also examined fossil fuel-linked lobbying and influence networks. Coal Australia defended transferring nearly $4 million, approximately 95 percent of Australians for Prosperity’s funding, to a third-party group that campaigned against Labor, the Greens, and independent candidates during the 2025 federal election, rejecting claims of astroturfing despite questions over transparency and voter awareness. Academic testimony highlighted the role of the Atlas Network, a global consortium of more than 500 free-market think tanks, in supporting decades-long, fossil fuel-funded campaigns promoting climate denial and anti-renewable narratives while obscuring funding origins. The inquiry further underscored how public debate is distorted by persistent climate science denial, exemplified by exchanges rejecting scientific consensus, and how correcting misinformation is resource-intensive compared to producing it. The committee’s findings point to a complex ecosystem in which digital manipulation, coordinated influence networks, and political amplification collectively undermine climate policy discourse and democratic trust.
Source: ABC, G. Hutchens. Senate committee features climate disinformation, the Atlas Network, and Dr Karl’s clash with One Nation. [online] Published 18 February 2026. Available at: https://www.abc.net.au/news/2026-02-19/climate-disinformation-senate-committee-dr-karl-atlas-network/106350002
EU Warns of Coordinated Disinformation Targeting West Africa
In a post on X, the European Union in Ghana stated that disinformation campaigns constitute a documented and ongoing risk across West Africa, noting that Ghana is among 13 countries in the region affected by at least 72 identified campaigns aimed at destabilization. The message frames these efforts as deliberate actions by “bad actors,” highlighting the scale and coordinated nature of the threat to regional stability.
By emphasizing that awareness matters, the post underscores the importance of public recognition of hostile information activities as a first line of defense. The communication situates disinformation as a systemic regional challenge rather than isolated incidents, reinforcing the need for vigilance against organized influence operations targeting West African states.
Source: X, European Union in Ghana (@EuropeInGhana) [online] Published 23 February 2026.
Available at: https://x.com/EuropeInGhana/status/2025937364342681926
[Appendix - Frameworks to Counter Disinformation]
The French Response to Digital Foreign Interference
In a report, the Australian Strategic Policy Institute (ASPI) argued that the Western alliance faces a growing threat not only from adversaries such as Russia but also from misinformation circulating within allied political ecosystems. Examples included Trump-aligned accounts spreading false claims about France, questioning Europe’s military value, and misrepresenting NATO’s collective role in Afghanistan.
In response, France developed a proactive strategy to counter information manipulation. It established VIGINUM, a specialized agency that monitors and exposes foreign digital interference and trains diplomats to actively engage in information battles. French authorities also launched the “French Response” social media account, which uses humor, irony, and facts to debunk false claims and reach wider audiences. In addition, legal action against platforms accused of enabling political interference signaled France’s willingness to hold technology companies accountable and defend its information space.
The report concluded that France’s approach offers lessons for allies facing similar challenges. Effective responses include monitoring misinformation from both adversaries and partners, coordinating factual rebuttals, and reinforcing shared values such as burden-sharing within alliances like NATO. Middle powers, including Australia, can support efforts to build information resilience by sharing their experience of what works.
Source: Australian Strategic Policy Institute, E. Frécon & Fitriani, France strikes to address misinformation weakening Western alliance. [online] Published 20 February 2026. Available at: https://www.aspi.org.au/strategist-posts/france-strikes-to-address-misinformation-weakening-western-alliance/
Questionable Efforts to Fight Online Financial Scams
The European Commission announced that it has launched formal investigations under the Digital Services Act into major online platforms such as Apple, Google, Microsoft, and Booking[.]com to assess their efforts to prevent financial scams. The move comes amid a sharp rise in online fraud, with billions of dollars lost annually in the United States alone. Regulators are concerned that fraudulent app store products, fake hotel listings, and manipulated search results are enabling large-scale deception. While the EU’s approach seeks accountability from very large online platforms, critics question whether focusing primarily on these companies addresses the deeper roots of the problem.
A major driver of today’s fraud surge is AI-powered disinformation, particularly deepfakes and voice cloning. The number of deepfake files online has grown dramatically, and such tools are increasingly used to impersonate executives, clone voices, and create convincing fake websites. In this environment, disinformation is not just political—it has become a powerful tool for financial crime.
The essay argued that responsibility should extend beyond large platforms to the broader digital infrastructure where fraudulent content originates. Online marketplaces, domain registrars, and hosting services often serve as the entry points for scammers but face less scrutiny. A stronger focus on proactive identity verification could disrupt fraud at its source.
Source: The Hill, I. Bondar-Mucci, Efforts to fight online financial scams are missing the mark. [online] Published 22 February 2026. Available at: https://thehill.com/opinion/technology/5748005-online-platforms-fraud-prevention/
Building Immunity Against the Disinformation ‘Virus’
An article by the University of Lodz compared disinformation to a virus that spreads rapidly through everyday information channels. According to Dr. Krzysztof Grzegorzewski of the University of Lodz, disinformation is the deliberate spread of false information designed to appear true, while misinformation is shared in good faith, and malinformation involves the selective or harmful use of real facts. Unlike harmless gossip, disinformation is often intentional and strategically amplified, functioning like a “megaphone” that manipulates perception and distorts public debate.
The spread of this “information virus” is accelerated by clickbait and platform algorithms. Sensational headlines trigger dopamine-driven curiosity, encouraging users to click and consume more emotionally charged content. Algorithms then personalize and intensify this exposure, making disinformation travel faster than ever. Because technology evolves more quickly than human habits, individuals often struggle to adapt, allowing false or misleading narratives to circulate widely before they are critically examined.
To counter this pandemic of fake news, the expert emphasized early media education, critical thinking, and consistent fact-checking. Reliable journalism and science communication are presented as essential “vaccines” that strengthen public resilience. Initiatives such as “Science Inspires” promote clear, contextualized knowledge rather than sensationalism, helping audiences distinguish trustworthy information from manipulative content.
Source: University of Lodz, Disinformation Is a Virus. How to Fight the Fake News Pandemic? Polish Science Day. [online] Published 2026. Available at: https://www.uni.lodz.pl/en/news/details/disinformation-is-a-virus-how-to-fight-the-fake-news-pandemic-polish-science-day
EU and Spain Escalate Fight Against Climate Disinformation and Platform Amplification
Climate-related disinformation spread rapidly online after a ten-hour power outage in the Iberian Peninsula on April 28, 2025, the United Nations Office for Disaster Risk Reduction (UNDRR) reported, echoing earlier false narratives that blamed renewable energy for the 2021 Texas blackout. The article argues that such disinformation is deliberately engineered by public relations actors and professional propagandists and amplified by social media platform algorithms that prioritize engagement-driven content, including outrage and falsehoods. It frames climate disinformation as both a commercial and political enterprise, alleging that Big Tech platforms profit from algorithmic amplification while coordinated actors exploit crises to undermine trust in public institutions, science, and democratic governance. The piece also claims that European Union digital accountability efforts, particularly the Digital Services Act (DSA), are facing political pushback from MAGA-aligned U.S. lawmakers seeking to weaken regulatory safeguards.
In response, the European Union’s endorsement of the UN Declaration on Information Integrity on Climate Change is presented as a structural countermeasure, committing governments to protect science-based information, support independent journalism, and counter disinformation that distorts public understanding of climate issues. The article highlights enforcement actions under the DSA, fossil fuel advertising bans in cities such as The Hague, Amsterdam, and Florence, and Spain’s proposed reforms to hold platform executives legally accountable, criminalize algorithmic amplification of illegal content, monitor polarization, and enhance protections for minors. These measures are portrayed as a coordinated effort by European actors to shift from reactive moderation to systemic accountability, positioning digital regulation as a central tool in countering climate-related disinformation and safeguarding democratic stability.
Source: UNDRR, D. Schran & P. Newell, Spain to protect public against climate disinformation. [online] Published 19 February 2026.
Available at: https://www.preventionweb.net/news/spain-protect-public-against-climate-disinformation
[CRC Glossary]
The modern information environment is projected to continue growing in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.