Cyber-based hostile influence campaigns 30th March - 5th April 2026

  • Writer: CRC
[Cover image: Weekly Media Update: Information Operations]


[Introduction]


Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect (hence the term Cyfluence, as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered a branch of Information Operations (IO) or Information Warfare (IW).

Typically, as has been customary over the last decade, the information is spread across various internet platforms, each serving as a different element of the hostile influence campaign. Connectivity and repetition of content across these elements are therefore core characteristics of influence campaigns.

Hostile influence campaigns, much like cyber-attacks, have become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. As in the cyber security arena, PR professionals and government agencies respond to negative publicity and disinformation shared over the news and social media.

We use the term cyber-based hostile influence campaigns because this definition also includes cyber-attacks aimed at influencing (such as hack-and-leak operations during elections), while excluding more traditional kinds of influence such as diplomatic, economic, or military pressure.

Between the 30th of March and the 5th of April 2026, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report summarizes what we regard as the main events. Some of the campaigns mentioned involve social media and news outlets solely, while others leverage cyber-attack capabilities.



[Contents]


[State Actors]

  • Russia
  • The War in Ukraine
  • China
  • Iran

[AI Related Articles]

[General Reports]

[Appendix - Frameworks to Counter Disinformation]




[Report Highlights]


  • DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network.

  • The Australian Human Rights Commission released a report stating that financial and state-linked actors are increasingly using AI-generated personas, coordinated deception, and structured scam networks to conduct large-scale disinformation and fraud campaigns that exploit trust and evade detection.

  • The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.

  • A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal.

  • According to an article published by The New York Times, foreign adversaries are intensifying disinformation campaigns to spread anti-American narratives and sow division, while weakened U.S. institutional capacity and reactive strategies limit the effectiveness of countermeasures.


[Report Summary]

  • According to an article by DFR Lab, despite EU sanctions on the Russian state outlet Sputnik, its successor platforms, the website 42TČen and the Telegram channel neČT24, have successfully continued operations by rebranding, shifting domains, and using infrastructure spread across several countries.

  • DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network.

  • According to a report by DisinfoWatch, RT’s X account amplified an edited clip of Tucker Carlson in which he suggested that the United States should pressure Canada into changing its government, "by force if necessary".

  • According to an article by EU VS Disinfo, after Ukrainian drone strikes hit the Russian Baltic oil ports of Ust-Luga and Primorsk, key export hubs for Russian oil and petroleum products, pro-Kremlin media quickly shifted the narrative away from the damage and Ukraine’s military capabilities.

  • NewsGuard's Reality Check reported that a pro-China X account operated from China has falsely circulated an old battlefield image, claiming it shows a U.S. soldier begging an Iranian drone for mercy.

  • As reported by CyberNews, the pro-Iran hacking group Handala has claimed responsibility for breaching the personal email account of FBI Director Kash Patel, just days after U.S. authorities seized several websites linked to the group.

  • NewsGuard's Reality Check reported that an AI-generated image falsely showing an Iranian missile inscribed with the slogan "No Kings" spread rapidly online after anti-Trump protests took place across the United States.

  • The Australian Human Rights Commission released a report stating that financial and state-linked actors are increasingly using AI-generated personas, coordinated deception, and structured scam networks to conduct large-scale disinformation and fraud campaigns that exploit trust and evade detection.

  • As reported by NewsGuard's Reality Check, Children’s Health Defense, the organization founded by U.S. Health Secretary Robert F. Kennedy Jr., is promoting a misleading claim that wireless radiation has caused health problems for 26 million adults across the U.S., Australia, and Canada.

  • The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.

  • A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal.

  • A Bellingcat investigation suggested that UAE authorities have repeatedly downplayed or mischaracterized the impact of Iranian drone strikes on the country, while imposing strict controls on public reporting.

  • As reported by DFR Lab, a coordinated network of at least 25 TikTok accounts used AI-generated videos to spread anti-government narratives in Moldova and mobilize support for protests planned for the 25th of March 2026, which ultimately never took place.

  • The European Commission announced that on the 31st of March 2026 it signed a €5 million grant agreement to support independent fact-checkers across the EU and beyond.

  • A report by hibrid.info examined misinformation trends in Kosovo between the 9th of October and the 31st of December 2025, during a politically sensitive period that included local and early parliamentary elections.

  • According to an article published by The New York Times, foreign adversaries are intensifying disinformation campaigns to spread anti-American narratives and sow division, while weakened U.S. institutional capacity and reactive strategies limit the effectiveness of countermeasures.

[State Actors]


Russia

Sputnik Czechia’s Successor Adapted to Circumvent Sanctions

According to an article by DFR Lab, despite EU sanctions on the Russian state outlet Sputnik, its successor platforms, the website 42TČen and the Telegram channel neČT24, have successfully continued operations by rebranding, shifting domains, and using infrastructure spread across several countries. After Sputnik Czechia was blocked in 2022, the new platforms openly positioned themselves as its continuation, first using a Czech domain and later moving to an anonymously registered .com domain hosted outside Czech jurisdiction. Their Facebook and X accounts are also managed from abroad, raising concerns about foreign interference.


To reduce scrutiny, the outlet has shifted away from directly reposting Kremlin media and instead increasingly relies on original interviews with Czech and Slovak political figures and commentators who promote pro-Kremlin narratives. This strategy makes the content appear more local and organic while still amplifying messages critical of the West, supportive of Russia, and hostile to Ukraine. Telegram has become its main platform, where the channel posts large volumes of unattributed news briefs, videos, and images, including repurposed content from sanctioned Russian sources, while benefiting from a growing network of amplifiers across Czech-language channels.


Source: DFRLab. How Sputnik Czechia’s successor adapted to circumvent sanctions. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/how-sputnik-czechias-successor-adapted-to-circumvent-sanctions/


Sanctioned Kremlin-Affiliated Content Persists Via Pravda Network

DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network. In early February 2026, more than 20 Pravda-linked websites published coordinated articles in multiple European languages falsely claiming that Ukrainian President Volodymyr Zelenskyy had described the frontline situation as "catastrophic" during an alleged call with Belgian Prime Minister Bart De Wever. Like many Pravda Network publications, the content promoted pro-Kremlin narratives regarding Ukraine’s imminent loss in the war and was among dozens of articles about Ukraine posted by the network in a single day.


The Pravda Network, previously exposed as a sanctions-evasion operation linked to the Crimea-based company Tigerweb, continues to amplify and translate content from Russian state-affiliated outlets such as RT, RIA Novosti, and TASS into more than 80 languages across over 150 websites. By laundering content from sanctioned media and redistributing it through websites, Telegram channels, and social media platforms, the network exploits weak and uneven sanctions enforcement across EU member states. While Ukraine has built significant resilience against Russian disinformation domestically through media bans, fact-checking initiatives, and media literacy campaigns, it has far less control over narratives spread abroad.


Source: DFRLab. Sanctioned Kremlin-affiliated content persists via Pravda Network, attempting to shape Ukraine’s image abroad. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/sanctioned-kremlin-affiliated-content-persists-via-pravda-network-attempting-to-shape-ukraines-image-abroad/


RT and Tucker Carlson Targeted Canada’s Sovereignty

According to a report by DisinfoWatch, RT’s X account amplified an edited clip of Tucker Carlson in which he suggested that the United States should pressure Canada into changing its government, "by force if necessary". Carlson made the remarks in a commentary dated the 2nd of April 2026, and on the same day RT reframed them under a provocative "51st state" and regime-change narrative, falsely implying that Canada is not a sovereign nation and portraying its government as oppressive. It also repeated a misleading claim that Canada has "killed nearly 100,000" citizens through its MAID program. The report argued that this is a deliberate effort to exploit both the "51st state" narrative and the Alberta separatist movement.


Source: DisinfoWatch. RT & Tucker Carlson Target Canada’s Sovereignty, Urging US Sponsored Regime Change. [online] Published 3 April 2026. Available at: https://disinfowatch.org/disinfo/tucker-rt-alberta-canada-separation/


The War in Ukraine

Kremlin Blamed NATO for Ukrainian Strikes and Warned of a "Coup" in Hungary

According to an article by EU VS Disinfo, after Ukrainian drone strikes hit the Russian Baltic oil ports of Ust-Luga and Primorsk, key export hubs for Russian oil and petroleum products, pro-Kremlin media quickly shifted the narrative away from the damage and Ukraine’s military capabilities. Instead, articles circulated through the Pravda network claimed that NATO and the UK’s MI6 coordinated the strikes, with some even alleging that the drones were launched from Estonia.


Russian information channels have used the incident to reinforce the narrative that the West has opened a "second front" against Russia, particularly in the Baltic region. These claims present Russia as the victim of deliberate NATO escalation and help lay the groundwork for potential retaliatory rhetoric toward neighboring Baltic states. In parallel, pro-Kremlin messaging has also turned toward Hungary’s upcoming elections. RT Arabic has accused the EU and Ukraine of plotting a "coup" against Prime Minister Viktor Orbán by allegedly using experienced organizers from Ukraine’s 2014 protests, repeating wider disinformation claims that Brussels is trying to remove Orbán through the parliamentary elections.


Source: EUvsDisinfo. Ukraine hits Russian oil ports, Kremlin blames NATO and warns of a coup in Hungary. [online] Published 2 April 2026. Available at: https://euvsdisinfo.eu/ukraine-hits-russian-oil-ports-kremlin-blames-nato-and-warns-of-a-coup-in-hungary/


China

Pro-China Sources Faked an Image of a U.S. Soldier

NewsGuard's Reality Check reported that a pro-China X account operated from China has falsely circulated an old battlefield image, claiming it shows a U.S. soldier begging an Iranian drone for mercy. In reality, the photo predates the current Iran conflict and appears to come from the Russia-Ukraine war. According to reverse image searches, the image has been online since at least April 2024, and the soldier’s uniform matches that of Russian troops, not U.S. forces. The fact that such a post was allowed on X, a platform banned in China, suggested that the propaganda had Chinese government backing. Similar out-of-context or inauthentic imagery has recently been spread by social media users to portray U.S. soldiers cowering before Iranian forces.


Source: NewsGuard Reality Check. Wrong War, Wrong Victim, Wrong Victor. [online] Published n.d. Available at: https://www.newsguardrealitycheck.com/p/wrong-war-wrong-victim-wrong-victor

Iran

Handala Claimed a Hack of the FBI Director's Personal Email

As reported by CyberNews, Handala said it had obtained all personal and confidential information of Kash Patel, including emails, conversations, documents, and even classified files, and that they are available for public download. The leaked material included a mix of casual personal photos, showing Patel relaxing and smoking a cigar, posing next to a statue, sitting in a classic car, and taking mirror selfies with a bottle of ultra-premium Cuban rum. U.S. officials confirmed the breach and said the leaked material appeared authentic. However, the FBI stated that the compromised data was historical in nature and did not include government information.


The incident comes amid an escalating cyber confrontation following the FBI and Department of Justice’s recent takedown of Handala’s online infrastructure. Prosecutors have linked the group to a broader Iranian cyber and psychological operations campaign involving data leaks, doxxing, and threats against journalists, dissidents, and Israeli-linked targets. Security experts warn that even a breach of a personal inbox can expose sensitive contacts, travel details, and other information that may enable future phishing or follow-on attacks, making the incident significant beyond its immediate contents. Handala has recently claimed attacks on major companies such as Stryker and Verifone. It also claimed to have compromised the personal email account of former Mossad research director Sima Shine, leaking more than 100,000 emails. These operations are designed not only to cause technical disruption but also to grab attention, generate headlines, and shape public perception.


Source: Cybernews. Handala claims hack of FBI Director Kash Patel’s personal email. [online] Published 27 March 2026. Available at: https://cybernews.com/news/handala-claims-hack-fbi-director-kash-patel-personal-email/


AI Image Featured Iranian Missile Inscribed With "No Kings"

NewsGuard's Reality Check reported that an AI-generated image falsely showing an Iranian missile inscribed with the slogan "No Kings" spread rapidly online. On the 28th of March 2026, anti-Trump demonstrators staged protests across the U.S. under the "No Kings" banner. Many of the protesters expressed opposition to the war in Iran. One day after the protests, an article from an Iran-based, pro-regime news agency claimed that the IRGC had inscribed "NO KINGS" on missiles used in its latest attack against Israel, and added the image of the inscribed missile without specifying its source.


Pro-Iran accounts promoted the image as a tribute by Iran’s military to the American protesters, while pro-Israel and conservative users shared it to argue that U.S. liberals were aligned with Iran. However, NewsGuard analyzed the image using two AI-detection tools, both of which flagged it as manipulated or artificially produced. This isn’t the first time fabricated Iranian missile inscriptions have circulated on social media. In early March 2026, an AI-edited image of a missile with the phrase “In memory of the victims of Epstein Island" was spread by pro-Iran users.


Source: NewsGuard Reality Check. AI Fakes an Iranian “No Kings” Missile — and Both Sides Fall for It. [online] Published 6 April 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-fakes-an-iranian-no-kings-missile


[AI Related Articles]


AI-Driven Deception and Industrialized Scam Networks Redefine Online Disinformation Threats

The Australian Human Rights Commission released a report, coinciding with the International Transgender Day of Visibility. The report outlines a rapidly evolving disinformation and fraud ecosystem in which financially motivated threat actors, including loosely organized groups and highly structured criminal syndicates, systematically exploit digital platforms using deception at scale. These actors rely heavily on impersonation tactics, creating credible but false personas (e.g., medical professionals, landlords, or bereavement services) to build trust and manipulate targets. Increasingly, they leverage generative AI to enhance realism, producing culturally tailored, multilingual, and context-aware content, as well as synthetic imagery and conversations that mimic authentic human interaction. AI is also used to fabricate entire online infrastructures, websites, business identities, and communication flows, allowing adversaries to operate with greater efficiency and evade detection.

Operationally, these campaigns follow a structured "attack chain" involving infrastructure setup, persona development, engagement through social engineering, and eventual exploitation for financial or data extraction. Tactics include directing victims off the platform to controlled environments, deploying hyper-personalized narratives, and exploiting moments of vulnerability such as grief, financial hardship, or housing insecurity. Networks are often transnational and coordinated, with evidence of industrialized workflows, role specialization, and adaptive strategies in response to enforcement pressure. In parallel, influence operations, some state-linked, use AI-generated content and false identities to amplify political narratives, simulate grassroots legitimacy, and manipulate public discourse across multiple regions and languages.


Source: Australian Human Rights Commission. Equal Identities: A Human Rights Review of Trans and Gender Diverse Experiences in Australia. [online] Published 2024. Available at: https://humanrights.gov.au/media/documents-files-PDFs/strategic-communications/EQUAL_IDENTITIES_FINAL_DIGITAL.pdf


[General Reports]


RFK Jr. Group Revived Long-Debunked Claim that WiFi is Dangerous

As reported by NewsGuard's Reality Check, Children’s Health Defense, the organization founded by U.S. Health Secretary Robert F. Kennedy Jr., is promoting a misleading claim that wireless radiation has caused health problems for 26 million adults across the U.S., Australia, and Canada. The claim is based on a March 2026 study that relied entirely on self-reported online survey responses from 3,400 participants who believed their symptoms were linked to wireless devices. While misinformation sites presented the findings as proof that WiFi causes harm, the study itself only estimated the prevalence of people who believe they are sensitive to wireless radiation and did not establish any causal connection.


In fact, the researchers explicitly stated that the study was not designed to prove that wireless technology caused the reported symptoms, nor to clinically validate the condition known as electromagnetic hypersensitivity (EHS). Major health authorities, including the World Health Organization and Australia’s radiation safety agency, continue to state that there is no scientific evidence that low-level electromagnetic fields from WiFi or wireless devices cause these health effects, and EHS is not recognized as a medically established condition.


Source: NewsGuard Reality Check. RFK Jr. Group Revives Long-Debunked Claim that WiFi is Dangerous. [online] Published 31 March 2026. Available at: https://www.newsguardrealitycheck.com/p/rfk-jr-group-revives-long-debunked


The Rise of AI Deepfakes During the Iran War

The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.


However, the crisis is not limited to fake content alone: authentic footage is increasingly being dismissed as AI-generated, creating a dangerous environment in which real evidence can be easily discredited. A key example involved an authentic "proof-of-life" video posted by Israeli Prime Minister Benjamin Netanyahu after false claims circulated that he had been killed in a missile strike (for further information, see W12 March Cyfluence Report). The rise of AI deepfakes and dismissal of real footage are two sides of the same coin.


Source: NewsGuard Reality Check. Real Is the New Fake. [online] Published date not specified. Available at: https://www.newsguardrealitycheck.com/p/real-is-the-new-fake



India’s Ruling Party is Using AI to Boost Hate Speech 

A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal. Of 499 visual social media posts reviewed from the BJP’s official state accounts in December 2025, 194 were assessed as meeting the UN’s definition of hate speech, and 31 of those included clear signs of AI-generated content. Much of the messaging portrayed Bengali-origin Muslims as "foreigners", “infiltrators", or threats to local communities, often using religious symbols such as skull caps to reinforce identity-based stereotypes.


This content is closely tied to heightened tensions between India and Bangladesh, particularly after political instability in Bangladesh and rising nationalist sentiment on both sides of the border. AI-generated visuals were used to intensify emotionally charged narratives around immigration, religion, and public safety, including posts depicting opposition figures as supporters of alleged "foreigners". Compared with opposition parties, there appeared to be a particularly high concentration of hateful messaging in the AI-generated imagery posted by the BJP’s accounts. BJP West Bengal did not respond to multiple requests for comment by Bellingcat via phone and email.


Source: Bellingcat. How India’s Ruling Party is Using AI to Boost Hate Speech in States Near Bangladesh. [online] Published 31 March 2026. Available at: https://www.bellingcat.com/news/2026/03/31/india-bjp-hate-speech-ai/


False Claim that "Easter" Removed from Chocolate to Avoid Offending Muslims 

As NewsGuard's Reality Check reported, far-right and anti-immigrant social media accounts have been spreading a false claim that Cadbury removed the word "Easter" from its chocolate egg packaging in 2026 to avoid offending Muslims. The allegation, widely shared online ahead of Easter, accused the company of attempting to "erase" Christianity and quickly gained significant engagement across platforms.


However, the claim is false. NewsGuard confirmed that the products shown in viral images still display "Happy Easter" on their packaging, and the circulating photos simply do not show that part of the box. Cadbury’s parent company, Mondelez International, also stated that the brand has used the word "Easter" in its marketing for over 100 years and continues to do so. This is the latest in a series of recurring false claims targeting Cadbury with accusations of changing its branding to appease Muslims, including earlier AI-generated fake images and misleading claims from previous years.


Source: NewsGuard Reality Check. False Claim of the Week: Cadbury removed the word “Easter” from its chocolate eggs in 2026 to avoid offending Muslims. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/i/193098983/false-claim-of-the-week-cadbury-removed-the-word-easter-from-its-chocolate-eggs-in-2026-to-avoid-offending-muslims



The UAE Rewrites the Story of Iranian Strikes

A Bellingcat investigation suggested that UAE authorities have repeatedly downplayed or mischaracterized the impact of Iranian drone strikes on the country, while imposing strict controls on public reporting. Following the start of the conflict, officials warned that sharing images or videos of strikes was illegal, leading to arrests, detentions, and expedited trials for dozens of people accused of posting "misleading" content online. At the same time, official statements often described fires and explosions as the result of debris from successful interceptions, even when open-source videos and satellite imagery appeared to show direct drone impacts on key sites such as Fujairah Port, Jebel Ali Port, hotels in Dubai, residential buildings, and even areas near Dubai International Airport.


Several high-profile incidents revealed clear discrepancies between official accounts and observable evidence. The investigation also pointed to a coordinated effort to shape public perception, including pro-government influencer campaigns promoting the UAE as safe and secure despite ongoing attacks and casualties. Critics and press freedom groups argued that the conflict is being used to further restrict independent reporting and public access to information.


Source: Bellingcat. How the UAE Rewrites the Story of Iranian Strikes. [online] Published 2 April 2026. Available at: https://www.bellingcat.com/news/2026/04/02/war-uae-iran-infuencer-dubai-conflict-drone-successful-strike-intercept-fire/



TikTok Accounts Used AI-Generated Videos to Incite Protests in Moldova

As reported by DFR Lab, a coordinated network of at least 25 TikTok accounts used AI-generated videos to spread anti-government narratives in Moldova and mobilize support for protests planned for the 25th of March 2026, which ultimately never took place. The campaign targeted President Maia Sandu and the ruling Party of Action and Solidarity (PAS), portraying them as corrupt and illegitimate, and as responsible for Moldova’s political and economic problems. Many of the accounts shared nearly identical profile imagery. They displayed coordinated behavior, including distinct "seeder" accounts that produced content, amplifier accounts that boosted visibility, and influencers who expanded the campaign’s reach to wider audiences.


The network relied heavily on fabricated videos showing synthetic protest crowds and emotionally charged calls for demonstrations, while also promoting pro-Russian and anti-EU narratives. Some content idealized life during the Soviet era and framed Moldova’s European integration as a source of decline. The campaign operated in both Romanian and Russian. Several known local influencers with anti-EU and pro-Russian positions also helped amplify the messaging, contributing to millions of views and interactions across the network.

Although no physical protests took place on the 25th of March, the online campaign continued beyond that date. Moldovan police publicly warned that the activity may be part of a destabilization effort. The report noted that the observed behavior appears to violate TikTok’s policies on platform integrity and political content.


Source: Digital Forensic Research Lab (DFRLab). TikTok accounts use AI-generated videos to incite anti-government protests in Moldova. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/tiktok-accounts-use-ai-generated-videos-to-incite-anti-government-protests-in-moldova/


[Appendix - Frameworks to Counter Disinformation]


EU Commission Boosts Independent Fact-Checking With a €5 Million Grant

The European Commission announced that on the 31st of March 2026 it signed a €5 million grant agreement to support independent fact-checkers across the EU and beyond. The initiative is titled "Fact-Checkers for European Resilience Against Disinformation: Network for Emergency, Protection and Technology". The Commission stated that the initiative will strengthen fact-checking capacity in all EU languages and foster cooperation among relevant players. It will also establish a protection scheme for fact-checkers, covering assistance in matters such as legal affairs, cybersecurity, and psychological support, as well as developing an independent European repository of fact-checks.


Source: European Commission. Commission boosts independent fact-checking with a €5 million grant under the European Democracy Shield. [online] Published 31 March 2026. Available at: https://digital-strategy.ec.europa.eu/en/news/commission-boosts-independent-fact-checking-eu5-million-grant-under-european-democracy-shield


Resilience to Misinformation in Kosovo’s Election Period

A report by hibrid.info examined misinformation trends in Kosovo between the 9th of October and the 31st of December 2025, during a politically sensitive period that included local and early parliamentary elections. It found a significant spread of false and misleading content across social media, foreign media outlets, and online platforms in Albanian, Serbian, and English. The most common forms included fake news, fabricated polls, false statements attributed to public figures, and manipulated images and videos, all of which had the potential to influence voters and undermine trust in the electoral process. The analysis also identified narratives with an interethnic dimension, particularly affecting northern Kosovo and Serb-majority communities, increasing the risk of social polarization and local tensions.


A notable trend during the election period was the use of AI-generated and manipulated content to falsely depict political support, events, or statements, making it harder for the public to distinguish fact from deception. Signs of foreign information manipulation and interference (FIMI) were also highlighted, with narratives aimed at shaping public opinion and destabilizing democratic processes. To address these risks, the report recommended stronger electoral monitoring, closer cooperation between fact-checkers, civil society, and EU institutions, improved platform oversight, and broader public awareness efforts to strengthen resilience against misinformation and protect democratic integrity.


Source: European External Action Service (EEAS). Resilience to Misinformation in Kosovo’s Election Period. [online] Published 2 April 2026. Available at: https://www.eeas.europa.eu/sites/default/files/2026/documents/Resilience%20to%20Misinformation%20%20in%20Kosovo%E2%80%99s%20Election%20Period.pdf



U.S. Struggles to Counter Rising Foreign Disinformation Campaigns

An article published by The New York Times describes a renewed effort by the U.S. State Department to counter foreign disinformation campaigns amid a surge in anti-American narratives linked to the war in Iran. According to a diplomatic cable, foreign adversaries, including Russia, China, and Iran, are actively deploying influence operations to sow division within the United States and its allies while promoting alternative geopolitical narratives. These campaigns are amplified across both social media and traditional media, with platforms such as X identified as key vectors for the spread of false or manipulative content. The State Department’s response includes directing diplomats to actively counter misleading narratives online, deepen coordination with Pentagon information operations, and leverage tools such as Community Notes to challenge propaganda without restricting free speech.


However, the article highlights structural weaknesses in the U.S. response to disinformation, noting that prior dismantling of key institutions, such as elements of the Global Engagement Center and disruptions to government-funded media outlets like Voice of America, has reduced the country’s capacity to counter hostile influence. This has created an environment in which adversarial narratives can proliferate more effectively, compounded by declining perceptions of U.S. credibility in the global information space. Critics characterize current efforts as reactive rather than strategic, emphasizing that inconsistent policy decisions and reduced institutional capability have undermined the effectiveness of counter-disinformation initiatives, even as foreign actors continue to exploit information channels to shape perceptions and influence audiences.


Source: The New York Times. Trump’s Foreign Disinformation Claims About Iran. [online] Published 1 April 2026. Available at: https://www.nytimes.com/2026/04/01/business/trump-foreign-disinformation-iran.html




[CRC Glossary]


The modern information environment is projected to continue escalating in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.









