
Cyber-based hostile influence campaigns 23rd - 29th March 2026

  • Writer: CRC
[Cover Image Text: Weekly Media Update: Information Operations]


[Introduction]


Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This report summarizes what we regard as the week's main events.



[Contents]



[State Actors]


Russia 


China


Iran


[General Reports]


[Appendix - Frameworks to Counter Disinformation]




[Report Highlights]




[State Actors]


Russia

Russia's Promotion of Separatism Abroad

A report by EUvsDisinfo argued that the Kremlin promotes separatist movements abroad while harshly suppressing similar ideas at home. Russia has repeatedly supported or amplified secessionist narratives in Western countries such as the United States, Canada, and Spain, often through coordinated disinformation campaigns and online networks. Examples include backing "Texit" rhetoric, reviving Alberta independence claims, and spreading misinformation around Catalonia and Brexit.


In places like Moldova and Georgia, Moscow maintains influence through breakaway territories such as Transnistria and South Ossetia, using them as tools of pressure. In Estonia, this has taken the form of a disinformation campaign targeting the border town of Narva, where a majority of the population is ethnically Russian. However, inside Russia, any public support for separatism is criminalized, with activists facing imprisonment and organizations labeled as extremist or terrorist. Russia’s talk of sovereignty and territorial integrity is pragmatic rather than principled. It is used to justify repression at home and to destabilize countries abroad.


Source: EUvsDisinfo. Secession for you, prison in Russia: Moscow’s selective love for self-determination. [online] Published 24 March 2026. Available at: https://euvsdisinfo.eu/secession-for-you-prison-in-russia-moscows-selective-love-for-self-determination/


Russian Hybrid Tactics in Europe 2022 to 2025

A report by The Soufan Center analyzed 255 Russian hybrid operations across six European countries between 2022 and 2025, highlighting a strategy built on cost asymmetry. Russia conducts low-cost, deniable, and scalable actions, ranging from disinformation and espionage to sabotage and infrastructure probing, while forcing targeted countries to bear the financial, political, and security burden of responding.


Russia adapts its tactics to local contexts. In Western Europe, it has focused on intelligence gathering, infrastructure surveillance, and symbolic acts designed to inflame tensions, while in countries like Moldova and Georgia, it has combined long-term influence operations with political interference to shape strategic outcomes. Estonia, despite facing continuous pressure, has shown resilience due to strong public awareness and institutional preparedness. A key shift over time has been the increased use of intermediaries, often low-level recruits, to maintain deniability, alongside a move toward more direct and physical actions in 2025.


The report concluded that the impact of these operations depends less on Russia’s capabilities and more on the strength of targeted societies. Countries with resilient institutions, transparent communication, and strong civil society are better able to withstand interference. To counter this threat, Europe must both raise the cost of Russian actions through coordinated responses and reduce vulnerabilities by strengthening governance and social cohesion, while carefully balancing public communication to avoid amplifying the intended disruptive effects.


Source: The Soufan Center, C. Broekaert & N. Lyubarsky & C. Clarke, & J. Shelzi. PRIMING, DESTABILIZING, COERCING: Russian Hybrid Tactics in Europe 2022–2025. [online] Published 2026. Available at: https://thesoufancenter.org/wp-content/uploads/2026/03/TSC-Report-Priming-Destabilizing-Coercing-Russian-Hybrid-Tactics-in-Europe-2022-2025.pdf


Disinformation Campaign Targeting Kaja Kallas

According to a report by DisinfoWatch, Kremlin-aligned actors are spreading a coordinated disinformation campaign that distorts history and attacks Estonian Prime Minister Kaja Kallas. Her warning that Russia’s territorial demands follow a familiar pattern was deliberately twisted into claims that regions like Donbas are historically Russian and that Estonia’s sovereignty is questionable. These narratives, amplified by state-linked media and proxy accounts, rely on insults, selective history, and imperial mythology rather than credible evidence or legal standing.


The claims are demonstrably false. Estonia’s independence was legally restored in 1991 based on continuity from its pre-Soviet statehood, and the Soviet annexation was widely recognized as illegal. Similarly, Russia’s claims over Donetsk and Luhansk have been rejected by international bodies, including the UN and the European Council, which reaffirm Ukraine’s territorial integrity. Outlets such as RT have also been identified and sanctioned for their role in spreading disinformation and conducting influence operations.


Source: DisinfoWatch. Kremlin-linked X cluster targets Estonia’s sovereignty and Kaja Kallas. [online] Published 27 March 2026. Available at: https://disinfowatch.org/disinfo/kremlin-linked-x-cluster-targets-estonias-sovereignty-and-kaja-kallas/


China

Disinformation Denying Uyghur Forced Labor in China

As revealed by DisinfoWatch, a post from the Chinese Embassy in Canada exemplifies a coordinated disinformation effort aimed at denying well-documented human rights abuses in Xinjiang. Triggered by Canadian MP Michael Ma’s concerns about forced labor in Chinese EV production, the message dismisses such allegations as "blatant lies" spread by "anti-China" actors. This framing follows a familiar authoritarian pattern: discredit critics, label evidence as fabricated, and shift attention toward protecting trade relations.


However, substantial evidence contradicts these claims. The Canadian government has acknowledged credible reports of forced labor and imposed import restrictions tied to Xinjiang. International bodies, including the UN, have also identified persistent patterns of abuse that may amount to crimes against humanity. Independent investigations have further linked Xinjiang-produced materials to global automotive supply chains, reinforcing concerns about forced labor in EV production.

This narrative serves the strategic purpose of deflecting scrutiny, protecting China’s economic interests, and reframing human rights concerns as politically motivated interference. It is part of a long-standing denial campaign that has consistently portrayed allegations of abuses in Xinjiang as fabricated.


Source: DisinfoWatch. Chinese Embassy in Canada Exploits Michael Ma comments to deny forced-labour. [online] Published 28 March 2026. Available at: https://disinfowatch.org/disinfo/chinese-embassy-in-canada-exploits-michael-ma-comments-to-deny-forced-labour/


Iran

Pro-Iranian Nasir Security Targets the Energy Sector in the Middle East

A report by Resecurity highlighted the activities of Nasir Security, a relatively new and low-profile cyber group believed to be linked to Iran or its proxies. The group primarily targets the energy sector in the Middle East, focusing on supply chain vendors, including contractors in engineering, construction, and safety. Rather than targeting major energy companies directly, the actors exploit weaker third-party systems using techniques such as spear phishing, business email compromise, and cloud data exfiltration. The stolen data is often authentic but originates from vendors, obscuring the true source of the breach and creating confusion about the attack's scale.


Nasir Security combines cyber operations with disinformation tactics, exaggerating the volume and impact of its alleged breaches. The group has claimed large-scale data theft from companies in the UAE, Oman, Iraq, and Saudi Arabia, but investigations suggested these claims are overstated and based on limited third-party compromises. Their activity appeared more ideological than financially motivated, aiming to project strength, fuel geopolitical narratives, and create uncertainty amid the ongoing conflict involving Iran.


According to the report, numerous independent assessments confirmed that none of the Iran-linked, pro-Iranian, or state-sponsored groups has had a meaningful impact on the Iran conflict. At the same time, Resecurity highlighted the supply chain cybersecurity risks that Iran could exploit and recommended that enterprises stay vigilant and accelerate third-party cybersecurity monitoring and vendor risk assessments.


Source: Resecurity. Pro-Iranian Nasir Security is Targeting The Energy Sector in the Middle East. [online] Published 23 March 2026. Available at: https://www.resecurity.com/blog/article/pro-iranian-nasir-security-is-targeting-the-energy-sector-in-the-middle-east


Disinformation Trends in the 2026 Iran War

According to NewsGuard’s Reality Check, within the first 25 days of the Iran war, at least 53 false claims circulated online, attracting hundreds of millions of views, an average of roughly two new false claims per day. The disinformation shows three key patterns: a strong bias toward pro-Iran messaging, a shift from reused or misrepresented images to fully AI-generated visuals, and a growing tactic of dismissing legitimate journalism as fake or AI-generated. This last trend is particularly concerning, as it attempts to erode trust in credible media by falsely labeling accurate reporting as disinformation.


A different NewsGuard report added that the vast majority (about 92%) of the claims promoted pro-Iran narratives, often exaggerating military successes or inventing major events, such as the destruction of Israeli strategic sites or the deaths of senior leaders like Benjamin Netanyahu. These claims were entirely baseless but aimed to shape public perception and morale. Although some false claims were amplified by Iranian-linked outlets, most originated from decentralized pro-Iran social media networks worldwide. The goal of this disinformation is not to inform but to influence emotions and shape perceptions before facts can be verified.


For example, as reported in another NewsGuard Reality Check, a widely shared video claiming to show Iranian missiles striking a U.S. Navy ship in the Strait of Hormuz on 25 March 2026 has been debunked as false. The footage, circulated by pro-Iran social media accounts and viewed millions of times, actually originates from a video game, not a real military event. Analysis of the video revealed several clear indicators of its artificial origin, including a visible mouse cursor, unrealistic visual effects, and the depiction of a ship class no longer in service.


Sources:


Disinformation and Hybrid Coercion in Iran’s War Strategy

A Center for Strategic and International Studies (CSIS) article outlined how Iran is conducting a multidomain "punishment campaign" that combines military, economic, cyber, and informational tactics to pressure the United States and Israel indirectly. Disinformation plays a central role in this approach. Alongside missile and cyber operations, Iran deploys computational propaganda and targeted influence campaigns to magnify the psychological impact of disruptions. By targeting interconnected systems, such as energy, finance, and infrastructure, Iran amplifies both the material and informational effects of its actions.


Strategically, this campaign aims to weaken coalition unity and pressure governments through economic and psychological strain rather than battlefield victory. Countering this strategy requires not only military and defensive measures, but also active efforts to detect, expose, and disrupt false narratives that support Iran’s broader coercive campaign.


Source: CSIS, B. Jensen. Iran’s Next Move: How to Counter Tehran’s Multidomain Punishment Campaign. [online] Published 23 March 2026. Available at: https://www.csis.org/analysis/irans-next-move-how-counter-tehrans-multidomain-punishment-campaign


[General Reports]


AI-generated YouTube channels Spread Fake News Reports

The Digital Forensic Research Lab (DFRLab) reports that a network of more than two dozen YouTube channels uses AI-generated content to mimic legitimate news reporting while inserting fabricated geopolitical events. These channels, publishing in English and Russian, combine synthetic anchors, automated narration, AI-generated visuals, and coordinated posting patterns to produce large volumes of content at low cost. Collectively, they have amassed nearly 2 billion views and nearly 2 million subscribers. The operation relies on sensationalist titles, uniform branding, and repeated content across channels, with clear signs of coordination such as synchronized uploads and thematic shifts.


A key tactic is blending factual reporting with false claims in the same style, making it difficult for viewers to distinguish real from fabricated events. For example, some Ukraine-related videos falsely reported attacks on logistical infrastructure in Mykolaiv and alleged strikes on military infrastructure in the Polish city of Rzeszów. Other videos suggested imminent diplomatic ruptures between Russia and Azerbaijan and dramatized the US capture of Venezuelan President Nicolás Maduro using AI-generated footage. The network also shows signs of centralized production, including duplicated videos, shared assets, and minimal human oversight, with some content still containing visible AI-generation artifacts.


While it is unclear whether the channels are directly monetized, their content is eligible for advertising and benefits from algorithmic amplification. The report raises concerns about violations of YouTube’s misinformation policies and broader regulatory implications, particularly under the EU Digital Services Act, as undisclosed synthetic media at scale poses risks to information integrity and public discourse.


Source: Digital Forensic Research Lab (DFRLab), I. Adam & E. Buziashvili. AI-generated YouTube channels co-opt war coverage to farm nearly two billion views. [online] Published 23 March 2026. Available at: https://dfrlab.org/2026/03/23/ai-generated-youtube-channels-co-opt-war-coverage-to-farm-nearly-two-billion-views/


Pierre Poilievre’s misinformation on Joe Rogan’s podcast

A report by The Conversation examined Pierre Poilievre’s appearance on the controversial Joe Rogan Experience podcast and argued that the Canadian opposition leader spread or failed to challenge several misleading claims. Rogan’s podcast is one of the world’s longest-running, averaging 11 million listeners per episode. The interview aimed to reach a large international audience and exposed millions of listeners to disputed or inaccurate statements.


Poilievre, citing no evidence, told Rogan that Canada admits one million immigrants per year, a figure significantly higher than the one published on the Canadian government's website. Inflating immigration numbers is a known rhetorical tactic in far-right online spaces, where it functions to fuel anxieties about demographic change. He also downplayed the environmental and health effects of Alberta's oil sands. Although Canada is the world's largest exporter of canola oil, Poilievre failed to push back against Rogan's health misinformation about seed oils. He also repeated unsupported claims about Canada's safer supply drug program and about the impact of the Liberal government's actions on inflation during and after the COVID-19 pandemic. The report concluded that Poilievre's spread of false claims is dangerous because it fosters divisiveness and distrust among Canadians, particularly on immigration and public health.


Source: The Conversation, J. Hodson & B. I. Wiens & N. Ruest & S. MacDonald. Fact check: Pierre Poilievre’s misinformation on Joe Rogan’s podcast disrespects Canadians. [online] Published 24 March 2026. Available at: https://theconversation.com/fact-check-pierre-poilievres-misinformation-on-joe-rogans-podcast-disrespects-canadians-278864


Disinformation and Climate Information Integrity in Australia

An Australian Senate inquiry highlighted growing concern about the widespread impact of misinformation and disinformation on climate change and energy debates. Surveys show that a large majority of Australians encounter false or misleading information online, particularly on climate-related issues. Examples included claims that wind turbines harm whales or that community batteries pose major safety risks, which have influenced local decisions and fueled public anxiety.


Disinformation is often strategically produced and amplified by powerful actors, including corporations, governments, and political groups. These campaigns frequently aim to delay climate action by spreading doubt about scientific evidence and promoting misleading narratives. Tactics include "astroturfing" (fake grassroots campaigns), the use of bots and trolls, and increasingly, AI-generated content. The inquiry also highlighted how misinformation affects social cohesion and democratic processes. Climate-related falsehoods have contributed to division within communities, harassment of advocates, and confusion about scientific realities.


The report concluded that disinformation is not just about false facts but about manipulating public discourse. It exploits existing beliefs, polarizes opinions, and weakens trust in institutions and science. Addressing this challenge requires stronger regulation of digital platforms, greater transparency, and coordinated efforts to expose and counter deliberate falsehoods while preserving open democratic debate.


Source: The Senate Select Committee on Information Integrity on Climate Change and Energy. The Integrity Gap: Restoring Trust in the Climate and Energy Debate. [online] Published March 2026. Available at: https://apo.org.au/sites/default/files/resource-files/2026-03/apo-nid333872.pdf


The Return of Claims that Trump's Assassination Attempt Was Staged

According to an article by NewsGuard's Reality Check, a Washington Post report that Russian intelligence once considered staging an assassination attempt on Hungary's Prime Minister Viktor Orbán, to boost his chances in the 12 April 2026 parliamentary election, has reignited claims that Donald Trump staged his own shooting in July 2024.


Following the publication of the report, anti-Trump social media users began claiming that the assassination attempt on Donald Trump was also staged to generate political sympathy. These claims quickly gained traction online, drawing significant engagement. In fact, there is no credible evidence to support assertions that the assassination attempt on Trump, in which a bullet grazed his ear, was staged.


The Washington Post report was based on intelligence documents. The plan, described as a potential "gamechanger", aimed to shift the campaign away from economic concerns toward emotional themes such as security and stability. Although the proposal was never carried out and has been dismissed by the Kremlin as disinformation, it highlighted the strategic importance Moscow places on maintaining Orbán, one of its closest allies within the EU and NATO, in power. Beyond this proposal, the report pointed to broader Russian efforts to influence Hungary’s political landscape, including disinformation campaigns, support for pro-government narratives, and attempts to discredit opposition figures.


Sources: 



AI-Generated Audio of Clinton Criticizing the Iran War

As reported by NewsGuard's Reality Check, a network of YouTube channels has been using AI-generated audio to impersonate former U.S. President Bill Clinton, falsely portraying him as criticizing Donald Trump's handling of the war in Iran. A total of 144 such videos have accumulated more than 10 million views, often featuring realistic voice imitations paired with static images. While some videos include small disclosures, many viewers appear to believe the content is genuine.


NewsGuard also found Clinton deepfakes discussing topics such as state elections in Florida and Texas and U.S.-Canada relations. Similar AI-generated audio commentary on the Iran war and other political topics has also targeted other former presidents, including Barack Obama and George W. Bush. The report suggested that financial incentives, rather than purely political motives, are driving this activity. The videos generate advertising revenue through YouTube’s monetization system, benefiting from high engagement and low production costs. YouTube has since removed several of these channels for violating its policies.


Source: NewsGuard, S. Rubinson. AI YouTube Channels Put Words in Bill Clinton’s Mouth About the Iran War, Drawing Millions of Views. [online] Published 25 March 2026. Available at: https://www.newsguardrealitycheck.com/p/bill-clinton-on-youtube-bashes-trump


Orbán Doubled Down on Anti-Ukrainian Campaign to Secure Reelection

As reported by The Jamestown Foundation, Hungary's ruling Fidesz party has intensified its anti-Ukrainian rhetoric ahead of the country's most competitive election in 16 years, using recent tensions with Kyiv to strengthen its campaign. Disputes over the Druzhba oil pipeline, controversial statements by Ukrainian officials, and unverified allegations of threats and financial interference have been used by the government and pro-government media to portray Ukraine as a hostile actor. The strategy appears aimed at mobilizing voters by exploiting existing skepticism toward Ukraine and fears of involvement in the war.


The main opposition Tisza party's rise in popularity has consolidated despite multiple failed efforts by Fidesz to counter the new challenger through various tactics. These include a since-debunked AI-generated document promoted as Tisza's "secret austerity program", personal attacks against Tisza leader Péter Magyar over his private life, and labeling him a Ukrainian agent.


Source: The Jamestown Foundation, P. Fazekas. Orbán Doubles Down On Anti-Ukrainian Campaign To Secure Reelection. [online] Published 25 March 2026. Available at: https://jamestown.org/orban-doubles-down-on-anti-ukrainian-campaign-to-secure-reelection/


Disinformation After the Bondi Attack

As reported by ABC News, following the Bondi attack in Sydney, a real image of survivor Arsen Ostrovsky was rapidly weaponized in a wave of disinformation. His selfie, sent to his wife, went viral but was quickly reframed by online conspiracy communities as "evidence" that the attack was staged. False claims emerged suggesting his injuries were fake and that the attack was orchestrated by Israeli actors. These narratives relied on familiar tactics such as questioning victim behavior, introducing baseless links to intelligence agencies, and labeling victims as "crisis actors".


The disinformation spread quickly across platforms like Telegram, X, and Reddit, evolving from speculation to more sophisticated manipulation. Within hours, AI-generated images were created to "prove" the conspiracy, showing Ostrovsky with fake blood being applied. These fabricated visuals were widely shared internationally, even among users who recognized them as false. At the same time, authentic reporting and real evidence were dismissed as fake. Importantly, this wave of disinformation appears to have been driven less by coordinated state actors and more by decentralized networks and "conspiracy entrepreneurs" seeking attention and profit.


Source: ABC News, J. Robertson & M. Connaughton. This man went viral after surviving Bondi. Then the internet took a dark turn. [online] Published 27 March 2026. Available at: https://www.abc.net.au/news/2026-03-28/how-bondi-beach-survivor-became-face-of-conspiracy-theory/106499580


Orban Spokesperson Misrepresented 2021 Lawsuit to Smear Journalist Catherine Belton

Hungary’s international spokesman Zoltán Kovács is misrepresenting a 2021 lawsuit involving sanctioned Russian oligarch Roman Abramovich to discredit journalist Catherine Belton, as highlighted in a report by DisinfoWatch. The case, brought against Belton and her publisher over her landmark book “Putin’s People,” was widely seen as a politically motivated SLAPP suit aimed at intimidating her and undermining her reporting on Vladimir Putin’s network. Kovács is now reviving it to challenge her recent Washington Post reporting on Hungary’s upcoming election and alleged Russian links.


In reality, the 2021 case did not disprove Belton’s work. The lawsuit was settled without damages, only minor amendments were made to the book, and its central findings remained intact. Abramovich’s close ties to Putin were later reaffirmed in EU sanctions records. Meanwhile, Belton’s more recent reporting on Hungary and Russia has prompted broader international scrutiny, including follow-up coverage by AP, which reported that the European Commission sought clarification from Hungary after the Washington Post allegations, and by Reuters, which reported longstanding regional suspicions.


Source: DisinfoWatch. Orban spokesperson Recycles Oligarch SLAPP to Smear Journalist Catherine Belton. [online] Published 29 March 2026. Available at: https://disinfowatch.org/disinfo/orban-spokesperson-recycles-oligarch-slapp-to-smear-journalist-catherine-belton/


[Appendix - Frameworks to Counter Disinformation]


Disrupting the foundations of FIMI

An analysis by EUvsDisinfo argues that foreign information manipulation and interference (FIMI) should be understood as a structured “supply chain” of deception, where influence operations rely on coordinated resources, infrastructure, and intermediaries. These campaigns require funding, personnel, and technology, with actors often outsourcing activities to contractors and commercial providers to ensure plausible deniability and complicate attribution.


The report highlights that FIMI ecosystems are highly interconnected, involving not only state and non-state actors but also overlaps with organized crime networks that provide technical infrastructure, global reach, and operational cover—such as hosting fake news websites or managing bot networks.


To counter these threats, the analysis emphasizes the need to disrupt the underlying structures that enable FIMI rather than focusing solely on individual pieces of content. This includes targeting financial flows, dismantling enabling infrastructure, and increasing the operational costs for perpetrators, reflecting a broader shift toward systemic and preventive approaches in countering information manipulation.


Source: EUvsDisinfo. Disrupting the foundations of FIMI. [online] Published 27 March 2026. Available at: https://euvsdisinfo.eu/disrupting-the-foundations-of-fimi/


Trump Administration Accused of Turning Voice of America into a Partisan Propaganda Outlet

As reported by The Hill, a coalition of current and former Voice of America (VOA) journalists, alongside press freedom organizations PEN America and Reporters Without Borders, filed a federal lawsuit in the U.S. District Court for the District of Columbia against the Trump administration, the U.S. Agency for Global Media (USAGM), its acting CEO Michael Rigas, and former USAGM director Kari Lake. The plaintiffs allege that USAGM leadership sought to transform VOA's newsroom into a partisan instrument of the executive branch, compelling journalists to reproduce White House talking points nearly verbatim and to disseminate imagery of President Trump in a manner characteristic of authoritarian personality cults. The complaint frames censorship and propaganda as complementary tools of the same strategic objective, arguing that these directives violate both the federal statutes governing VOA's editorial independence and constitutional protections, and that they fundamentally undermine U.S. credibility among the foreign audiences VOA is mandated to serve. Among the concrete tactics alleged are the suppression of politically inconvenient coverage, the replacement of independent editorial judgment with state-directed messaging, and the cancellation of wire service agreements with the Associated Press and Reuters in favor of a proposed arrangement with the right-wing One America News Network.


The most operationally significant allegations concern VOA's Persian-language service, which broadcasts into Iran during the ongoing U.S.-Israeli military campaign. According to the lawsuit, transmissions to Iranian audiences have systematically omitted casualty figures from U.S. airstrikes, excluded perspectives from international leaders outside the administration, and minimized coverage of a strike on an elementary school, with a Lake-appointed official requiring pre-approval for all guest appearances across the Persian, Kurdish, and Afghan broadcast services. The plaintiffs further allege that Lake and Rigas suppressed interviews, video footage, and reporting on anti-government protests within Iran, and banned coverage critical of certain factions opposed to the Iranian regime from the Persian Service entirely. The lawsuit frames these interventions as particularly damaging given VOA's foundational mandate: to serve as an independent information source for audiences living under authoritarian media environments. By subordinating editorial independence to political messaging, the plaintiffs argue, VOA risks becoming indistinguishable from the state-controlled outlets its target audiences already contend with domestically, effectively neutralizing one of the United States' principal strategic communications assets.


Sources: 


[CRC Glossary]


The modern information environment is projected to keep escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.









