Weekly Report: Cyber-Based Influence Campaigns, 23rd - 29th of June 2025
- CRC

[Report Highlights]
CheckFirst's third report, dated June 26, 2025, reveals how the Russian disinformation campaign Operation Overload specifically targets six countries: France, Germany, Moldova, Poland, Ukraine, and the United States. By flooding media outlets and fact-checkers with targeted false information, the campaign aims to overwhelm and paralyze their efforts.
The Robert Lansing Institute reports that Russia orchestrated a failed coup attempt in Serbia using disinformation, paramilitary networks, and religious influencers to destabilize the country and obstruct its pro-Western trajectory.
The UK Defence Journal reports that dozens of pro-Scottish independence accounts on X, believed to be part of an Iranian disinformation campaign to weaken the UK, went dark simultaneously after Israeli airstrikes disrupted Iranian cyber infrastructure, exposing a direct link between physical attacks and online influence operations.
A recent study by the Atlantic Council's Digital Forensic Research Lab (DFRLab) revealed that Elon Musk’s AI chatbot, Grok, played a troubling role in spreading disinformation during the early days of the Israel-Iran conflict.
A New York Times investigation highlights how artificial intelligence has evolved beyond novelty into a weaponized disinformation tool, deployed by foreign regimes and domestic actors to sow distrust in democracies worldwide.
NewsGuard Reality Check reports that a false claim on X about China sending military aid to Iran, stemming from misinterpreted flight data, was amplified by a pro-Iran commentator, some mainstream media, and notably, multiple AI chatbots, revealing a significant flaw in how misinformation spreads.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
[China]
[Iran]
AI-RELATED ARTICLES
GENERAL REPORTS
STATE ACTORS
[Russia]
From Headlines to Lies: Global Events as Vehicles for Disinformation
The article by EU vs. Disinfo shows how the Kremlin uses global attention focused on high-level geopolitical events, ranging from the Israel-Iran conflict to NATO’s historic summit and EU support for Ukraine, to ramp up its disinformation efforts. Faced with a rapidly evolving international landscape, Russian propaganda reverted to familiar falsehoods: portraying NATO as an aggressor, Europe as Russophobic, and Russia as an eternal victim. These narratives were strategically promoted to capitalize on the visibility of these events and validate Russia's confrontational worldview.
The Kremlin recycled long-standing myths of Western hostility, falsely claiming that NATO's defense initiatives threaten Russia and that the alliance is inherently expansionist. Pro-Kremlin voices attempted to undermine Western unity by exaggerating internal divisions within NATO and framing increased defense spending as fiscally irresponsible or indicative of impending collapse. Simultaneously, Europe was vilified for supporting Ukraine and tightening sanctions on Russia, reinforcing the illusion of a hostile West bent on weakening Moscow.
Source:
EUvsDisinfo, 2025. The Kremlin’s self-fulfilling curse. [online] Available at: https://euvsdisinfo.eu/the-kremlins-self-fulfilling-curse/
History, Rewritten – Generation, Redefined
EU vs. Disinfo highlights how the Kremlin’s disinformation strategy has infiltrated the Russian education system to indoctrinate youth with a distorted view of national history. New textbooks, co-authored by regime-loyal writers, present a highly manipulated narrative of Russia’s past. The country is depicted as an “eternal victim” and a “reluctant warrior.” This campaign of historical revisionism reframes acts of aggression as noble acts of defense, from Soviet invasions in the 20th century to the annexation of Crimea and the war against Ukraine.
Complex historical contexts are simplified, omitted, or reinterpreted. Military expansion is portrayed as liberation, while war crimes, dissent, and public debate are ignored.
This rewriting of history is not merely about fostering patriotism; it functions as a long-term instrument of state-sponsored disinformation. The textbooks glorify Russian militarism, downplay Western contributions to global conflicts, and criminalize criticism of the Red Army. At the same time, they reinforce the narrative that Russia has never initiated a war, but merely responded to threats. The intended outcome: a generation shaped by a state-driven historical narrative, prepared to interpret future military actions as necessary and legitimate responses to external threats.
Source:
EUvsDisinfo, 2025. Russia’s Military History: Never in the Wrong. [online] Available at: https://euvsdisinfo.eu/russias-military-history-never-in-the-wrong/
Operation Overload: Attacking Democracy’s Immune System
In its third report, researchers from CheckFirst examine the development of Operation Overload, a Russian information campaign targeting six countries: France, Germany, Moldova, Poland, Ukraine, and the United States. The campaign seeks to overwhelm media outlets and fact-checkers by flooding them with fabricated content, overloading their verification capacity. First documented in June 2024, it has since expanded in scope and platform presence. Narrative themes include anti-Ukrainian rhetoric, election interference, gender-based disinformation, smear campaigns, and calls to violence. A key tactic is content amalgamation: publishing the same message across multiple platforms to simulate credibility.
Between January 2024 and May 2025, 997 deceptive emails were sent to more than 245 media and research organizations; 704 of those emails arrived after September 2024. Spikes occurred around major political or global events, such as the Paris Olympics or national elections.
The actors also operate on Telegram, X (formerly Twitter), Bluesky, and, since May 2025, on TikTok. AI-generated content is increasingly used to impersonate well-known public figures, mainly journalists and academics. Since September 2024, around 600 content items have been identified, a 1.5-fold increase over the previous year. Logos from 180 institutions and the identities of more than 180 individuals were misused.
CheckFirst warns of declining platform moderation and calls for stronger moderation and legal action, especially enforcing the EU Digital Services Act.
Source:
CheckFirst, Atanasova, A., Poldi, F. & Kuster, G., 2025. Operation Overload: More Platforms, New Techniques, Powered by AI. [online] Available at:
Selective Truths: RT Takes Aim at the New MI6 Director
DisinfoWatch has reviewed the facts and concludes that the allegations against Blaise Metreweli are part of an influence campaign by the Russian state broadcaster RT. RT claimed that Metreweli’s grandfather, Constantine Dobrowolski, was a Nazi collaborator during World War II. This information is historically documented, but it is presented without essential context.
Metreweli’s father, born Dobrowolski in 1943 in occupied Ukraine, was raised in England by his stepfather and took the surname Metreweli. He had no affiliation with or knowledge of his biological father’s Nazi past. Ms Metreweli neither knew nor met her paternal grandfather. The tactic recalls past efforts, including those targeting Chrystia Freeland, where ancestry is used to delegitimize public figures.
Source:
DisinfoWatch, 2025. RT Recycles Nazi Allegations to Undermine New MI6 Chief Through False Guilt by Association. [online] Available at: https://disinfowatch.org/disinfo/rt-recycles-nazi-allegations-to-undermine-new-mi6-chief-through-false-guilt-by-association/
“Nobody Leaves the Family”: The Coup Attempt in Serbia
The Robert Lansing Institute outlines how Russian influence campaigns helped pave the way for the failed coup attempt in Serbia. According to the report, a key trigger was Serbia's alleged military support for Ukraine, which Moscow perceived as a symbolic challenge to its traditional influence in the Balkans. The report points to long-standing ties between Russian intelligence and sectors of Serbian society, including the military (with officers trained in Russia), security services (notably infiltration risks within the BIA), the Orthodox Church, and far-right political movements.
The coup attempt occurred amid economic instability, political fragmentation, and growing public dissatisfaction, conditions that made Serbia especially vulnerable to external manipulation. Russian influence operations targeted the Serbian public through state-backed media such as RT Balkan and Sputnik Serbia, along with nationalist networks, veteran groups, and clerics aligned with Moscow. EU integration and normalization with Kosovo were framed as betrayal and “spiritual surrender” to the West.
Drawing parallels with Armenia, the report concludes that Russia reacts to perceived geopolitical drift by deploying disinformation, ideological pressure, and covert tactics to derail reform, create chaos, and reassert control.
Source:
Robert Lansing Institute, 2025. The Coup Attempt in Serbia — Kremlin Influence, Balkan Instability, and Strategic Fallout. [online] Published 26 June 2025. Available at: https://lansinginstitute.org/2025/06/26/the-coup-attempt-in-serbia-kremlin-influence-balkan-instability-and-strategic-fallout/
[China]
China's Use of Quanzhen Taoism to Spread Disinformation
A recent analysis by ReligioScope reveals how the Chinese Communist Party (CCP) strategically leverages religious institutions, specifically Quanzhen Taoism, as instruments of political influence beyond mainland China. At the core of this effort are the United Front Work Department (UFWD) and the Chinese Taoist Association (CTA), which work to align religious practice with Party objectives.
In Taiwan, Quanzhen practitioners have reportedly come under growing pressure to participate in CCP-organized events, including ideologically framed “religious exchanges,” visits to Party-affiliated temples, and subtle expectations to echo Beijing’s positions publicly. These tactics form part of a broader campaign to project influence through cultural and spiritual channels, extending the CCP’s reach into politically and religiously autonomous societies.
Source:
ReligioScope, Swenson Daly, M., Infiltrating the Tao. [online] Available at: https://www.religioscope.org/papers/03.pdf
[Iran]
Manufacturing Victory: Iran’s Disinformation Efforts
In an updated analysis, NewsGuard outlines how Iranian state media and affiliated channels launched a coordinated disinformation campaign following the Israeli strikes on nuclear and military facilities in Tehran on June 13, 2025. The aim was to downplay Israel’s military success while portraying Iran’s retaliation as effective.
To date, 26 specific false claims have been identified, disseminated across 78 websites. These included AI-generated images and fabricated reports of captured Israeli soldiers (see Weekly Review W25 for more information). The primary sources were channels linked to the Islamic Republic of Iran Broadcasting (IRIB) and military-affiliated Telegram accounts; platforms such as YouTube, TikTok, and X (formerly Twitter) served as the main distribution channels.
The strategy reflects a familiar pattern in Iran’s information operations: official outlets, anonymous websites, and digital platforms push pro-Iranian narratives, project regime stability, bolster Tehran’s strategic interests, and mislead the international public.
Source:
NewsGuard, Sadeghi, M., Howard, S. & Lin, C., 2025. Iranian State-Affiliated False Claims Tracker: 26 Myths about the War and Counting. [online] Available at: https://www.newsguardtech.com/special-reports/israel-iran-conflict/
The Fiction of Retreat: Iran’s Disinformation on U.S. Withdrawal
NewsGuard has documented another targeted instance of Iranian disinformation amid the recent escalation between the United States and Iran. At the center is a false claim that the U.S. Joint Chiefs of Staff had ordered a complete withdrawal of American troops from the Middle East. This baseless narrative emerged shortly after the U.S. airstrikes on Iranian nuclear facilities on June 21, 2025, and quickly spread on social media, particularly on X (formerly Twitter), where it garnered hundreds of thousands of views.
The claim was amplified primarily by pro-Iranian and pro-Russian accounts, many of which have previously been involved in similar disinformation efforts. Notably, the timing coincided with Iran’s retaliatory missile strike on the U.S. military base Al Udeid in Qatar on June 23.
The likely aim of the disinformation was to project an image of American retreat or weakness, framing Iran’s response as bold and effective. This serves both a domestic propaganda function and an international strategic message. Official U.S. military sources, however, have denied the claim: no such statements appear on the websites or social media accounts of U.S. Central Command or the Joint Chiefs of Staff.
Source:
NewsGuard Reality Check, Komar, S., 2025. No, 40,000 U.S. Troops Were Not Evacuated from the Middle East. [online] Published 24 June 2025. Available at: https://www.newsguardrealitycheck.com/p/no-40000-us-troops-were-not-evacuated
Hybrid by Design: Iranian Hacktivists Target Saudi Games
According to Infosecurity Magazine, the pro-Iranian hacktivist group Cyber Fattah orchestrated a significant data breach targeting the Saudi Games 2024 registration platform. The incident, part of a broader Iranian information operation, exposed sensitive personal and financial data of athletes and officials. Cybernews additionally reported on a broader Saudi-linked data leak and a DDoS attack on the U.S.-based social media platform Truth Social. Notably, the timing, shortly after U.S. airstrikes on Iranian nuclear facilities, suggests a coordinated cyber response.
Data exfiltration, service disruption, and narrative manipulation illustrate how hacktivist groups deploy multi-pronged cyber tactics to spread uncertainty, influence public perception, and destabilize digital communication infrastructures during geopolitical conflicts. This targeted action represents a complex example of hybrid warfare: it combines technical attacks with strategic disinformation to undermine trust, establish a narrative of insecurity, and exploit digital platforms as channels for geopolitical messaging against regional rivals.
Sources:
Infosecurity Magazine, Mascellino, A., 2025. Cyber Fattah Leaks Data from Saudi Games in Alleged Iranian Operation. [online] Available at: https://www.infosecurity-magazine.com/news/cyber-fattah-leaks-data-saudi-games/
Cybernews, Lapienytė, J., 2025. US Strike on Iran Sends Online Ripples: Major Saudi Leak, DDoS on Truth Social. [online] Available at: https://cybernews.com/cyber-war/major-saudi-leak-ddos-on-truth-social/
The Usual Suspects Are Missing: Tehran’s Fake Scots Go Silent
The UK Defence Journal reports the sudden disappearance of dozens of pro-Scottish independence accounts on X, immediately following the Israeli airstrikes on Iranian military and cyber infrastructure on June 12, 2025. According to the report, the network behind these accounts was operated by Iran’s Islamic Revolutionary Guard Corps (IRGC), which managed over 80 accounts posing as British users. Between 2022 and 2024, these profiles posted an estimated 250,000 tweets promoting pro-independence and anti-UK narratives.
The simultaneous takedown of the accounts, coinciding with widespread power outages and internet blackouts in Iran, strongly suggests centralized control from inside the country, likely disrupted by the Israeli strikes.
This incident illustrates how state-sponsored disinformation campaigns exploit domestic political divides in Western democracies to sow discord and erode national cohesion. It also highlights the vulnerability of social media platforms to coordinated influence operations and shows how real-world military actions can dismantle digital propaganda networks in an instant.
Source:
UK Defence Journal, Allison, G., 2025. Dozens of pro-Indy accounts go dark after Israeli strikes. [online] Available at: https://ukdefencejournal.org.uk/dozens-of-pro-indy-accounts-go-dark-after-israeli-strikes/
AI-RELATED ARTICLES
Grok, We Have a Problem: Disinfo in the Israel–Iran War
A recent analysis by the Atlantic Council’s Digital Forensic Research Lab (DFRLab) reveals that Elon Musk’s AI chatbot Grok played a problematic role in spreading disinformation during the early days of the Israel-Iran conflict. Designed to help users verify facts, Grok instead produced contradictory and inaccurate responses, particularly when asked about AI-generated content and widely circulated fake visuals.
In one example, Grok gave conflicting answers within a minute regarding an airport allegedly struck by Iran. It alternately claimed the location was in Beirut, Gaza, or Tehran, none of which was accurate. In several instances, the chatbot misidentified events or confirmed fabricated claims as fact.
According to DFRLab, this failure highlights a problematic trend: as platforms scale back human moderation and fact-checking, users increasingly rely on AI tools like Grok or Perplexity, only to receive misinformation dressed as authoritative responses. Both bots, for instance, incorrectly affirmed fake stories such as China supplying weapons to Iran.
Grok has shown similar vulnerabilities in the past: it previously amplified the debunked far-right conspiracy theory of “white genocide” in South Africa, a striking example of how AI systems, without proper oversight, can uncritically repeat and spread harmful narratives.
Source:
DFRLab, Ponce de León, E. & Chenrose, A., 2025. Grok struggles with fact-checking amid the Israel-Iran war. [online] Available at: https://dfrlab.org/2025/06/24/grok-struggles-with-fact-checking-amid-israel-iran-war/
The Post-Truth Machine: AI-driven Disinformation Threatens Democracy
The New York Times warns that AI-driven disinformation is destabilizing democracies.
Electoral manipulation via deepfakes:
In Poland, a fake AI-generated video falsely showed Donald Trump endorsing far-right politician Sławomir Mentzen. Though fabricated, the clip spread rapidly on TikTok, a clear example of targeted disinformation aimed at swaying voters, and it reinforced far-right mobilization.
Foreign interference through AI:
In Romania, a Russian influence campaign used AI to manipulate the first round of the 2024 presidential election to such an extent that the result was annulled. A fringe candidate surged ahead via inauthentic TikTok promotion. The court-ordered rerun marks a precedent for election nullification due to AI-led interference.
Erosion of public trust:
In Germany and the United States, AI-generated content circulated false quotes and images of political candidates. These campaigns aimed to undermine trust in democratic institutions and polarize public debate, often reinforcing far-right conspiracy narratives.
Platform failures and lack of oversight:
TikTok removed 7,300 AI-generated posts during Romania’s runoff election but admitted that many were not labeled as synthetic. Major platforms are too slow or ineffective in curbing such manipulation, allowing disinformation to spread unchecked.
Conclusion:
The New York Times investigation outlines the impact of AI-driven disinformation on democratic processes. It shows how political discourse is increasingly undermined while effective regulatory, political, or technological responses remain lacking. The report warns that democracies must find ways to respond to prevent lasting damage from digital manipulation.
Source:
The New York Times, Myers, S.L. & Thompson, S.A., 2025. A.I. Is Starting to Wear Down Democracy. [online] Available at: https://www.nytimes.com/2025/06/26/technology/ai-elections-democracy.html
The Flight That Never Was: AI Boosts False China-Iran Claim
NewsGuard reveals how, following Israeli airstrikes on Iranian targets in June 2025, a targeted wave of disinformation took hold: a false claim that a Chinese military cargo plane flew to Iran went viral, based on a misread of flight data. Anonymous accounts on X and pro-Iran commentator Jackson Hinkle pushed the narrative, despite explicit denials from Flightradar24 and Cargolux. Yet the false story found traction in outlets like The Telegraph and Epoch Times.
Much of the claim's traction came from generative AI chatbots such as Grok, Perplexity, and Meta's AI assistant, which repeatedly confirmed the false claims without verification. This highlights how AI tools amplify disinformation and make it seem more credible to users. As platforms scale back human fact-checking, more users turn to these AI systems, falling into the trap of targeted misinformation.
This case exemplifies modern hostile influence tactics: combining human manipulation with automated dissemination makes disinformation faster, broader, and harder to control, especially in geopolitical crises. Such operations deliberately undermine democratic discourse, an urgent challenge for policymakers, society, and technology alike.
Source:
NewsGuard Reality Check, Lin, C., 2025. False Claim that China is Supporting Iran in the War with a Chinese Military Cargo Plane; Chat Bots Boost It. [online] Available at: https://www.newsguardrealitycheck.com/p/false-claim-that-china-is-supporting
GENERAL REPORTS
Disinformation Undermines Polish Democracy Amid Contentious Election
According to a report by Global Issues, Poland’s recent presidential election, narrowly won by nationalist Karol Nawrocki, has become a case study in how disinformation and foreign interference can influence democratic processes. The campaign was marred by coordinated online manipulation, with over 2,400 fake accounts targeting liberal candidate Rafał Trzaskowski or promoting Nawrocki. Investigations revealed a flood of misleading content on TikTok and Facebook, heavily skewed toward far-right narratives, often laced with anti-Ukrainian and anti-immigration conspiracy theories. These efforts contributed to an increasingly polarized electorate and undermined confidence in the electoral process.
The campaign mirrored Kremlin-style influence operations and coincided with unprecedented international support for Nawrocki from far-right circles, including former U.S. President Donald Trump and the Conservative Political Action Conference. With Prime Minister Donald Tusk surviving a confidence vote but facing a hostile presidency, Poland now confronts potential institutional paralysis. Judicial reforms crucial to restoring EU funding will likely stall, and Nawrocki’s foreign policy stance could weaken Poland’s support for Ukraine.
Source:
Global Issues, Pousadela, I.M., 2025. Poland’s Democratic Deadlock. [online] Available at: https://www.globalissues.org/news/2025/06/25/40264
Trump and Hannity's Post-Bombing Disinformation
Wired reported that President Donald Trump and his closest supporters, including Fox News host Sean Hannity, used digital disinformation campaigns to portray the US airstrikes on Iranian nuclear facilities as a complete and decisive victory. These narratives were primarily spread through Trump’s platform, Truth Social, and other social media channels. Instead of relying on information from his intelligence agencies, satellite imagery, or on-the-ground reporting, Trump posted on Truth Social a screenshot of an anonymous X account claiming to conduct open-source intelligence, stating that “Fordow is gone.” Sean Hannity amplified this false claim by sharing on Instagram a video of an explosion that was in fact footage from an Israeli airstrike in Syria.
While military officials and experts contradicted Trump’s portrayal and cautioned against premature assessments, Trump continued to assert that the Fordow facility had been “completely obliterated.” His early declaration on Truth Social shaped public discourse and inspired supporters who hailed the bombing as the end of the conflict. At the same time, Trump later raised the possibility of an extended military engagement and even “regime change,” a stance disputed within his administration.
Political opponents criticized both Trump and Hannity for spreading misleading information that damages public discourse and undermines democratic oversight. This case exemplifies how digital platforms and social media can be weaponized as tools of hybrid warfare to advance political agendas, erode trust in reliable information, and deepen societal divisions.
Source:
Wired, Myers, S.L. & Thompson, S.A., 2025. Donald Trump and Sean Hannity Set Off a Wave of Disinformation After Iran Bombing. [online] Available at: https://www.wired.com/story/donald-trump-sean-hannity-disinformation-iran-bombing/
Digital Fog of War: AI Slop and Information Control in the Iran Conflict
POLITICO’s Weekly Cybersecurity warns that amid escalating Israel‑Iran tensions, AI-generated “slop”—including deepfakes, manipulated images of destroyed military hardware, and synthetic videos falsely depicting attacks—has proliferated across social media. These fabricated visuals, some shared by world leaders, state-backed outlets, and partisan influencers, spread rapidly, exploiting algorithms and emotional resonance to shape public perception before fact-checkers can respond.
The strategy combines synthetic media production, rapid bot amplification, and state-driven narrative control, especially with information blackouts or censorship designed to limit counter-narratives. “The combination of state censorship and AI-powered misinformation is a new digital battlefield, and the collateral damage is public trust,” said Dave Gerry, CEO of cybersecurity firm Bugcrowd. The implications are significant: democracies now face a multifront information battlefield where trust in visual evidence is eroded, fact-checking defenses lag behind AI-enabled manipulation, and authoritarian regimes gain an advantage through coordinated, real-time influence operations.
Source:
Politico, Nickel, D., 2025. AI Slop Spreads in Israel-Iran War. [online] Available at: https://www.politico.com/newsletters/weekly-cybersecurity/2025/06/23/ai-slop-spreads-in-israel-iran-war-00417791
Strategic Rivals Celebrate US’s ‘Soft Power Suicide’
As stated in a New York Times article, under the Trump administration, the United States scaled back or dismantled many of its key global communication tools, including Voice of America and Radio Free Asia, platforms central to promoting democratic values and countering authoritarian propaganda. This retreat was celebrated by rivals like Russia and China, who saw an opportunity to expand their influence. Kremlin-backed RT and China's Global Times openly rejoiced at the weakening of U.S. media infrastructure. At the same time, nations like Hungary, Cambodia, and Cuba followed suit in applauding America's withdrawal from the global information battlefield.
In the absence of U.S. leadership, authoritarian states moved to fill the vacuum. Russia, China, Turkey, and others ramped up investments in state-run global media outlets, disinformation campaigns, and cultural outreach, deploying fake accounts, algorithmic manipulation, and state-aligned influencers to flood international platforms with narratives that distort truth and undermine democratic ideals. Fact-based reporting was increasingly replaced by polarizing, often deceptive messaging aimed at reshaping global perceptions in favor of authoritarian models.
Experts warn this U.S. "soft power suicide" has not only weakened American global credibility but also emboldened adversaries to weaponize disinformation unchecked. As China's Xinhua and Russia’s Sputnik expand reach in Africa and Asia, and Western trust in U.S. messaging declines, the struggle for global influence has entered a new phase, one where truth competes against algorithm-boosted falsehoods, and where the United States, once a leader in promoting free expression, is increasingly sidelined in the battle for hearts and minds.
Source:
New York Times, Hsu, T., 2025. As U.S. Dismantles Voice of America, Rival Powers Hope to Fill the Void. [online] Available at: https://www.nytimes.com/2025/06/24/business/media/us-china-russia-global-communications.html
Nationalist Networks and Global Threats: The GNCA’s Role in Disinformation Campaigns
An article by Global Influence Ops examines the Global National Conservative Alliance (GNCA), a burgeoning global political movement uniting right-wing and far-right factions. Central to the GNCA's role in disinformation is its strategic use of influence operations and the exploitation of its networks by foreign actors, notably Russia and China, to spread false narratives and undermine democratic institutions. This involves eroding checks and balances and establishing patronage networks, which can then be leveraged to propagate narratives disguised as legitimate political discourse that serve authoritarian interests.
The actors involved are the various components of the Global National Conservative Alliance, including movements like MAGA in the US, and foreign states such as Russia and China, who act as amplifiers and exploiters of these networks. The broader implication is a significant threat to the integrity of democracy worldwide. By championing national sovereignty, protectionist trade, and cultural exclusion, the GNCA creates fertile ground for foreign interference and the proliferation of misleading information, ultimately weakening democratic norms and institutions on a global scale.
Source:
The Conversation, Sinclair, H.C., 2025. Most Americans believe misinformation is a problem — federal research cuts will only make the situation worse. [online] Available at: https://theconversation.com/most-americans-believe-misinformation-is-a-problem-federal-research-cuts-will-only-make-the-problem-worse-255355
GLOSSARY
Information Operations
The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision-making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.
Hybrid Warfare
A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influence methods, fake-news dissemination, diplomacy, lawfare, and foreign electoral intervention.
Cyber Warfare
Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.
Cyfluence Attack
Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.
Soft Warfare
All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.
CIB
Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.
FIMI
The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.
Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.
Digital Impact on Discourse (DID)
A non-hostile effort to influence discourse, a term usually used in marketing articles. Here, it is used to illustrate the opposite of a HIC.
Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.
Disinformation
Misleading information that is spread deliberately to deceive; a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".
Inauthentic Behavior
Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards”. We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users
AKA avatars: a generic term describing all users who are not legitimate social media users, i.e., bots, accounts operated by humans under a false identity, or accounts operated by humans under their real identity but solely to promote an agenda that is not their own.
Unidentified users
A generic term describing users on social networks who are allowed to keep their real identity undisclosed (as on X, formerly Twitter).
Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.
Bots
Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.
Repurposed accounts
Social media accounts that were hacked or purchased and then used for purposes other than their original ones.
Fake website
Is a website designed for fraudulent or scam activity, hiding its real purpose.
Deep Assets
These are non-human deep cover assets, divided into two sub-categories:
Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).
Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives.
Real platforms
Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally.
Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.
Cyberbullying
Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.