Weekly Report: Cyber-Based Influence Campaigns, 16th – 22nd of June 2025
- CRC

[Report Highlights]
As published by La Derecha Diario, Argentina’s State Intelligence Secretariat (SIDE) has uncovered a covert Russian spy network operating within the country, focused on disinformation and influence campaigns aligned with Kremlin interests.
According to a Graphika report, a covert network of over 1,000 X accounts has been exposed for engaging in coordinated disinformation efforts surrounding the U.S.-China trade war.
New Eastern Europe reports that foreign meddling in Poland's June 1st presidential election was lower than expected, even though the country had braced for significant Russian interference with its "election umbrella" strategy of monitoring, training, and cybersecurity.
The ICT reports that Iran is using AI-generated disinformation on TikTok to fabricate battlefield victories, ridicule Israeli leadership, and shape global perceptions following Israel’s June 2025 strikes.
The Oxford Internet Institute reports that Republican tweets on X were flagged as misleading nearly twice as often as Democratic ones, highlighting a partisan imbalance in misinformation identified by the platform’s Community Notes system.
Logically and EU DisinfoLab report that CIB networks are spreading climate disinformation across Europe through conspiratorial, culture-war, and pro-fossil fuel narratives to undermine public trust and stall climate action.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS
SOCIAL MEDIA PLATFORMS
[X]
STATE ACTORS
[Russia]
[China]
[Iran]
AI-RELATED ARTICLES
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
SOCIAL MEDIA PLATFORMS
[X]
X Corp. Challenges New York Law on Disinformation Reporting
As published by TechXplore, Elon Musk’s X Corp. has filed a lawsuit against a New York law requiring large social media companies to report how they define and moderate disinformation, hate speech, and other harmful content. X argues the law violates the First Amendment by compelling disclosure of internal moderation practices, claiming it pressures platforms to limit constitutionally protected speech. The law, known as the “Stop Hiding Hate Act,” requires companies with over $100 million in annual revenue to submit semiannual transparency reports; noncompliance is punishable by daily fines.
X contends that the law reflects “viewpoint discriminatory motives,” pointing to lawmakers’ refusal to amend the bill after a similar California law was blocked by the courts. The platform positions itself as defending user rights and editorial independence, stating that this legal battle is about resisting government overreach into content moderation. Critics, however, argue that transparency regarding disinformation and hate-speech moderation is essential and does not infringe on free speech.
Advocacy groups, such as Reporters Without Borders, argue that requiring platforms to explain how they handle misinformation is the bare minimum for accountability in digital spaces. New York legislators insist the law is narrowly focused on transparency, not censorship, allowing users to make informed choices about the platforms they engage with.
Source:
TechXplore, Zinin A., Musk’s X sues to block New York social media transparency law, 2025, [online]; Available at: https://techxplore.com/news/2025-06-musk-sues-block-york-social.html
STATE ACTORS
[Russia]
Argentina Identifies Russian Agents Involved in Hostile Influence
La Derecha Diario states that Argentina’s State Intelligence Secretariat (SIDE) has uncovered a covert Russian spy network operating within the country, focused on disinformation and influence campaigns aligned with Kremlin interests. The findings are detailed in a report that SIDE has not yet released. The operation, “La Compañía,” was allegedly tied to the notorious “Lakhta Project,” a global interference initiative previously led by Russian oligarch Yevgeniy Prigozhin. Russian nationals Lev Andriashvili and Irina Iakovenko, residing in Argentina, reportedly coordinated these efforts, which included recruiting local citizens, promoting false narratives, and establishing connections with sympathetic organizations.
The network's activities included creating and disseminating manipulated political content on social media, infiltrating civil society groups, organizing focus groups to gather sensitive data, and collecting political intelligence to aid foreign operations. Authorities warn that these disinformation tactics were designed to destabilize Argentina’s institutions and distort democratic processes, posing a serious threat to national sovereignty and public discourse. In response, Argentine officials reaffirmed their commitment to safeguarding internal security and resisting foreign influence.
Source:
La Derecha Diario, Editorial Team, These are the Russian spies SIDE caught in Argentina, 18 June 2025, [online]; Available at: https://derechadiario.com.ar/us/argentina/these-are-the-russian-spies-that-side-argentina-captured
Africa’s Health Crisis and Russian Influence
EUvsDisinfo reports that Russian-linked actors are exploiting the mpox outbreak in Africa for a disinformation campaign. The virus, primarily found in Central and West Africa, saw a new strain spread rapidly in 2023, leading the WHO to declare a global health emergency in 2024. The campaign, mainly active in francophone regions, spreads false claims that mpox was deliberately created by Western powers to harm African populations and benefit pharmaceutical companies. It also falsely links outbreaks to LGBTIQ+ communities, increasing stigma.
Kremlin state media praises Russian vaccines while casting doubt on Western aid, aiming to portray the West and organizations like the WHO as responsible for outbreaks and unsafe vaccines and to position Russia as Africa’s trusted partner. This disinformation fuels fear around diseases such as mpox, Marburg, and Ebola, deepening social divisions.
The campaign resembles the 1980s Soviet “Operation Denver,” which falsely claimed HIV/AIDS was a U.S. bioweapon and contributed to over 330,000 preventable deaths in South Africa due to delayed treatment.
Source:
EUvsDisinfo, Mpox, myths and media manipulation, 2025, [online]; Available at: https://euvsdisinfo.eu/mpox-myths-and-media-manipulation/
The Kremlin Exploits Crises for Hostile Influence
EUvsDisinfo highlights that Russia strategically leverages current crises to strengthen its geopolitical influence. A recent example is the escalation between Israel and Iran, which Russian media use to portray Iran as a peaceful nation while criticizing Israeli military actions. At the same time, Russia’s invasion of Ukraine receives limited coverage. This selective invocation of international law reinforces Russia’s position and weakens its opponents.
Additionally, Russia seeks to present itself as a mediator in the Middle East, primarily serving its strategic interests. Despite this stance, reports of civilian casualties and attacks on critical infrastructure in Ukraine remain largely unaddressed. The crisis in the Middle East also benefits Russia economically, with rising oil prices supporting its strained economy, while international attention is diverted away from Ukraine.
These developments illustrate how Russia uses ongoing conflicts to shape international norms and pursue its geopolitical goals.
Source:
EUvsDisinfo, Never let a crisis go to waste, 2025, [online]; Available at: https://euvsdisinfo.eu/never-let-a-crisis-go-to-waste/
[China]
Graphika Report Reveals Hostile Chinese Influence Operation
Graphika’s findings describe a coordinated digital disinformation campaign in which over 1,000 fake X accounts worked to influence public perception of the U.S.-China trade conflict and global tariffs. The accounts used counterfeit identities, copied content from legitimate users, and amplified specific hashtags and media articles. The campaign sought to portray U.S. trade policies, particularly President Trump’s tariffs, as harmful to the American and global economies. Sub-networks focused on countries such as the UK, Canada, and Japan, depicting them as resisting U.S. pressure.
Beyond trade issues, the network also promoted China’s broader geopolitical agenda, including criticism of US military presence in Japan and favorable coverage of Chinese tourism initiatives. Although no direct attribution to a known actor was established, the content and tactics indicate a likely alignment with Chinese state interests.
Source:
Graphika, China-Aligned Network Poses as Grassroots Voices in Effort to Covertly Boost Online Narratives Critical of US Tariffs and Trade Policies, 2025, [online]; Available at: https://graphika.com/reports/tariff-tirade#download-form
Chinese Influence Efforts in Latin America
According to the Global Influence Operations Report (GIOR), China is rapidly expanding its propaganda network in Latin America through sophisticated disinformation strategies aimed at shaping public opinion and suppressing criticism. As reported by Diálogo Américas on June 13, 2025, Beijing employs various tactics, including journalist training programs, strategic media partnerships, and covert networks. These efforts involve specific actors, such as the Chinese publishing group Caixin Media, state-funded training programs for Latin American journalists, and collaborations with local outlets, including Peru's América TV and El Comercio. They also leverage showcase opportunities such as tours of BYD's Zhengzhou plant and the reach of state media groups like China Media Group.
The broader implications of this expanding Chinese influence operation point to a comprehensive campaign to dominate media landscapes and erode trust in independent journalism across the region. China utilizes strategies like the "Paperwall" campaign, which involves fake local newspaper websites (123 sites across more than 30 nations), and "borrow a boat to go out to sea" tactics to disseminate pro-Beijing content in foreign media, both openly and covertly, often camouflaging it with commercial press releases to obscure its true origin. This strategic manipulation of information aims to neutralize local media on issues related to Beijing, devalue professional journalistic standards, and ultimately advance China's political and economic interests in Latin America by controlling narratives and suppressing critical voices.
Sources:
GIOR, Beijing’s Expanding Propaganda Network in Latin America, 2025, [online]; Available at: https://www.global-influence-ops.com/beijings-expanding-chinese-propaganda-network-latin-america/
Diálogo Américas, China’s Influence Operations in Latin American Media, 2025, [online]; Available at: https://dialogo-americas.com/articles/chinas-influence-operations-in-latin-american-media/
[Iran]
Iran's AI-Fueled Disinformation After Israel Attacks
The recent developments in the Israel-Iran conflict vividly illustrate how digital hostile influence has become a key instrument in modern warfare. Following Israeli airstrikes in June 2025, Iranian state-controlled media orchestrated a coordinated effort, leveraging advanced AI technology to create a manipulated information environment. NewsGuard identified at least 16 targeted false claims across more than 50 platforms, accompanied by AI-generated images and videos, including fabricated reports of downed Israeli F-35 fighter jets and Prime Minister Netanyahu fleeing (NewsGuard).
These Iranian efforts use digital technologies to boost the credibility of their narratives and strategically shape public perception. Simultaneously, Russian actors are intensifying their digital campaigns aimed at undermining trust in Western weapons systems (GIOR).
In the Israel-Iran context, methods are employed to question the reliability of Israeli defense systems—such as the Iron Dome—and Western arms supplied to Israel, aiming to weaken confidence in Israel’s military capabilities and indirectly strengthen Iran’s position.
France 24, citing AFP, reports that advanced AI tools like Google’s Veo 3 increasingly blur the line between reality and fabrication. AI-generated videos and clips repurposed from war-themed video games rapidly circulate on social media platforms and are sometimes falsely validated by AI chatbots like xAI’s Grok. Weak content moderation on major platforms exacerbates the trust crisis and facilitates the rapid spread of deceptive content.
This case underscores how digital hostile influence, empowered by technological innovation, is growing more sophisticated, harder to detect, and more globally interconnected. A combined approach of advanced detection technologies, international cooperation, robust regulation, and strengthened societal resilience is necessary to confront this escalating threat effectively.
Sources:
NewsGuard, Sadeghi M., Howard S., Lin C., Iranian State-Affiliated False Claims Tracker: 22 Myths about the War and Counting, 2025, [online]; Available at: https://www.newsguardtech.com/special-reports/israel-iran-conflict/
NewsGuard, Sadeghi M., When Down is Up: Top 10 Iranian State Media False Claims of Losses by Israel, 2025, [online]; Available at: https://www.newsguardrealitycheck.com/p/when-down-is-up-top-10-iranian-state
Global Influence Operations Report (GIOR), Israel-Iran Conflict and AI Disinformation: 100 Million Views on Fake Content, 2025, [online]; Available at: https://www.global-influence-ops.com/israel-iran-conflict-ai-disinformation-100m-views-on-fake-content/
France 24, Tech-fueled misinformation distorts Iran-Israel fighting, 2025, [online]; Available at: https://www.france24.com/en/live-news/20250621-tech-fueled-misinformation-distorts-iran-israel-fighting
Social Media Actors Fuel Confusion Amid the Israel-Iran Conflict
The Centre for Information Resilience (CIR) article on the Israel-Iran escalation highlights key challenges for OSINT practitioners and offers essential insights into the complexity of modern information environments in conflict situations. It emphasizes the need for a systematic approach, technical expertise, and critical analysis to ensure reliable source evaluation.
Distinguishing authentic from false information in real time proves especially challenging when direct on-the-ground verification is unavailable. The widespread use of outdated or misattributed images and videos can significantly distort situational assessments.
The article highlights examples of widely circulated inaccurate content, alongside the increasing presence of highly realistic AI-generated images. This situation demands that OSINT analysts consistently refine their detection techniques and uphold a critical mindset toward potentially manipulated material.
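One widely used technique for catching the recycled, misattributed imagery described above is perceptual hashing, which produces compact fingerprints that survive crops and re-encodes. The following is a minimal, illustrative sketch using the open-source Python libraries Pillow and imagehash; the file names and distance threshold are hypothetical, and real OSINT workflows combine such checks with reverse-image search and metadata analysis.

```python
# Sketch: flagging possibly recycled imagery with perceptual hashing.
# Assumes `pip install pillow imagehash`; file names are hypothetical.
from PIL import Image
import imagehash

# Hash of an image circulating in the current conflict.
candidate = imagehash.phash(Image.open("viral_strike_photo.jpg"))

# Hashes of previously verified images from archives of older conflicts.
archive = {
    "conflict_2021_frame.jpg": imagehash.phash(Image.open("conflict_2021_frame.jpg")),
    "conflict_2018_frame.jpg": imagehash.phash(Image.open("conflict_2018_frame.jpg")),
}

# A small Hamming distance between 64-bit pHashes suggests the "new"
# image is a crop or re-encode of archived footage rather than original.
for name, h in archive.items():
    distance = candidate - h  # imagehash overloads '-' as Hamming distance
    if distance <= 8:  # threshold would need tuning in practice
        print(f"possible recycled image: {name} (distance={distance})")
```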
Furthermore, the analysis reveals that even social media accounts with large followings can be unreliable sources. Careful verification and cross-referencing against multiple trusted sources are therefore essential.
Finally, the report stresses the critical role of satellite and geospatial data as independent and dependable tools for verifying events and damages. Their systematic and methodical integration into OSINT workflows is crucial for producing accurate and reliable situational awareness.
Source:
CIR, Israel–Iran: Monitoring updates, verifying damage and debunking misinformation, 2025, [online]; Available at: https://www.info-res.org/articles/israel-iran-monitoring-updates-verifying-damage-and-debunking-misinformation/
Islamic Republic of Iran Broadcasting's Global Propaganda Efforts Revealed
The Global Influence Operations Report (GIOR) reveals how the Iranian state broadcaster IRIB airs coerced televised confessions produced in collaboration with the Ministry of Intelligence and the Islamic Revolutionary Guard Corps (IRGC). These confessions are used to intimidate domestic dissent and deliberately shape international narratives.
The report highlights IRIB’s extensive reach, with 19 national channels and broadcasts in 32 languages. Additionally, the broadcaster conducts covert online influence campaigns involving propagandistic proxy websites and botnets, amplifying regime-aligned messages while concealing their origins.
Key actors include IRIB and its security-affiliated partners, such as the IRGC and intelligence services, digital proxies, and seemingly “independent” media outlets distributing content globally. FDD researchers Bridget Toomey and Toby Dershowitz emphasize the role of IRIB-produced coerced programming—often called “Torture TV”—as a core component of Tehran’s psychological warfare.
Sources:
Global Influence Operations Report (GIOR), Iran’s IRIB Propaganda Network: “Torture TV” Operations Exposed, 2025, [online]; Available at: https://www.global-influence-ops.com/iran-irib-torture-network-exposed/
Foundation for Defense of Democracies (FDD), Toomey B., Dershowitz T., 10 Things to Know About Tehran’s Propaganda Network, the Islamic Republic of Iran Broadcasting, 2025, [online]; Available at: https://www.fdd.org/analysis/2025/06/18/10-things-to-know-about-tehrans-propaganda-network-the-islamic-republic-of-iran-broadcasting/
Evolution of Iran’s AI-Powered TikTok Campaign
A recent article by the ICT reveals a marked escalation in Iran’s use of AI-generated content on TikTok, reflecting a strategic evolution in its information warfare operations during the current conflict. Compared to previous influence campaigns, including those following the October 7 attacks, this latest campaign demonstrates a significant leap in scale and technical execution.
This transformation is driven by a convergence of increased Iranian capability and the proliferation of advanced generative tools. The campaign’s multilingual architecture suggests a deliberate attempt to target segmented audiences with tailored narratives. Arabic and Farsi content promotes regional solidarity and anti-Israel messaging. Hebrew-language materials aim to induce psychological pressure within Israel. English-language videos target global public opinion, while content in East Asian languages appears designed to exploit local narratives and broaden geographic reach.
The report identifies several core narrative frames: simulated large-scale destruction in Israeli urban centres, a portrayal of sustained Iranian military momentum, and the use of satire to delegitimize Israeli and U.S. leadership. These themes are amplified through synthetic video and imagery to manipulate perception and erode adversarial morale.
Perhaps most remarkable is the emerging tactic of weaponizing AI-powered fact-checking tools to falsely authenticate fabricated content. This reflects an adaptive approach to undermining trust in legitimate information channels.
Source:
International Institute for Counter-Terrorism (ICT), Haberfeld D., Azani E., Iranian TikTok Campaign Seeks to Shape War Perceptions Using AI, 2025, [online]; Available at: https://ict.org.il/iranian-tiktok-campaign-seeks-to-shape-war-perceptions-using-ai/
AI-RELATED ARTICLES
South Korea’s AI Strategy Prioritizes Innovation, Overlooks National Security
As published by CSIS, in January 2025, South Korea passed the "AI Basic Act," a comprehensive legislative effort to promote artificial intelligence development and build public trust in its use. Set to take effect in January 2026, the law outlines national policies supporting AI innovation, industry development, data infrastructure, ethical standards, transparency requirements, and safety obligations. It also introduces regulations for high-risk and generative AI systems.
Experts argue that South Korea’s institutional approach remains fragmented despite its scope. The current strategy mainly focuses on ethics and economic growth, leaving critical gaps in addressing malicious information operations and national security threats.
A significant limitation is the act’s exclusion of defence and security applications. Article 4 explicitly states that national defence and security uses of AI fall outside the law’s scope, on the assumption that other agencies will manage these areas separately. However, no legal framework beyond the act guides the use of AI in defence or security, making it difficult to coordinate and implement effective strategies.
Experts recommend a security-centred update to South Korea’s AI policy to address these shortcomings. This would include establishing a dedicated institutional framework for AI in national defence, clarifying the roles of relevant agencies, and potentially creating a new organization focused on AI-related national security threats. Strengthening technical capabilities in AI-driven threat detection and cyberattack prediction is also essential.
Ultimately, for South Korea to fully leverage AI, it must recognize the technology not only as a driver of innovation but also as a strategic asset in national defence.
Source:
Center for Strategic and International Studies (CSIS), Bae S., Kim S.J., AI Security Strategy and South Korea’s Challenges, 2025, [online]; Available at: https://www.csis.org/analysis/ai-security-strategy-and-south-koreas-challenges
Disinformation and Deepfakes in the 2025 Canadian Election
DFRLab published an article on a viral deepfake video depicting Canadian Prime Minister Mark Carney announcing sweeping auto regulations ahead of Canada’s 2025 federal election. Although labelled as AI-generated and debunked by fact-checkers, the clip still racked up millions of views on TikTok and X, showing how misinformation can thrive even after exposure.
The video’s reach extended well beyond Canada. Many U.S.-based users, already immersed in their own political tensions, shared and reacted to the false narrative with alarm. Influencers played a key role in amplifying the content, often reposting the claims without attribution or context. Even after the platforms removed the original footage, echoes of the misinformation persisted.
This incident highlights the growing challenge of AI-generated disinformation during politically sensitive periods. Fabricated content that imitates public figures or references polarizing policies can spread faster than social media platforms can respond. Experts caution that as AI tools become more sophisticated and widely available, traditional content moderation may be insufficient to contain the spread of false but convincing narratives. DFRLab argues that the Carney deepfake offers a warning: in the digital age, political perception can be easily manufactured and manipulated.
Source:
Digital Forensic Research Lab (DFRLab), Deepfake video of Canadian Prime Minister reaches millions, 2025, [online]; Available at: https://www.dfrlab.org/2025/06/19/deepfake-video-of-canadian-prime-minister-reaches-millions-on-tiktok-x/
GENERAL REPORTS
Mistrustful and skeptical, Australians still turn to social media for news
According to an article by The Conversation, as traditional news consumption declines, more Australians are turning to social media for information, particularly from influencers, yet trust in these sources remains low. According to the report, 57% of Australians see influencers as the leading source of misinformation, surpassing concerns about activists, political actors, and even the news media. Despite social media’s growing role, especially among younger users on platforms like TikTok, Australians remain deeply concerned about what is real or fake online, with 74% expressing worry.
A third of respondents believe platforms like TikTok and YouTube aren’t doing enough to remove harmful or misleading content. At the same time, recent political efforts to regulate misinformation, such as the now-withdrawn ‘Combatting Misinformation and Disinformation Bill’, illustrate the complexity of the debate over who should be responsible for truth online: platforms, governments, or users themselves.
The Conversation suggests that improving media literacy is a vital solution. The survey found that individuals who have received news literacy education are more likely to verify information, trust credible sources, and engage with the news more frequently overall. However, only 24% of Australians reported receiving such training.
Source:
Jamestown Foundation, Bolkunets D., YouTube Restrictions on Belarus and Russia Strengthen Minsk’s Propaganda, 2025, [online]; Available at: https://jamestown.org/program/youtube-restrictions-on-belarus-and-russia-strengthen-minsks-propaganda/
Sudan’s Digital War: A Laboratory for Hostile Influence Operations
An article by LSE details how Sudan’s civil war has escalated into the digital domain. LSE claims that Sudan has, in effect, become a laboratory for hostile digital influence campaigns which are unregulated, coordinated, and increasingly effective.
Armed actors, particularly the Rapid Support Forces (RSF), have turned social media into a battleground for narrative control. Platforms like TikTok, WhatsApp, and Telegram are used to distribute emotionally charged, stylized content crafted to bypass critical scrutiny and provoke viral reactions. Disinformation is not incidental but systematic, multilingual, and designed to sway local and global audiences.
Campaigns such as #TheArmyKillsCivilians and #ProtectSudaneseCivilians flood the digital space with polarizing messages, while hashtags like #HumanitarianAid target international sentiment to shape diplomatic outcomes. Arabic content tends to evoke loyalty and fear, while English-language posts are tailored to elicit sympathy and frame the RSF in a more favourable light abroad.
These operations are timed with military actions or negotiations, revealing a deliberate strategy to manipulate perception and influence political trajectories. In the absence of digital governance, Sudan exemplifies how fragile states become proving grounds for aggressive influence operations.
What is unfolding in Sudan is not just narrative warfare. It is an unregulated testbed for the next generation of digital conflict, where truth is malleable and perception can be weaponized.
Source:
LSE, Ahmed Adam L.M., Digital propaganda and the steering of mass narratives have shaped the war in Sudan, 2025, [online]; Available at: https://blogs.lse.ac.uk/africaatlse/2025/06/16/digital-propaganda-and-the-steering-of-mass-narratives-have-shaped-the-war-in-sudan/
Partisan Asymmetry in Misinformation Sharing on X
In a new study published in PNAS, the Oxford Internet Institute (OII) reports that two-thirds of the posts flagged by X’s Community Notes as potentially misleading were written by Republicans, compared to just one-third written by Democrats, underscoring a significant partisan asymmetry in misinformation sharing.
The OII analysis covered English-language tweets and found that, across topics such as health, politics, science, and the economy, Republican posts were disproportionately flagged: for example, 81% of flagged health-related tweets and 73% of flagged political tweets came from Republican accounts.
“Our findings provide strong evidence of a partisan asymmetry in misinformation sharing which cannot be attributed to political bias on the part of fact-checkers or academic researchers”, said Professor Mohsen Mosleh, Associate Professor of Social Data Science at the Oxford Internet Institute.
Notably, the study credits X’s crowd-sourced fact-checking mechanism, Community Notes, with reliably identifying misleading content through a diverse rating system that minimizes partisan bias.
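That "diverse rating system" refers to bridging-based ranking: a note is surfaced only when raters who normally disagree both find it helpful. The sketch below is a toy illustration of that idea, not X's production algorithm (the open-source Community Notes ranking uses matrix factorization over rating data); the rater "leanings" and sample notes are hypothetical.

```python
# Toy illustration of a bridging-based helpfulness score: a note counts as
# helpful only if raters from *both* sides of a disagreement axis endorse
# it. Simplified stand-in for Community Notes' matrix-factorization model;
# rater leanings and notes are hypothetical.
from dataclasses import dataclass

@dataclass
class Rating:
    rater_leaning: float  # -1.0 .. +1.0, position on the disagreement axis
    helpful: bool

def bridging_score(ratings: list[Rating]) -> float:
    """Minimum of per-side helpfulness rates: a note must convince both
    sides to score well, so one-sided applause scores near zero."""
    left = [r.helpful for r in ratings if r.rater_leaning < 0]
    right = [r.helpful for r in ratings if r.rater_leaning >= 0]
    if not left or not right:
        return 0.0  # no cross-perspective evidence at all
    return min(sum(left) / len(left), sum(right) / len(right))

notes = {
    "partisan_note": [Rating(-0.8, True), Rating(-0.6, True), Rating(0.7, False)],
    "bridging_note": [Rating(-0.8, True), Rating(-0.5, True), Rating(0.6, True), Rating(0.9, True)],
}
for name, ratings in notes.items():
    print(name, round(bridging_score(ratings), 2))  # bridging_note scores higher
```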
Source:
Oxford Internet Institute, New study finds Republicans flagged for posting misleading tweets twice as often as Democrats on X/Twitter’s Community Notes, 2025, [online]; Available at: https://www.oii.ox.ac.uk/news-events/new-study-finds-republicans-flagged-for-posting-misleading-tweets-twice-as-often-as-democrats-on-x-twitters-community-notes/
Unpacking Climate Disinformation: Key Insights from the HEAT Project
The HEAT project (Harmful Environmental Agendas & Tactics), produced by Logically and EU DisinfoLab, has recently concluded its comprehensive study of climate-related misinformation narratives across Germany, France, and the Netherlands. This targeted investigation offers crucial insights into the actors, platforms, and techniques actively shaping climate discourse in these diverse European contexts.
Four critical findings emerged:
Persistent Conspiratorial Narratives: A pervasive distrust in institutions and science fuels the popularity of conspiratorial narratives, particularly around geoengineering and HAARP. These stories erode public confidence in climate action.
Climate Policy as "Culture War": Climate policy is increasingly framed as a "culture war" issue. Partisan attacks and populist rhetoric brand climate action as authoritarian, often exploiting national anxieties about economic stability or regional autonomy to amplify skepticism.
Russia-Linked Amplification ("Portal Kombat"): The project verified Russia-linked "Portal Kombat's" role in amplifying disinformation. Its strategy—rebranding international content for local audiences via Telegram and low-engagement Facebook "copypasta"—demonstrates an agile model for narrative laundering that scales rapidly (a sketch of how such copy-paste amplification can be detected follows this list).
"Big Oil" Influence: While narratives aligning with fossil fuel interests were prominent, direct "Big Oil" disinformation campaigns were limited. These narratives were primarily diffused through partisan or ideological networks, indicating an indirect yet significant influence.
These findings underscore an urgent need for enhanced platform accountability and harmonized, EU-level regulatory responses. Understanding how shared disinformation frames engage with local conditions is paramount for countering efforts to undermine informed public debate on climate.
Source:
The HEAT project (Logically and EU DisinfoLab), A look at France, Germany, and the Netherlands, 2025, [online];
FRAMEWORKS TO COUNTER DISINFORMATION
Disinformation Research Under Political Attack in the U.S.
According to a report by The Conversation, the field of misinformation and disinformation research in the U.S. faces unprecedented political pressure, jeopardizing efforts to understand and counter online manipulation. Following a controversial executive order by President Donald Trump, ostensibly to "restore free speech," the National Science Foundation alarmingly cancelled numerous grants for studying misleading content. This move, framing legitimate research as censorship, directly clashes with public sentiment: a staggering 95% of Americans recognize misinformation as a significant problem.
Despite a 2024 Supreme Court ruling upholding the constitutionality of such research, Republican lawmakers continue their aggressive efforts to discredit academics, painting them as politically biased. This narrative persists even as studies consistently show that conservative content is more likely to contain misinformation, which would explain the higher labelling rates. Tactics like "accusation in a mirror" are employed, casting researchers as censors while political actors themselves engage in suppressing information.
This politically driven backlash has led to harassment of researchers, especially women, and threatens to stall meaningful work that helps inform the public about manipulation tactics and media literacy. Fact-checking, proven to be one of the most effective tools against disinformation, is also under attack, despite being widely accepted as accurate, even by skeptical audiences when shown specific results.
Source:
The Conversation, Sinclair H.C., Most Americans believe misinformation is a problem — federal research cuts will only make the situation worse, 2025, [online]; Available at: https://theconversation.com/most-americans-believe-misinformation-is-a-problem-federal-research-cuts-will-only-make-the-problem-worse-255355
Poland Fights Russian Election Interference
As reported by New Eastern Europe, Poland's presidential election on June 1st, 2025 was won by Karol Nawrocki, the candidate supported by the national-conservative Law and Justice (PiS) party. Throughout the campaign, Warsaw's ruling coalition consistently warned of foreign interference, particularly from Russia. Given Poland's strategic significance and the shifting dynamics of support for Ukraine, an intense digital hostile information campaign from Moscow was widely anticipated, aiming to push Eurosceptic, anti-Western, and pro-Russian narratives.
However, experts noted that the impact of Russian interference efforts was surprisingly subdued. This outcome suggests that Poland's diverse social media landscape, the limited constitutional powers of the presidency, and crucial lessons learned from Romania's recent electoral experience played a significant mitigating role.
Indeed, following both election rounds, the feared "Russian storm" of disinformation largely failed to materialize. As one expert, Agnieszka Lipińska of NASK, noted to New Eastern Europe, disinformation levels were significantly lower than during the 2024 European Parliament election.
Despite this relative quiet during the immediate electoral period, experts caution that the objective of foreign state-sponsored disinformation is often a more insidious, long-term erosion of public perception on key issues, rather than an immediate, overt impact on election results.
Source:
New Eastern Europe, Pyka A., Resisting foreign interference: Poland’s presidential election and the Russian challenge, 2025, [online]; Available at: https://neweasterneurope.eu/2025/06/10/resisting-foreign-interference-polands-presidential-election-and-the-russian-challenge/
GLOSSARY
Information Operations
Hybrid Warfare
Cyber Warfare
Cyfluence Attack
Soft Warfare
CIB
FIMI
Hostile Influence Campaign (HIC)
Digital Impact on Discourse (DID)
Misinformation
Disinformation
Inauthentic Behavior
Fake users
Unidentified users
Sockpuppet accounts
Bots
Repurposed accounts
Fake website
Deep Assets
Real platforms
Astroturfing
Cyberbullying
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.