Weekly Report: Cyber-based influence campaigns, 9th–15th of June 2025
- CRC
- Jun 18
- 12 min read
Updated: Jun 19

[Report Highlights]
As revealed by EUvsDisinfo, Russia has significantly expanded its foreign information manipulation and interference (FIMI) operations, particularly across Africa, using covert strategies to influence public opinion and undermine democratic institutions. The report is based on a joint investigation by the European External Action Service, the French agency VIGINUM, and the UK’s Foreign, Commonwealth and Development Office.
New Eastern Europe highlights that during Poland’s 2025 presidential election, authorities feared significant Russian disinformation and interference. However, experts found the impact to be far smaller than expected, likely due to effective preventative measures.
The Disinformation Observer reports that the June 2025 Los Angeles immigration protests became a testing ground for coordinated disinformation campaigns, with foreign state media, U.S. politicians, and AI-generated content spreading false narratives to portray national instability and manipulate public perception.
The Jamestown Foundation reports that following Russia’s invasion of Ukraine, YouTube imposed restrictions and demonetized content creators from Russia and Belarus. Although aimed at curbing state propaganda, these measures ultimately failed to prevent disinformation.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
AI-RELATED ARTICLES
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
STATE ACTORS
[Russia]
Kremlin Expands Disinformation Operations Through African Front
EUvsDisinfo reports that the African Initiative, a supposedly independent news agency, is a Russian instrument for Foreign Information Manipulation and Interference (FIMI) in Africa. The report is based on a joint investigation by the European External Action Service (EEAS), the French agency VIGINUM, and the United Kingdom’s Foreign, Commonwealth and Development Office (FCDO). The initiative is led by Russian national Artem Sergeevich Kureev, allegedly linked to Russian intelligence services. It plays a central role in Russia’s influence operations on the African continent, particularly following Moscow’s strategic reorientation after its full-scale invasion of Ukraine in February 2022.
The African Initiative operates mainly in French- and English-speaking parts of Africa, with a notable focus on countries like Burkina Faso, which serves as a regional hub. Its activities combine physical outreach—through local associations, media partnerships, and activist networks—with digital operations, including spreading disinformation via fake news sites, AI-generated content, and manipulative SEO techniques.
Sources:
EUvsDisinfo, African Initiative: Russian FIMI operation disguised as a news agency, 2025, [online]; Available at: https://euvsdisinfo.eu/african-initiative-russian-fimi-operation-disguised-as-a-news-agency/
VIGINUM, FCDO, EEAS, From Public Diplomacy to Covert Influence Operations, 2025, [online]; Available at: https://euvsdisinfo.eu/uploads/2025/06/VIGINUM_FCDO_EEAS_Technical_Report_African_Initiative.pdf
Surge in Cyber Activity Following Israeli Strikes on Iran
In a post on X, CyberKnow notes a sharp rise in cyber activity following Israel’s military strikes on Iran on June 13, 2025. A total of 83 hacktivist groups are currently active: 65 are aligned with Iran, 11 are anti-Iran, and six support Israel. Three of the pro-Iran groups are identified as pro-Russian. Reported cyberattack types include DDoS, website defacements, data breaches, doxing, social media takedowns, ransomware, and attacks targeting operational technology.
The number of anti-Iran groups exceeds that of pro-Israel groups, suggesting that much of the hacktivist activity is framed more around opposition to Iran than explicit support for Israel. The post does not assess the effectiveness of the attacks. However, the timing and scope of activity suggest that cyber operations are being launched rapidly in response to on-the-ground developments, underscoring their role in the broader conflict environment.
Source:
@Cyberknow20, Iran and Israel War Cybertracker #1, 2025, 15 June, 16:35, [online]; Available at: https://x.com/Cyberknow20/status/1934258425590141193
Disinformation in Poland’s 2025 Presidential Election
New Eastern Europe states that disinformation and foreign interference, particularly from Russia, were significant concerns for the government and cybersecurity officials during Poland’s 2025 presidential election. The Kremlin has long pursued efforts to influence European democratic processes, including disinformation campaigns in Romania, France, and Germany. Given Poland’s strategic position and its shifting stance on Ukraine, fears of pro-Russian narratives infiltrating the country were considerable. However, experts and state institutions found that interference was far more limited than initially feared.
In response, Poland developed a coordinated defense strategy known as the “election umbrella.” This strategy aims to counter disinformation, strengthen cybersecurity, and train key stakeholders such as election committees and journalists. Poland relied on proven measures from previous efforts and formally consolidated them into a unified approach. Despite early warnings of aggressive cyberattacks by Russian intelligence, including a high-profile breach of a ruling party’s IT system, the protective measures proved largely effective.
Experts attribute this relative success to several factors: Poland’s diverse social media landscape, the limited executive powers of its presidency, and valuable lessons learned from recent attacks on elections abroad. Europe-wide initiatives like the Democracy Shield and increased collaboration with technology companies supported these defenses, albeit with varying degrees of enforcement.
Source:
New Eastern Europe, Pyka A., Resisting foreign interference: Poland’s presidential election and the Russian challenge, 2025, [online]; Available at: https://neweasterneurope.eu/2025/06/10/resisting-foreign-interference-polands-presidential-election-and-the-russian-challenge/
AI-RELATED ARTICLES
Social Media and AI Chatbots Fuel Disinformation About LA Protests
According to a Wired article, disinformation surrounding the protests in Los Angeles is spreading rapidly and is further amplified by AI chatbots like Grok and ChatGPT. In recent days, many people in Los Angeles have been protesting against increased Immigration and Customs Enforcement (ICE) raids. Among the images circulating were photos of National Guard troops sleeping on the floor, initially published by the San Francisco Chronicle and later shared by California Governor Gavin Newsom. Additionally, a picture of a pile of bricks was falsely portrayed as connected to the LA protests, though it was taken in New Jersey.
On platforms like X (formerly Twitter) and Facebook, users quickly spread false claims that the protesters were paid actors or that the images were doctored or taken out of context. These assertions were further amplified by conspiracy theorists and right-wing influencers, with politicians such as Senator Ted Cruz echoing the misleading content. Many users turned to AI chatbots like Grok and ChatGPT for clarification, but often received inaccurate or false information. Grok, for example, claimed that the National Guard photos originated from Afghanistan in 2021 or the U.S. Capitol. ChatGPT also stated the images were taken at the Kabul airport during the withdrawal from Afghanistan. Regarding the brick pile photo, Grok insisted it was from Los Angeles despite fact-checkers confirming it was taken in New Jersey. Despite corrections, Grok refused to retract its erroneous statements.
The role of AI chatbots is thus ambivalent. While intended to provide clarity, their inaccurate responses contribute to confusion. Combined with fast-moving political events and declining content moderation on social platforms, this creates a fertile ground for disinformation. The spread of false images and unfounded claims undermines public discourse and makes it harder to distinguish truth from fiction during times of crisis.
Source:
Wired, Gilbert D., AI Chatbots Are Making LA Protest Disinformation Worse, 2025, [online]; Available at: https://www.wired.com/story/grok-chatgpt-ai-los-angeles-protest-disinformation/
LA Protests: Algorithms & Foreign Influence
The Disinformation Observer highlights that Russian and Chinese state media spread genuine and fake content during the immigration protests in Los Angeles in June 2025. They operated across various platforms and were often difficult to distinguish from domestic users. A notable example is a photo circulated by the Russian agency Sputnik, allegedly showing “pallets of bricks” at protest sites. In reality, the image originated from a construction site in New Jersey and was unrelated to the Los Angeles protests. Such false images and conspiracy theories were used to portray the protests as signs of instability and to push political messages.
A key factor in the rapid spread of this disinformation is the role of social media algorithms and AI-powered chatbots. These systems prioritize content that generates strong engagement, regardless of its truthfulness. As a result, sensational and emotionally charged false information gains greater visibility and spreads faster. The combination of disinformation and algorithmic amplification highlights the complexity of modern information warfare. This poses a significant challenge for democratic societies in the digital age.
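To make the amplification mechanism described above concrete, the following minimal sketch ranks posts purely by engagement signals; the scoring weights and example posts are invented assumptions, not any platform's actual algorithm:

```python
# Hypothetical engagement-based ranking sketch (illustrative weights only).
# It shows how a feed that scores posts on engagement alone surfaces sensational
# content regardless of accuracy, because accuracy never enters the score.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to fact-checkers, invisible to the ranker

def engagement_score(post: Post) -> float:
    # Assumed weights: shares and comments drive reach more than likes.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

posts = [
    Post("Verified report with context", likes=120, shares=15, comments=20, is_accurate=True),
    Post("'Pallets of bricks' conspiracy claim", likes=300, shares=220, comments=180, is_accurate=False),
]

# The feed simply sorts by engagement, so the false but sensational post ranks first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  accurate={post.is_accurate}  {post.text}")
```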
Source:
The Disinformation Observer, Phil, This Week In Disinformation: LA Riots Edition, 2025, [online]; Available at: https://thedisinformationobserver.substack.com/p/this-week-in-disinformation-la-riots
AI Misidentifies Plane Crash Details and Spreads Misinformation
CyberNews reports that Google’s AI Overview wrongly identified the aircraft in the fatal Air India crash as an Airbus A330 instead of a Boeing 787. This error highlights growing concerns about AI hallucinations and misinformation in critical news events.
AI systems like Google’s AI Overview are non-deterministic, meaning they can produce different results for the same query. These hallucinations occur when AI presents contextually related but false information as fact, leading to confusion and misinformation.
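This non-determinism can be pictured with a toy sampling sketch; the candidate answers and probabilities below are invented for illustration and do not reflect how Google's system is actually implemented:

```python
# Toy sketch of non-deterministic answer generation: the system samples from a
# probability distribution over candidate continuations instead of always
# returning the single most likely one, so repeated queries can disagree.
# Candidate answers and probabilities are invented for illustration only.

import random

CANDIDATES = {
    "Boeing 787": 0.55,   # correct answer
    "Airbus A330": 0.35,  # plausible-sounding but wrong ("hallucination")
    "Airbus A350": 0.10,
}

def sample_answer(rng: random.Random) -> str:
    # Weighted sampling over candidates, analogous to temperature-based decoding.
    answers, weights = zip(*CANDIDATES.items())
    return rng.choices(answers, weights=weights, k=1)[0]

# The same "query" run several times can return different answers.
rng = random.Random()
print([sample_answer(rng) for _ in range(5)])
```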
Despite disclaimers that AI answers may contain errors, the authoritative tone of AI-generated content causes users to trust it implicitly. Such mistakes, especially during breaking news, can quickly spread false narratives and cause reputational harm.
Source:
Cybernews, Walsh M., Google’s AI overviews hallucinate Airbus in Air India crash – it was a Boeing, 2025, [online]; Available at: https://cybernews.com/news/google-ai-hallucinate-crash-india-boeing-airbus/t
GENERAL REPORTS
YouTube Restrictions Strengthen Kremlin-Aligned Narratives
An article by the Jamestown Foundation states that following Russia’s invasion of Ukraine in 2022, YouTube imposed sanctions on Russian and Belarusian channels, blocked monetization, and restricted payment methods. Independent media lost significant revenue and reach as a result. Meanwhile, state-backed platforms like RuTube and VK Video benefited from these restrictions, gaining users and influence.
Belarusian state media quickly filled the gap on YouTube, significantly expanding their reach and often adopting Kremlin-aligned narratives to justify the war. Independent channels, on the other hand, were labeled “extremist,” with their audiences intimidated by repression and prison sentences. Many media outlets had to scale back their activities or operate from exile.
Russian opposition media saw a temporary increase in viewers, but their reach remains far behind state media. Today, the media landscape in both countries is dominated by state-controlled content. Sanctions and platform restrictions have significantly reduced space for independent journalism and amplified the spread of state propaganda.
Source:
Jamestown Foundation, Bolkunets D., YouTube Restrictions on Belarus and Russia Strengthen Minsk’s Propaganda, 2025, [online]; Available at: https://jamestown.org/program/youtube-restrictions-on-belarus-and-russia-strengthen-minsks-propaganda/
FRAMEWORKS TO COUNTER DISINFORMATION
EU Sanctions Target Russian Disinformation Amid Enforcement Challenges
The Royal United Services Institute (RUSI) reported that the European Union is employing sanctions as a key strategy to counter disinformation, primarily focusing on Russian state media outlets in response to the invasion of Ukraine. The core disinformation aspect addressed is the continuous broadcasting of Russian propaganda within the EU, often through "mirror websites" that circumvent bans. This strategy aims to limit the reach of malign narratives by directly targeting the platforms and entities that disseminate them. However, implementing and enforcing these sanctions face significant challenges, including monitoring pervasive online content and the need for a unified approach among EU member states.
The broader implications of this approach highlight the complex balance between combating foreign information manipulation and upholding freedom of speech within democratic societies. The article suggests that while restrictive measures, such as sanctions, are necessary, they are not sufficient on their own. It underscores the critical need for complementary investments in societal resilience, fostering media literacy among the populace, and promoting independent journalism to create a more robust defense against disinformation. The debate also addresses domestic sources of misinformation, emphasizing the multifaceted nature of defending democratic information spaces.
Source:
RUSI, Gyimesi B., Defending Democracy: Sanctions on Disinformation, 2025, [online]; Available at: https://www.rusi.org/explore-our-research/publications/commentary/defending-democracy-sanctions-disinformation
GLOSSARY
Information Operations
Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.
Hybrid Warfare
A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.
Cyber Warfare
Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.
Cyfluence Attack
Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.
Soft Warfare
All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.
CIB
Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.
FIMI
The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.
Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.
Digital Impact on Discourse (DID)
Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC.
Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.
Disinformation
Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".
Inauthentic Behavior
Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards”. We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users
Also known as avatars. A generic term describing all types of users who are not legitimate social media users, i.e., bots, accounts operated by humans but not under their real identity, or accounts operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs.
Unidentified users
A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example).
Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.
Bots
Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.
Repurposed accounts
Means social media accounts that were hacked or purchased, then used for different purposes than the original ones.
Fake website
Is a website designed for fraudulent or scam activity, hiding its real purpose.
Deep Assets
These are non-human deep cover assets, divided into two sub-categories:
Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).
Deep platforms are platforms, such as websites or Facebook pages, that enable a wide range of activities while masking the real identity of who is behind them (unattributed). For example, a news website with daily articles and videos and representation on social media platforms by users who identify as the website’s representatives.
Real platforms
Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally.
Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.
Cyberbullying
Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.