Weekly Report: Cyber-Based Influence Campaigns, 19–25 May 2025
- CRC
- May 26
- 17 min read
Updated: Jul 3

[Listen to the Podcast]
[Report Highlights]
Cybernews reports that Telegram founder Pavel Durov has accused France’s foreign intelligence chief of pressuring him to block conservative voices in Romania ahead of national elections—a request he rejected. The DGSE denies any attempt at political interference.
According to a report published by Graphika, a covert influence network aligned with Chinese interests has been uncovered on X. The network involves over 1,000 fake accounts designed to manipulate online discourse about U.S. tariffs and trade policy.
As reported by The Record, NewsGuard's Reality Check, and DFRLab, Romania's recent presidential election has become entangled in a wave of disinformation and unproven allegations, as defeated far-right candidate George Simion calls for the results to be annulled.
DFRLab reports that a disinformation network linked to the French company DirectWay promoted false claims of election interference in Romania's 2025 presidential race to support a nationalist candidate and undermine democratic trust.
According to a report by ABC News, during the recent conflict sparked by a deadly attack in Pahalgam, Indian-administered Kashmir, disinformation surged online with alarming speed and sophistication.
In a recent report, The Record reveals that the European Union has introduced a new sanctions package targeting individuals and organizations involved in Russia’s hybrid warfare operations, including disinformation, sabotage, and espionage activities across Europe and Africa.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[X]
STATE ACTORS
[Russia]
[The War in Ukraine]
[China]
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[X]
Telegram’s Durov Accuses France of Political Censorship Attempt
Cybernews reports that Pavel Durov, founder of the messaging app Telegram, accused Nicolas Lerner, head of France’s foreign intelligence agency (DGSE), of asking him to block conservative voices in Romania ahead of national elections. The meeting allegedly occurred this spring at the Hôtel de Crillon in Paris; Durov is currently under judicial supervision in France. He said he refused, stating that Telegram does not censor protest movements in any country, including Russia, Belarus, and Iran.
The DGSE denied the accusation, stating that meetings with Durov were strictly to remind him of his responsibilities in combating terrorism and child exploitation. It firmly rejected any involvement in electoral interference.
Elon Musk reacted by reposting Durov’s statement on X with the comment: “Wow.” Musk has repeatedly criticized European governments for alleged suppression of right-wing political voices.
Source:
CyberNews, 2025. France asked the Telegram founder to ban conservative Romanian voices, he says. [online] Available at:
STATE ACTORS
[Russia]
Russia-Linked Disinformation Targets Elections in Romania and Poland
A report by The Record highlights increased Russian disinformation in Romania and Poland during their presidential elections. The Kremlin-backed campaign, known as Doppelgänger, used known tactics such as cloning official websites of institutions and media outlets to spread false narratives.
In Romania, the campaign aimed to erode trust in democratic institutions. It pushed claims of government abuse, large-scale electoral fraud, and false reports that the election had been cancelled. Authorities had warned of such activity before the first round. Despite the efforts, centrist candidate Nicușor Dan won the May 18 runoff, defeating far-right nationalist George Simion with 53.6% of the vote. His victory reaffirmed Romania’s pro-EU and pro-NATO course.
In Poland, the campaign sought to undermine support for pro-European and pro-Ukrainian policies. Authorities detected foreign-funded disinformation on Facebook ahead of the first round. According to Ukraine’s military intelligence (HUR), the Doppelgänger campaign intensified there, using fake accounts and bots on platforms like X to impersonate voters and amplify false messages. Key narratives included opposition to support for Ukraine, calls to exit the EU, and attacks on government policy.
The election now heads to a runoff on June 1 between centrist Rafał Trzaskowski and nationalist Karol Nawrocki. The outcome will shape Poland’s EU role and stance on Ukraine.
Source:
The Record, Antoniuk, D., 2025. Russia-linked disinformation floods Poland, Romania as voters cast ballots. [online] Available at: https://therecord.media/russia-disinformation-poland-presidential-election
Russia-Aligned TAG-110 Evolves Tactics in Ongoing Disinformation Campaign Targeting Tajikistan
In a May 2025 assessment, Recorded Future’s Insikt Group analyzed a cyber operation by the Russia-aligned actor TAG-110 targeting Tajikistan’s public sector. While the primary method was technical—phishing emails and macro-enabled Word documents—the campaign had a hostile influence dimension. The attackers embedded malware into files disguised as official communications, such as election schedules or defense-related notices. When opened, these files installed persistent code that granted long-term access to government, research, and educational systems.
This access allowed for more than surveillance. The timing and content of the attacks indicate an intent to influence internal decision-making during politically sensitive periods, including elections and military activity. The operation blurred the line between information and manipulation by impersonating trusted documents. The goal was not simply to gather data, but to shape perception and disrupt institutional integrity, subtly guiding outcomes in ways favorable to Russian strategic interests.
TAG-110’s campaign demonstrates how cyber capabilities can serve broader geopolitical objectives. The hostile influence aspect lies in using digital tools not just for intrusion, but to quietly steer political processes from within, without overt interference, yet with significant impact.
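From a defensive standpoint, lures of this kind can often be triaged automatically before they reach end users. The sketch below is a minimal illustration, not anything drawn from the Insikt Group report: it uses the open-source oletools library to flag Word documents containing VBA macros or auto-execution keywords, the delivery technique described above. The file path is hypothetical.

```python
# Illustrative macro-triage sketch (hypothetical sample path; not from the
# Insikt Group report). Requires: pip install oletools
from oletools.olevba import VBA_Parser

SAMPLE_PATH = "incoming/election_schedule.docm"  # hypothetical attachment

def triage_document(path: str) -> None:
    parser = VBA_Parser(path)
    try:
        if not parser.detect_vba_macros():
            print(f"{path}: no VBA macros found")
            return
        print(f"{path}: contains VBA macros")
        # analyze_macros() yields (type, keyword, description) tuples;
        # "AutoExec" entries (e.g., Document_Open) run code when the file opens.
        for kw_type, keyword, description in parser.analyze_macros():
            if kw_type in ("AutoExec", "Suspicious"):
                print(f"  [{kw_type}] {keyword}: {description}")
    finally:
        parser.close()

if __name__ == "__main__":
    triage_document(SAMPLE_PATH)
```

Flagged files would then go to manual analysis; macro presence alone is not proof of malice, since many legitimate documents use VBA.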
Source:
Recorded Future, 2025. Russia-Aligned TAG-110 Targets Tajikistan with Macro-Enabled Word Templates. [online]
Available at: https://go.recordedfuture.com/hubfs/reports/cta-2025-0522.pdf
[The War in Ukraine]
RAND Study Finds Limited Impact of Russian Propaganda Narratives
In a May 2025 study, researchers at the RAND Corporation analyzed the spread and impact of Russia’s most extreme propaganda narratives related to the war in Ukraine. The focus was on four core themes: claims of “denazification,” the dehumanization of Ukrainians through slurs and hate speech, antisemitic narratives targeting President Zelenskyy, and anti-Western rhetoric portraying Ukraine as a puppet of liberal or LGBTQ agendas. While these narratives have circulated widely on platforms like X and Telegram, RAND’s analysis reveals that their influence is more limited than often assumed.
The study examined over 43 million posts from 3.8 million users across 30 languages. The most virulent content, especially dehumanizing language, was primarily concentrated in Russian-language communities and struggled to gain traction internationally. On X, most users posting such content did not engage in dialogue, and the most extreme posts came from anonymous, unpopular accounts with little reach.
On Telegram, similar dynamics were observed: Russian-language channels were active and often widely forwarded, but the most toxic narratives failed to break out of niche audiences. In contrast, many pro-Ukrainian voices had larger followings and greater visibility, effectively challenging Russian messaging in digital spaces. Overall, RAND concludes that while Russian propaganda is aggressive in scale, its resonance beyond Russian-speaking networks remains shallow.
Source:
RAND Corporation, Treyger, E., Williams, H. J., & D'Arrigo, A., 2025. Measuring the Reach of Russia’s Propaganda in the Russia-Ukraine War. [online] Available at: https://www.rand.org/pubs/research_briefs/RBA3450-2.html
[China]
Graphika Exposes Chinese-Aligned Hostile Influence Campaign on X
A new report by Graphika has identified a covert, pro-Chinese influence network operating on X (formerly Twitter). The network appears to have been designed to shape international discourse around U.S. tariffs and trade policy. According to Graphika, over 1,000 fake accounts were identified. Using stolen content and carefully constructed counterfeit personas, the operators posed as authentic users from the United States, Canada, the United Kingdom, and Japan.
At the heart of the campaign was an effort to undermine the United States' trade policies under President Donald Trump. The fake profiles also promoted the narrative that Japan, Canada, and the United Kingdom were resisting political pressure from Washington, a portrayal deliberately framed as grassroots discourse.
Beyond trade-related topics, the network disseminated content aligned with China’s broader geopolitical agenda. This included critical narratives about the U.S. military presence in Japan and promotional content for Chinese government-backed tourism initiatives.
While the network could not be definitively linked to a specific state actor, Graphika concluded that the combination of tactics, content, and behavioral patterns strongly suggests a pro-Chinese influence operation, with notable similarities to previously documented activity attributed to Chinese state actors. Despite a temporary easing of U.S.–China trade tensions, the report warns that covert efforts to sway Western public opinion are likely to persist.
Source:
Graphika, le Roux, J., 2025. Tariff Tirade: China-Aligned Network Poses as Grassroots Voices in Effort to Covertly Boost Online Narratives Critical of US Tariffs and Trade Policies. [online]
Available at: https://public-assets.graphika.com/reports/graphika_report_tariff_tirade.pdf
GENERAL REPORTS
The Potential and Risks of Meta’s Community Notes Program
According to an article by The Conversation, Meta is preparing to launch its Community Notes program in Canada following its rollout in the U.S. in March 2025. The initiative allows users to add context to misleading posts. Notes are only made public if they receive approval from users with differing perspectives, reflecting a decentralized, consensus-based approach.
Key insights come from X (formerly Twitter), which has operated a similar system, launched initially as “Birdwatch”, since 2021. Studies indicate that Community Notes on X can lead to the voluntary deletion of flagged posts and encourage contributors to use more moderate, fact-based language. One of the program’s most widely praised features is transparency: X has made its data and algorithms publicly accessible, allowing independent researchers to monitor and evaluate the system.
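Because the data and algorithms are public, the core scoring idea is well documented: ratings are modeled with a matrix factorization in which a note's intercept captures the appeal that remains after a latent viewpoint factor absorbs partisan agreement, and only notes with a high intercept are shown. The sketch below is a simplified, synthetic illustration of that bridging principle, not X's or Meta's actual implementation; the ratings and hyperparameters are invented for demonstration.

```python
# Simplified sketch of bridging-based note scoring (inspired by X's published
# Community Notes approach, NOT its production implementation).
# Model: rating[u, n] ~ mu + rater_bias[u] + note_intercept[n]
#                        + rater_factor[u] * note_factor[n]
# A note counts as "helpful" only if its intercept stays high once the
# factor term soaks up agreement driven by shared viewpoint.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: rows = raters, cols = notes; 1 = helpful, 0 = not,
# NaN = unrated. Raters 0-2 lean one way, raters 3-5 the other.
# Note 0 is liked across camps (bridging); note 1 only by one camp.
R = np.array([
    [1.0, 1.0, np.nan],
    [1.0, 1.0, 0.0],
    [1.0, np.nan, 1.0],
    [1.0, 0.0, np.nan],
    [1.0, 0.0, 0.0],
    [np.nan, 0.0, 1.0],
])
n_raters, n_notes = R.shape
mask = ~np.isnan(R)

mu = 0.0
b_u = np.zeros(n_raters)            # rater leniency
b_n = np.zeros(n_notes)             # note intercept = "helpfulness" score
f_u = rng.normal(0, 0.1, n_raters)  # rater viewpoint factor
f_n = rng.normal(0, 0.1, n_notes)   # note viewpoint factor
lr, reg = 0.05, 0.03

for _ in range(2000):  # plain SGD over the observed ratings
    for u in range(n_raters):
        for n in range(n_notes):
            if not mask[u, n]:
                continue
            err = R[u, n] - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))

# The bridging note (note 0) should earn the highest intercept, while the
# partisan note's support is absorbed by the viewpoint factor instead.
for n in np.argsort(-b_n):
    print(f"note {n}: intercept={b_n[n]:+.2f}, viewpoint factor={f_n[n]:+.2f}")
```

In X's published system, a note is surfaced only when this intercept clears a fixed threshold after additional checks.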
However, significant weaknesses have also emerged. Fewer than 9% of submitted notes are published due to the high threshold for cross-perspective agreement. In politically polarized environments, this model often fails. Moreover, there is a real risk of manipulation by coordinated groups aiming to discredit accurate content through mass reporting.
Another critical limitation is that neither X nor Meta penalizes users who spread misinformation. Platforms avoid direct intervention, shifting responsibility to users. Critics argue that without consequences, Community Notes risks becoming a symbolic gesture rather than a meaningful tool against disinformation.
For Community Notes to be effective in Canada, Meta must address these structural flaws, learning from failures seen on X.
Source:
The Conversation, Borwankar, S., 2025. Meta’s Community Notes program is promising, but needs to prioritize transparency. [online] Available at: https://theconversation.com/metas-community-notes-program-is-promising-but-needs-to-prioritize-transparency-248324
Far-Right Candidate Challenges Romania Election Results
A recent Reality Check from NewsGuard details that after Romania’s presidential election on May 18, 2025, pro-Kremlin and nationalist X users spread false claims of electoral fraud in favor of Nicușor Dan, who defeated pro-Russian candidate George Simion in the runoff.
Baseless allegations of election fraud were amplified by French conspiracy influencers and by the Pravda network, a known Russian-controlled disinformation outlet. These claims accused France, Moldova, and the European Union of orchestrating the alleged manipulation. Romanian authorities and OSCE observers confirmed the election was free, fair, and held under democratic conditions.
The Record further reports that Simion called for the election to be annulled in response to the result. He cited claims by Telegram founder Pavel Durov, who alleged that French authorities had pressured the platform to silence “conservative voices in Romania.” French officials dismissed the accusation. Earlier this year, Romanian authorities annulled the first round of voting due to confirmed Russian interference, including a coordinated disinformation campaign on TikTok and other platforms.
Another report by DFRLab outlines how George Simion, once known for his anti-Russian unionist activism in support of Romanian-Moldovan reunification, has increasingly echoed Kremlin-aligned narratives. He has opposed military aid to Ukraine and has framed the Russia-Ukraine war as a “fraternal conflict.” Simion’s shift toward pro-Russian messaging has drawn praise from Kremlin-aligned Moldovan figures such as Igor Dodon and support from disinformation networks linked to fugitive oligarch Ilan Shor, known for spreading false narratives online (See our Weekly Report, W20, May 2025, for further details).
Sources:
NewsGuard's Reality Check, Badilini, S., 2025. After Romania Elects Pro-EU Candidate, Pro-Russian Accounts Claim Election Was Stolen. [online] Available at: https://www.newsguardrealitycheck.com/p/after-romania-elects-pro-eu-candidate
The Record, Antoniuk, D., 2025. Defeated Romanian far-right candidate calls for court to annul election over alleged interference. [online] Available at: https://therecord.media/romania-election-annul-simion-george
DFRLab, Olari, V., 2025. From Bucharest to Chisinau: How pro-Kremlin networks shaped Romania’s 2025 election. [online]
Available at: https://dfrlab.org/2025/05/16/pro-kremlin-networks-shaping-romania-2025-election/
French-Linked Network Amplifies False Romanian Election Claims
A 2025 Digital Forensic Research Lab (DFRLab) investigation uncovered a network of 15 websites linked to the France-based company DirectWay, three of which actively repost Romanian-language content from far-right and fringe sources. The company operates the news aggregator Ziar[.]com, identified as a source of disinformation, and the X account @_Direct_News, which promoted claims of election interference by the European Union and France during Romania’s presidential elections.
During the 2025 vote, both platforms circulated false claims declaring nationalist candidate George Simion the winner. Official results confirmed the victory of pro-European candidate Nicușor Dan. The campaign relied on material from outlets such as Realitatea Plus, which was fined for partisan election-day coverage.
Technical analyses showed that the network shares Google Analytics and AdSense codes, indicating centralized control. Historical data revealed that the network had targeted up to 13 African countries. Most of these domains were later redirected to direct[.]news, which publishes region-specific content for 55 African and 47 Asian countries.
An AdSense reverse lookup traced the network's administration to a Romanian national based in Lyon, France, listed as DirectWay’s director in the French corporate registry. The case study demonstrates how a modular, transnational digital infrastructure can be repurposed to conduct hostile influence campaigns across regions, challenging electoral integrity and democratic stability.
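The tracker-code technique is straightforward to reproduce in principle: pages embed Google Analytics and AdSense identifiers in their HTML, and seemingly unrelated domains that share an identifier are plausibly administered together. Below is a minimal sketch of that approach, assuming the requests package; the domain list is hypothetical, and a real investigation would also consult historical crawl data.

```python
# Minimal sketch of infrastructure linking via shared tracker IDs
# (illustrative only; the domains listed are hypothetical placeholders).
import re
from collections import defaultdict

import requests

# Patterns for common Google tracker identifiers embedded in page HTML.
TRACKER_PATTERNS = [
    re.compile(r"UA-\d{4,10}-\d{1,4}"),  # legacy Google Analytics property
    re.compile(r"G-[A-Z0-9]{6,12}"),     # GA4 measurement ID
    re.compile(r"ca-pub-\d{10,20}"),     # AdSense publisher ID
]

DOMAINS = ["example-news-site.com", "another-example.org"]  # hypothetical

def extract_tracker_ids(html: str) -> set[str]:
    """Collect every tracker identifier appearing in the page source."""
    ids: set[str] = set()
    for pattern in TRACKER_PATTERNS:
        ids.update(pattern.findall(html))
    return ids

def group_by_tracker(domains: list[str]) -> dict[str, set[str]]:
    """Map each tracker ID to the set of domains whose pages embed it."""
    groups: dict[str, set[str]] = defaultdict(set)
    for domain in domains:
        try:
            resp = requests.get(f"https://{domain}", timeout=10)
        except requests.RequestException:
            continue  # unreachable domains are simply skipped
        for tracker_id in extract_tracker_ids(resp.text):
            groups[tracker_id].add(domain)
    return groups

if __name__ == "__main__":
    for tracker_id, shared in group_by_tracker(DOMAINS).items():
        if len(shared) > 1:  # a shared ID suggests common administration
            print(f"{tracker_id}: {sorted(shared)}")
```

Shared identifiers are an indicator, not proof: IDs can be copied between unrelated sites, which is why DFRLab corroborated the link through corporate registry records.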
Source:
DFRLab, Châtelet, V., 2025. Online network with French ties promotes election interference claims in Romania. [online] Available at: https://dfrlab.org/2025/05/23/online-network-with-french-ties-promotes-election-interference-claims-in-romania/
Europe at a Crossroads: Balancing AI Innovation and Regulation
In a recent analysis, Carnegie Europe characterizes the European Union as a global pioneer in AI governance, citing the AI Act as the first comprehensive legal framework for artificial intelligence. Complemented by instruments like the Digital Services Act and the GDPR, this approach reflects the EU’s commitment to ethics and fundamental rights. However, the Union is increasingly shifting toward innovation, with initiatives such as AI factories and the EuroStack project to enhance technological sovereignty.
The deregulatory turn has been framed as a necessary response to geopolitical pressure and technological competition from the U.S. and China. Yet the report suggests that Europe’s key barriers to innovation may lie more in structural weaknesses, such as limited access to venture capital, fragmented markets, and reliance on foreign infrastructure, than in regulation itself.
Recent policy changes, including the withdrawal of the proposed AI liability directive and the inclusion of national security exemptions in the AI Act, may risk weakening oversight and fundamental rights protections.
The EU now faces a strategic dilemma: balancing its role as a global standard-setter in ethical AI against the need to remain technologically competitive. The regulation of dual-use AI, applicable in both civilian and military contexts, remains particularly unresolved.
According to Carnegie Europe, a viable path forward would involve greater investment, sovereign digital infrastructure, and a binding framework for dual-use AI. A balanced approach linking innovation with responsible regulation may be key to preserving Europe’s autonomy and democratic values.
Source:
Carnegie Endowment for International Peace, Csernatoni, R., 2025. The EU’s AI Power Play: Between Deregulation and Innovation. [online] Available at: https://carnegieendowment.org/research/2025/05/the-eus-ai-power-play-between-deregulation-and-innovation?lang=en
Disinformation Surges in India-Pakistan Conflict
ABC News reports that following the deadly attack in Pahalgam and the subsequent military escalation between India and Pakistan, a parallel wave of disinformation spread rapidly across platforms like X, WhatsApp, Facebook, and YouTube. AI-generated deepfakes, recycled footage, and fabricated stories distorted public perception and fueled nationalist sentiment (we previously covered the developments between India and Pakistan in our Weekly Reviews 19 and 20).
Prominent examples included doctored images of Rawalpindi Stadium in ruins, a deepfake video of a Pakistani general appearing to admit defeat, and video game clips shared as real airstrikes. A fake Daily Telegraph front page praising Pakistan’s air force was also widely circulated.
ABC highlights how even mainstream media broadcast unverified content. One video showed a couple dancing in Kashmir, falsely framed as their final moments before death. Despite the couple confirming they were alive, the footage continued to spread.
The Digital Rights Foundation recorded a surge in hate speech, while India’s blocking of 17 Pakistani YouTube channels and several X accounts, including those of journalists, raised censorship concerns.
Fact-checkers like BOOM Live and AFP were overwhelmed. Of 437 X posts reviewed, 179 were from verified accounts, yet only 73 included community notes. Experts warn that disinformation will remain a powerful weapon in digital-age conflicts without stronger moderation and verification tools.
Source:
ABC News, Hogan, L., 2025. Misinformation war rages online amid India-Pakistan tensions. [online]
Available at: https://www.abc.net.au/news/2025-05-24/misinformation-online-war-kashmir-conflict-india-pakistan/105318696
FRAMEWORKS TO COUNTER DISINFORMATION
EU Renews Mission in Moldova to Combat Disinformation and Hybrid Threats
The European Union has extended the mandate of the EU Partnership Mission in Moldova (EUPM Moldova) until May 2027. The mission aims to strengthen Moldova’s resilience against hybrid threats, with a strong focus on foreign disinformation and information manipulation. Launched in 2023 at the request of the Moldovan government, EUPM Moldova is the EU’s first civilian mission explicitly designed to counter such threats. It provides strategic advice and operational support in crisis management, cybersecurity, and communication integrity.
One key achievement is the support for establishing Moldova’s Centre for Strategic Communication and Countering Disinformation (StratCom Centre). The mission provided training, tools, and EU best practices to help identify and respond to false or manipulative narratives. Since its launch, EUPM Moldova has carried out over 60 capacity-building activities. Specialized teams work closely with Moldovan authorities to improve institutional responses. The mission has become a strategic partner in building sustainable security and protecting democratic processes from foreign interference.
Source:
European External Action Service, 2025. EUPM Moldova: Moving forward towards sustainable security resilience in Moldova. [online] Available at: https://www.eeas.europa.eu/eupm-moldova/eupm-moldova-moving-forward-towards-sustainable-security-resilience-moldova_en
EU Sanctions Target Russia's Hybrid Warfare Threats
According to a report by The Record, the European Union has introduced a new sanctions package targeting individuals and entities involved in Russia’s hybrid warfare operations. The measures focus on actors linked to disinformation, sabotage, and espionage activities across Europe and Africa.
Those sanctioned include members of Russia’s military intelligence agency (GRU), individuals spreading pro-Kremlin narratives on social media, and companies providing technical infrastructure, such as web hosting and GPS jamming technologies, that support these efforts.
A key target of the sanctions is Voice of Europe, a media outlet reportedly secretly funded by pro-Russian Ukrainian politician Viktor Medvedchuk. The platform allegedly ran influence operations across the continent, including attempts to covertly finance candidates in the 2024 European Parliament elections.
The sanctions list also includes media figures involved in disseminating Russian disinformation in African countries and the operators of Stark Industries, a U.K.-based hosting provider. The company is believed to have supported Kremlin-aligned cyber and influence campaigns, including the well-known Doppelgänger operation aimed at manipulating public opinion in the West.
Source:
The Record, Antoniuk, D., 2025. EU sanctions target individuals, organizations behind Russia’s disinformation and sabotage operations. [online] Available at: https://therecord.media/eu-sanctions-orgs-individuals-tied-to-russia-disinformation
[Download Report]
GLOSSARY
Information Operations
Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision-making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.
Hybrid Warfare
A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influence methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.
Cyber Warfare
Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.
Cyfluence Attack
Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.
Soft Warfare
All warfare disciplines that are not kinetic (i.e., involving no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.
CIB
Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.
FIMI
The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.
Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.
Digital Impact on Discourse (DID)
Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC.
Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.
Disinformation
Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".
Inauthentic Behavior
Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users
AKA avatars - a generic term describing all types of users who are not legitimate social media users, i.e., bots, accounts operated by humans under a false identity, or accounts operated by humans under their real identity but for the sole purpose of promoting an agenda that is not their own.
Unidentified users
A generic term used to describe users on social networks who are allowed to keep their real identity undisclosed (as on X, formerly Twitter).
Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.
Bots
Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.
Repurposed accounts
Means social media accounts that were hacked or purchased, then used for different purposes than the original ones.
Fake website
Is a website designed for fraudulent or scam activity, hiding its real purpose.
Deep Assets
These are non-human deep cover assets, divided into two sub-categories:
Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).
Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives.
Real platforms
Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally.
Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.
Cyberbullying
Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.