Anthropic’s AI Report and Its Implications for Cyfluence Operations

Background
In November 2025, Anthropic published a report marking a potential milestone in the discussion of agent-driven, automated cyber threats and information operations. The analysis, titled “Disrupting the first reported AI-orchestrated cyber espionage campaign”, makes a bold claim about the nature of a recently detected malicious activity cluster, which Anthropic characterises as a wide-scale, complex campaign attributed to a Chinese state-linked threat actor (tracked as GTG-1002).
According to Anthropic, this was the first time its Claude Code tool autonomously executed 80-90% of an observed attack chain. The agentic coding tool apparently conducted major phases of the operation, including target reconnaissance and acquisition, vulnerability scanning, exploit generation, backdoor deployment, and data exfiltration. While most tactical actions were carried out with minimal human intervention, the attackers still had to provide initial targets, define strategic objectives, and make decisions at key operational moments.[i]

Figure 1 – Lifecycle of the AI-orchestrated attack chain, courtesy of Anthropic
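
To make the reported division of labour concrete, below is a minimal, purely schematic sketch of a human-gated orchestration loop. The phase names follow Figure 1 and Anthropic’s narrative; the placement of the human decision gates, and every identifier, are illustrative assumptions, not a reconstruction of GTG-1002’s tooling.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    human_gate: bool  # True where, per the report, operators had to decide or approve

# Phase names follow the reported lifecycle; gate placement is an assumption
# based on Anthropic's description of intervention at "key operational moments".
LIFECYCLE = [
    Phase("target selection", human_gate=True),     # humans supplied initial targets
    Phase("reconnaissance", human_gate=False),
    Phase("vulnerability scanning", human_gate=False),
    Phase("exploit generation", human_gate=False),
    Phase("backdoor deployment", human_gate=True),  # assumed escalation checkpoint
    Phase("data exfiltration", human_gate=False),
]

def walk_lifecycle(lifecycle: list[Phase]) -> None:
    """Walk the phases, showing which are agent-driven vs. human-gated."""
    autonomous = sum(1 for p in lifecycle if not p.human_gate)
    print(f"agent-driven share (by phase count): {autonomous / len(lifecycle):.0%}")
    for phase in lifecycle:
        actor = "human operator" if phase.human_gate else "agent"
        print(f"{phase.name}: {actor}")

walk_lifecycle(LIFECYCLE)
```

Counting by phase rather than by individual task, this toy split lands below the report’s 80-90% figure; the point is the structure, in which a handful of human gates steer an otherwise autonomous chain.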
Soon after its release, the report drew considerable criticism from cybersecurity experts[ii], who pointed to the absence of tangible technical evidence (e.g. indicative code samples, concrete indicators of compromise, or meaningful observed TTPs). At the same time, Anthropic qualified its own assessment and noted limits in Claude Code’s operational efficiency. The model automated several tasks but also produced incorrect outputs, including hallucinated, non-existent ‘stolen’ credentials. These errors reduced the model’s reliability and constrained its operational value.[iii]
From Automated Intrusion to Automated Influence
Although the Anthropic report describes an intrusion operation, a largely autonomous AI system could also play a significant role in digital hostile influence campaigns (HICs) and cyfluence operations. Such an application could enable highly efficient and scalable operational workflows.
Applied to HICs, the same principle yields a similar structure: high-volume task decomposition, model-driven execution, and rapid iteration. Influence actors already organise their workflows across persona creation, message development, narrative testing, audience analysis, content deployment, and performance assessment. AI-enabled orchestration would integrate into these processes without structural disruption.
Where an intrusion campaign maps technical networks, an influence campaign maps social communities. Where espionage collects credentials and access tokens, influence operations collect trust relationships and leverage points within social graphs.
Potential AI-enabled influence systems could generate tailored content, adjust narratives in real time, infiltrate closed communities, build rapport quickly, modulate tone based on sentiment analysis, and reuse “campaign kits” containing personas, scripts, assets, and dissemination strategies. The parallels are functional.
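
To illustrate how such a “campaign kit” might be represented in practice, here is a minimal sketch of a reusable kit structure. Every class, field, and value below is a hypothetical assumption for illustration, not a schema observed in any real operation.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    handle: str
    backstory: str
    tone: str  # e.g. "folksy", "expert", "outraged"

@dataclass
class CampaignKit:
    """A reusable bundle an influence operator could hand to an orchestrator."""
    personas: list[Persona]
    scripts: list[str]                        # message templates awaiting localisation
    assets: list[str]                         # paths/URLs to images, video, documents
    dissemination_plan: dict[str, list[str]]  # platform -> target communities

# A toy instance: one persona, one templated script, one asset, one channel.
kit = CampaignKit(
    personas=[Persona("local_voice_42", "small-town parent of three", "folksy")],
    scripts=["Did you hear what {target_institution} just did?"],
    assets=["assets/meme_template_01.png"],
    dissemination_plan={"example-platform": ["regional politics groups"]},
)
print(f"kit ships {len(kit.personas)} persona(s) across {len(kit.dissemination_plan)} platform(s)")
```

The value of such a structure to an attacker, and hence its relevance to defenders, is reuse: the same kit can be redeployed against new audiences with only the dissemination plan swapped out.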
Current State of AI-Driven Attacks
In its ‘Detecting and countering misuse of AI: August 2025’ threat intelligence report, Anthropic stated that Agentic AI has been weaponized to perform sophisticated cyberattacks. It explained how abuse of LLMs has significantly lowered the entry barrier for cybercriminals, enabling threat actors with limited technical expertise to conduct complex operations. What previously required years of practical experience can now be fully deployed and automated within minutes.[iv]
In addition, the report claimed that financially motivated actors were embedding AI throughout every phase of their offensive operations, using it to profile victims, analyze stolen data, harvest financial information, fabricate identities, and scale campaigns to a volume and precision that would be impossible manually.
Together, these reports by Anthropic join a growing body of empirical evidence and academic study regarding the existing and potential use of AI technologies in information operations. In recent years, substantial resources have been invested in assessing evolving AI capabilities and their potential to enhance digital influence operations and automate cognitive threats. A prominent example is the two-part report by the Center for Security and Emerging Technology (CSET) at Georgetown University, titled ‘AI and the Future of Disinformation Campaigns’[v], which also offered a framework (RICHDATA) to “describe the stages of disinformation campaigns”, with an emphasis on potential GenAI applications.[vi]
Later analytical reviews, including the CRC’s ‘The Rise of AI and What It Means in the Realm of Digital Influence’[vii] and ‘Russia’s Use of GenAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations’[viii], have contributed case studies, observations, and actionable insights.
Implications for HICs and TrustOps
If accurate, Anthropic’s latest findings could have far-reaching consequences for cybersecurity and Influence Defense stakeholders alike. The main implications for the hybrid-operational space are:
- Scaling is no longer limited by human capacity.
- Commercially available Agentic AI tools can handle reconnaissance, infiltration, and data exfiltration.
- Orchestration is semi-autonomous, if not fully autonomous.
- Successful cyber operations can immediately and seamlessly feed influence operations.
Taken together, these developments are expected to dramatically increase both the velocity and impact of cyfluence threats.
Adapting Counter Influence
Leading researchers have also been exploring the application of AI models and linguistic “signatures” for defensive counter-FIMI purposes (see ‘Cracking the Code: How AI Detects Social Media Impersonation’[ix]).
However, detecting and attributing AI-orchestrated attacks must rely on a multidisciplinary approach:
- Detection needs to apply updated analytical frameworks (e.g. RICHDATA, DISARM, ABCDE), algorithmically enhanced dynamic logics, static model-specific signatures, real-time threat intelligence, UEBA baselining, and data enrichment (a minimal baselining sketch follows this list).
- Attribution must adequately determine the campaign, the threat actor, and the tooling (in this case, Agentic AI tools and protocols, LLMs, and custom adaptations). These observations should, in turn, inform TTP cataloguing and ongoing analysis.
- Remediation should evolve from legacy solutions to an AI-based, autonomous Cognitive Assets Security Broker (CASB) suite, reflecting a multilayered defensive methodology.
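
As a minimal illustration of the UEBA-style baselining named in the detection bullet above, the sketch below flags an account whose recent posting cadence departs sharply from its own historical baseline. The function name, inputs, and threshold are illustrative assumptions, not a production detector.

```python
import statistics

def cadence_anomaly(recent_timestamps: list[float],
                    baseline_gaps: list[float],
                    z_threshold: float = 3.0) -> bool:
    """Return True if the account's recent inter-post gaps deviate from its
    own historical baseline by more than z_threshold standard deviations."""
    if len(recent_timestamps) < 2 or len(baseline_gaps) < 2:
        return False  # not enough history to establish a baseline
    recent_gaps = [b - a for a, b in zip(recent_timestamps, recent_timestamps[1:])]
    mu = statistics.mean(baseline_gaps)
    sigma = statistics.stdev(baseline_gaps)
    if sigma == 0:
        return False  # perfectly regular history; z-score undefined
    z = abs(statistics.mean(recent_gaps) - mu) / sigma
    return z > z_threshold

# A burst of posts seconds apart, from an account that historically posted hourly:
print(cadence_anomaly([0, 5, 10, 15], baseline_gaps=[3600, 3500, 3700, 3650]))  # True
```

A single behavioural feature like cadence is trivially evaded on its own; in practice it would be one signal among the frameworks, signatures, and enrichments listed above.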
Crucially, to keep pace with the expected increase in threat velocity, defenders could employ large-scale AI-orchestrated Digital Impact on Discourse (DID) activities, including asset disruption and narrative recovery campaigns.
Conclusion
Until now, AI-orchestrated (or largely automated) cyber-attacks were a speculative prediction. Past assessments concluded that concerns over a disinformation “AI Apocalypse”[x] were overhyped, with no clear evidence of a game-changing deployment of AI capabilities for cognitive threats (sometimes referred to as the “atomic bomb” of AI disinformation) by any major threat actor.
However, in light of recent reports, Influence Defense practitioners and counter-FIMI stakeholders must address several key questions:
- If Anthropic’s findings are accurate, should sophisticated AI-orchestrated influence attack chains be considered an immediate threat?
- Can Agentic AI tools enable mostly autonomous or largely automated end-to-end cyfluence campaigns?
- Even if Anthropic’s claims are exaggerated, which components of hostile influence campaigns’ kill chains (and operational infrastructure) can now be driven by Agentic AI tools?
Answering these questions is necessary to adopt effective defensive strategies and implement suitable solutions, as the effects of increased automation and potential operational autonomy in HICs become clearer. Anthropic’s findings, as they pertain directly to cyber-enabled influence operations, illustrate how the technical segment of cyfluence kill chains might be accelerated and scaled once agent-based AI capabilities become prevalent. Influence-based vectors (such as narrative-driven external or internal threats) are likely to emerge as new variants of cyfluence kill chains.
Lastly, as advanced AI capabilities, especially autonomous operation orchestration and threat detection, continue to coalesce, the technological challenges facing actors and defenders will shift. Focus is expected to pivot from pre-determined AI-based orchestration to AI-assisted decision-making and Agentic AI’s cognitive agility, affecting tactical performance.
Footnotes
[i] Anthropic, Disrupting the first reported AI-orchestrated cyber espionage campaign. [online] pp 3-4. Published November 2025. Available at: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
[ii] Cyberscoop, D. B. Johnson, China’s ‘autonomous’ AI-powered hacking campaign still required a ton of human work. [online] Published 14 November 2025. Available at: https://cyberscoop.com/anthropic-ai-orchestrated-attack-required-many-human-hands/; Ars Technica, D. Goodin, Researchers question Anthropic claim that AI-assisted attack was 90% autonomous. [online] Published 14 November 2025. Available at: https://arstechnica.com/security/2025/11/researchers-question-anthropic-claim-that-ai-assisted-attack-was-90-autonomous/; The Stack, E. Targett, Backlash over Anthropic "AI cyberattack" paper mounts. [online] Published 17 November 2025. Available at: https://www.thestack.technology/backlash-over-anthropic-ai-cyberattack-paper-mounts/; The Guardian, A. Down, AI firm claims it stopped Chinese state-sponsored cyber-attack campaign. [online] Published 14 November 2025. Available at: https://www.theguardian.com/technology/2025/nov/14/ai-anthropic-chinese-state-sponsored-cyber-attack
[iii] Anthropic, Disrupting the first reported AI-orchestrated cyber espionage campaign. [online] p 4. Published November 2025. Available at: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
[iv] Anthropic, Detecting and countering misuse of AI. [online] Published 27 August 2025. Available at: https://www.anthropic.com/news/detecting-countering-misuse-aug-2025
[v] Georgetown University Center for Security and Emerging Technology (CSET), K. Sedova, C. McNeill, A. Johnson, A. Joshi & I. Wulkan, AI and the Future of Disinformation Campaigns: Part 1: The RICHDATA Framework. [online] Published December 2021. Available at: https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/; CSET, K. Sedova, C. McNeill, A. Johnson, A. Joshi & I. Wulkan, AI and the Future of Disinformation Campaigns: Part 2: A Threat Model. [online] Published December 2021. Available at: https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-2/
[vi] Georgetown University Center for Security and Emerging Technology (CSET), J. A. Goldstein, G. Sastry, M. Musser, R. DiResta, M. Gentzel & K. Sedova, Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. [online] Published January 10, 2023. Available at: https://arxiv.org/pdf/2301.04246
[vii] Cyfluence Research Center (CRC), F. Frank, The Rise of AI and What It Means in the Realm of Digital Influence. [online] Published 6 August 2024. Available at: https://www.cyfluence-research.org/post/the_rise_of_ai
[viii] Cyfluence Research Center (CRC), Dr. C. Nehring, Russia’s Use of genAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations. [online] Published 12 April 2025. Available at: https://www.cyfluence-research.org/post/russia-genai-disinformation-cyber-influence-hybrid-warfare-strategy-deepfakes
[ix] Cyfluence Research Center (CRC), O. Mokryn, Cracking the Code: How AI Detects Social Media Impersonation. [online] Published 23 March 2025. Available at: https://www.cyfluence-research.org/post/craking-the-code-how-ai-detects-social-media-impersonation
[x] Security Conference, R. Carr & P. Köhler, AI-pocalypse Now? Disinformation, AI, and the Super Election Year. [online] Published 4 October 2024. Available at: https://securityconference.org/en/publications/analyses/ai-pocalypse-disinformation-super-election-year/