Introduction

Last week, on November 13, 2025, Anthropic -- a leader in commercial artificial intelligence technology -- released a report detailing their disruption of a nation-state espionage operation in which AI was a featured operator and enabling element. The report, titled "Disrupting the first reported AI-orchestrated cyber espionage campaign," and the accompanying blog released by the Anthropic team are worth the short amount of time it takes to read them. We encourage everyone to do just that -- read them.

According to the team at Anthropic, in mid-September 2025, their team uncovered what they consider to be a highly sophisticated cyber espionage campaign linked to a Chinese state-sponsored group known as GTG-1002. The discovery signaled a major evolution in how advanced threat actors are leveraging AI to conduct operations -- the very kind of threat we at Salience Cybersecurity focus our efforts on via the development of intelligence-driven prevention capabilities.

The Anthropic investigation revealed a well-funded, expertly organized effort that launched multiple targeted intrusions simultaneously across about 30 organizations, several of which were successfully compromised.

The report makes a claim that should be considered plausible though difficult to prove:

"We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention."

Though many are skeptical of this report and the evidence presented, our stance -- as a team composed of individuals with deep and long histories in the commercial and public intelligence community and decades of threat research and intelligence expertise -- is that the evidence as shared and disclosed should be studied and, where possible, corroborated via sources that are cleared for such disclosure.

The purpose of this blog is not to call the Anthropic team's findings into question, but rather to assess and acknowledge them while introducing our own perspective on what we believe is a very real advancement in threat actor and adversary activity, driven by the broad availability of artificial intelligence (generative AI, LLMs, etc.) and the lowering of the barrier to entry for threat actors and adversaries of all kinds -- including nation-states and their proxies.

Detection, Scope, and Nature of the Campaign

According to the team at Anthropic, once they became aware of the threat actor's activity, they began an investigation to gain clarity into the nature of the adversary, the campaign, and the underpinnings of the attack itself. The Anthropic team spent ten days working to gain the degree of understanding necessary to take the appropriate measures: banning accounts as they were identified, conducting victim notification, establishing victimology, and coordinating with law enforcement and authorities once a reasonable and defensible corpus of actionable intelligence was assembled.

We encourage the reader to consult the primary sources that the Anthropic team has produced to ensure the greatest degree of accuracy in representation and detail.

Our team found it noteworthy that the attack associated with this campaign was made possible in part by the rapid evolution of the models behind Anthropic's Claude Code capability. In fact, the team notes that many of the features the threat actor depended upon to carry out the attack did not exist even one calendar year prior. This is incredibly important given the rapid evolution of artificial intelligence, its broad availability, and its potential for weaponization.

Advancing Model Capability and Intelligence

Models have advanced, and continue to advance, in ways that enable them not only to interpret complex instructions and directives, but also to validate and execute them -- making complicated tasks, including offensive security tasks and attacks, possible. The team at Salience Cyber has worked diligently to prove this out since inception and has been demonstrating this capability as part of its private design partner and industry demonstrations.

The truth is that what Anthropic observed in this attack and campaign is possible, probable, and provable. This point alone ought to give the reader -- irrespective of their knowledge of artificial intelligence -- pause, prompting not only greater consideration of how they adopt AI but also a hard look at whether their current cybersecurity technology stack can detect and prevent threats associated with its misuse, abuse, compromise, and weaponization.

Agency as Opposed to Model Only

Models can now act not only in their principal capacity but as agents as well. This enables users to direct autonomous action: the model-agent chains tasks together, assesses its own progress (positive and negative), and makes decisions accordingly, at times with limited human intervention. In the hands of a sophisticated and skilled user -- good or bad -- the outcomes can be quite significant and, in the worst cases, devastating.
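To make the pattern concrete, the task chaining and self-assessment described above can be sketched as a minimal agent loop. This is an illustrative toy, not any vendor's implementation; the execute and assess methods stand in for model-driven tool calls and model judgment.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Minimal agentic loop: pull a task, act, assess, decide what to do next."""
    tasks: list
    results: dict = field(default_factory=dict)

    def execute(self, task: str) -> str:
        # Stand-in for a model-driven tool call (hypothetical).
        return f"done:{task}"

    def assess(self, outcome: str) -> bool:
        # Stand-in for the model judging whether the outcome advances the directive.
        return outcome.startswith("done")

    def run(self) -> dict:
        while self.tasks:
            task = self.tasks.pop(0)
            outcome = self.execute(task)
            if self.assess(outcome):
                self.results[task] = outcome   # progress: record and move on
            else:
                self.tasks.append(task)        # setback: requeue for another attempt
        return self.results
```

In a real agentic system, each iteration would be a model call with tool access; the point of the sketch is only the loop structure -- chained tasks, outcome assessment, and autonomous decisions between them.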

Access to New and Powerful Tooling

Today, current-generation generative AI and LLMs provide users with a wealth of tooling -- much of which is quite new and powerful. A substantial portion of these tools did not exist even one calendar year ago. In many cases these tools are accessible via fully credentialed APIs and Model Context Protocol (MCP) enabled servers.

For those encountering MCP for the first time: MCP is an open-source standard used to interconnect AI applications with external systems. AI applications (Copilot, Cursor, ChatGPT, Claude Code, etc.) lack the native ability to connect and communicate with tools, data sources, repositories, and workflows. MCP provides the protocol necessary to achieve these ends. The community behind MCP often compares it to a "USB-C port for AI applications."

Consider how this capability has enabled great strides for those using AI applications for good -- while also enabling those with malicious intent to advance their goals through network reconnaissance, port scanning, vulnerability scanning, and far more advanced operations like those described in the Anthropic report.
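To give a sense of how little code such tooling requires, here is a minimal TCP connect scan -- the kind of reconnaissance primitive an agent could trivially generate or invoke through a tool interface. The host, ports, and timeout are illustrative; this is a sketch of the concept, not tradecraft from the report.

```python
import socket


def scan_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Defensively, the takeaway is that the primitive itself is not novel -- what is novel is an AI agent selecting targets, running such scans, and acting on the results with minimal human direction.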

The Attack and Campaign Phases

The attack itself was broken down into phases by the Anthropic team. Though it is tempting to analyze every aspect of their write-up, we have elected to provide a high-level walkthrough to illustrate both Anthropic's findings and the reality of this new event horizon being realized in real time.

Phase I: Campaign Development and Target Selection

Human operators identified specific targets and designed the framework for their attack campaign. Using tactics like those refined by Salience Cyber through extensive R&D, they jailbroke the AI model to bypass security controls and further their offensive objectives.

Phase II: Reconnaissance and Network Enumeration

The threat actors used Claude Code to conduct automated reconnaissance. According to Anthropic, this activity was largely autonomous, with Claude -- under human direction -- leveraging multiple tools, including browser automation through MCP manipulation. This enabled target cataloging, authentication mechanism analysis, and vulnerability prioritization for exploitation and compromise.

Phase III: Vulnerability Discovery and Validation

Automated systems were used to identify and exploit vulnerabilities, providing the attackers with an initial foothold and enabling callbacks to their infrastructure.

Phase IV: Credential Harvesting and Lateral Movement

Human operators guided Claude to expand operations, systematically harvesting credentials across compromised networks and enabling further lateral movement.

Phase V: Data Collection and Exfiltration

Anthropic noted this phase had minimal human involvement, showing the highest level of AI autonomy. This reflects the increasing sophistication and capability of current-generation AI systems compared to earlier versions.

Phase VI: Documentation and Handoff

The campaign concluded with the automated creation of detailed documentation covering all attack phases. Claude's ability to generate and organize comprehensive technical reports, code, and supporting materials proved to be a valuable asset to the threat actors behind this operation.

Conclusion: What Can We Learn from This?

The GTG-1002 campaign marks a watershed moment in cyber threat evolution. Advanced AI capabilities are becoming more accessible and easier to weaponize, creating a dangerous asymmetry that legacy security approaches cannot adequately address. When state-sponsored actors can conduct large-scale espionage with minimal human intervention, the traditional boundaries of cybersecurity dissolve.

This is not an isolated incident -- it is a preview of what the industry will increasingly face. The question is no longer whether AI will be weaponized at scale by sophisticated adversaries, but whether defenders will evolve quickly enough to protect critical organizations and infrastructure.

For enterprises and governments, the imperative is urgent: rethink security frameworks entirely rather than making incremental improvements. This requires deploying AI-aware prevention capabilities, honestly assessing current blind spots, and accelerating investment in next-generation defenses designed to detect and counter AI-orchestrated threats. The cost of inaction is unacceptable.

Want to Learn More?

Salience Cyber has developed a comprehensive threat prevention platform engineered for today's evolving threat landscape. Our solution provides unified defense against AI-driven threats -- including weaponized and misused AI systems -- while simultaneously protecting against conventional attack vectors.

Powered by the Salience Cyber Cognition AI Engine, our platform delivers predictive threat prevention and proactive defense grounded in adversary-aware threat intelligence and deterministic outcomes. Built on proprietary AI and security tradecraft, the platform operates without hallucination, providing operationally reliable results and measurable cost efficiency for security teams.

If you're evaluating threat prevention solutions designed to address both AI-driven and conventional threats, we'd welcome a conversation about how Salience Cyber can address your organization's needs. We're currently hosting a private beta for select design partners and invite your participation. Get in touch.