Offensive operators have been using automated, logic-driven attack chains for years… Now the adversaries are just catching up.
In a turn of events that should shock nobody who’s spent time tracking threat actors or building agentic AI systems, Anthropic just published a report describing how they disrupted a Chinese state-sponsored APT that used Claude and Model Context Protocol (MCP) tooling to orchestrate an espionage campaign.
It’s unprecedented in that it’s the first time a legitimate cyber-threat-intel group has publicly attributed this kind of AI-orchestrated operation in the wild. But to those of us who’ve worked on real red teams – this isn’t a revelation. It’s confirmation.
We’ve been here before.
Red teamers have lived this reality for years
For seasoned red team operators, automation, logic chaining, and adaptive attack processes have been part of daily tradecraft for a long time. Legitimate teams have used automated tooling to accelerate everything from reconnaissance to exploitation, from social engineering to post-exploitation and persistence.
Why? Because automation buys time and time is the most valuable resource in an engagement. The more you can offload, the more you can think, pivot, and react. It’s exactly the same reason adversaries do it.
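The chaining the paragraph above describes can be sketched in a few lines. This is a toy pipeline, not real tradecraft: every stage name, host, and value below is hypothetical, and each stub stands in for actual tooling (DNS enumeration, port scanning, service probing). The point is the structure — each stage consumes the previous stage's findings, so the operator reviews results instead of doing the glue work by hand.

```python
# Toy recon chain: each stage reads and extends a shared state dict.
# All data here is made up; real stages would wrap actual tools.

def enumerate_subdomains(state):
    # Stub: a real stage would query cert transparency logs, DNS, etc.
    state["subdomains"] = [f"{s}.{state['domain']}" for s in ("vpn", "mail")]
    return state

def resolve_hosts(state):
    # Stub resolver mapping every subdomain to a documentation-range IP.
    state["hosts"] = {sub: "203.0.113.10" for sub in state["subdomains"]}
    return state

def scan_ports(state):
    # Stub scanner: a real stage would probe each resolved host.
    state["open_ports"] = {ip: [443] for ip in state["hosts"].values()}
    return state

PIPELINE = [enumerate_subdomains, resolve_hosts, scan_ports]

def run_chain(domain):
    state = {"domain": domain}
    for stage in PIPELINE:  # each stage feeds the next automatically
        state = stage(state)
    return state

result = run_chain("example.com")
```

Swap any stage for an LLM-driven one that decides *which* tool to run next and you have, in miniature, the agentic pattern the Anthropic report describes.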
So, while the Anthropic report is new in attribution, it’s not new in concept. The only real change is scale and accessibility. What once required a team of skilled operators and custom tooling is now achievable through MCP tooling and an LLM that can be scripted to think, reason, and act semi-autonomously.
A note on what “legit” red teaming actually means
Quick sidebar… because we’re going to hear a lot more “experts” weigh in on this.
Red teaming isn’t about jumping fences or whispering from “secret locations.” It’s about operational realism, not theater. It’s about testing enterprise defenses the way capable adversaries actually behave, not running canned scripts or parading buzzwords on social media.
If you’ve ever heard someone say:
“You can’t red team without AD,”
“I can’t tell you what I did before,”
or my personal favorite, “I jumped a fence”…
Cool story. But that’s not red teaming. That’s cosplay.
The people who’ve been doing this work legitimately have been building and testing automated adversaries for years because that’s what real attackers do. We’ve always known that logic-driven automation is the natural extension of fieldcraft under pressure.
Where defenders are falling behind
Here’s the uncomfortable part: the offensive side has evolved faster than the defensive side.
Enterprise security – especially detection engineering and training – is still largely optimized for a world of rules, signatures, and static logic. But attackers are no longer static. They’re dynamic systems now, composed of agents that can iterate and reason in real time.
Penetration testing is already drifting toward more automated “appsec-style” workflows: scanners, code review plugins, even semi-automated testing suites like Burp, simply because it’s easier, faster, and more reproducible. But that shift also highlights how far behind defensive teams are in adopting equivalent automation and reasoning.
We still see training pipelines that teach manual banner-grabbing or outdated enumeration steps – the kinds of tasks that beg for automation. Understanding fundamentals matters, sure. But forcing analysts through archaic processes only widens the capability gap between enterprise defenders and modern adversaries, especially when those analysts carry the archaic lessons back to their teams as hard truths.
The real lesson from Anthropic’s report
The Anthropic case isn’t about a brand-new threat. It’s about velocity: the speed and scale at which attackers can now act. Agentic AI collapses the time between steps, shortens dwell time, and lets adversaries explore more attack paths in parallel.
Defenders need to stop treating this as a theoretical future and start adapting now. That means shifting from static detections to behavioral detection, focusing on identity-first protection, and mapping attack paths rather than just looking for single events.
The atomic steps of attacks haven’t changed – reconnaissance is still reconnaissance – but the tempo and adaptability have. The only sustainable defense is one that matches that agility.
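What does “mapping attack paths rather than just looking for single events” look like in practice? As a toy sketch (stage names, identities, and events below are all hypothetical, not from the report): instead of alerting on each low-severity event in isolation, correlate events per identity and flag an identity whose activity walks through the stages of an attack path in order.

```python
from collections import defaultdict

# Hypothetical stages of a simple attack path, in rough order.
# Individually each event is low-severity noise; in sequence,
# from one identity, they form a path worth alerting on.
STAGES = ["recon", "credential_access", "lateral_movement", "exfiltration"]

def correlate(events):
    """Group time-ordered (identity, stage) events by identity and
    flag identities whose events cover the attack path in order."""
    by_identity = defaultdict(list)
    for identity, stage in events:
        by_identity[identity].append(stage)

    flagged = []
    for identity, stages in by_identity.items():
        progress = 0
        for stage in stages:
            if progress < len(STAGES) and stage == STAGES[progress]:
                progress += 1
        if progress == len(STAGES):  # full path observed
            flagged.append(identity)
    return flagged

events = [
    ("svc-backup", "recon"),
    ("jdoe", "recon"),
    ("svc-backup", "credential_access"),
    ("svc-backup", "lateral_movement"),
    ("svc-backup", "exfiltration"),
]
print(correlate(events))  # one noisy identity, not four isolated alerts
```

A real detection pipeline would add time windows, weighting, and far richer event taxonomies, but the shape is the same: the unit of detection is the path and the identity, not the single event.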
The takeaway
If you’ve worked offense, you’ve already seen this coming. If you haven’t, you’re now living in a world where the adversary thinks and acts like a red team – only faster.
Anthropic’s report isn’t a warning; it’s a signal flare. The same automation that once powered legitimate testing has crossed the line into active operations. The fieldcraft has gone live.
AI didn’t reinvent the attack chain; it just put it on fast-forward. The defenders who survive the next wave won’t be the ones with the most signatures; they’ll be the ones who can think and adapt at machine speed.