TL;DR: The Charming Kitten leak reveals a government-backed cyber division targeting private-sector IP – and why defenders must replace rule-based detection with AI-driven behavioral analytics.
When the internal files of Iran’s Revolutionary Guard–linked cyber unit – better known as APT35 / Charming Kitten – hit GitHub in late September 2025, it didn’t look like an ordinary breach. Inside the Persian-language archives were not only campaign reports and malware samples but org charts, shift schedules, and office memos. One folder even listed an “office manager.”
This was not a loose band of hackers freelancing in the shadows. It was a fully staffed cyber enterprise operating like a business, complete with HR functions, metrics, and deliverables.
“These operators clock in, hand off shifts, and file reports. They just happen to be stealing research data from pharmaceutical companies.”
— Adam Koblentz, Reveal Security
From Espionage to Enterprise Ops
Independent analysis by CloudSEK concludes the leaked material originated from Unit 1500, part of the IRGC Intelligence Organization, and includes more than 100 internal Persian documents outlining structure, tools, and campaigns.
The files describe departments for social engineering, malware development, and infrastructure, each with defined responsibilities. It’s the most detailed view we’ve had of a nation-state hacking unit functioning like a modern company. And their “business focus” isn’t politics or propaganda – it’s intellectual property theft from organizations conducting high-value research.
| Department | Responsibilities |
| --- | --- |
| Social engineering team | Runs targeted phishing and influence campaigns, scanning social media and professional networks for scientists, executives, and administrators with access to valuable data. They register look-alike domains, design spoofed login portals, and coordinate SMS or email phishing runs to harvest credentials and contacts. |
| Malware development group | Functions like an in-house R&D lab, maintaining and testing custom remote-access tools such as the RTM Project RAT and BellaCiao loader, experimenting with obfuscation and anti-EDR techniques, and documenting test results in internal “daily reports.” |
| Infrastructure team | Maintains compromised routers used to host payloads or proxy stolen data. CloudSEK’s review of leaked task lists even shows roles for server maintenance, credential storage, and modem exploitation (notably GoAhead devices). |
Together, these functions mirror a mature corporate structure: operators performing assigned tasks on schedule, backed by engineering and IT support that keeps the espionage machine running.
APT35’s past campaigns show the same intent:
- 2019–2020: Attacks on academic institutions and medical research centers in the U.S., France, and Israel.
- 2021: The BadBlood credential-phishing campaign targeting oncologists and geneticists.
- 2022–2023: Exploitation of Log4Shell and Microsoft Exchange flaws to deploy backdoors like GhostEcho/CharmPower and the BellaCiao dropper.
The new leak fills in the missing context, showing how those operations are organized internally, down to job titles, reporting structures, and attack workflow documents.
A Shift in Target and Tradecraft
APT35’s goals are clear: steal intellectual property that confers economic and scientific advantage. Unlike North Korea’s financially motivated ransomware groups, Iran’s operators prioritize acceleration – shortcutting years of R&D by stealing data from innovators.
Artifacts in the GitHub repository include:
- .NET and PowerShell reverse-proxy tooling
- Python webshell frameworks
- AV-evasion testing notes and campaign logs
These are mature, repeatable tools built for long-term access, not smash-and-grab attacks. Their operators move laterally, establish persistence, and quietly exfiltrate datasets — often from cloud or SaaS environments that house the most sensitive IP.
“If you’re Pfizer or Merck, you’re no longer just guarding against corporate espionage. You’re defending against a nation-state that runs like a startup.”
— Adam Koblentz, Reveal Security
Why Rules Aren’t Enough
Traditional detection models rely on static rules — IP blocks, signatures, and IOC-based alerts. But when an adversary authenticates with valid credentials, uses sanctioned apps, and operates during normal hours, there’s no rule to match.
Nation-state operators now behave like insiders. They log into Microsoft 365 or Google Workspace, access data through legitimate APIs, and mimic the workflows of real users. This is where behavioral analytics – driven by AI – changes the game. By understanding how users and entities normally behave, defenders can identify subtle deviations that traditional detections miss, such as:
- A lab researcher’s account accessing sensitive datasets from a new geography.
- A service account performing bulk downloads after months of inactivity.
- A new OAuth grant linking a SaaS app to an unknown external tenant.
Each of these events looks normal in isolation. Together, they form the behavioral fingerprint of compromise.
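The correlation idea can be sketched in a few lines of Python. This is a minimal illustration, not a real detection engine: the `Baseline` fields, event schema, scoring weights, and threshold are all hypothetical assumptions chosen to mirror the three examples above.

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """Hypothetical per-identity profile; field names are illustrative."""
    known_geos: set = field(default_factory=set)
    typical_daily_downloads: int = 0
    known_oauth_tenants: set = field(default_factory=set)

def anomaly_score(event: dict, baseline: Baseline) -> int:
    """Score one event: each weak signal adds a point."""
    score = 0
    # Access from a geography this identity has never used before.
    if event.get("geo") and event["geo"] not in baseline.known_geos:
        score += 1
    # Bulk download far above this identity's normal daily volume.
    if event.get("downloads", 0) > 10 * max(baseline.typical_daily_downloads, 1):
        score += 1
    # OAuth grant to a tenant never seen for this identity.
    if event.get("oauth_tenant") and event["oauth_tenant"] not in baseline.known_oauth_tenants:
        score += 1
    return score

def is_suspicious(events: list, baseline: Baseline, threshold: int = 2) -> bool:
    """Individually weak signals become a fingerprint when they co-occur."""
    return sum(anomaly_score(e, baseline) for e in events) >= threshold
```

Note how a single new-geography login stays below the threshold, while the same login combined with a bulk download and a new OAuth grant crosses it – the point is scoring the combination, not any one rule.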
The Assume-Breach Playbook
APT35’s leak underscores that persistence is inevitable. The goal is not to prevent every intrusion – it’s to detect the breach early, while adversaries are still staging data and before exfiltration.
A modern detection strategy should:
- Monitor the full post-authentication identity story — from initial foothold to persistence and data staging.
- Detect work-pattern drift — anomalous access times, new device types, and unusual resource combinations.
- Identify quiet exfiltration — slow, steady data transfer via legitimate services or dormant accounts.
This approach depends on analytics that learn from your environment rather than rely on static indicators.
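The "quiet exfiltration" case in particular resists spike-based alerting, since no single day looks abnormal. One way to sketch it, under the assumption that you have a per-identity series of daily transfer volumes, is to compare a recent window's mean against a longer historical baseline (the window sizes and sigma threshold here are arbitrary placeholders):

```python
from statistics import mean, stdev

def sustained_drift(daily_bytes: list, baseline_days: int = 30,
                    recent_days: int = 7, sigma: float = 2.0) -> bool:
    """Flag 'low and slow' exfiltration: no single-day spike, but the
    recent window's average sits well above the historical baseline."""
    if len(daily_bytes) < baseline_days + recent_days:
        return False  # not enough history to model normal behavior
    baseline = daily_bytes[-(baseline_days + recent_days):-recent_days]
    recent = daily_bytes[-recent_days:]
    mu, sd = mean(baseline), stdev(baseline)
    # Guard against a flat baseline, e.g. a long-dormant service account.
    sd = max(sd, 1.0)
    return mean(recent) > mu + sigma * sd
```

The flat-baseline guard matters for the dormant-account scenario: an account that transferred nothing for months has zero variance, so any sustained activity at all should register as drift.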
What Good Looks Like
An effective behavioral analytics approach combines AI-driven modeling with human expertise:
- Signal Collection: Aggregate activity across SaaS, IaaS, and identity providers.
- Behavioral Modeling: Learn normal usage patterns for each identity and resource.
- Anomaly Detection: Flag activity that deviates from expected behavior.
- Analyst Triage: Incorporate feedback loops to improve accuracy and reduce false positives.
The goal is to spot the “weird” – not just the known bad – and to do so fast enough to contain an attack before valuable research walks out the door.
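The four stages above can be wired together in a toy pipeline. This is a deliberately simplified sketch: real platforms model far richer features than a single numeric value, and the class, method names, and three-sigma threshold are invented for illustration.

```python
from collections import defaultdict
from statistics import mean, stdev

class BehaviorPipeline:
    """Toy sketch of collect -> model -> detect -> triage."""

    def __init__(self, sigma: float = 3.0):
        self.sigma = sigma
        self.history = defaultdict(list)  # signal collection, per identity
        self.dismissed = set()            # analyst feedback loop

    def ingest(self, identity: str, value: float) -> None:
        """Signal collection: record one observation (e.g. daily API calls)."""
        self.history[identity].append(value)

    def flag(self, identity: str, value: float) -> bool:
        """Behavioral modeling + anomaly detection: deviation from learned normal."""
        past = self.history[identity]
        if len(past) < 5 or identity in self.dismissed:
            return False  # too little history, or analyst-suppressed
        mu, sd = mean(past), max(stdev(past), 1.0)
        return abs(value - mu) > self.sigma * sd

    def triage(self, identity: str, false_positive: bool) -> None:
        """Analyst triage: feedback suppresses future alerts for this pattern."""
        if false_positive:
            self.dismissed.add(identity)
```

The feedback step is the part rule-based systems lack: each analyst verdict tunes what "normal" means for that identity, which is how false-positive rates come down over time.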
Getting Left of Boom
The Charming Kitten leak is a reminder that the boundary between espionage and enterprise is gone. Governments are targeting private businesses for their data, and that data lives in SaaS and cloud applications where traditional defenses are blind.
As attackers evolve, defenders must evolve faster. Static rules and perimeter alerts can’t keep pace with adversaries who operate like legitimate employees.
The future of defense is AI-driven behavioral visibility – understanding normal to recognize abnormal, across every identity and application. APT35’s operation shows us what we’re up against: a government-run cyber enterprise that behaves like a business.
To counter it, defenders must think like analysts, not auditors – and get left of boom.
Reveal Security enables organizations to detect identity-based threats in SaaS and cloud applications using AI-driven behavioral analytics. Learn how we help security teams uncover hidden compromises — before data theft begins.
Explore the Reveal Platform → https://www.reveal.security/platform