Cisco Talos has recently observed an increase in activity that is leveraging notification pipelines in popular collaboration platforms to deliver spam and phishing emails.
These emails are transmitted using the legitimate mail delivery infrastructure associated with GitHub and Jira, minimizing the likelihood that they will be blocked in transit to potential victims.
By taking advantage of the built-in notification functionality available within these platforms, adversaries can circumvent email security and monitoring solutions and deliver their messages to potential victims more reliably.
In most cases, these campaigns have been associated with phishing and credential harvesting activity, which is often a precursor to additional attacks once credentials have been compromised and/or initial access has been achieved.
During one campaign conducted on Feb. 17, 2026, approximately 2.89% of the emails observed being sent from GitHub were likely associated with this abuse activity.
Platform abuse, social engineering, and SaaS notification hijacking
Recent telemetry indicates an increase in threat actors leveraging the automated notification infrastructure of legitimate Software-as-a-Service (SaaS) platforms to facilitate social engineering campaigns. By embedding malicious lures within system-generated commit notifications, attackers bypass traditional reputation-based email security filters. This Platform-as-a-Proxy (PaaP) technique exploits the implicit trust organizations place in traffic originating from verified SaaS providers, effectively weaponizing legitimate infrastructure to bypass standard email authentication protocols. Talos’ analysis explores how attackers abuse the notification pipelines of platforms like GitHub and Atlassian to facilitate credential harvesting and social engineering.
The PaaP model
The core of this campaign relies on the abuse of SaaS features to generate emails. Because the emails are dispatched from the platform’s own infrastructure, they satisfy all standard authentication requirements (SPF, DKIM, and DMARC), effectively neutralizing the primary gatekeepers of modern email security. By decoupling the malicious intent from the technical infrastructure, attackers successfully deliver phishing content with a “seal of approval” that few security gateways are configured to challenge.
Anatomy of a GitHub campaign: Abusing automated notification pipelines
The GitHub vector is a pure “notification pipeline” abuse mechanism. Attackers create repositories and push commits with payloads embedded in the commit messages. The commit interface has two text fields: the first is a mandatory, single-line summary, where the user provides a high-level overview of the change. Attackers weaponize this field to craft the initial social engineering hook, ensuring the malicious lure is the most prominent element of the resulting automated notification. The second is an optional, extended description that allows for multi-line, detailed explanations. Attackers abuse this to place the primary scam content, such as fake billing details or fraudulent support numbers.
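As an illustration of how defenders might hunt for this pattern, the sketch below scores the commit summary and description fields against lure heuristics. The keyword list, the phone-number regex, and the threshold are assumptions for demonstration, not rules drawn from the campaign itself:

```python
import re

# Illustrative lure patterns only -- the keywords and the phone-number shape
# are assumptions based on the campaign described, not a production rule set.
LURE_PATTERNS = [
    re.compile(r"\binvoice\b", re.IGNORECASE),
    re.compile(r"\bbilling\b", re.IGNORECASE),
    re.compile(r"\bsubscription\b", re.IGNORECASE),
    re.compile(r"\+?\d[\d\-\s().]{8,}\d"),  # crude "support phone number" shape
]

def score_commit_message(summary: str, description: str = "") -> int:
    """Count lure patterns matched across both commit fields."""
    text = f"{summary}\n{description}"
    return sum(1 for pattern in LURE_PATTERNS if pattern.search(text))

def is_suspicious(summary: str, description: str = "", threshold: int = 2) -> bool:
    """Treat a commit as suspicious when several lure patterns co-occur."""
    return score_commit_message(summary, description) >= threshold
```

Because the lure must carry both the hook (summary) and the scam body (description), scoring the two fields together catches campaigns that split their content across them.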
Figure 1: Email header
Figure 2: The body of the message
By pushing a commit, the attacker triggers an automatic email notification, since GitHub’s system is configured to notify collaborators of repository activity. Because the content is generated by the platform’s own system, it avoids security flags. In this example, the details of the commit are followed by the scam message, with the subscription notice buried at the very bottom of the email.
Figure 3: List-Unsubscribe link
The chain of Received headers shows the message entering the system from “out-28[.]smtp[.]github[.]com” (IP “192[.]30[.]252[.]211”). This is a known legitimate and verified GitHub SMTP server.
Figure 4: Raw headers
The email contains a DKIM-Signature with “d=github[.]com”. This signature was successfully verified by the receiving server (“esa1[.]hc6633-79[.]iphmx[.]com”), proving that the email was sent by an authorized GitHub system and was not tampered with in transit. Telemetry collected over a five-day observation period indicates that 1.20% of the total traffic originating from “noreply[@]github[.]com” contained the “invoice” lure in the subject line. On the peak day of Feb. 17, 2026, this volume spiked to approximately 2.89% of the daily sample set.
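A minimal sketch of this kind of analysis using Python’s standard email parser: it checks the receiving server’s Authentication-Results header (the standard header defined in RFC 8601) for a passing github.com DKIM result, then looks for an “invoice”-style lure in the subject. The header values and lure terms below are illustrative:

```python
from email import message_from_string

def flag_authenticated_lure(raw_message: str,
                            lure_terms=("invoice", "billing", "subscription")) -> bool:
    """Flag mail that passed DKIM for github.com yet carries a lure subject.

    A dkim=pass result only proves GitHub's infrastructure sent the message
    untampered; it says nothing about the intent of the content."""
    msg = message_from_string(raw_message)
    auth = msg.get("Authentication-Results", "").lower()
    platform_pass = "dkim=pass" in auth and "github.com" in auth
    subject = (msg.get("Subject") or "").lower()
    return platform_pass and any(term in subject for term in lure_terms)
```

The point of the check is the combination: a valid platform signature plus off-profile subject matter is exactly the signal that authentication alone cannot express.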
Abusing workflow and invitation logic (Jira)
The Jira vector does not rely on a notification pipeline in the traditional sense. Jira notifications are expected in corporate environments; an email from Atlassian is rarely blocked, as it is often critical for internal project management and IT operations. The technique here is not a “pipeline” of activity, but an abuse of the collaborative invitation feature.
Attackers do not have access to modify the underlying HTML/CSS templates of Atlassian’s emails. Instead, they abuse the data fields that the platform injects into those templates. When an attacker creates a Jira Service Management project, they are given several fields to configure. When the platform triggers an automated “Customer Invite” or “Service Desk” notification, it automatically wraps the attacker’s input — such as a fraudulent project name or a deceptive welcome message — within its own cryptographically signed, trusted email template. By utilizing a trusted delivery pipeline, the attacker successfully obscures the origin and intent of the malicious message.
In this example, the attacker sets the “Project Name” to “Argenta.” When the platform sends an automated invite, the email subject and body dynamically pull the project name. The recipient sees “Argenta” as the sender or the subject, which the platform has verified as the project name.
Figure 5: Email header
The attacker placed their malicious lure subject into the “Welcome Message” or “Project Description” field. They use the “Invite Customers” feature and input the victim’s email address. Atlassian’s backend then generates the email. Because the system is designed to be a “Service Desk,” the email is formatted to look like a professional, automated system alert. At the bottom of the phishing email, we can see the branding footer that Jira automatically appends to email notifications.
Figure 6: The body of the message and the footer branding
Strategic implications
The trust paradox is now the primary driver of successful phishing and scamming. GitHub is abused primarily for its high developer reputation, where attackers rely on the platform’s status as an official source of automated alerts. In contrast, Jira is abused for its business-critical integration; because it is a trusted enterprise tool, attackers use it to mimic internal IT and helpdesk alerts, which employees are pre-conditioned to treat as urgent and legitimate. In both cases, attackers are using the platform’s own reputation to launder their malicious content.
How do we fundamentally change the trust model?
Defending against PaaP attacks requires moving beyond the binary “trusted vs. untrusted” approach. Because attackers weaponize the platform’s own infrastructure to pass authentication protocols (SPF/DKIM/DMARC), the gateway is effectively blind to the malicious intent. Defenders should transition to a Zero-Trust architecture that treats SaaS notifications as untrusted traffic until verified against platform-level telemetry. This means moving beyond the limitations of the email gateway and adopting a fundamental paradigm shift: transitioning from reactive, signature-based filtering toward a proactive, API-driven architecture that validates intent before a notification ever reaches the user.
Identity and instance-level verification: We must move from “global domain trust” to “instance-level authorization.” Security teams should restrict notification acceptance to specific sender addresses or IP ranges associated with their organization’s verified SaaS instances. Furthermore, by implementing Identity-Contextualization, notifications must be cross-referenced against the organization’s internal SaaS directory. If a notification originates from an external or unverified account — even one hosted on a trusted platform like GitHub — it should be automatically quarantined. Verification is no longer about the server sending the email; it is about the identity of the user triggering the action.
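A sketch of what instance-level authorization could look like. The directory of verified accounts and the decision values are hypothetical; in practice the triggering identity would be resolved from notification metadata or the platform’s audit log:

```python
# Hypothetical directory of identities belonging to the organization's own,
# verified SaaS instances. Anything outside it is quarantined, even when the
# sending platform itself is trusted.
VERIFIED_INSTANCE_ACCOUNTS = {
    "github.com": {"ci-bot@example.com", "dev-team@example.com"},
    "atlassian.com": {"helpdesk@example.com"},
}

def authorize_notification(platform: str, triggering_identity: str) -> str:
    """Deliver only notifications triggered by identities in the verified
    instance directory; quarantine everything else."""
    allowed = VERIFIED_INSTANCE_ACCOUNTS.get(platform, set())
    return "deliver" if triggering_identity in allowed else "quarantine"
```

The key design point is the default: an unknown platform or an unknown account yields "quarantine", so trust must be configured explicitly rather than inherited from the sending domain.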
Upstream API-level monitoring: The most effective way to disrupt PaaP campaigns is to detect them before the notification is ever sent. Attackers must perform “precursor activities” within the platform — such as creating repositories, configuring project names, or mass-inviting users — to set the stage for their attack. By ingesting metadata from SaaS APIs (e.g., GitHub or Atlassian audit logs) into a SIEM/SOAR environment, security teams can identify these anomalous events in real time. Detecting a “Project Creation” event that deviates from established naming conventions, originates from a country where the receiving organization has no employees, or occurs outside of business hours allows for the preemptive suspension of the malicious account, neutralizing the threat at the source. Instead of waiting for a phishing email to arrive in an inbox, defenders are watching the attacker’s movements inside the platform as they set up the attack.
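The audit-log heuristics described above might be sketched as follows; the naming convention, country set, and business hours are placeholder baselines that an organization would tune to its own profile:

```python
import re
from datetime import datetime

# Placeholder baselines for illustration -- adapt the naming convention,
# expected countries, and business hours to the organization's own profile.
NAMING_CONVENTION = re.compile(r"^[a-z]+(-[a-z0-9]+)*$")  # e.g. "payments-api"
EXPECTED_COUNTRIES = {"US", "DE"}
BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time

def is_anomalous_project_creation(event: dict) -> bool:
    """Flag a 'project created' audit-log event that deviates from baseline."""
    bad_name = not NAMING_CONVENTION.match(event.get("project_name", ""))
    bad_geo = event.get("country") not in EXPECTED_COUNTRIES
    off_hours = datetime.fromisoformat(event["timestamp"]).hour not in BUSINESS_HOURS
    return bad_name or bad_geo or off_hours
```

A lure project name such as “Argenta” fails the naming convention immediately, before any invitation email is ever generated, which is the whole point of moving detection upstream.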
Semantic intent and behavioral profiling: We must replace simple keyword matching with Business Logic Profiling. Every sanctioned SaaS tool has a functional “Communication Baseline.” GitHub is for code collaboration; Jira is for project management. By defining these baselines, security teams can detect “semantic discontinuity”: cases in which the content of a notification (e.g., urgent financial billing) is incongruent with the platform’s primary utility. Any notification that deviates from the expected functional profile should trigger an automated “Suspicious” banner or be routed for manual review, regardless of its technical validity.
Mitigating cognitive automation fatigue: PaaP attacks exploit “automation fatigue,” where users are conditioned to trust system-generated alerts. To break this cycle, organizations can introduce intentional friction. For high-risk SaaS interactions, such as new project invitations or requests for sensitive data, security policies should mandate out-of-band verification. By requiring a platform-native verification code or forcing the user to navigate directly to the official portal rather than clicking a link, we remove the “reflexive trust” that attackers rely on. This ensures that the platform’s “seal of approval” is validated by a deliberate human action.
Automated takedown orchestration: Finally, the cost of attack must be increased. Security teams should integrate automated workflows that report malicious repositories or projects directly to the provider’s Trust and Safety teams. By accelerating the detection-to-takedown lifecycle, we force adversaries to constantly churn their infrastructure, making the PaaP model technically and economically unsustainable.
By adopting this framework, the security posture evolves from “Is this email authenticated?” to “Is this platform activity authorized and consistent with our business logic?” This shift effectively strips the trusted status that attackers exploit, forcing them to operate within an environment where their actions are monitored, profiled, and verified at every stage of the pipeline.
Acknowledgements
Special thanks to the Talos Email Security Research Team — Dev Shah, Lucimara Borges, Bruno Antonino, Eden Avivi, Marina Barsegyan, Barbara Turino Jones, Doaa Osman, Yosuke Okazaki, and Said Toure — for their collaborative effort in identifying and mitigating these platform abuse vectors.
Indicators of compromise (IOCs)
IOCs for this threat can be found on our GitHub repository here.
For years, macOS environments carried an aura of relative safety. Not immunity, but lower priority in the threat landscape. That perception has aged about as well as an unpatched server.
The reality in 2026 is very different. Apple devices now make up a significant share of corporate endpoints. And they sit in the hands of the people attackers most want to reach. Engineers, product leads, finance teams, and the C-suite are disproportionately Mac users. They have access to source code repositories, financial systems, privileged cloud credentials, and sensitive business data.
Key Takeaways
macOS is no longer a low-risk environment. Engineering, product, and executive teams are disproportionately Mac users with privileged access, making them high-value targets.
A single compromised Mac can be an enterprise-wide breach entry point. Stolen session tokens, Keychain credentials, and SaaS cookies harvested from one device can grant attackers persistent access to cloud environments and internal systems without triggering authentication alerts.
The ClickFix technique has evolved. Attackers now mimic and abuse legitimate AI platforms like Claude Code and Grok, exploiting the trust employees place in these tools to bypass traditional security controls entirely.
Automated sandboxes miss macOS threats by design. Without interactive analysis, user-dependent execution paths are never triggered, and the threat goes undetected.
ANY.RUN’s macOS sandbox closes a years-long visibility gap. Security teams can now investigate Apple-targeted threats inside the same unified workflow used for Windows, Linux, and Android — eliminating the context-switching and tooling fragmentation that slows incident response.
Why macOS Threat Analysis Now Belongs in Your Security Stack
Static or automated scanners often miss the full picture because many macOS threats stay dormant until a user enters a password, approves a dialog, or interacts with the system. This creates dangerous visibility gaps, longer dwell times, and slower incident response in mixed Windows/macOS environments.
Interactive sandbox analysis lets security teams safely detonate suspicious files or URLs, observe real-time behavior, and simulate genuine user actions, revealing hidden intent, data exfiltration paths, and attacker capabilities that would otherwise remain invisible.
Moonlock’s Mac Security Survey 2025 found that 66% of Mac users have encountered at least one cyber threat within the past year.
Over 80 countries affected by major Mac stealer malware campaigns.
A 67% increase in registered macOS backdoor variants in 2025.
The Use Case: A macOS ClickFix Campaign Targeting AI Users
ANY.RUN recently uncovered a sophisticated macOS-specific ClickFix campaign aimed squarely at users of popular AI development tools — including Claude Code, Grok, n8n, NotebookLM, Gemini CLI, OpenClaw, and Cursor.
Multi-OS attack: malicious terminal commands for various platforms
Attackers bought Google ads that redirected victims to convincing fake documentation pages mimicking legitimate AI platforms (Claude Code in this case). Once there, a ClickFix-style social engineering prompt tricked users into running a terminal command.
macOS terminal command downloading the malicious script
This downloaded an obfuscated script that installed the AMOS Stealer malware.
ZIP archive containing the stolen data
AMOS escalated to root privileges, swept browser credentials and session cookies from Chrome, Safari, and Firefox, emptied cryptocurrency wallet applications, harvested saved passwords from the macOS Keychain, collected files from the Desktop, Documents, and Downloads folders, and installed a persistent backdoor that restarted itself within seconds if terminated.
Backdoor C2 registration request
This backdoor upgraded from basic command polling to a fully interactive reverse shell over WebSocket with PTY support, giving attackers real-time, hands-on control of the compromised Mac.
To validate your detection coverage, research the campaign’s IOCs collected in our X post and subscribe to ANY.RUN via X.
Why This Attack Works
This campaign represents a fundamental shift in how risk reaches organizations. The delivery mechanism was not a phishing email or a malicious attachment — two threat vectors that corporate security infrastructure is built to intercept. It was a search engine result, a paid advertisement, and a trusted AI interface. Employees were not behaving carelessly; they were using the same research tools they use every day to get work done.
AI workflows normalize experimentation: users expect to copy commands, test tools, and troubleshoot issues. The attack blends into that behavior.
macOS users often operate with elevated trust: there is still a lingering perception that macOS is less targeted, which lowers suspicion.
Security tools are not built for “user-driven execution”: when a user intentionally runs a command, many controls interpret it as legitimate activity.
In short, the attack doesn’t break the rules. It borrows them.
This type of campaign doesn’t rely on technical failure, but on human-process alignment:
Compromise without exploitation: traditional vulnerability management offers no protection here. The attack path is behavioral.
High-value users are directly exposed: the targets of AI tools are often the same people with access to sensitive systems and data.
Detection timelines increase: without clear malicious signatures, identifying the attack depends on recognizing suspicious behavior patterns.
Incident scope can expand quickly: once access is established, attackers can pivot into internal systems, especially in loosely governed tool environments.
Traditional security tools largely failed to detect this campaign because the initial payload (a shell command pasted from a legitimate website) produced no files, no installer, and no warning dialogs. Understanding and blocking the full attack chain required behavioral analysis in an environment that could replicate what a real macOS user would experience. That is precisely what interactive sandbox analysis provides.
ANY.RUN Now Covers the Full Enterprise Attack Surface
Recognizing that modern enterprises are not single-OS environments, ANY.RUN has extended its Interactive Sandbox to include macOS virtual machines, now available in beta for Enterprise Suite customers. This brings the platform to four major operating systems (Windows, Linux, Android, and macOS) within a single unified investigation workflow.
When a macOS-specific file surfaces alongside Windows samples in a phishing campaign, analysts no longer need to switch context, stand up separate infrastructure, or route the sample to a different team. Cross-platform campaigns can be investigated as a whole.
Interactive analysis catches what automated tools miss. A critical characteristic of many macOS threats, including the AMOS campaign described above, is that they are designed not to trigger until a user takes a specific action.
ANY.RUN‘s interactive environment allows analysts to replicate genuine user actions during live sandbox execution. The result is that deceptive authentication dialogs, staged execution chains, and social engineering lures become visible and documentable, rather than hidden behind an execution condition the sandbox never triggered.
In one documented analysis of the Miolab Stealer, a macOS-targeting infostealer, the sandbox surfaced the malware’s fake authentication prompt, the AppleScript routine used to collect files from user directories, and the outbound data transfer via a curl POST request, providing a complete behavioral picture of the attack chain in minutes.
The practical impact of adding macOS to the sandbox workflow is measurable at multiple levels:
Security teams can now validate suspicious files and URLs targeting Mac endpoints within minutes using behavioral analysis, rather than escalating to manual investigation or accepting the risk of unconfirmed alerts. The reduction in triage time directly compresses Mean Time to Detect and Mean Time to Respond, both metrics that translate into breach risk and regulatory exposure.
For organizations where macOS represents a significant portion of the device fleet, this closes a visibility gap that has existed for years. Attackers have been aware of that gap and have been exploiting it. The tools to close it now exist.
For MSSPs managing diverse client environments, the ability to investigate macOS threats within the same platform used for Windows and Linux analysis means consistent SLAs, fewer escalation paths, and the capacity to handle cross-platform incidents without specialized personnel for each OS.
The campaign that weaponized AI platforms to deliver credential-stealing malware to macOS users is a clear indicator of where threat actors are investing their development effort. AI services trust, search engine visibility, and macOS endpoints are converging into a high-value attack surface: one that is actively being exploited against enterprises today.
ANY.RUN’s expansion of its Interactive Sandbox to macOS gives security leaders a direct answer to a question that has grown more urgent with every major Apple-targeted campaign: when a threat targets our Mac users, can we actually see what it does? That answer is now yes.
The capability is available in beta for Enterprise Suite customers. For organizations running mixed-OS environments — which today means nearly every enterprise — it represents a concrete step toward closing the gap between the threats targeting their users and the tools available to analyze them.
About ANY.RUN
ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, helps security teams investigate threats faster and with greater clarity across modern enterprise environments.
It allows teams to safely execute suspicious files and URLs, observe real behavior in an Interactive Sandbox, enrich indicators with immediate context through TI Lookup, and monitor emerging malicious infrastructure using Threat Intelligence Feeds. Together, these capabilities help reduce investigation uncertainty, accelerate triage, and limit unnecessary escalations across the SOC.
ANY.RUN is trusted by thousands of organizations worldwide and meets enterprise security and compliance expectations. It is SOC 2 Type II certified, demonstrating its commitment to protecting customer data and maintaining strong security controls.
FAQ
Is macOS really at risk in enterprise environments, or is this overstated?
The volume and sophistication of macOS-targeted malware has grown substantially since 2023. Campaigns like the one described in this article are not isolated incidents; they reflect a sustained, commercially organized effort targeting Apple endpoints.
Why couldn’t existing security tools detect the AI-abusing ClickFix campaign?
Because the initial infection vector produced nothing that traditional tools are built to flag. Signature-based detection and perimeter controls had nothing to intercept. Only behavioral analysis, observing what happens after that command executes, can surface the full attack chain.
What is the difference between interactive and automated sandbox analysis for macOS threats?
An automated sandbox executes a sample and records what it does without any user interaction. Many macOS threats are specifically engineered to detect this: they stay dormant, exit cleanly, or display nothing until a user takes a specific action — entering a password, clicking a dialog, or running a terminal command. Interactive analysis allows an analyst to replicate those real user actions inside the sandbox, triggering conditional execution paths that automated tools never reach.
What should organizations do immediately to reduce exposure to this type of attack?
Three steps deliver the most immediate risk reduction. First, ensure your SOC has the capability to analyze macOS-specific samples behaviorally — not just flag them as unreviewed. Second, implement user education specifically around AI platform trust: employees need to understand that content appearing on ChatGPT or Grok is not inherently safe, and that no legitimate service will ask them to paste commands into Terminal. Third, treat macOS endpoints with the same endpoint detection, logging, and incident response coverage you apply to Windows systems. Coverage parity is the baseline.
Is ANY.RUN’s macOS sandbox available to all customers?
The macOS virtual machine environment is currently available in beta for Enterprise Suite users. Organizations interested in evaluating macOS threat analysis capabilities as part of their existing or planned ANY.RUN deployment should contact the ANY.RUN team directly to discuss access and roadmap.
ClickFix Meets AI: A Multi-Platform Attack Targeting macOS in the Wild (published 2026-04-07)
In the span of just a few weeks, we have observed a dizzying array of major supply chain attacks. Prominent examples include the malicious modification of Axios, a popular HTTP client library for JavaScript, as well as cascading compromises from TeamPCP, a “chaos-as-a-service” group that injected malicious code into hijacked GitHub repositories for open-source projects, including Trivy, an open-source security scanner.
The impact of these supply chain attacks can be vast. Axios receives 100 million downloads weekly, and innumerable organizations rely on the frameworks and libraries compromised by TeamPCP. The headache they pose to organizations and their security personnel is considerable as well; affected utilities can be integrated so deeply that they may be difficult to fully catalog, let alone remediate.
Although the timing, scale, and severity of these attacks can be shocking, this is not a new phenomenon. The supply chain has remained an attractive target for some time because of its fragility and the fact that a successful compromise can lead to countless additional downstream victims.
Findings from the recently published Talos 2025 Year in Review illustrate these long-standing trends. Nearly 25% of the top 100 targeted vulnerabilities we observed in 2025 affect widely used frameworks and libraries. Digging deeper into the list reveals additional insights. The React2Shell vulnerability affecting React Server Components became the top-targeted vulnerability of 2025 despite being disclosed in December, reflecting the speed at which these supply chain attacks can reach massive scale. The presence of Log4j vulnerabilities shows how deeply embedded these utilities can be and therefore how difficult it can be to reduce the attack surface. Although these particular examples represent extant vulnerabilities that can be weaponized by numerous adversaries versus a deliberate attack carried out by a single adversary, they show how impactful and disruptive threats to the supply chain can be. Follow-on attacks can range from ransomware to espionage, which is reflective of the broad swath of adversaries that carry them out — from sophisticated state-sponsored groups to teenage cyber criminals.
If we are all building on such a shaky foundation, what can we do to stay safe? After all, it certainly seems dire when a tool such as Trivy, which we could normally use to scan for supply chain vulnerabilities, becomes compromised itself. But there are concrete steps we can take to improve our security posture.
As highlighted in the Year in Review, protecting identity is key. This includes securing CI/CD pipelines to prevent these types of compromises from occurring in the first place, as well as limiting the impact and lateral movement of an adversary should they obtain access to a downstream victim.
In addition, organizations must try to the best of their abilities to inventory the software libraries and frameworks they employ, stay informed of security incidents, and respond rapidly to implement patching and other mitigations.
Just as supply chain attacks are evergreen, so too is the efficacy of security fundamentals, such as segmentation, robust logging, multi-factor authentication (MFA), and the implementation of emergency response plans.
As trust continues to break down, the only viable solution may be to double down on vigilance. Since this recent spate of attacks represents a trend that will likely only grow in intensity and breadth, the time for action and planning is now.
Coverage
Below is a sample of some of the recent coverage we offer to protect against these threats:
ClamAV: Txt.Trojan.TeamPCP-10059839-0
Behavioral Protections: LiteLLM Supply Chain Compromise – alerts during installation of compromised packages
Do not get high(jacked) off your own supply (chain) (published 2026-04-03)
Cisco Talos is actively investigating the March 31, 2026 supply chain attack on the official Axios node package manager (npm) package during which two malicious versions (v1.14.1 and v0.30.4) were deployed. Axios is one of the more popular JavaScript libraries with as many as 100 million downloads per week.
Axios is a widely deployed HTTP client library for JavaScript that simplifies HTTP requests, specifically for REST endpoints. The malicious packages were only available for approximately three hours, but if they were downloaded, Talos strongly encourages rolling all affected deployments back to the previous known-safe versions (v1.14.0 or v0.30.3). Additionally, Talos strongly recommends users and administrators investigate any systems that downloaded the malicious package for follow-on payloads from actor-controlled infrastructure.
Details of supply chain attack
The primary modification of the packages introduced a fake runtime dependency (plain-crypto-js) that executes via a post-install script without any user interaction required. Upon execution, the dependency reaches out to actor-controlled infrastructure (142[.]11[.]206[.]73) with operating system information to deliver a platform-specific payload for Linux, macOS, or Windows.
On macOS, a binary, “com.apple.act.mond”, is downloaded and run using zsh. Windows is delivered a ps1 file, which copies the legitimate PowerShell executable to “%PROGRAMDATA%\wt.exe” and executes the downloaded ps1 file with hidden and execution-policy-bypass flags. On Linux, a Python backdoor is downloaded and executed. The payload is a remote access trojan (RAT) with the typical associated capabilities, allowing the actor to gather information and run additional payloads.
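As a starting point for that investigation, the sketch below (a hypothetical `audit_lockfile` helper, assuming npm’s v2/v3 package-lock.json layout with a top-level "packages" map) flags lockfiles that pin the malicious axios versions or contain the fake dependency:

```python
import json

MALICIOUS_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}  # versions named in the advisory
FAKE_DEPENDENCY = "plain-crypto-js"

def audit_lockfile(lockfile_text: str) -> list:
    """Return findings for a package-lock.json possibly touched by the compromise.

    Assumes npm's v2/v3 lockfile layout with a top-level "packages" map;
    other lockfile formats (yarn, pnpm) would need their own parsers."""
    findings = []
    lock = json.loads(lockfile_text)
    for path, meta in lock.get("packages", {}).items():
        # Package name is the segment after the last "node_modules/" in the path.
        name = path.rsplit("node_modules/", 1)[-1]
        if name == "axios" and meta.get("version") in MALICIOUS_AXIOS_VERSIONS:
            findings.append("malicious axios version pinned: " + meta["version"])
        if name == FAKE_DEPENDENCY:
            findings.append("fake runtime dependency present: " + FAKE_DEPENDENCY)
    return findings
```

A clean lockfile yields an empty list; any finding means the host should be treated as potentially compromised and checked for the follow-on payloads described above.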
Impact
As with most supply chain attacks, the full impact will likely take some time to uncover. The threat actors exfiltrated credentials along with remote management capabilities. Therefore, Talos strongly recommends organizations treat any credentials present on their systems with the malicious package as compromised and begin the process of rotating them as quickly as possible. Actors are likely to try to weaponize access as quickly as possible to maximize financial gain.
Supply chain attacks tend to have unexpected downstream impacts, as these packages are widely used across a variety of applications, and the compromised credentials can be leveraged in follow-on attacks. For additional context, about 25% of the top 100 vulnerabilities in the Cisco Talos 2025 Year in Review affect widely used frameworks and libraries, highlighting the risk of supply chain-style attacks.
Talos will continue to monitor any follow-on impacts from this supply chain attack in the days and weeks ahead, as well as any additional indicators that are uncovered as a result of our ongoing investigation.
As we discussed in a previous post, modern software development is practically unthinkable without open-source components, but in recent years the associated risks have become increasingly diverse, complex, and numerous. When vulnerabilities affect a company’s infrastructure and code faster than they can be remediated, when vulnerability data is unreliable and incomplete, and when malware may be lurking within popular components, it’s not enough to simply scan version numbers and toss fix-it tickets at the IT team. Vulnerability management must be expanded to cover software download policies, guardrails for AI assistants, and the entire software build pipeline.
A trusted pool of open-source components
The main part of the solution is to prevent the use of vulnerable and malicious code. The following measures should be implemented:
Having an internal repository of artifacts. The sole source of components for internal development needs to be a unified repository to which components are admitted only after a series of checks.
Performing rigorous component screening. Screening should cover: known versions of the component, known vulnerable and malicious versions, the publication date, the activity history, and the reputation of the package and its authors. Scanning the entire contents of the package (including build instructions, test cases, and other auxiliary data) is mandatory. To filter the registry during ingestion, use specialized open-source scanners or a comprehensive cloud workload security solution.
Running dependency pinning. Build processes, AI tools, and developers mustn’t use templates (such as “latest”) when specifying versions. Project builds need to be based on verified versions. At the same time, pinned dependencies must be regularly updated to the latest verified versions that maintain compatibility and are free of known vulnerabilities. This significantly reduces the risk of supply chain attacks through the compromise of a known package.
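A minimal illustration of what a pinning gate might look for, assuming npm-style version specifiers (the regex is a deliberately crude stand-in for a real version-range parser):

```python
import re

# Version specifiers that "float" rather than pin an exact version.
# Real resolvers recognize many more forms; this is illustrative only.
FLOATING = re.compile(r"(\^|~|>=|>|\*|latest)")

def find_unpinned(dependencies: dict[str, str]) -> list[str]:
    """Return dependency names whose version spec is not an exact pin."""
    return [name for name, spec in dependencies.items()
            if spec == "" or FLOATING.search(spec)]

deps = {"left-pad": "1.3.0", "lodash": "^4.17.21", "axios": "latest"}
print(find_unpinned(deps))  # → ['lodash', 'axios']
```

A check like this can run in CI as a gate, failing the build before a floating specifier ever reaches the artifact registry.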
Improving vulnerability data
To identify vulnerabilities more effectively and prioritize them properly, an organization needs to establish several IT and security processes:
Vulnerability data enrichment. Depending on the organization’s needs, this is needed either to enrich information by combining data from NVD, EUVD, BDU, GitHub Advisory Database, and osv.dev, or to purchase a commercial vulnerability intelligence feed where the data is already aggregated and enriched. In either case, it’s worth additionally monitoring threat intelligence feeds to track real-world exploitation trends and gain an insight into the profile of attackers targeting specific vulnerabilities. Kaspersky provides a specialized data feed specifically focused on open-source components.
In-depth software composition analysis. Specialized software composition analysis (SCA) tools allow for the correct navigation of the dependency chain in open-source code to fully inventory the libraries being used, and discover outdated or unsupported components. Data on healthy components also comes in handy to enrich the artifact registry.
Identifying abandonware. Even if a component isn’t formally vulnerable and hasn’t been officially declared unsupported, the scanning process should flag components that haven’t received updates for more than a year. These warrant separate analysis and potential replacement, much like EOL components.
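The one-year staleness rule from the last point can be sketched in a few lines, assuming registry metadata that reports each package's latest release date (names and dates below are illustrative):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)  # the one-year threshold from the text

def flag_abandonware(packages: dict[str, str], today: datetime) -> list[str]:
    """Return packages whose latest release is older than a year.
    Input maps package name -> ISO date of its most recent release,
    as reported by the registry's metadata."""
    return [name for name, last in packages.items()
            if today - datetime.fromisoformat(last) > STALE_AFTER]

today = datetime(2026, 4, 1)
meta = {"requests": "2026-01-10", "oldlib": "2023-06-02"}
print(flag_abandonware(meta, today))  # → ['oldlib']
```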
Securing AI code and AI agents
The activities of AI systems used in coding must be wrapped in a comprehensive set of security measures — from input data filtering to user training:
Restrictions on dependency recommendations. Configure the development environment to make sure that AI agents and assistants can only reference components and libraries from the trusted artifact registry. If these don’t contain the right tools, the model should trigger a request to include the dependency in the registry, rather than pulling something from PyPI that simply matches the description.
Filter model outputs. Despite these restrictions, anything generated by the model must also be verified to ensure the AI code doesn’t contain outdated, unsupported, vulnerable, or made-up dependencies. This check should be integrated directly into the code acceptance process or the build preparation stage. It doesn’t replace the traditional static analysis process: SAST tools must still be embedded in the CI/CD pipeline.
Developer training. Developers, along with IT and security teams, must be intimately familiar with the characteristics of AI systems, their operating principles, and common errors. To achieve this, employees should complete a specialized training course tailored to their specific roles.
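The registry-restriction idea above reduces to a simple policy gate. A minimal sketch, with a static set standing in for a query against the trusted artifact registry:

```python
# A static set stands in for the trusted internal artifact registry,
# which would normally be queried over its API.
TRUSTED = {"requests", "numpy", "cryptography"}

def vet_suggestion(package: str) -> str:
    """Decide what to do with a dependency an AI assistant proposed."""
    if package in TRUSTED:
        return "allow"
    # Not in the registry: open an inclusion request instead of
    # silently installing from the public index.
    return "request-review"

print(vet_suggestion("numpy"))            # → allow
print(vet_suggestion("plain-crypto-js"))  # → request-review
```

The same gate belongs in the build pipeline, so that even a dependency added manually (or hallucinated by a model) cannot resolve against a public index.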
Systematic removal of EOL components
If a company’s systems utilize outdated open-source components, a systematic, consistent approach to addressing their vulnerabilities should be taken. The primary methods include:
Migration. This is the most organizationally complex and expensive method, involving the total replacement of a component followed by the adaptation, rewriting, or replacement of the applications built upon it. Deciding on a migration is especially daunting when it demands a massive overhaul of all internal code, which frequently happens with core components: there is no easy way to migrate away from Node.js 14 or Python 2.
Long-term support (LTS). A dedicated support-services market exists for large-scale legacy projects. Sometimes this involves a fork of the legacy system maintained by third-party developers; in other cases, specialized teams backport patches that fix specific vulnerabilities into older, unsupported versions. Transitioning to LTS generally requires ongoing support costs, but this can still be more cost-effective than a full migration in many cases.
Security, IT, and business must work together to choose one of these paths for every documented EOL or abandoned component, and record the chosen path in the company’s asset registries and SBOMs.
Risk-based open-source vulnerability management
All of the measures listed above reduce the volume of vulnerable software and components entering the organization, and simplify the detection and remediation of flaws. Despite this, it’s impossible to eliminate every single defect: the number of applications and components is simply growing too fast.
Therefore, prioritizing vulnerabilities based on real-world risk remains essential. The risk assessment model must be expanded to account for the characteristics of open source, answering the following questions:
Is the vulnerable code branch actually executed in the organization’s environment? A reachability analysis for discovered vulnerabilities should be performed. Many defective code snippets are never actually run within the organization’s specific implementation, making the vulnerability impossible to exploit. Certain SCA solutions can perform this analysis. This same process permits evaluating an alternative scenario: what happens if the vulnerable procedures or components are removed from the project entirely? Sometimes, this method of remediation proves to be surprisingly painless.
Is the defect being exploited in real-world attacks? Is a PoC available? The answers to these questions are part of standard prioritization frameworks like EPSS, but tracking must be conducted across a much broader set of intelligence sources.
Has cybercriminal activity been reported in this dependency registry, or in related and similar components? These are additional factors for prioritization.
Considering these factors allows the team to allocate resources effectively and remediate the most dangerous defects first.
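As a crude illustration of the reachability question, the sketch below checks whether a single Python module ever calls a vulnerable function by name. Real SCA reachability analysis traverses the full call graph across dependencies; this single-file check is only meant to show the idea:

```python
import ast

def calls_function(source: str, vulnerable_name: str) -> bool:
    """Crude reachability check: does this module ever call the named
    function, either directly (f()) or as an attribute (mod.f())?"""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            f = node.func
            name = f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", "")
            if name == vulnerable_name:
                return True
    return False

app = "import yaml\ncfg = yaml.safe_load(open('c.yml'))\n"
print(calls_function(app, "load"))       # → False: vulnerable branch unused
print(calls_function(app, "safe_load"))  # → True
```

When a check like this shows the vulnerable procedure is never invoked, the same analysis answers the alternative question from the text: whether the procedure can simply be removed from the project.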
Transparency is the new black
The security bar for open-source software is only going to keep on rising. Companies developing applications — even for internal use — will face regulatory pressures demanding documented and verifiable cybersecurity within their systems. According to the estimates of Sonatype experts, 90% of companies globally already fall under one or more requirements to provide evidence of the reliability of the software they use; therefore, the experts deem transparency “the currency of software supply chain security”.
By controlling the use of open-source components and applications, enriching threat intelligence, and strictly monitoring AI-driven development systems, organizations can introduce the innovations the business craves — all while clearing the high bar set by regulators and customers alike.
Managing open-source vulnerabilities | Kaspersky official blog (2026-04-03)
It used to be that only specialized software houses and tech giants had to lose sleep over open-source vulnerabilities and supply chain attacks. But times have changed. Today, even small businesses are running their own development shops, making the problem relevant for everyone. At every second company, internal IT teams are busy writing code, configuring integrations, and automating workflows, even if the core business has absolutely nothing to do with software. It’s what modern business efficiency demands. However, the byproduct is a new breed of software vulnerabilities: the kind that are far more complicated to fix than just installing the latest Windows update.
Modern software development is inseparable from open-source components. However, the associated risks have proliferated in recent years, increasing in both variety and sophistication. We’re seeing malicious code injected into popular repositories, fragmented and flawed vulnerability data, systematic use of outdated, vulnerable components, and increasingly complex dependency chains.
The open-source vulnerability data shortage
Even if your organization has a rock-solid vulnerability management process for third-party commercial software, you’ll find that open-source code requires a complete overhaul of that process. The most widely used public databases are often incomplete, inaccurate, or just plain slow to get updates when it comes to open source. This turns vulnerability prioritization into a guessing game. No amount of automation can help you if your baseline data is full of holes.
According to data from Sonatype, about 65% of open-source vulnerabilities assigned a CVE ID lack a severity score (CVSS) in the NVD — the most widely used vulnerability knowledge base. Of those unscored vulnerabilities, nearly 46% would actually be classified as High if properly analyzed.
Even when a CVSS score is available, different sources only agree on the severity about 55% of the time. One database might flag a vulnerability as Critical, while another assigns a Medium score to it. More detailed metadata like affected package versions is often riddled with errors and inconsistencies too. Your vulnerability scanners that compare software versions end up crying wolf with false positives, or falsely giving you a clean bill of health.
The deficit in vulnerability data is growing, and the reporting process is slowing down. Over the past five years, the total number of CVEs has doubled, but the number of CVEs lacking a severity score has exploded by a factor of 37. According to Tenable, by 2025, public proof-of-concept (PoC) exploit code was typically available within a week of a vulnerability’s discovery, but getting that same vulnerability listed in the NVD took an average of 15 days. Enrichment processes, such as assigning a CVSS score, are even slower — Sonatype in the same study estimates that the median time to assign a CVSS score is 41 days, with some defects remaining unrated for up to a year.
The legacy open-source code problem
Libraries, applications, and services that are no longer maintained — either being abandoned or having long reached their official end of life (EOL) — can be found in 5 to 15% of corporate projects, according to HeroDevs. Across five popular open-source code registries, there are at least 81 000 packages that contain known vulnerabilities but belong to outdated, unsupported versions. These packages will never see official patches. This “legacy baggage” accounts for about 10% of packages in Maven Central and PyPI, and a staggering 25% in npm.
Using this kind of open-source code breaks the standard patch management lifecycle: you can’t update, automatically or manually, a dependency that is no longer supported. Furthermore, when EOL versions are omitted from official vulnerability bulletins, security scanners may categorize them as “not affected” by a defect and ignore them.
A prime example of this is Log4Shell, the critical (CVSS 10) vulnerability in the popular Log4j library discovered back in 2021. The vulnerable version accounted for 40 million out of 300 million Log4j downloads in 2025. Keep in mind that we’re talking about one of the most infamous and widely reported vulnerabilities in history — one that was actively exploited, patched by the developer, and addressed in every major downstream product. The situation for less publicized defects is significantly worse.
Compounding this issue is the visibility gap. Many organizations lack the tools necessary to map out a complete dependency tree or gain full visibility into the specific packages and versions embedded within their software stack. As a result, these outdated components often remain invisible, never even making it into the remediation queue.
Malware in open-source registries
Attacks involving infected or inherently malicious open-source packages have become one of the fastest-growing threats to the software supply chain. According to Kaspersky researchers, approximately 14 000 malicious packages were discovered in popular registries by the end of 2024, a 48% year-over-year increase. Sonatype reported an even more explosive surge throughout 2025 — detecting over 450 000 malicious packages.
The motivation behind these attacks varies widely: cryptocurrency theft, harvesting developer credentials, industrial espionage, gaining infrastructure access via CI/CD pipelines, or compromising public servers to host spam and phishing campaigns. These tactics are employed by both espionage-focused APT groups and financially motivated cybercriminals. Increasingly, compromising an open-source package is just the first step in a multi-stage corporate breach.
Common attack scenarios include compromising the credentials of a legitimate open-source package maintainer, publishing a “useful” library with embedded malicious code, or publishing a malicious library with a name nearly identical to a popular one. A particularly alarming trend in 2025 has been the rise of automated, worm-like attacks. The most notorious example is the Shai-Hulud campaign. In this case, malicious code stole GitHub and npm tokens and kept infecting new packages, eventually spreading to over 700 npm packages and tens of thousands of repositories. It leaked CI/CD secrets and cloud access keys into the public domain in the process.
While this scenario technically isn’t related to vulnerabilities, the security tools and policies required to manage it are the same ones used for vulnerability management.
How AI agents increase the risks of open-source code usage
The rushed, ubiquitous integration of AI agents into software development significantly boosts developer velocity — but it also amplifies any error. Without rigorous oversight and clearly defined guardrails, AI-generated code is exceptionally vulnerable. Research shows that 45% of AI-generated code contains flaws from the OWASP Top 10, while 20% of deployed AI-driven applications harbor dangerous configuration errors. This happens because AI models are trained on massive datasets that include large volumes of outdated, demonstrational, or purely educational code.

These systemic issues resurface when an AI model decides which open-source components to include in a project. The model is often unaware of which package versions currently exist, or which have been flagged as vulnerable. Instead, it suggests a dependency version pulled from its training data — which is almost certainly obsolete. In some cases, models attempt to call non-existent versions or entirely hallucinated libraries. This opens the door to dependency confusion attacks.
In 2025, even leading LLMs recommended incorrect dependency versions — simply making up an answer — in 27% of cases.
Can AI just fix everything?
It’s a simple, tempting idea: just point an AI agent at your codebase and let it hunt down and patch every vulnerability. Unfortunately, AI can’t fully solve this problem. The fundamental hurdles we’ve discussed handicap AI agents just as much as human developers. If vulnerability data is missing or unreliable, then instead of finding known vulnerabilities, you’re forced to rediscover them from scratch. That’s an incredibly resource-intensive process requiring niche expertise that remains out of reach for most businesses.
Furthermore, if a vulnerability is discovered in an obsolete or unsupported component, an AI agent cannot “auto-fix” it. You’re still faced with a need to develop custom patches or execute a complex migration. If a flaw is buried deep within a chain of dependencies, AI is likely to overlook it entirely.
What to do?
To minimize the risks described above, it will be necessary to expand the vulnerability management process to include open-source package download policies, AI assistant operating rules, and the software build process. This includes:
Risks, emerging when developing or using open-source software (2026-04-02)
Welcome to this week’s edition of the Threat Source newsletter.
Last weekend, I witnessed a crime. Not a notable crime that you might read about in the press, but an unremarkable fraud attempt that nevertheless illustrates how new threat actor capabilities are emerging.
I imagine that most people reading this probably field IT questions from friends, family, and your local community. I assist with the IT provision for a local community association. It’s not a wealthy, large association — just your typical volunteer-run nonprofit like many others in the region providing community services.
This weekend, the chair emailed the treasurer requesting a bank transfer. The treasurer replied asking for the recipient’s details, and the chair promptly responded. The emails appeared authentic: correct names, a sum consistent with the association’s regular expenditure. Yet something made the treasurer pause. The reason for the transfer felt vague, and the tone seemed slightly off. They picked up the phone to verify. The chair had no idea what they were talking about. The emails and the request were an attempted fraud by a third party.
This is a variant of the business email compromise (BEC) scam in which an attacker impersonates a trusted individual and requests a fund transfer to an account they control. The attacker relies on social engineering to trick someone with payment authority to send the money. Once received, funds typically pass through money mules or compromised personal accounts before being rapidly shuffled through multiple transfers, obscuring the trail and drastically reducing the chances of recovery.
The initial email is often sent from a plausible email address. Closely scrutinising the sender’s email address may not help, since the attack may originate from the sender’s genuine account that has previously been compromised.
Historically, BEC targeted large organisations where anticipated payouts justified the time investment required to research key personnel and craft targeted attacks. The anticipated payout would more than cover the costs involved.
However, the fact that attackers are willing to target a small community organisation for a relatively small sum of money shows that the economics of the attack have changed.
AI has fundamentally altered the economics of BEC. Attackers can now reconnoitre many small organisations rapidly and cheaply. AI-generated content can be tailored to each target: referencing specific projects, using appropriate terminology, matching organisational tone.
The attack no longer needs to be labour-intensive or highly targeted. It has become democratised: an accessible playbook for targeting any organisation. Community associations, local charities, and small businesses can now be targeted, both because the attack is easier to execute and because scamming smaller sums from many victims can be as profitable as scamming large sums from a few. Unfortunately, because this profile of organisation may never have encountered this threat before, they may be unaware and consequently more vulnerable.
For every treasurer who pauses when something doesn’t quite feel right, there are others who will accept an apparently legitimate email at face value. Protection begins with awareness of how the fraud operates. Be suspicious of any unexpected request for payment, especially if there is a sense of urgency or reasons why a phone call “isn’t possible” right now. Verify through separate channels before any transfer occurs. Call a known number for your contact, not one provided in the suspicious email. Enforce strict procurement rules that prevent any last-minute urgent payments.
Above all, recognise the democratisation of business email compromise scams. They’re no longer something that only happens to large corporations with complex supply chains and international operations. They’re for everyone now.
The one big thing
Cisco Talos has identified a large-scale automated credential harvesting campaign that exploits React2Shell, a remote code execution vulnerability in Next.js applications (CVE-2025-55182). Using a custom framework called “NEXUS Listener,” the attackers automatically extract and aggregate sensitive data — including cloud tokens, database credentials, and SSH keys — from hundreds of compromised hosts to facilitate further malicious activity.
Why do I care?
This campaign uses high-speed automation to exploit React2Shell, enabling attackers to rapidly harvest high-value credentials and establish persistent, unauthenticated access. This creates significant risks for lateral movement and supply chain integrity. Furthermore, the centralized aggregation of stolen data allows attackers to map infrastructure for targeted follow-on attacks and potential data breaches.
So now what?
Organizations should immediately audit Next.js applications for the React2Shell vulnerability and rotate all potentially compromised credentials, including API keys and SSH keys. Enforce IMDSv2 on AWS instances and implement RASP or tuned WAF rules to detect malicious payloads. Finally, apply strict least-privilege access controls within container environments to limit the potential impact of a compromise.
F5 BIG-IP DoS flaw upgraded to critical RCE, now exploited in the wild: The US cybersecurity agency CISA on Friday warned that threat actors have been exploiting a critical-severity F5 BIG-IP vulnerability in the wild. (SecurityWeek)
European Commission investigating breach after Amazon cloud account hack: The threat actor told BleepingComputer that they will not attempt to extort the Commission using the allegedly stolen data, but intend to leak it online at a later date. (BleepingComputer)
Google fixes fourth Chrome zero-day exploited in attacks in 2026: As detailed in the Chromium commit history, this vulnerability stems from a use-after-free weakness in Dawn, the underlying cross-platform implementation of the WebGPU standard used by the Chromium project. (BleepingComputer)
Anthropic inadvertently leaks source code for Claude Code CLI tool: Anthropic quickly removed the source code, but users have already posted mirrors on GitHub and are actively dissecting the code to understand the tool’s inner workings. (Cybernews)
Can’t get enough Talos?
Qilin EDR killer infection chain: Take a deep dive into the malicious “msimg32.dll” used in Qilin ransomware attacks, which is a multi-stage infection chain targeting EDR systems. It can terminate over 300 different EDR drivers from almost every vendor in the market.
An overview of 2025 ransomware threats in Japan: In 2025, the number of ransomware incidents increased compared to 2024. Notably, it was a year in which attacks leveraging Qilin ransomware were observed most frequently.
A discussion on what the data means for defenders: To unpack the biggest Year in Review takeaways and what they mean for security teams, we brought together Christopher Marshall, VP of Cisco Talos, and Peter Bailey, SVP and GM of Cisco Security.
When attackers become trusted users: The latest TTP draws on 2025 Year in Review data to explore how identity is being used to gain, extend, and maintain access inside environments.
The democratisation of business email compromise fraud (2026-04-02)
[Video] The TTP Ep 21: When Attackers Become Trusted Users (2026-04-02)
Endpoint detection and response (EDR) tools are widely deployed and far more capable than traditional antivirus. As a result, attackers use EDR killers to disable or bypass them.
Disabling telemetry collection (process, memory, network activity) limits what defenders can see and analyze.
As defenders improve behavioral detection, attackers increasingly target the defense layer itself as part of their initial access or early execution stages.
This blog provides an in-depth analysis of the malicious “msimg32.dll” used in Qilin ransomware attacks, which is a multi-stage infection chain targeting EDR systems. It can terminate over 300 different EDR drivers from almost every vendor in the market.
We present multiple techniques used by the malware to evade and ultimately disable EDR solutions, including SEH/VEH-based obfuscation, kernel object manipulation, and various API and system call bypass methods.
This blog post provides an in-depth technical analysis of the malicious dynamic-link library (DLL) “msimg32.dll”, which Cisco Talos observed being deployed in Qilin ransomware attacks. The broader activities and attacks of Qilin were previously introduced and described in an earlier Talos blog post.
This DLL represents the initial stage of a sophisticated, multi-stage infection chain designed to disable local endpoint detection and response (EDR) solutions present on compromised systems. Figure 1 shows a high-level diagram demonstrating the overall execution flow of this infection chain.
Figure 1. Infection chain overview.
The first stage consists of a PE loader responsible for preparing the execution environment for the EDR killer component. This secondary payload is embedded within the loader in an encrypted form.
The loader implements advanced EDR evasion techniques. It neutralizes user-mode hooks and suppresses Event Tracing for Windows (ETW) event generation at runtime by leveraging an exception-driven, patchless approach. Additionally, it makes extensive use of structured exception handling (SEH) and vectored exception handling (VEH) to obscure control flow and conceal API invocation patterns. This enables the EDR killer payload to be decrypted, loaded, and executed entirely in memory without triggering detection by the locally installed EDR solution.
Once active, the EDR killer component loads two helper drivers. The first driver (“rwdrv.sys”) provides access to the system’s physical memory, while the second driver (“hlpdrv.sys”) is used to terminate EDR processes. Prior to loading the second driver, the EDR killer component unregisters monitoring callbacks established by the EDR, ensuring that process termination can proceed without interference.
Overall, the malware is capable of disabling over 300 different EDR drivers across a wide range of vendors. While the campaign has previously been reported at a higher level by other vendors, including Sophos, this analysis focuses on previously undocumented technical details of the infection chain (e.g., the SEH/VEH tricks and the overwriting of certain kernel objects).
PE loader section (“msimg32.dll”)
The malicious DLL is most likely side-loaded by a legitimate application that imports functions from “msimg32.dll”. To preserve expected functionality, the original API calls are forwarded to the legitimate library located in “C:\Windows\System32”.
The version of “msimg32.dll” deployed by the threat actor triggers its malicious logic from within its DllMain function. As a result, the payload is executed as soon as the legitimate application loads the DLL.
Figure 2. Malicious version of “msimg32.dll”.
Sophos also gave some technical and historical insights into this loader in their earlier blog, in which it is referred to as Shanya.
Initialization phase
During initialization, the loader allocates a heap buffer in process memory that acts as a slot-policy table.
Figure 3a. Allocating buffer for slot-policy table.
The size of this buffer is computed as “ntdll.dll” OptionalHeader.SizeOfCode divided by 16 (SizeOfCode >> 4), resulting in one byte per 16-byte code slot covering the code region (typically the .text range). Each entry in the table corresponds to a fixed 16-byte block relative to BaseOfCode.
The loader then iterates over the export table of “ntdll.dll”. For each exported function whose name begins with “Nt”, the virtual address of the corresponding syscall stub is resolved. From this address, a slot index is calculated as: slot_idx = (FuncVA - BaseOfCode) / 16
This index is used to mark the corresponding entry in the slot-policy table. All Nt* stubs are assigned a default policy, while selected functions are explicitly marked with special policies, including:
NtTraceEvent
NtTraceControl
NtAlpcSendWaitReceivePort
The result is a data-driven classification of relevant syscall stubs without modifying the executable code of “ntdll.dll”. The resulting slot-policy table appears as follows:
Figure 3b. Slot-policy table.
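The slot arithmetic described above can be modeled in a few lines. The sketch below is an illustrative reconstruction, not extracted code; the addresses, exports, and policy values are invented for the example:

```python
# Model of the slot-policy table construction: one byte per 16-byte
# code slot, indexed relative to BaseOfCode. All values illustrative.
BASE_OF_CODE = 0x1000
SIZE_OF_CODE = 0x140000

DEFAULT_POLICY = 1
SPECIAL_POLICY = 2
SPECIAL = {"NtTraceEvent", "NtTraceControl", "NtAlpcSendWaitReceivePort"}

def build_slot_table(exports: dict[str, int]) -> bytearray:
    """Mark the slot of every Nt* export; leave other slots at zero."""
    table = bytearray(SIZE_OF_CODE >> 4)      # SizeOfCode / 16 entries
    for name, va in exports.items():
        if not name.startswith("Nt"):
            continue                          # only Nt* stubs are classified
        slot = (va - BASE_OF_CODE) >> 4       # slot_idx = (FuncVA - BaseOfCode) / 16
        table[slot] = SPECIAL_POLICY if name in SPECIAL else DEFAULT_POLICY
    return table

exports = {"NtOpenProcess": 0x9D350, "NtTraceEvent": 0x9D6F0, "RtlMoveMemory": 0x2000}
t = build_slot_table(exports)
print(t[(0x9D350 - BASE_OF_CODE) >> 4], t[(0x9D6F0 - BASE_OF_CODE) >> 4])  # → 1 2
```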
The actual loader function is significantly more complex and incorporates additional obfuscation techniques, such as hash-based API resolution at runtime.
After constructing the table, the sample dynamically resolves ntdll!LdrProtectMrdata, which will be discussed in greater detail later. It then invokes this routine to change the protection of the .mrdata section to writable. This section contains the exception dispatcher callback pointer along with other critical runtime data.
Once the section is writable, the loader overwrites the dispatcher slot with its own custom exception handler. As a result, its routine is executed whenever an exception is triggered.
Figure 5. Overwriting of exception handler dispatcher slot.
Runtime exception handling
This function primarily performs two tasks: handling breakpoint exceptions and single-step exceptions.
The handling of breakpoint exceptions (0xCC) is relatively straightforward. It simply resumes execution at the instruction immediately following the INT3 (0xCC). Talos is not certain why this approach was implemented. It may function as a lightweight anti-emulation, anti-analysis, or anti-sandbox mechanism for weak analysis systems, serve as groundwork for more advanced anti-debugging techniques, or act as preparation for future control-flow manipulation similar to the VEH-based logic observed in Stages 2 and 3.
Figure 6. Breakpoint logic of hook_function_ExceptionCallback function.
The single-step portion of the function is significantly more complex and is where the previously introduced slot-policy table is utilized. ctx->ntstub_class_map points to the map buffer allocated during initialization.
Figure 7. Single step logic of hook_function_ExceptionCallback function.
Simplified, the logic of the initialization and dispatch functions looks like this in pseudocode: InitCtxAndPatchNtdllMrdataDispatch is the initialization function, and hook_function_ExceptionCallback is the dispatch function mentioned above.
Figure 8. Simplified single step SEH logic.
The find_syscall routine shown in Figure 7 implements a syscall recovery technique; details can be found in Figure 9 below. It scans both backward and forward through “ntdll.dll” to locate intact syscall stubs and identify neighboring syscalls that can be repurposed.
The simplified logic is as follows:
Indirectly determine the target syscall number by scanning forward and backward.
Locate a clean neighboring stub.
Manually load the correct syscall ID into eax.
Transition directly to kernel mode using the syscall instruction (i.e., a syscall instruction located inside a clean neighboring stub).
By reusing a neighboring syscall stub to invoke the desired system call, the loader bypasses EDR-hooked syscalls without modifying the hooked code itself. The Windows kernel only evaluates the syscall ID in eax; it does not verify which exported API function initiated the call.
Figure 9. Halo’s Gate: find_syscall function.
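The neighbor-scanning recovery can be modeled with the following sketch, assuming the classic x64 stub layout (mov r10, rcx; mov eax, &lt;id&gt;; …; syscall; ret), a fixed 32-byte stub stride, and sequentially assigned syscall numbers; the real find_syscall operates on the mapped “ntdll.dll” image and then dispatches through a clean neighbor’s syscall instruction:

```python
# Simplified Halo's Gate-style recovery: if a stub is hooked (first bytes
# replaced by a jmp), infer its syscall number from a clean neighbor.
STUB = 32
CLEAN = bytes.fromhex("4c8bd1b8")  # mov r10, rcx; mov eax, imm32

def make_stub(syscall_id, hooked=False):
    if hooked:
        return b"\xe9" + b"\x00" * (STUB - 1)  # jmp rel32 planted by an EDR hook
    body = CLEAN + syscall_id.to_bytes(4, "little") + b"\x0f\x05\xc3"  # syscall; ret
    return body.ljust(STUB, b"\x90")

def find_syscall(ntdll, idx):
    """Recover the syscall number of stub `idx`, even if it is hooked."""
    base = idx * STUB
    if ntdll[base:base + 4] == CLEAN:  # stub is intact: read the id directly
        return int.from_bytes(ntdll[base + 4:base + 8], "little")
    for d in range(1, len(ntdll) // STUB):  # scan backward and forward
        for n, delta in ((idx - d, d), (idx + d, -d)):
            off = n * STUB
            if 0 <= off and ntdll[off:off + 4] == CLEAN:
                neighbor_id = int.from_bytes(ntdll[off + 4:off + 8], "little")
                return neighbor_id + delta  # ids are assigned sequentially
    return None

# Four stubs with ids 0..3; stub 2 is hooked.
ntdll = b"".join(make_stub(i, hooked=(i == 2)) for i in range(4))
```

This sketch only recovers the syscall number; the actual loader additionally loads that number into eax and jumps to the syscall instruction of the clean neighboring stub.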
As previously mentioned, the actual code of the malware is more complex (e.g., the aforementioned runtime resolution of ntdll!LdrProtectMrdata).
Figure 10. Resolution of ntdll!LdrProtectMrdata at runtime.
The loader resolves the ntdll!LdrProtectMrdata function in a stealthy way. Instead of resolving it by name or hash, the loader:
Finds the .mrdata section in the “ntdll.dll” image
Checks whether the current dispatcher slot pointer (dispatch_slot) lies inside .mrdata
If it does, it uses a known exported ntdll function (RtlDeleteFunctionTable, located via hash) as an anchor
From that anchor, it scans for a CALL rel32 instruction (0xE8) and extracts its target address
That call target is the address of LdrProtectMrdata and is stored in ctx->LdrProtectMrdata
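The anchor-and-scan step can be illustrated with a short sketch. The byte layout and RVAs below are fabricated for illustration; the real loader scans from RtlDeleteFunctionTable inside the mapped “ntdll.dll”:

```python
# Scan forward from an anchor for a CALL rel32 (0xE8) and compute its target.
def resolve_call_target(image, anchor_rva, max_scan=0x40):
    for off in range(anchor_rva, anchor_rva + max_scan):
        if image[off] == 0xE8:  # CALL rel32 opcode
            rel32 = int.from_bytes(image[off + 1:off + 5], "little", signed=True)
            return off + 5 + rel32  # target = address of next instruction + rel32
    return None

image = bytearray(0x100)
# Place a CALL at RVA 0x10 whose rel32 points backward to RVA 0x08.
image[0x10] = 0xE8
image[0x11:0x15] = (0x08 - 0x15).to_bytes(4, "little", signed=True)
target = resolve_call_target(image, anchor_rva=0x10)
```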
The initialization routine described earlier also incorporates several basic anti-debugging measures. For example, it verifies whether a breakpoint has been placed on KiUserExceptionDispatcher. If such a breakpoint is detected, the process is deliberately crashed. This check is performed before the dispatcher is overwritten, which means that the resulting exception is handled by the original, default exception handler.
The loader also implements geo-fencing. It excludes systems configured for languages commonly used in post-Soviet countries. This check is performed at an early stage, and the loader terminates if a locale from the exclusion list is detected.
Figure 12. Geo-fencing function.
Figure 13. Geo-fencing excluded countries list.
After initializing Stage 1, the loader proceeds to unpack the subsequent stages. It creates a paging file-backed section and maps two views of this section into the process address space. This aspect was not analyzed in depth; however, creating two views of the same section is a common malware technique used to obscure a READ-WRITE-EXECUTABLE memory region. Typically, one view is configured with WRITE access only, masking the effective executable permissions of the underlying section. This shared memory region will contain subsequent malware stages after unpacking them. This also makes it more difficult to dump the memory during analysis. When a virtual memory page is not currently present in RAM (present bit cleared), accessing it triggers a page fault. The kernel then resolves the fault (e.g., by loading the page from the pagefile into physical memory).
Figure 14. CreateFileMappingA resolver function, returns the handle 0x174.
Figure 15. First “write only” view, FILE_MAP_WRITE (0x2).
Figure 16. Second “R-W-X” view, 0x24 = FILE_MAP_READ (0x4) | FILE_MAP_EXECUTE (0x20).
After creating the views, it copies and decodes bytes into this buffer. The basic block highlighted in green marks the start of this routine, while the red basic block represents the final control transfer (see Figure 17) to the decoded payload. The yellow basic block contains the decision logic that determines when execution transitions to the red basic block.
Figure 17. Stage 2 decoding routine.
Inside the red basic block, we have the final jump into the decoded bytes of Stage 2.
Figure 18. Call to Stage 2 in red basic block.
Stage 2
Stage 2 (0x2470000) serves solely as a stealthy transition mechanism to transfer execution to Stage 3. As expected, all addresses referenced from this point onward, such as 0x2470000, may vary between executions of the loader, as they are dynamically allocated at runtime.
The initial part of Stage 2 is straightforward: it decodes the data stored in the memory section and then unmaps the previously mapped view. The subsequent function call constitutes the critical step: ctx->FuncPtrHookIAT((ULONGLONG)ctx->hooking_func);
This IAT-hooking routine overwrites the ExitProcess entry in the Import Address Table (IAT) of the main process (i.e., the process that loaded the malicious “msimg32.dll”).
Figure 21. Overwritten IAT pointer to ExitProcess at 0x140017138.
As shown in Figure 18, execution returns normally from Stage 2, and DllMain completes without any obvious anomalies. The malicious logic is triggered later, when ExitProcess is invoked by exit_or_terminate_process during process termination. Instead of terminating the process, execution is redirected to function 0x2471000, which corresponds to Stage 3.
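Conceptually, the deferred execution works like the following sketch, where the IAT is modeled as a plain dictionary and all names are illustrative:

```python
# Model the IAT-hook deferral: swap the ExitProcess slot so that "exiting"
# transfers control to Stage 3 instead of terminating the process.
events = []

def real_exit_process(code):
    events.append(("exit", code))

def stage3_entry(code):
    events.append("stage3 ran instead of exiting")

iat = {"ExitProcess": real_exit_process}  # the host's import address table

def hook_iat(iat, name, replacement):
    original = iat[name]
    iat[name] = replacement  # overwrite the IAT slot
    return original          # the loader could keep this to exit cleanly later

hook_iat(iat, "ExitProcess", stage3_entry)
# DllMain returns normally; much later the host calls what it believes is
# ExitProcess through its IAT, and control lands in Stage 3 instead.
iat["ExitProcess"](0)
```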
Stage 3
Stage 3 primarily decompresses and loads a PE image from memory that was originally embedded within the malicious “msimg32.dll”. It begins by resolving syscall stubs, which are used in subsequent code sections, followed by several decoding routines.
Figure 22. Syscall resolution and execution of certain functions.
After several decoding and preparation steps, the PE image is decompressed from memory.
After the PE image has been decompressed, the final routine responsible for preparing, loading, and ultimately executing the PE can be found at 0x24A2CE7 in this run.
Figure 25. Final load and execution of the embedded PE.
The fix_and_load_PE_set_VEH function begins by mapping “shell32.dll” into the process address space using NtCreateFile, NtCreateSection, and MapViewOfFile. It then overwrites the in-memory contents of “shell32.dll” with the previously loaded PE image.
Figure 26. Load “shell32.dll” into memory.
After copying the embedded and decoded PE image into memory, the code manually applies base relocations.
Figure 27. PE relocation.
After preparing the PE for in-memory execution, the loader employs a technique similar to Stage 2, but this time leveraging a vectored exception handler (VEH). After registering the VEH, it triggers the handler by setting a hardware breakpoint on ntdll!NtOpenSection. To indirectly invoke NtOpenSection, the loader subsequently loads a fake DLL via a call to the LdrLoadDll API. It appears that the malware author intentionally chose a name referencing a well-known security researcher, likely as a provocative touch.
Figure 28. Call to LdrLoadDll.
After several intermediate steps, this results in a call to NtOpenSection, which triggers the previously configured hardware breakpoint and, in turn, invokes the VEH. The first time the VEH is triggered at NtOpenSection, it executes the code in Figure 29.
Figure 29. Malicious VEH, part 1: NtOpenSection handler.
It modifies the “shell32.dll” name in memory to “hasherezade_[redacted].dll”, then adjusts RIP in the context record to point to the next ret instruction (0xC3) within the NtOpenSection stub and sets a new hardware breakpoint on NtMapViewOfSection. In addition, it updates the stack pointer to reference LdrpMinimalMapModule+offset, where the offset corresponds to an instruction immediately following a call to NtOpenSection inside LdrpMinimalMapModule. It then invokes NtContinue, which resumes execution at the RIP value stored in the context record (i.e., at the ret instruction). That ret instruction subsequently transfers control to the address prepared on the stack, namely LdrpMinimalMapModule+offset.
cr_1->rsp = LdrpMinimalMapModule + offset
cr_1->rip = ntdll!NtOpenSection + 0x14 = ret ; jumps to <rsp> when executed
Figure 30. Jump destination after calling NtOpenSection.
During execution of LdrpMinimalMapModule, a call to NtMapViewOfSection is made, which triggers the hardware breakpoint set by the previous routine. On this occasion, the VEH executes the code in Figure 31.
Figure 31. Malicious VEH, part 2: NtMapViewOfSection handler.
It deletes all hardware breakpoints and then sets the stack pointer to an address in LdrpMinimalMapModule+offset. As expected, this is right after a call to NtMapViewOfSection. In other words, the registers in the context record are overwritten as follows:
ctx->rsp -> ntdll!LdrpMinimalMapModule+0x23b
ctx->rip -> ntdll!NtMapViewOfSection+0x14 = ret
When the return (ret) instruction is reached, it jumps to the address stored in the stack pointer (rsp).
Figure 32. Jump destination after the call to NtMapViewOfSection.
The subsequent code in LdrpMinimalMapModule maps the previously restored PE image into the process address space and prepares it for execution. Finally, control returns to 0x24A3C1E, the instruction immediately following the call that originally triggered the first hardware breakpoint.
Figure 33. Instruction after the call to LdrLoadDll.
After several additional fix-up steps, the loader transfers execution to Stage 4 (i.e., the loaded PE image).
Figure 34. Final jump to loaded PE.
This PE file is an EDR killer capable of disabling over 300 different EDR drivers across a wide range of solutions. A detailed analysis of this component will be provided in the next section.
Figure 35. Excerpt from the EDR driver list.
PE loader summary
The first three stages of this binary implement a sophisticated and complex PE loader capable of bypassing common EDR solutions by evading user-mode hooks through carefully crafted SEH and VEH techniques. While these methods are not entirely novel, they remain effective and should be detectable by properly implemented EDR solutions.
The loader decrypts and executes an embedded PE payload in memory. In this campaign, the payload is an EDR killer capable of disabling over 300 different EDR products. This component will be analyzed in detail in the next section.
EDR killer
Stage 4: Extracted EDR killer PE file
Besides initialization, the first thing the extracted PE from Stage 3 does is check again whether the system locale matches a list of post-Soviet countries; if it does, it crashes. This is another indicator that the former stages are just a custom PE loader, which could be used to load any PE the adversaries want. Otherwise, performing the same check twice would not be logical.
Figure 36. Malware geo-fencing function.
Figure 37. List of blocked countries.
The malware then attempts to elevate its privileges and load a helper driver. This also implies that the process must be executed with administrative privileges.
Figure 38. Privilege escalation and loading of helper driver.
The “rwdrv.sys” driver is a renamed version of “ThrottleStop.sys”, originally distributed by TechPowerUp LLC and signed with a valid digital certificate. It is legitimately used by tools such as GPU-Z and ThrottleStop. This is not the first observed abuse of this driver; it has previously been leveraged in several malware campaigns.
Despite its benign origin, the driver exposes highly powerful functionality and can be loaded by arbitrary user-mode applications. Critically, it implements these capabilities without enforcing meaningful security checks, making it particularly attractive for abuse.
This driver exposes a low-level hardware access interface to user mode via input/output controls (IOCTLs). It allows a user-mode application to directly interact with system hardware.
The driver implements IOCTL handlers that provide the following capabilities:
I/O port access
Read from hardware ports (inb/inw/ind)
Write to hardware ports (outb/outw/outd)
CPU Model Specific Register (MSR) access
Read MSRs (__readmsr)
Write MSRs (__writemsr) with limited protection against modifying critical syscall/sysenter registers
Physical memory/MMIO access
Map arbitrary physical memory into kernel space using MmMapIoSpace
Create a user-mode mapping of the same memory using MmMapLockedPagesSpecifyCache
Maintain up to 256 active mappings per driver instance
Additionally, the driver tracks the number of open handles and associates memory mappings with the calling process ID.
Overall, the driver functions as a generic kernel-mode hardware access layer, exposing primitives for port I/O, MSR access, physical memory mapping, and PCI configuration operations. Such functionality is typically used by hardware diagnostic tools, firmware utilities, or low-level system utilities, but it also provides powerful primitives that could be abused if accessible from unprivileged user-mode.
The two important functions heavily used by the sample are the ability to read and write physical memory.
After loading the driver, the malware proceeds to determine the Windows version. To do so, it first resolves the required API function using a PEB-based lookup routine, a technique consistently employed throughout the sample.
Figure 41. DLL resolution.
Figure 42. API function resolution.
The implementation parses the Process Environment Block (PEB) and locates the target module by matching the hash of its name. The ResolveExportByHash function then takes the module base of the previously found DLL and parses its PE header to find the function corresponding to the function hash. It can provide the API function address either as a PE offset or as a virtual address.
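The export-resolution step can be sketched as follows, using a ROR13-style hash as a stand-in (the sample’s actual hash algorithm is not reproduced here) and a pre-parsed export map in place of walking the PE header:

```python
# Hash-based export resolution: instead of storing API names, the malware
# stores 32-bit hashes and compares them against hashed export names.
def ror13_hash(name: str) -> int:
    h = 0
    for ch in name.encode():
        h = ((h >> 13) | (h << (32 - 13))) & 0xFFFFFFFF  # rotate right by 13
        h = (h + ch) & 0xFFFFFFFF
    return h

def resolve_export_by_hash(exports, target_hash):
    """exports: mapping of export name -> RVA, as parsed from the PE header."""
    for name, rva in exports.items():
        if ror13_hash(name) == target_hash:
            return rva
    return None

exports = {"LoadLibraryA": 0x1234, "GetProcAddress": 0x5678}
rva = resolve_export_by_hash(exports, ror13_hash("GetProcAddress"))
```

Storing hashes rather than strings keeps API names out of the binary and defeats naive string-based triage.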
After a couple of initializations and checks, it gets the “rwdrv.sys” handle, followed by the EDR-related part of the sample — the kernel tricks which are responsible for avoiding, blinding, and disabling the EDR.
Figure 43. Get driver handle for “rwdrv.sys”.
Figure 44. Overview of the EDR killer part of the sample.
Let’s take a brief look at the details. It starts by building a vector of physical memory pages, which is later used by subsequent methods.
Figure 45. Initialization logic of the Page Frame Number (PFN) metadata list.
The SetMemLayoutPointer function in the if statement above leverages the NtQuerySystemInformation API function to gather the Superfetch information about the physical memory pages. It stores a pointer to this information in a global variable (mem_layout_v1_ptr or mem_layout_v2_ptr). Which one is used depends on the version argument passed to the function: 1 for the first call and 2 for the second. In other words, it brute-forces whichever version works on the Windows system it is running on.
Figure 46. Superfetch structure and NtQuerySystemInformation call.
The BuildSuperfetchPfnMetadataList function is quite large and complex. Simplified, it starts by using the mem_layout pointer to calculate the total page count.
Figure 47. Total Page count algorithm.
It then ends by using NtQuerySystemInformation again to retrieve the physical pages and their metadata, storing this information in a global vector (g_PfnVector).
Figure 48. Superfetch structure.
Figure 49. Build global physical memory list vector.
Returning to the block above, the next step blinds the EDRs by deleting their callbacks for certain operations (e.g., process creation, thread creation, and image loading events).
Figure 50. Deleting EDR callbacks.
The unregister_callbacks function iterates through a list of over 300 driver names which are stored in the sample.
Figure 51. EDR driver name list.
Figure 52. unregister_callbacks function.
It also demonstrates the overall implementation pattern of the malware, which is used in several other functions as well. It uses a known API function to calculate an offset to the function or object it actually targets, in this case the kernel callback cng!CngCreateProcessNotifyRoutine. It does not touch this object in the process’s virtual address space; instead, it uses the driver loaded earlier (“rwdrv.sys”) to obtain the object’s physical memory address. The read logic and driver communication are implemented in the read_phy_bytes function; likewise, the write_to_phy_mem function handles the driver communication for overwriting memory. The DeviceIoControlImplementation function, which talks to the driver, is called from write_to_phy_mem.
Figure 53. DeviceIoControlImplementation function called in write_to_phy_mem.
The other callback-related functions shown in Figure 44 work similarly to the one we discussed. They overwrite or unregister other EDR-specific callbacks, which were set by the EDR Mini-Filter driver.
The final part of the EDR killer begins by loading another driver (“hlpdrv.sys”).
Figure 54. Load and use of hlpdrv.sys.
The malware uses the driver to terminate EDR processes running on the system using the IOCTL code 0x2222008. This executes the function in the driver which is responsible for unprotecting and terminating the process.
Figure 55. Terminate protected process function in hlpdrv.sys.
Once terminated, EDR processes such as Windows Defender no longer run, as demonstrated in Figure 56.
Figure 56. Terminated Windows Defender process.
Additionally, it restores the CiValidateImageHeader callback. The RestoreCiValidateImageHeaderCallback function is shown in Figure 57.
Figure 57. Restoring code integrity checks.
This is accomplished using the same concept we previously saw in Figure 52:
Resolve a known API function.
Use this function as an anchor point to locate a specific instruction within its code.
This instruction contains a pointer in one of its operands that points to, or near, the object of interest.
Identify the pointer to the target object within that instruction.
Perform a sign extension on the operand.
Add an additional offset to compute the final address of the object being sought — in this case, the CiValidateImageHeader callback.
Restore the original function pointer to CiValidateImageHeader.
Note that the malware had previously overwritten the callback to CiValidateImageHeader with the address of ArbPreprocessEntry, a function that always returns true. In other words, it has now restored the original Code Integrity check.
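Steps 3 through 6 above boil down to RIP-relative address arithmetic, which the following sketch illustrates; the instruction bytes, RVAs, and offsets are fabricated for illustration:

```python
# Extract a 32-bit displacement from a RIP-relative instruction, sign-extend
# it, and add any final offset to reach the object of interest.
def signed32(value: int) -> int:
    return value - 0x1_0000_0000 if value & 0x8000_0000 else value

def target_from_rip_relative(image, insn_rva, disp_off, insn_len, extra=0):
    raw = int.from_bytes(image[insn_rva + disp_off:insn_rva + disp_off + 4], "little")
    disp = signed32(raw)  # sign extension of the operand (step 5)
    # RIP-relative addressing: target = address of the NEXT instruction + disp
    return insn_rva + insn_len + disp + extra

# mov rax, [rip+disp32] encoded as 48 8B 05 <disp32>; disp = -0x20.
image = bytearray(0x100)
image[0x40:0x43] = bytes.fromhex("488b05")
image[0x43:0x47] = (0x1_0000_0000 - 0x20).to_bytes(4, "little")
addr = target_from_rip_relative(image, insn_rva=0x40, disp_off=3, insn_len=7)
```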
Summary
This blog was a technical deep dive into the infection chain that is hidden in the malicious “msimg32.dll”, which has been observed during Qilin ransomware attacks. It demonstrates the sophisticated tricks the malware is employing to circumvent or completely disable modern EDR protection features on compromised systems.
It is encouraging to see how many hurdles modern malware must overcome. At the same time, this highlights that even state-of-the-art defense mechanisms can still be bypassed by determined adversaries. Defenders should never rely on a single product for protection; instead, Talos strongly recommends a multi-layered security approach. This significantly increases the difficulty for attackers to remain undetected, even if they manage to evade one line of defense.
Coverage
The following ClamAV signatures detect and block this threat:
Win.Malware.Bumblebee-10056548-0
Win.Tool.EdrKiller-10059833-0
Win.Tool.ThrottleStop-10059849-0
The following SNORT® rules (SIDs) detect and block this threat:
Covering Snort2 SID(s): 1:66181, 1:66180
Covering Snort3 SID(s): 1:301456
Indicators of compromise (IOCs)
The IOCs for this threat are also available at our GitHub repository here.
Reaching a higher level of SOC maturity takes better, more consistent decision-making during malware and phishing investigations.
This requires a shift in how threat intelligence is used: not as a reference point, but as a core layer in the decision process.
Moving from reactive to confidently proactive security means establishing a threat intelligence workflow that:
Solves key challenges, from alert fatigue to blind spots
Integrates across SOC workflows, supporting them
Delivers compounding value as a unified system
In this model, threat intelligence becomes part of the SOC’s operational fabric. That’s what ANY.RUN Threat Intelligence is designed for.
It becomes a layer inside your SOC’s operations. A layer that provides behavioral context, workflow support, and data delivery for faster triage, incident response, and threat hunting.
Read further to see how it changes each stage of your SOC operations.
Key takeaways
Threat intelligence must move from data to decisions, as its value is measured by how it improves SOC actions, not how much data it provides.
Context is the differentiator. Linking IOCs to behavior and TTPs is what enables accurate triage and detection.
Unified TI drives consistency in SOC teams, embedding intelligence across workflows.
Operationalized TI compounds over time. Every investigation strengthens detection, automation, and future response.
ANY.RUN’s threat intelligence is built on live attack data that provides unique, real-time visibility into emerging threats and supports the full investigation cycle.
Solving Key SOC Challenges with Behavioral TI
Most threat intelligence today is still delivered as bare indicator feeds, standalone reports, or enrichment tools with fragmented intelligence that exists outside the core SOC workflow.
In this model, threat intelligence behaves as an input, not as part of the system itself. Indicators without context create noise. Context without operationalization creates friction. As a direct outcome, SOCs struggle with:
Time-consuming manual enrichment
Operational bottlenecks across processes
Detection that gets delayed by the lack of fresh data
Human-centered challenges in SOC teams are often not the analysts’ fault, either. Alert fatigue and unnecessary escalations stem from fragmented, hard-to-access threat data that fails to deliver usable context during investigations.
The path to improvement lies in acquiring actionable threat intelligence that operationalizes SOC tasks and completes the workflow, supporting the entire investigation cycle.
Reach a higher level of SOC maturity
Behavioral threat intelligence for proactive action
Threat Intelligence That Offers More than Just Indicators
What SOC teams require is actionable intelligence that supports decisions and execution, enabling analysts to move from enrichment to understanding, and from understanding to detection and rapid response.
Where traditional TI may fail because of its fragmented, add-on nature, actionable threat intelligence encompasses the entire malware and phishing investigation cycle by:
Supporting both automation and analyst-driven workflows
This is threat intelligence that doesn’t sit beside your SOC but acts as an essential operational layer within it, turning repetitive work into a scalable workflow where each detection strengthens overall security and provides proactive protection against similar threats in the future.
A key differentiator of effective threat intelligence is its foundation in live, real-world attack activity.
ANY.RUN Threat Intelligence is built on continuously analyzed data from over 15,000 organizations and 600,000 analysts conducting daily malware and phishing investigations worldwide. This creates a unique, constantly evolving dataset of active threats processed and validated to minimize noise.
Operational Impact of Actionable Threat Intelligence
For analysts
Less manual work, faster understanding of threats, confident decisions during triage and investigation
For SOC leaders
Improved detection quality, reduced dwell time; consistent, predictable operations across teams
For CISOs
Lower risk exposure, better visibility into threats and coverage gaps; stronger confidence in security effectiveness and ROI
ANY.RUN’s TI As an Operational Layer in Your SOC
ANY.RUN’s approach to behavioral threat intelligence is built around the idea of treating it not as a dataset but as an operational layer that connects context and action across the SOC lifecycle.
This approach reframes TI from a passive resource into an active component of the SOC system that:
1. Links Isolated IOCs to malware behavior and TTPs via TI Lookup
Instead of treating indicators as isolated data points, with Threat Intelligence Lookup (TI Lookup), a solution for instant enrichment and threat research, analysts immediately see how they behave in real attacks. Any artifact (IP, domain, hash, or URL) is enriched with execution context, infrastructure relationships, and associated TTPs.
This allows teams to move from “what is this?” to “how does this operate?” within seconds, improving triage quality and enabling faster, more confident decisions.
IP identified as Moonrise RAT infrastructure, enriched with linked behavioral analyses and attack context in TI Lookup
Turn intelligence into action
Make confident decisions with ANY.RUN’s TI
2. Embeds context directly into triage and response
Integration opportunities for ANY.RUN Threat Intelligence
Whether through integrations or manual use, threat intelligence from ANY.RUN becomes a part of the SOC investigation cycle that supports early detection and smart decisions.
3. Enables conversion of intelligence into detections via YARA Search
YARA Search accumulating artifacts and sandbox analyses
Threat intelligence becomes particularly valuable when it directly translates into detections. YARA Search enables that by helping analysts test, refine, validate, and create YARA rules to ensure coverage of relevant threats with reduced false positives.
The result is more reliable detections and greater confidence in security controls.
4. Delivers continuous, real-time intelligence streams via TI Feeds
TI Feeds streamline operations with 99% unique threat data
Threat Intelligence Feeds are continuously delivered into existing security pipelines rather than accessed on demand: real-time, validated indicators sourced from live attack data flow directly into SIEM, SOAR, and EDR systems, supporting automated detection, correlation, and response.
This reduces manual workload, improves alert quality, and lowers dwell time.
5. Fills visibility gaps with TI Reports
TI Reports, a module of ANY.RUN’s Threat Intelligence
ANY.RUN TI Reports address the partial visibility challenge in SOC teams by providing threat overviews curated by our experts, turning analyst-driven insights into strategic intelligence with threat behaviors, TTPs, and detection opportunities already described and contextualized.
This enables teams to quickly understand emerging risks, validate their coverage, and identify blind spots without investing additional investigation time.
Threat Intelligence Across Processes and Outcomes
ANY.RUN Threat Intelligence’s goal is not to improve a single step, but to encompass the entire operational cycle.
SOC Process: Triage and Alert Enrichment
Action: Centralized validation of indicators with immediate context and prioritization
Outcomes: Scalability for teams of any size and secure integration

SOC Process: Threat Hunting and Detection Engineering
Action: Behavior-driven search with access to real attack data and analyses; supports conversion of findings into detections
Outcomes: Proactive threat discovery, stronger and more consistent detections, elimination of repetitive work

SOC Process: Incident Response
Action: Immediate access to unified threat context across incidents, enabling consistent investigation and decision-making
Outcomes: Faster response, reduced dwell time, lower operational risk

SOC Process: SOC Management & Performance
Action: Continuous, real-time intelligence aligned with current threats; visibility into threat landscape and coverage gaps
Outcomes: Improved MTTD/MTTR, measurable SOC performance, clearer ROI, and risk reduction
Conclusion
High-performing SOCs are defined by how effectively threat intelligence is integrated into their operations.
When threat intelligence components operate as a unified system rather than isolated capabilities, they stop being tools and become part of the SOC’s operational infrastructure.
In this model, Threat Intelligence is:
a unified, behavior-driven intelligence layer;
a continuous link from indicators to behavior and from detection to automation;
a real-time stream of relevant, active threat data;
embedded across triage, incident response, threat hunting, detection, and management.
About ANY.RUN
ANY.RUN provides interactive malware analysis and behavior-driven threat intelligence solutions designed to support real-world SOC operations. The platform enables security teams to understand threats faster, make informed decisions, and operationalize intelligence across detection and response workflows.
Used by over 15,000 organizations and 600,000 security professionals worldwide, ANY.RUN delivers continuously updated intelligence based on live attack analysis. The company is SOC 2 Type II certified, ensuring strong security controls and protection of customer data.
FAQ
What is ANY.RUN Threat Intelligence?
ANY.RUN Threat Intelligence features TI Lookup, TI Feeds, TI Reports, and YARA Search as a unified, behavior-driven intelligence layer that connects indicators with malware behavior, TTPs, and artifacts—supporting decision-making across SOC workflows.
How is it different from traditional threat intelligence?
Traditional feeds primarily deliver indicators. ANY.RUN’s TI provides context, behavioral analysis, and enables conversion into detections, while continuously integrating into SOC processes.
What data is it based on?
It is built on real-time analysis data from over 15,000 organizations and 600,000 analysts conducting malware and phishing investigations worldwide.
How does it improve SOC operations?
By reducing manual enrichment, accelerating triage and response, improving detection quality, and enabling more consistent, data-driven decisions.
Does it support both manual and automated workflows?
Yes. It is designed to be used both manually by analysts and automatically via integrations with SIEM, SOAR, EDR, and other platforms.
How does it help reduce risk?
By providing early visibility into emerging threats, improving detection coverage, and shortening the time between threat emergence and response.