Cyble Research & Intelligence Labs (CRIL) tracked 1,158 vulnerabilities last week. Of these, 251 vulnerabilities already have publicly available Proof-of-Concept (PoC) exploits, significantly increasing the likelihood of real-world attacks.
A total of 94 vulnerabilities were rated critical under CVSS v3.1, while 43 were rated critical under CVSS v4.0.
In parallel, CISA issued 15 ICS advisories covering 87 vulnerabilities affecting industrial environments. These vulnerabilities impacted vendors including Siemens, Yokogawa, AVEVA, Hitachi Energy, ZLAN, ZOLL, and Airleader.
Additionally, 8 vulnerabilities were added to CISA’s Known Exploited Vulnerabilities (KEV) catalog, reflecting confirmed exploitation in the wild.
The Week’s Top Vulnerabilities
CVE-2025-40554 — SolarWinds Web Help Desk (Critical)
CVE-2025-40554 is a critical authentication bypass vulnerability affecting SolarWinds Web Help Desk versions prior to 2026.1. The flaw allows unauthenticated remote attackers to invoke privileged functionality without valid credentials, potentially leading to full compromise of helpdesk systems.
Cyble observed this vulnerability being discussed on underground forums shortly after disclosure, and a public PoC is available. The vulnerability’s presence in enterprise environments increases the risk of initial access and lateral movement.
CVE-2026-1340 — Ivanti Endpoint Manager Mobile (Critical)
CVE-2026-1340 is a critical code injection vulnerability in Ivanti Endpoint Manager Mobile (EPMM). A remote, unauthenticated attacker can exploit the flaw to achieve arbitrary remote code execution without user interaction.
The vulnerability has been captured in dark web discussions and has a publicly available PoC, significantly lowering the barrier to exploitation.
CVE-2026-21509 — Microsoft Office (High Severity, Actively Exploited)
CVE-2026-21509 is a feature-bypass vulnerability in Microsoft Office that allows crafted documents to circumvent built-in security protections. Attackers can deliver malicious Office files that execute payloads once opened by the victim.
The flaw has been actively exploited by threat actors including APT28 and RomCom, highlighting its operational impact.
CVE-2026-1529 — Keycloak (High Impact)
CVE-2026-1529 affects Red Hat’s Keycloak and involves improper validation of JWT invitation token signatures. Attackers can manipulate trusted token contents to gain unauthorized access to organizational resources.
A PoC is available, and the vulnerability surfaced on underground forums shortly after disclosure.
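To illustrate why improper signature validation of the kind attributed to CVE-2026-1529 is so dangerous, here is a minimal, self-contained sketch of JWT-style signing and verification. This is an HS256 illustration using only the standard library, not Keycloak's actual token mechanism (Keycloak typically uses asymmetric signatures); the payload fields are invented for the example. The key point is that a verifier must recompute and compare the signature before trusting any token contents.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, secret: bytes) -> str:
    # Build header.payload.signature; the HMAC covers the first two parts
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    # Recompute the signature and compare in constant time; reject mismatches
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("invalid token signature")
    return json.loads(b64url_decode(body))

secret = b"server-side-secret"
token = sign_hs256({"org": "acme", "role": "member"}, secret)
assert verify_hs256(token, secret)["role"] == "member"

# An attacker who edits the payload without re-signing must be rejected
header, _, sig = token.split(".")
forged = f"{header}.{b64url(json.dumps({'org': 'acme', 'role': 'admin'}).encode())}.{sig}"
try:
    verify_hs256(forged, secret)
    raise AssertionError("forged token accepted")
except ValueError:
    pass  # correct behavior: tampered token rejected
```

A validation flaw that skips or weakens the `compare_digest` step lets the forged token in the last block through, which is exactly the class of unauthorized access described above.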
CVE-2026-23906 — Apache Druid (Critical)
CVE-2026-23906 is a critical authentication bypass vulnerability in Apache Druid, enabling unauthorized access to sensitive data stores.
CVE-2026-0488 — SAP CRM & SAP S/4HANA (Critical)
CVE-2026-0488 is a critical code injection vulnerability affecting SAP CRM and SAP S/4HANA. An authenticated attacker can exploit improper function module calls to execute arbitrary SQL statements, potentially resulting in full database compromise.
Vulnerabilities Added to CISA KEV
CISA added 8 vulnerabilities to the KEV catalog during the reporting period. The most notable additions are covered below.
Among them are critical vulnerabilities in ZLAN Information Technology Co.'s ZLAN5143D device involving missing authentication for critical functions.
Successful exploitation could allow attackers to bypass authentication controls or reset device passwords, potentially enabling unauthorized configuration changes and interference with industrial communications. Researchers also identified internet-facing instances, increasing exposure risk.
CVE-2025-52533 — Siemens SINEC OS (Critical)
CVE-2025-52533 is a critical out-of-bounds write vulnerability in Siemens SINEC OS before version 3.3, potentially enabling memory corruption and system compromise in industrial network environments.
CVE-2026-1358 — Airleader Master (Critical)
CVE-2026-1358 is a critical, unrestricted file-upload vulnerability in Airleader Master systems. Successful exploitation could allow attackers to upload malicious files, potentially resulting in remote code execution in OT environments.
Impacted Critical Infrastructure Sectors
Analysis of the ICS advisories shows that the Critical Manufacturing and Energy sectors account for 98.9% of reported vulnerabilities, indicating concentrated exposure in these environments.
The cross-sector nature of these vulnerabilities underscores the interdependencies between Energy, Manufacturing, Transportation, Water, and Food systems.
Conclusion
The convergence of high-volume IT vulnerabilities and significant ICS exposure highlights the continued expansion of the attack surface across enterprise and industrial environments. With over 250 PoCs publicly available and multiple KEV additions confirming active exploitation, organizations must prioritize rapid remediation and risk-based vulnerability management.
Security best practices include:
Prioritizing vulnerabilities based on risk and exploit availability
Protecting web-facing and internet-exposed assets
Implementing strict IT/OT network segmentation
Deploying multi-factor authentication and strong access controls
Conducting regular vulnerability assessments and penetration testing
Monitoring underground forums and KEV updates for early warning signals
Cyble’s comprehensive attack surface management solutions help organizations continuously monitor internal and external assets, prioritize remediation, and detect early warning signals of exploitation. Additionally, Cyble’s threat intelligence and third-party risk intelligence capabilities provide visibility into vulnerabilities actively discussed in underground communities, enabling proactive defense against both IT and ICS threats.
G2, the world’s largest and most trusted software marketplace, has recognized ANY.RUN among the Best Software Companies.
The ranking is based on verified reviews from organizations actively using ANY.RUN’s solutions. It reflects the company’s strong international presence and measurable impact across global cybersecurity markets.
Thank You to Our Community
Recognition on G2’s Top 50 Best Software Companies list is a reflection of peer validation, powered by customer reviews and feedback. We are very grateful to all analysts, SOC teams, and experts whose insights and evaluations contributed to the ranking.
For ANY.RUN, entering the G2 ranking is a milestone, not a finish line. We will continue to invest in product innovation, community-driven improvements, and measurable outcomes for security operations worldwide.
Impact with ANY.RUN: Customer-Reported Outcomes
ANY.RUN optimizes SOC workflows across processes
ANY.RUN delivers measurable operational value to security teams with demanding workloads and strict SLAs. Among results reported by our customers are 50%+ reduction in investigation & IOC extraction time and 30–55% fewer irrelevant escalations.
Beyond the metrics, ANY.RUN's rising position in software rankings is driven by its ability to solve operational challenges across the SOC lifecycle:
Unified SOC Workflow: ANY.RUN delivers solutions that support processes from monitoring to triage and incident response in a single ecosystem, enabling investigation without switching tools.
Accelerated Decision-Making: Interactive malware analysis combined with contextual threat data provides immediate behavioral insight and evidence.
Solved SOC and MSSP Challenges: Standardized workflows and integrated intelligence enable efficient operations at scale, closing gaps in work processes.
ANY.RUN: one workflow to cover all SOC needs.
Upgrade to enterprise-grade solutions today.
Trusted by the World’s Most Demanding Organizations
We support analysts in accelerating investigations, reducing risk, and improving operational outcomes across industries. Among the 15,000 SOC teams using our solutions are 3,102 IT & technology companies, 1,778 financial institutions, 1,059 government entities, and 919 healthcare providers.
The results companies get when using ANY.RUN in their security operations
ANY.RUN is used broadly by organizations with high security requirements, including the world’s largest enterprises:
74% of Fortune 100 companies rely on ANY.RUN for malware analysis and threat investigation workflows.
64% of Fortune 500 companies incorporate ANY.RUN into broader threat detection and response strategies.
“We just stopped losing time to uncertainty. Now we can confirm what’s happening faster and escalate only when it actually makes sense.”
ANY.RUN has become an integral component of modern security operations, enabling teams to make faster, more confident decisions across Tier 1, Tier 2, and Tier 3. It integrates seamlessly into existing workflows and reinforces the full investigation lifecycle from initial validation to in-depth analysis and continuous threat monitoring.
By exposing real attacker behavior, enriching investigations with critical context, and ensuring detections reflect the evolving threat landscape, ANY.RUN helps SOC teams reduce alert fatigue, accelerate response times, and minimize operational impact.
Today, more than 600,000 security professionals and 15,000 organizations worldwide rely on ANY.RUN to streamline triage, reduce unnecessary escalations, and stay ahead of constantly shifting phishing and malware campaigns.
G2 Recognizes ANY.RUN Among the Top 50 Best Software Companies in the Region — published February 19, 2026.
I believe we’re witnessing the most significant event India has ever experienced. The nation stands at the cusp of a major global shift, and I want to share why I’m so bullish about India’s role in the AI revolution—and the critical security challenges we must address together.
India: Right Place, Right Time
No country will prosper without significantly advancing its AI capabilities, and India is uniquely positioned to lead this transformation. We've already pioneered the entire FinTech ecosystem, processing payments for more than half a billion people globally. This foundation puts India at the perfect intersection of technological capability and market opportunity to ride the AI wave.
At the same time, scale brings responsibility. As AI becomes embedded across financial systems, digital public infrastructure, enterprise workflows, and citizen services, the attack surface expands alongside innovation. If India is to lead the AI revolution, we must lead in securing it as well.
Cyble’s Commitment to India’s AI Future
At Cyble, we’re incredibly excited to invest and continue growing our AI capabilities from India—from infrastructure to applications to talent. We’re not just talking about supplying talent to the world; we’re building core infrastructure, services, and capabilities right here. That’s why we’ve invested millions of dollars and will continue doing so. India’s potential extends far beyond being a service provider—we’re becoming a global AI powerhouse.
Beenu Arora, Co-Founder & CEO, Cyble, speaking during the session “Responsible AI at Scale: Governance, Integrity, and Cyber Readiness for a Changing World” at the India AI Impact Summit 2026.
As we build, I am also conscious that AI is not just another infrastructure layer. It is increasingly a cognitive system — capable of reasoning, contextual learning, and autonomous decision-making. That means it must be secured differently. Protecting AI systems requires thinking beyond traditional perimeter defenses and anticipating new risk categories such as model manipulation, data poisoning, prompt injection, AI-assisted reconnaissance, and sensitive data leakage.
The AI Security Challenge: A New Battlefield
But let me be candid about the challenge ahead. AI has fundamentally changed the game—it’s a massive structural shift. The threat landscape has evolved dramatically:
The Democratization of Cyber Attacks
What once took hours to execute—a basic phishing attack—now happens at scale with high contextual accuracy and perfect timing.
AI agents continuously monitor user activities on LinkedIn and social media, knowing exactly who you are, what interests you, and who you communicate with.
We’re seeing over 100,000 deepfake videos being created. With apps like Grok, anyone can generate a convincing deepfake in just 60 seconds.
I’ve seen this shift firsthand.
Three years ago, a member of my leadership team received a WhatsApp call that convincingly mimicked my voice and requested a financial transaction. It was a deepfake attempt. We identified it only after careful scrutiny.
At the time, such attacks were considered sophisticated and relatively rare.
Recently, my eight-year-old son wrote a simple program that deepfaked my own mother.
The point is not novelty. It is accessibility.
What once required specialized expertise and resources is now democratized. Consumer-grade AI systems can generate convincing synthetic audio with minimal effort. The barrier to entry has collapsed. Cybercrime is being industrialized.
Phishing has entered a new era as well. For decades, phishing attempts were often detectable through poor grammar, awkward phrasing, or generic messaging. That signal has largely disappeared. AI-driven agents now scrape publicly available information, analyze behavioral patterns, and craft highly personalized messages tailored to specific individuals and roles. These agents continuously learn, retain context, and refine their attacks. Precision has replaced volume as the dominant strategy.
The Defender’s Dilemma
AI is already democratized. Bad actors have access to the same technologies as defenders. This fight will be relentless. I believe attackers will initially gain the upper hand because AI systems weren’t designed with security in mind from the beginning.
Consider this: $4.6 trillion has been invested in building AI infrastructure, applications, and toolkits. Security, as always, is catching up.
Beyond social engineering, AI is influencing technical intrusion methods as well. AI systems are increasingly capable of identifying and chaining vulnerabilities across systems, discovering weaknesses with notable efficiency. In controlled environments, AI-assisted approaches have demonstrated the ability to map exploit pathways faster than traditional methods. This compresses the time between vulnerability discovery and exploitation, shrinking defensive response windows and amplifying attacker efficiency.
AI is not simply another tool in the attacker’s arsenal. It is a multiplier.
And while organizations rapidly integrate AI into customer experiences, analytics platforms, and internal decision-making systems, security investments do not always scale proportionately.
AI is often treated as infrastructure rather than as a cognitive system requiring dedicated protection mechanisms. This creates exposure across model integrity, training data pipelines, inference layers, and external integrations.
The enterprise attack surface is expanding — and becoming more intelligent.
Hope on the Horizon
Despite these challenges, I’m optimistic. As defenders gain access to the right governance frameworks and infrastructure, we’ll be positioned to make these systems better and safer for everyone. This is exactly why Cyble exists—to bridge that gap and protect organizations in this new AI-driven world.
Defending against AI-driven threats requires more than traditional controls. It requires continuous external threat intelligence, early detection of impersonation campaigns, dark web visibility into emerging AI-enabled tactics, proactive attack surface management, and context-aware anomaly detection.
The race is on, and India is ready to lead not just in AI innovation but in AI security. The question isn’t whether we’ll rise to this challenge—it’s how quickly we can mobilize our talent, infrastructure, and innovation to secure the AI future.
About the Author
Beenu Arora is the Co-Founder and CEO of Cyble, a leading AI-powered threat intelligence company investing heavily in India’s cybersecurity and AI infrastructure.
India's AI Revolution: Why This Is India's Most Significant Moment — published February 19, 2026.
We’ve written time and again about phishing schemes where attackers exploit various legitimate servers to deliver emails. If they manage to hijack someone’s SharePoint server, they’ll use that; if not, they’ll settle for sending notifications through a free service like GetShared. However, Google’s vast ecosystem of services holds a special place in the hearts of scammers, and this time Google Tasks is the star of the show. As per usual, the main goal of this trick is to bypass email filters by piggybacking the rock-solid reputation of the middleman being exploited.
What phishing via Google Tasks looks like
The recipient gets a legitimate notification from an @google.com address with the message: "You have a new task". Essentially, the attackers are trying to give the victim the impression that the company has started using Google's task tracker and that, as a result, they must immediately follow a link to fill out an employee verification form.
To deprive the recipient of any time to think about whether this is actually necessary, the task usually includes a tight deadline and is marked as high priority. Upon clicking the link within the task, the victim is presented with a URL leading to a form where they must enter their corporate credentials to "confirm their employee status". These credentials, of course, are the ultimate goal of the phishing attack.
How to protect employee credentials from phishing
Of course, employees should be warned about the existence of this scheme — for instance, by sharing a link to our collection of posts on the red flags of phishing. But in reality, the issue isn’t with any one specific service — it’s about the overall cybersecurity culture within a company. Workflow processes need to be clearly defined so that every employee understands which tools the company actually uses and which it doesn’t. It might make sense to maintain a public corporate document listing authorized services and the people or departments responsible for them. This gives employees a way to verify if that invitation, task, or notification is the real deal. Additionally, it never hurts to remind everyone that corporate credentials should only be entered on internal corporate resources. To automate the training process and keep your team up to speed on modern cyberthreats, you can use a dedicated tool like the Kaspersky Automated Security Awareness Platform.
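The "registry of authorized services" idea can be made concrete with a trivial lookup. The sketch below is illustrative only: the domains and owning teams are hypothetical, and in practice the registry would live in a maintained corporate document or directory, not in code. The point is that any notification from a service not on the list (such as a Google Tasks email at a company that never adopted Google Tasks) should be treated as suspect and verified with the listed owner.

```python
# Hypothetical registry of services the company actually uses,
# mapping notification-sender domains to the owning team.
APPROVED_SERVICES = {
    "sharepoint.example.com": "IT Infrastructure",
    "jira.example.com": "Engineering Ops",
}

def lookup_service(sender_address: str):
    """Return the owning team if the sender's domain belongs to an
    approved service, otherwise None (treat the message as suspect)."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return APPROVED_SERVICES.get(domain)

assert lookup_service("noreply@jira.example.com") == "Engineering Ops"
# The company in this scenario never adopted Google Tasks, so a
# genuine-looking @google.com notification still warrants verification:
assert lookup_service("notify@google.com") is None
```

Even a check this simple gives employees an unambiguous first question to ask: is this service on our list, and who owns it?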
Beyond that, as usual, we recommend minimizing the number of potentially dangerous emails hitting employee inboxes by using a specialized mail gateway security solution. It’s also vital to equip all web-connected workstations with security software. Even if an attacker manages to trick an employee, the security product will block the attempt to visit the phishing site — preventing corporate credentials from leaking in the first place.
Phishing via Google Tasks | Kaspersky official blog — published February 19, 2026.
Every security alert represents a decision point. Act too slowly, and a threat becomes a breach. Act without context, and analysts drown in noise. At the center of both failure modes is a single, often underestimated process: alert enrichment.
Key Takeaways
Alert enrichment is the operational multiplier. Its quality determines the effectiveness of every other SOC investment — detection tools, SIEM rules, and analyst headcount all underperform when enrichment is slow or fragmented.
Manual enrichment is a structural problem, not a skills problem. Even experienced analysts lose 20–30 minutes per alert to fragmented, multi-platform investigations.
Static intelligence and live behavioral analysis cover different failure modes. Threat Intelligence Lookup handles known indicators at speed. The Interactive Sandbox handles the unknown with depth.
Enrichment improvements are directly measurable in business terms. MTTD, MTTR, false positive rate, and analyst retention are all affected by enrichment quality.
The Seconds That Define a Breach
Alert enrichment is the practice of layering contextual intelligence onto raw security alerts (IP reputation, domain history, file behavior, attacker TTPs) so that analysts can make fast, accurate decisions. It sounds operational. But its downstream effects are deeply strategic: mean time to respond, analyst capacity, false-positive rates, and ultimately, whether the security function is perceived as a cost center or a competitive asset.
For the business, the difference is simple: enriched alerts lead to faster containment and fewer incidents. Poorly enriched alerts lead to delays, escalations, and avoidable losses.
From Raw Alerts to Actionable Decisions
Alert enrichment sits at the crossroads of detection, analysis, and response. It connects telemetry from SIEM, EDR, email security, and network controls with external and internal context such as indicators, attacker behavior, infrastructure, and historical activity.
When enrichment works well:
Tier 1 analysts understand what they are seeing;
Tier 2 can quickly validate intent and scope;
Tier 3 focuses on root cause and prevention, not data gathering.
Considering business objectives, effective enrichment directly affects:
Mean time to triage and respond,
Incident escalation rates,
Analyst productivity and burnout,
Cost of incidents and downtime,
Confidence in SOC reporting.
In short, alert enrichment defines how efficiently security investments translate into risk reduction.
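The core enrichment step is conceptually simple: take a raw alert and layer context onto it before a human sees it. The sketch below is a minimal illustration, not any vendor's implementation; the reputation store, field names, and priority rule are all invented for the example, and in a real pipeline the lookup would be an API call to an intelligence platform rather than a dictionary.

```python
from datetime import datetime, timezone

# Hypothetical static context store standing in for a threat
# intelligence feed (203.0.113.0/24 is a documentation-only range).
REPUTATION = {
    "203.0.113.7": {"verdict": "malicious", "family": "stealer",
                    "last_seen": "2026-02-18"},
}

def enrich_alert(alert: dict) -> dict:
    """Layer contextual intelligence onto a raw alert so triage can
    prioritize on evidence instead of guesswork."""
    context = REPUTATION.get(alert.get("src_ip"), {"verdict": "unknown"})
    return {
        **alert,
        "enrichment": context,
        # Toy priority rule: known-malicious jumps the queue,
        # everything else goes to analyst review.
        "priority": "high" if context["verdict"] == "malicious" else "review",
        "enriched_at": datetime.now(timezone.utc).isoformat(),
    }

raw = {"id": "A-1042", "src_ip": "203.0.113.7", "rule": "outbound-beacon"}
enriched = enrich_alert(raw)
assert enriched["priority"] == "high"
assert enriched["enrichment"]["family"] == "stealer"
```

When this step runs automatically before the alert reaches a queue, Tier 1 starts from a verdict and context rather than from a bare IP address.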
Leadership increasingly demands that security spend be justified in operational terms. Alert enrichment is one of the most concrete levers available. It is measurable, improvable, and its effects cascade through the entire security operation. Organizations that treat it as a background task, rather than a core process deserving investment and optimization, consistently underperform on every metric that matters.
Without behavioral evidence, analysts often guess severity.
The business consequences of poor enrichment practices compound over time. The most direct impact is an extended breach window. Organizations with slow enrichment workflows consistently show longer dwell times before threat detection and containment.
Beyond breach economics, there are workforce consequences. Analyst teams experiencing enrichment bottlenecks burn out faster, make more errors under time pressure, and escalate inappropriately.
Finally, poor enrichment undermines executive reporting. When MTTR and false positive rates are poor, security teams struggle to demonstrate value to the board. This erodes confidence in the function and creates pressure for headcount reductions at precisely the moment when operational capacity is already strained.
Transforming Alert Enrichment into a Business-Aligned Efficiency Driver
The path from dysfunctional enrichment to a streamlined, high-performance process runs through threat intelligence. High-performing SOCs enrich alerts with two types of validation:
Historical attack data,
Live behavioral analysis.
Live sandbox analysis of a WannaCry malware sample
ANY.RUN offers two distinct but deeply complementary capabilities that, together, cover the full spectrum of SOC enrichment needs: the Interactive Sandbox for live behavioral analysis of unknown threats, and Threat Intelligence Lookup for instant, structured context on known indicators.
Quick verdict on a domain: active, malicious, Lumma stealer-associated
Understanding each one, and how they interconnect, is key to applying them effectively across SOC tiers. With intelligence-backed and behavior-validated enrichment:
The SOC shifts from reactive investigation to structured decision-making.
Interactive Sandbox: Live Analysis When Intelligence Doesn’t Exist Yet
The ANY.RUN Interactive Sandbox is a cloud-based malware analysis environment that executes suspicious files and URLs and captures every aspect of their behavior in real time. It allows analysts to interact with the execution: clicking through installer dialogs, entering credentials on a phishing page, and following multi-stage execution chains.
In this sample, a QR code hidden in a phishing email leads to a CAPTCHA-protected page and then to a fake Microsoft 365 login designed to steal credentials. The sandbox detonates the full chain, reveals the phishing infrastructure, and confirms credential theft behavior in seconds.
A sandbox session generates a rich analytical output that strengthens alert enrichment and supports business objectives:
Faster mean time to respond (MTTR), minimizing breach dwell time and data loss;
Reduced false positives by 35–60%, lowering analyst fatigue and operational costs;
Cost savings from prevented incidents and long-term ROI through proactive defense.
When one analyst runs a new sample, the resulting data immediately becomes available to the entire community and feeds directly into TI Lookup’s dataset.
The Interactive Sandbox is accessible via API, allowing orchestration platforms to trigger sandbox submissions automatically when incoming files or URLs meet defined criteria and to attach the resulting behavioral analysis directly to the incident ticket.
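The "defined criteria" that gate automatic submissions are worth making explicit, since detonating everything wastes sandbox capacity. The sketch below is a hypothetical gating policy, not ANY.RUN's API: the extension list, size limit, and decision rule are assumptions for illustration, and the actual submission call (endpoint, authentication, parameters) depends on the vendor's API documentation, so it is deliberately left out.

```python
# Hypothetical gating policy for an orchestration playbook deciding
# which incoming artifacts to auto-detonate in a sandbox.
SUBMIT_EXTENSIONS = {".exe", ".dll", ".js", ".lnk", ".iso", ".one"}
MAX_SIZE = 100 * 1024 * 1024  # skip anything over 100 MB (assumed limit)

def should_detonate(filename, size, known_verdict):
    """Detonate only unknowns of a risky type within size limits.
    Indicators with an existing verdict are cheaper to resolve via a
    reputation/intelligence lookup than via a fresh sandbox run."""
    if known_verdict is not None:   # already covered by intelligence
        return False
    if size > MAX_SIZE:
        return False
    return any(filename.lower().endswith(ext) for ext in SUBMIT_EXTENSIONS)

assert should_detonate("invoice.iso", 2_000_000, None)
assert not should_detonate("invoice.iso", 2_000_000, "malicious")
assert not should_detonate("notes.txt", 1_000, None)
```

In a SOAR playbook, a `True` result would trigger the API submission and the resulting behavioral report would be attached to the incident ticket, as described above.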
Turn alert enrichment into a measurable performance driver: combine real attack intelligence with live behavioral validation.
ANY.RUN Threat Intelligence Lookup: Structured Context at Investigation Speed
Threat Intelligence Lookup is a search-driven intelligence platform built specifically to support the investigative and enrichment needs of SOC analysts. It centralizes structured, current intelligence in a single queryable interface.
The platform aggregates data from ANY.RUN's Sandbox. Analysts can query by more than 40 parameters, including IP address, domain, URL, file hash, YARA rule, and MITRE ATT&CK technique, and receive structured, actionable results in seconds.
Here we can see an actionable verdict on a domain that triggered alerts: it is malicious, associated with Lumma stealer, and spotted in recent attacks mostly targeting the telecom, IT, and healthcare sectors across Europe.
TI Lookup answers the question: have we (or has anyone in the security community) seen this indicator before, and what do we know about it? The Interactive Sandbox answers the question: what does this artifact do when it runs, right now, in a real environment?
Just switch to the “Analyses” tab in TI Lookup results to see a selection of fresh malware samples featuring the artifact in question and to view analyses for full attack chains, IOCs and TTPs.
Sandbox sessions featuring an indicator found in TI Lookup, showing malware behavior
Both capabilities are designed for operational integration. TI Lookup is accessible via a web interface for direct analyst use and via API for integration into SIEM, SOAR, and ticketing platforms, enabling automated pre-enrichment of alerts before they reach a human reviewer.
Enhances detection accuracy and reduces false positives;
Cuts investigation time and effort, boosting SOC productivity and minimizing breach impacts;
Supports compliance and employee training with rich, pre-processed data on malware behaviors and trends.
One Process, Organization-Wide Impact
Alert enrichment is not an isolated activity that affects only the analyst who performs it. It sits at the center of the SOC’s operational cycle, and its efficiency (or inefficiency) propagates through every tier and every metric. When enrichment is slow, fragmented, or dependent on stale intelligence, every downstream process suffers: triage is less accurate, investigation takes longer, containment is slower, and leadership receives metrics that tell a story of organizational underperformance.
By integrating TI Lookup and the Interactive Sandbox into the enrichment workflow, organizations address the root cause of this underperformance. Together, these capabilities cover the full surface area of enrichment need: instant structured context for known indicators, and live behavioral evidence for the unknown. Known indicators are handled at speed; unknown threats are exposed in depth. Neither replaces an analyst's judgment; both elevate it within the analyst's existing workflows.
When enrichment velocity increases, the key metrics that define SOC value to the business improve in tandem: MTTD drops because contextual data enables faster threat recognition; MTTR drops because analysts spend less time on data collection and more time on decision-making; false positive rates fall because richer context enables more accurate triage; and analyst capacity increases because the same team can handle greater alert volume without compromising quality.
Conclusion: Enrichment as the Multiplier
Alert enrichment defines whether a SOC operates reactively or strategically. When alerts are supported by real attack intelligence and validated through dynamic analysis, analysts stop guessing and start deciding.
Move from reactive alert handling to evidence-backed decision-making: empower your SOC with the synergy of TI Lookup & Sandbox.
ANY.RUN’s Threat Intelligence Lookup and Interactive Sandbox together provide both precedent and proof. And when enrichment is grounded in both, security becomes faster, clearer, and more aligned with business objectives.
About ANY.RUN
ANY.RUN is part of modern SOC workflows, integrating easily into existing processes and strengthening the entire operational cycle across Tier 1, Tier 2, and Tier 3.
Today, more than 600,000 security professionals and 15,000 organizations rely on ANY.RUN to accelerate triage, reduce unnecessary escalations, and stay ahead of evolving phishing and malware campaigns.
To stay informed about newly discovered threats and real-world attack analysis, follow ANY.RUN’s team on LinkedIn and X, where weekly updates highlight the latest research, detections, and investigation insights.
FAQ
What is alert enrichment in a SOC?
Alert enrichment is the process of adding contextual and behavioral information to security alerts to enable accurate prioritization and faster response.
Why is enrichment critical for business outcomes?
Because it affects response time, escalation rates, analyst workload, and ultimately the cost and impact of security incidents.
How does Threat Intelligence Lookup support alert enrichment?
It provides real-world attack context, linking indicators to malware families, techniques, and infrastructure observed in live campaigns.
How does Interactive Sandbox improve enrichment quality?
It allows analysts to safely detonate suspicious artifacts and observe real-time execution behavior, reducing uncertainty and guesswork.
Why combine Lookup and Sandbox instead of using only one?
Lookup provides historical evidence. Sandbox provides live behavioral proof. Together, they reduce false positives, accelerate investigations, and improve SOC-wide efficiency.
A Cisco Talos researcher worked around the limitations of hardware-level Read-out Protection (RDP) on the Socomec DIRIS M-70 gateway by pivoting from physical debugging to a "good enough" emulation approach.
By focusing on emulating only the single thread responsible for Modbus protocol handling rather than the entire system, the author demonstrates how a streamlined emulation strategy can effectively surface vulnerabilities in complex industrial Internet of Things (IoT) devices.
The post highlights the integration of the Unicorn Engine and AFL for coverage-guided fuzzing, as well as the use of the Qiling framework to visualize code coverage and perform root cause analysis on crashes.
This research led to the discovery of six CVEs related to denial-of-service vulnerabilities, all of which have been patched by the manufacturer through Cisco’s Coordinated Disclosure Policy.
This blog describes efforts at emulating functionality of the Socomec DIRIS M-70 gateway to discover vulnerabilities. In vulnerability research, knowing which tool to use for the job at hand is crucial. This post will highlight multiple emulation tools and approaches used, detail the benefits and drawbacks of each, and reveal how a “good enough” approach can really pay off.
Project background
The M-70 gateway facilitates data communication over both RS485 and Ethernet networks, supporting a wide array of industrial communication protocols, including Modbus RTU, Modbus TCP, BACnet IP, and SNMP (v1, v2, and v3). This gateway is vital for energy management in sectors like critical infrastructure, data centers, healthcare, and the general energy sector. However, as an industrial Internet-of-Things (IIoT) device, vulnerabilities in the M-70 or similar gateways can lead to severe consequences, including operational disruption, financial losses, and manipulation of industrial processes. These risks are severe, especially in critical infrastructure where a compromised gateway could lead to widespread outages or equipment damage.
This large attack surface, the impact of vulnerabilities, and the fact that the M-70 gateway runs the real-time operating system (RTOS) µC/OS-III, made it an attractive research target. There was an expectation that prior familiarity with this RTOS, gained through previous work, would offer an advantage in understanding the device’s intricacies.
Why emulate? The debugging roadblock
Having insight into the system is critical to performing root cause analysis of any discovered vulnerabilities. Ideally, one would have real hardware and the ability to debug the software running on that hardware. The presence of an unpopulated JTAG header on the board was an exciting initial find.
Figure 1. Unpopulated JTAG header.
However, the presence of a JTAG header does not always guarantee debug access. There are a variety of reasons for this, but in the case of the M-70 gateway, code read-out protection (RDP) Level 1 is enabled. This is a feature of STM32 microcontrollers, which provides flash memory protection. There are three possible levels (0 – 2) of this protection. Level 1 prevents flash memory reads while debugger access is detected (e.g., JTAG). When attached via JTAG, no access to Flash memory is permitted, essentially preventing debugging of the running software. The intention behind this protection is to prevent third parties (like myself) from dumping the contents of flash via JTAG.
This was bad news. It was not possible to step through the code processing malicious network messages to determine the cause of device disruption. The address for the $pc register (see Figure 2) indicates that the MCU has entered a core lock-up state.
Figure 2. RDP Level 1 debug output.
In this project, two significant opportunities arose regarding code and memory access. First, an unencrypted firmware update file was available, providing the code that would be written to flash and eliminating the need to read it directly from memory. Second, RDP Level 1 still permits access to SRAM while a debugger is attached (see Figure 2), which made it feasible to dump the contents of SRAM during the device’s execution and capture a snapshot of dynamic data.
Figure 3. RAM dumping script.
While it was not possible to have fine-grained control over the processor’s state when dumping the SRAM contents, some influence could be exerted (e.g., opening a TCP connection with the device before dumping the SRAM contents). The objects and data created as a result of this connection would be present when the CPU was halted for the SRAM dump.
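Figure 3 shows the author's RAM dumping script. The core of any such dumper is simply iterating the SRAM region in probe-sized chunks; a minimal stdlib Python sketch of that chunking logic (the probe read itself is left as a hypothetical callback, since the real transport depends on the debug adapter) might look like:

```python
def plan_sram_reads(base: int, size: int, chunk: int = 0x400):
    """Split an SRAM region into (address, length) read requests.

    Debug probes typically cap single-transfer sizes, so a dump script
    issues many small reads and concatenates the results.
    """
    return [(base + off, min(chunk, size - off))
            for off in range(0, size, chunk)]

def dump_sram(read_mem, base=0x20000000, size=0x20000):
    """read_mem(addr, length) -> bytes is a hypothetical probe callback."""
    return b"".join(read_mem(addr, length)
                    for addr, length in plan_sram_reads(base, size))
```

Here 0x20000000 is the standard Cortex-M SRAM base; the actual size to dump depends on the specific STM32 part in the M-70.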
Emulating with Unicorn
Emulation is one solution to this inability to debug the software natively. If the processing code of interest can be emulated, it is possible to gain visibility into the effects of a malicious message on the state of the M-70. When emulating software, it’s important to recognize that the emulated code might not behave exactly like it would on the physical device. Full system emulation aims to mitigate this by mimicking device behavior as closely as possible, but it requires deep knowledge of system internals and significant development to accurately emulate peripherals. The focus for this project was on vulnerabilities within the Modbus protocol handling code, which ran in a single thread of the M-70 application. Rather than spending the time required for full system emulation, the decision was made to emulate only the Modbus thread. Admittedly, emulating this single thread would not be true to the device’s real-world operation. However, this deliberate time trade-off was made with the hope that it would still be “good enough” to find vulnerabilities in the Modbus protocol handling code.
The first step in this process involved utilizing the Unicorn Engine, a powerful CPU emulation framework supporting various architectures. It provided the core capability to run the Modbus thread’s code in a controlled software environment where I could then inspect the system state when processing network data.
The emulator was implemented with an entry point in the Modbus processing thread, positioned after network data had been received. Before emulating this code, the argument registers $r2 and $r3, which originally contained a pointer to the network data and its length, were modified to reference data originating from the emulator, along with its corresponding length. Once the argument registers were updated, emulation could begin and continue until that thread returned from the message processing function.
The need to fuzz
Manual inspection of network processing code is sometimes sufficient; however, this Modbus thread supports over 700 unique message types, defined by supported register values and a field referred to as the service identifier. The combination of these two values within a Modbus message influenced the code path of data processing, and with so many code paths to investigate, automation was clearly necessary.
Figure 4. Register values and service identifier.
Unicorn’s AFL integration made it simple to fuzz using the emulator, automatically exploring these many execution paths. AFL uses coverage-guided test case generation to maximize the number of different code paths explored. This tool provided precisely the type of automation that was necessary. Integrating AFL fuzzing into the Unicorn script was simple, requiring only the addition of the place_input_callback function and a call to unicorn_afl_fuzz (see Figure 5).
Figure 5. Unicorn AFL integration.
Triage and debugging
With fuzzing came crashes, and the next step was to triage those crashes to perform root cause analysis. Typically, a debugger would be the go-to tool for this job; however, because execution was performed through emulation, GDB didn’t “just work” out of the box. A tool compatible with the Unicorn framework’s internal CPU representation was required. Conveniently, a tool called udbserver does exactly that. Udbserver is a plugin for the Unicorn Engine that enables debugging of Unicorn-emulated code within GDB. This tool worked as advertised and allowed remote GDB connections to the emulated code. Only one line is required to add udbserver support to a Unicorn emulator: udbserver.udbserver(mu, 1234, 0x80fede0), called before emu_start.
Qiling framework: Adding code coverage to the mix
Observing code coverage visually is another important part of any fuzzing campaign. It helps identify unexplored paths and provides insights for root cause analysis by comparing coverage between test cases. The need for this feature prompted an investigation into the Qiling framework. Described as a full system emulator, it also supports debugging and code coverage output. Could Qiling emulate only a single thread rather than the whole system? It would be wonderful to benefit from its features without having to spend the time implementing full system emulation.
The Qiling framework is based on Unicorn, so it was likely that the Unicorn script could be easily ported to Qiling. Figure 6 shows the API changes between the Unicorn Engine and the Qiling framework.
Figure 6. Unicorn to Qiling API changes.
It wasn’t clear from existing examples in the Qiling codebase whether single-thread emulation was possible. After some investigation and small modifications to two components, the blob loader and the blob OS, it became feasible to emulate just this single thread rather than the whole system. Those code changes have been integrated into the development branch of Qiling on GitHub. A little monkey patching of the emulation script was also required to output the coverage data with accurate metadata for use in visualization tools like bncov or Lighthouse. You can see an example of this in action in the Qiling repository.
This code coverage feature turned out to be more useful than originally expected. Code coverage data from multiple test inputs was compared to identify points at which their execution paths diverged. This approach facilitated rapid identification of the root causes of the crashes generated by AFL.
Figure 7. Code coverage visualization with bncov.
Vulnerability highlight
This fuzzing campaign led to the discovery of multiple Modbus messages that would cause a denial of service within the device and resulted in six CVEs. You can read those vulnerability reports here: TALOS-2025-2248 (CVE-2025-54848 – CVE-2025-54851), TALOS-2025-2251 (CVE-2025-55221, CVE-2025-55222).
All the discussed vulnerabilities have been reported to the manufacturers in accordance with Cisco’s Coordinated Disclosure Policy. Each of these vulnerabilities in the affected products has been patched by the corresponding manufacturer.
For SNORT® coverage that can detect the exploitation of these vulnerabilities, download the latest rulesets from Snort.org.
Conclusion
In the future, Qiling will be my go-to from the start of an emulation project. Its high-level debugging and code coverage features make it a stand-out tool. However, if all you need is the ability to debug your scripts, udbserver is an easy solution that you can use with your existing Unicorn scripts as-is. Remember, “good enough” emulation is sometimes all that is needed to achieve impactful vulnerability discovery.
“Good enough” emulation: Fuzzing a single thread to uncover vulnerabilities (admin, 2026-02-18)
Malware campaigns targeting Latin America (LATAM) are evolving. While the final payloads, often commodity RATs like XWorm, remain consistent, delivery mechanisms are becoming increasingly sophisticated to bypass region-specific defenses and increase the chance of reaching real business users.
In this analysis, we dissect a recent campaign targeting Brazilian users. What starts as a deceptive “banking receipt” quickly turns into a multi-stage infection chain that leverages steganography, Cloudinary abuse, and a dedicated .NET persistence module designed to bypass traditional schtasks monitoring, reducing early visibility for security teams and prolonging dwell time.
Complete infection chain from WScript execution to CasPol injection
Key Takeaways
Built to blend into finance workflows: A “receipt” lure is optimized for real corporate inboxes and shared drives across LATAM.
High click potential in real operations: Payment and receipt themes map to everyday processes, which raises the chance of execution on work machines.
The chain is designed to stay quiet: WMI execution, fileless loading, and .NET-based persistence reduce early detection signals and increase dwell time.
One endpoint can become an identity problem: XWorm access can lead to credential/session theft and downstream compromise of email, SaaS, and finance systems.
Trusted services and binaries are part of the evasion: Cloud-hosted payload delivery and CasPol.exe abuse help the activity blend in.
Early detection is an operational advantage: Better monitoring + faster triage + regional hunting can keep this attack from escalating into fraud, data exposure, or ransomware.
Stage 1: The JavaScript Dropper
This campaign begins with a classic but effective technique aimed at Brazilian users: a malicious file masquerading as a bank receipt (“Comprovante-Bradesco…”). While it abuses the double-extension trick (.pdf.js) to look like a document, it is, in reality, a Windows Script Host (WSH) dropper designed for direct execution.
The file tries to masquerade as a PDF document with a fake extension to deceive the user.
Although the file size is unusually large (~1.2MB) for a simple script, this is intentional. The attackers padded it with junk data to inflate entropy and evade static analysis scanners that may skip larger files, helping the lure pass through initial controls and delaying detection.
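Because the padding trick plays on scanners' size and entropy heuristics, a quick triage step is to measure the file's byte entropy yourself. A minimal stdlib sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 = constant data, 8.0 = uniform)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())
```

A 1.2 MB .js file whose entropy sits far from that of typical JavaScript source is worth a closer look either way, whether padded with low-entropy junk or high-entropy noise.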
Analyzing the Obfuscated JavaScript
Upon opening the file, there’s no readable code. Instead, the script uses heavy obfuscation via Unicode “junk injection.” The malicious logic is buried inside massive string variables packed with emojis, homoglyphs, and other non-ASCII characters.
Heavily obfuscated code using Unicode characters and emojis.
As seen above, the script uses a delimiter-based reconstruction method. Rather than relying on complex cryptography, it applies a simple .replace() function at runtime to strip away the injected Unicode noise (the delimiters) and reconstruct the payload.
Deobfuscation and Payload Extraction
To understand the dropper’s intent, we replicated the deobfuscation logic using CyberChef. By stripping the specific Unicode delimiters and decoding the resulting Base64 and UTF-16LE text, we revealed the core payload.
Using CyberChef to strip Unicode delimiters and reveal the PowerShell command
The deobfuscated payload confirms that this is a pure dropper. It constructs a PowerShell command responsible for downloading the next stage.
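The same strip-decode pipeline is easy to reproduce in stdlib Python. The delimiter and command below are illustrative stand-ins for the sample's emoji/homoglyph noise, not values from the real dropper:

```python
import base64

def deobfuscate(blob: str, noise: str) -> str:
    """Strip injected Unicode delimiters, then Base64 + UTF-16LE decode."""
    cleaned = blob.replace(noise, "")
    return base64.b64decode(cleaned).decode("utf-16-le")

# Build a toy obfuscated blob the way the dropper's author would:
command = "powershell -NoProfile -w hidden"           # illustrative payload
encoded = base64.b64encode(command.encode("utf-16-le")).decode()
blob = "\u2714".join(encoded)                         # noise between every char
assert deobfuscate(blob, "\u2714") == command
```

The UTF-16LE step matters because PowerShell's -EncodedCommand (and strings embedded by .NET tooling generally) are little-endian UTF-16, not UTF-8.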
An interesting aspect of this sample is how it executes the payload. Instead of using the noisier WScript.Shell.Run, it leverages WMI (Windows Management Instrumentation) via GetObject(“winmgmts:root\cimv2”) and Win32_Process.
The execution flow in ANY.RUN confirms the use of WMI to spawn PowerShell
This technique allows the attacker to set ShowWindow = 0, spawning the PowerShell process in a hidden window to avoid alerting the user. The script also implements a hardcoded Sleep(5000) delay, likely to ensure the system is ready and to bypass simplistic sandbox heuristics that expect immediate malicious behavior.
Stage 2: PowerShell, Steganography, and Argument Decoding
Upon decoding the PowerShell command launched by the JavaScript dropper, we find a script designed to act as a stealthy bridge. It performs three critical tasks: downloading a disguised resource, extracting a fileless loader (Stage 3), and preparing the configuration for the final infection.
Abusing Cloudinary for Evasion
The script initializes a `System.Net.WebClient` and sets a specific User-Agent to mimic a legitimate browser. It then reaches out to a hardcoded URL hosted on Cloudinary, a popular image hosting service.
The malware abuses legitimate infrastructure (Cloudinary) to bypass domain reputation filters
The URL is constructed at runtime using a simple replace function (.Replace(‘#’, ‘h’)) to evade static string detection. To the network perimeter, this traffic looks like a user downloading a standard JPEG image.
Steganography and In-Memory Loading
The downloaded file (optimized_MSI_lpsd9p.jpg) carries a hidden payload. The PowerShell script does not save this file to disk as an image. Instead, it reads the data stream and searches for specific markers: BaseStart- and -BaseEnd.
The Stage 3 loader is embedded within the image file boundaries
The data between these markers is a Base64-encoded .NET assembly (Stage 3). The script extracts this blob and loads it directly into memory using [Reflection.Assembly]::Load(). This “fileless” technique ensures that the Stage 3 loader never touches the hard drive, evading traditional antivirus scans.
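Carving such a payload out of the downloaded "image" is straightforward. A stdlib sketch using the BaseStart-/-BaseEnd markers described above (the marker names come from the sample; the toy JPEG bytes are illustrative):

```python
import base64

def carve_embedded_assembly(blob: bytes) -> bytes:
    """Extract and decode the Base64 blob between BaseStart- and -BaseEnd."""
    start = blob.index(b"BaseStart-") + len(b"BaseStart-")
    end = blob.index(b"-BaseEnd", start)
    return base64.b64decode(blob[start:end])

# Toy 'image': JPEG magic bytes, junk, then an embedded payload.
fake_jpg = (b"\xff\xd8\xff\xe0JUNK" +
            b"BaseStart-" + base64.b64encode(b"MZ...assembly...") +
            b"-BaseEnd" + b"TRAILER")
assert carve_embedded_assembly(fake_jpg) == b"MZ...assembly..."
```

The same function works on a live capture of the Cloudinary response, which is one way to recover Stage 3 for static analysis without ever executing it.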
Deciphering the Configuration Arguments
Before invoking the loaded assembly, the PowerShell script prepares a massive argument string (`$argsBase64`). This is where the malware’s true intent is revealed.
Deobfuscating this string (Base64 → UTF-16LE) yields a comma-separated list of parameters that control the behavior of the next stages. Most notably, the first argument appears to be a random string: ‘0hHduAjMxQjNwYTMxAjNyAjMf9mdpVXcyF2LyJmLt92YuM3byZXasJXZsV3b29yL6MHc0RHa’
Reversing and decoding the argument reveals the final payload URL
Upon closer inspection, this string is actually reversed Base64. By reversing the string order and decoding it, we uncover the URL for the final XWorm payload (Stage 4): https://voulerlivros.com.br/arquivo_20260116064120.txt
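Decoding reversed Base64 is a one-liner worth keeping in a triage toolkit. A stdlib sketch, demonstrated on an illustrative URL rather than the sample's real argument:

```python
import base64

def decode_reversed_b64(s: str) -> bytes:
    """Reverse the string, then Base64-decode it."""
    return base64.b64decode(s[::-1])

# Round-trip demonstration with an illustrative URL:
url = "https://example.test/payload.txt"
stored = base64.b64encode(url.encode()).decode()[::-1]
assert decode_reversed_b64(stored) == url.encode()
```

A useful tell: Base64 of "http" begins "aHR0", so a reversed-Base64 URL string typically ends in "0RHa", exactly as the sample's argument does.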
The other arguments confirm the injection target and installation paths:
Injection Target: CasPol (defined twice in the arguments)
Install Directory: C:\Users\Public\Downloads
Fallback URL: …/bkp
With these arguments prepared, the script invokes the Main method of the in-memory assembly, passing the configuration that drives the final phase of the attack.
Stage 3: The Persistence Module (A Dedicated .NET DLL)
Contrary to what one might expect in a simple infection chain, the payload extracted from the image file is not the XWorm RAT itself. Instead, it is a specialized VB.NET DLL designed with a single purpose: Survival.
This stage acts as a dedicated persistence module. It does not communicate with a C2, nor does it download files. Its job is to ensure that the infection survives a reboot by registering a Scheduled Task.
Most commodity malware takes the easy route: spawning cmd.exe /c schtasks /create…. This is “noisy” and easily flagged by EDRs monitoring child processes.
This sample takes a stealthier approach. It abuses the Task Scheduler Managed Wrapper, interacting directly with the Windows Task Scheduler via COM interfaces (TaskService, TaskDefinition) within the .NET framework.
The DLL bypasses schtasks.exe by using .NET APIs to register persistence directly
By doing this, the malware leaves no command-line artifacts. To a defender looking at process logs, the task appears to “materialize” without a corresponding execution command.
The Infection Loop
The persistence mechanism reveals the modular nature of this campaign. The scheduled task created by this DLL does not launch XWorm directly. Instead, it is configured to re-execute the Stage 2 PowerShell loader.
The created task ensures the PowerShell loader runs at logon, restarting the cycle
Stage 4: The XWorm Payload & CasPol Abuse
Following the configuration passed by the PowerShell loader, the final payload is retrieved from the URL https://voulerlivros…/arquivo_20260116064120.txt.
Despite the .txt extension, the content is not plain text. It is a reversed Base64 string. This lightweight obfuscation technique can still be effective against content scanners that expect standard Base64 patterns. Once reversed and decoded, the resulting binary is a .NET executable identified as XWorm v5.6.
Reversing the text file reveals the valid PE header of the XWorm payload
Living off the Land: CasPol.exe Injection
The malware does not execute as a standalone process. Instead, it injects itself into CasPol.exe (Code Access Security Policy Tool), a legitimate binary located at C:\Windows\Microsoft.NET\Framework\v4.0.30319\CasPol.exe.
The legitimate CasPol.exe binary is hollowed out to host the malicious payload
By abusing this “Living off the Land” binary (LOLBIN), the malware attempts to blend in with trusted system processes. However, in the ANY.RUN sandbox, this anomaly is immediately flagged due to the suspicious network activity originating from a trusted utility.
Cracking the Crypto (Static Analysis)
A deep dive into the payload using dnSpy reveals a critical flaw in the malware’s design. The configuration is encrypted using AES, but the implementation is weak.
Key derivation: The AES key is generated by taking the MD5 hash of the Mutex string.
Mode of operation: It uses AES-ECB (Electronic Codebook) mode.
The encryption key is derived directly from the Mutex string using MD5
Because the Mutex is hardcoded in the binary (or passed via arguments), the encryption is deterministic. This allows us to decrypt the configuration offline without needing to run the malware.
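The deterministic key derivation is trivial to replicate offline. A stdlib sketch of the key step (full decryption additionally needs an AES-ECB implementation, e.g. pycryptodome's AES.new(key, AES.MODE_ECB), omitted here to stay dependency-free):

```python
import hashlib

def xworm_aes_key(mutex: str) -> bytes:
    """XWorm derives its 16-byte AES-128 key as MD5(mutex)."""
    return hashlib.md5(mutex.encode()).digest()

key = xworm_aes_key("V2r1vDNFXE1YLWoA")   # mutex observed in this sample
assert len(key) == 16
```

Because ECB has no IV or chaining, each 16-byte ciphertext block of the configuration decrypts independently under this key, which is exactly what makes offline recovery practical.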
Splitter: <Xwormmm> (A unique fingerprint for XWorm)
Behavioral Confirmation (Dynamic Analysis)
The static findings are fully corroborated by the runtime behavior observed in ANY.RUN.
Mutex Creation: The sandbox logs show the creation of the mutex V2r1vDNFXE1YLWoA, confirming the exact seed used for our decryption.
C2 Traffic: The process CasPol.exe initiates a TCP connection to jholycf100.ddns.com.br on port 7000.
Protocol: The traffic stream contains the <Xwormmm> delimiter, matching the decrypted configuration.
Network traffic confirms the C2 destination and the custom XWorm protocol delimiter
Business Impact: What This Means for Companies
This isn’t “just another XWorm.” The risk comes from how reliably the chain can reach corporate endpoints and how quietly it can stay there. A fake receipt is the kind of lure that fits normal finance and ops workflows, and the delivery stack (WMI-spawned PowerShell, cloud-hosted content, fileless loading, and task-based persistence via .NET APIs) is built to reduce the early signals many teams depend on.
Credential and session theft → downstream compromise: Once a workstation is controlled, attackers can harvest browser sessions and credentials and pivot into email, SaaS, and finance tooling, turning a single click into an identity-driven incident.
Higher blast radius, faster: With persistence in place, the operator can take time, map the environment, and expand access, raising the likelihood of lateral movement and follow-on payloads.
Cost of delayed detection: “Lower-noise” tradecraft tends to inflate MTTR because the initial event looks benign (image download, PowerShell in the background, no obvious dropped binary), while real impact surfaces later.
Operational risk, not just endpoint risk: The outcomes aren’t limited to one infected machine. The realistic worst cases are business email compromise, fraudulent payments, data access, or ransomware staging, each with direct financial and reputational consequences.
The takeaway is simple: this kind of campaign rewards fast, evidence-based validation at the first suspicious touchpoint (script/PowerShell execution + abnormal cloud-hosted “image” responses) and strict monitoring of LOLBIN abuse (e.g., CasPol.exe producing outbound traffic). Catching it early is what keeps a workstation event from becoming a business threat.
How to Set Up Early Detection of XWorm Attacks
Early detection of XWorm usually depends on how well the SOC operational cycle is working day to day. When monitoring, triage, and threat hunting are tightly connected, commodity RAT activity is far more likely to be contained before it turns into a real business incident.
1. Monitoring: Strengthen Visibility with TI Feeds
The first signal often appears in external infrastructure or newly observed indicators. ANY.RUN’s TI Feeds help by continuously surfacing fresh XWorm-related domains, hashes, and behavioral patterns, based on telemetry and submissions coming from 15,000+ organizations and 600,000+ security professionals.
100% actionable IOCs delivered by TI Feeds to your existing stack
This makes it easier to spot suspicious activity earlier and push relevant IOCs directly into SIEM or EDR controls.
2. Triage: Validate Real Threats in Minutes
Once an alert or suspicious artifact appears, speed becomes critical.
TI Lookup provides immediate enrichment, showing reputation, related samples, network relationships, and historical context around a file, hash, or domain.
Interactive sandbox analysis allows teams to safely execute suspicious files or URLs and observe real runtime behavior, confirming XWorm activity within minutes rather than hours.
ANY.RUN’s sandbox revealing full attack chains in just 1 minute
Fast, evidence-based triage reduces uncertainty and prevents unnecessary escalation while still catching real threats early.
3. Threat Hunting: Track Active Regional Campaigns
The next step in the cycle is proactive visibility. Using structured TI Lookup queries such as threatName:”xworm” AND submissionCountry:”br”, SOC teams can surface the latest XWorm samples observed in Brazil, review delivery techniques, and pivot into related infrastructure. This makes detection logic more relevant to the current regional threat landscape, not just historical global data.
TI Lookup shows analysis sessions related to XWorm attacks observed in Brazil
When these three motions operate as a continuous cycle rather than isolated tasks, XWorm shifts from a late discovery to an early, manageable security event, reducing response time, investigation cost, and overall business risk.
This campaign highlights a clear trend in LATAM-focused malware: pairing high-volume delivery vectors with established commodity RATs. While the XWorm payload itself relies on relatively basic cryptography (AES-ECB), the overall delivery chain is built for resilience.
By combining JS/LNK delivery, Cloudinary abuse, steganography, and modular persistence (via .NET Task Scheduler APIs), the attackers have created a lower-noise infection chain that can bypass superficial defenses.
For defenders, detection opportunities exist at multiple stages:
Delivery: Monitor for LNK/JS files spawning PowerShell.
Network: Flag traffic to image hosting services (Cloudinary) where responses contain non-image headers or BaseStart markers.
Endpoint: Alert on CasPol.exe initiating outbound network connections.
About ANY.RUN
ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, fits naturally into modern SOC workflows, strengthening the day-to-day operational cycle across Tier 1, Tier 2, and Tier 3.
It supports every step of an investigation, from safely detonating suspicious files and links to see real behavior, to enriching indicators with broader context, to delivering fresh intelligence that helps teams act faster and with fewer blind spots.
Today, more than 600,000 security professionals across 15,000+ organizations use ANY.RUN to speed up triage, cut unnecessary escalations, and keep pace with fast-moving phishing and malware campaigns.
YARA – WSH JavaScript Dropper:
This rule is designed as a medium-to-high confidence hunting rule, prioritizing behavioral and structural indicators rather than brittle IOCs.
rule JS_WSH_Unicode_Padded_Dropper
{
meta:
description = "WSH JavaScript dropper with Unicode padding and repeated assignment patterns"
author = "0xOlympus"
confidence = "medium-high"
strings:
$assign = "this." ascii
$pad = {
74 68 69 73 2E 76 61 74 66 75 6C 20 2B 3D 20 22
E0 B2 92 E2 9C 96 C8 B7
}
$wsh = "Scripting.FileSystemObject" ascii nocase
condition:
/* Exclude PE files */
uint16(0) != 0x5A4D and
/* Script-sized payloads (not tiny JS snippets) */
filesize > 1000KB and
/* Must be WSH-based */
$wsh and
/* Obfuscation indicators */
(
$pad or
$assign
)
}
Key detection components:
Non-PE filtering: The check uint16(0) != 0x5A4D ensures that only script-based files are evaluated, preventing false positives on executable payloads.
File size heuristic: The condition filesize > 1000KB targets scripts that abuse entropy padding. Legitimate JavaScript files are rarely this large, especially when used as WSH droppers.
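For ad-hoc triage without a YARA install, the rule's condition can be prototyped in plain Python. This is a simplified port: the $pad hex pattern and the nocase modifier on $wsh are omitted, and $assign is reduced to a plain substring check:

```python
def matches_js_wsh_dropper(data: bytes) -> bool:
    """Rough Python port of the JS_WSH_Unicode_Padded_Dropper condition."""
    if data[:2] == b"MZ":                               # exclude PE files
        return False
    if len(data) <= 1000 * 1024:                        # script-sized, padded payloads only
        return False
    if b"Scripting.FileSystemObject" not in data:       # must be WSH-based
        return False
    return b"this." in data                             # obfuscation indicator ($assign)
```

Useful for a quick pass over a mail-gateway quarantine directory before committing the real rule to a scanning pipeline.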
YARA – Xworm 5.6 Payload:
This rule targets the final XWorm RAT binary, using protocol and cryptographic fingerprints that are stable across XWorm versions.
rule XWorm_PE_v56
{
meta:
description = "XWorm RAT v5.6 .NET payload"
author = "0xOlympus"
family = "XWorm"
version = "5.6"
confidence = "very high"
strings:
// Protocol splitter (strong family fingerprint)
$splitter = "<Xwormmm>" ascii
// Cryptographic implementation
$crypto1 = "RijndaelManaged" ascii
$crypto2 = "MD5CryptoServiceProvider" ascii
$crypto3 = "CipherMode.ECB" ascii
// Network functionality
$net1 = "System.Net.Sockets" ascii
$net2 = "NetworkStream" ascii
condition:
uint16(0) == 0x5A4D and
filesize < 5MB and
$splitter and
2 of ($crypto*) and
1 of ($net*)
}
Note: The <Xwormmm> splitter combined with AES-ECB + MD5 key derivation provides a near-unique signature for XWorm, resulting in very low false-positive risk.
LATAM Businesses Hit by XWorm via Fake Financial Receipts: Full Campaign Analysis (admin, 2026-02-17)
Everyone has likely heard of OpenClaw, previously known as “Clawdbot” or “Moltbot”, the open-source AI assistant that can be deployed on a machine locally. It plugs into popular chat platforms like WhatsApp, Telegram, Signal, Discord, and Slack, which allows it to accept commands from its owner and go to town on the local file system. It has access to the owner’s calendar, email, and browser, and can even execute OS commands via the shell.
From a security perspective, that description alone should be enough to give anyone a nervous twitch. But when people start trying to use it for work within a corporate environment, anxiety quickly hardens into the conviction of imminent chaos. Some experts have already dubbed OpenClaw the biggest insider threat of 2026. The issues with OpenClaw cover the full spectrum of risks highlighted in the recent OWASP Top 10 for Agentic Applications.
OpenClaw supports plugging in any local or cloud-based LLM, along with a wide range of integrations with additional services. At its core is a gateway that accepts commands via chat apps or a web UI and routes them to the appropriate AI agents. The first iteration, dubbed Clawdbot, dropped in November 2025; by January 2026, it had gone viral, bringing a heap of security headaches with it. In a single week, several critical vulnerabilities were disclosed, malicious skills cropped up in the skill directory, and secrets were leaked from Moltbook (essentially “Reddit for bots”). To top it off, Anthropic issued a trademark demand to rename the project to avoid infringing on “Claude”, and the project’s X account was hijacked to shill crypto scams.
Known OpenClaw issues
Though the project’s developer appears to acknowledge that security is important, this is a hobbyist project, with zero dedicated resources for vulnerability management or other product security essentials.
OpenClaw vulnerabilities
Among the known vulnerabilities in OpenClaw, the most dangerous is CVE-2026-25253 (CVSS 8.8). Exploiting it leads to a total compromise of the gateway, allowing an attacker to run arbitrary commands. To make matters worse, it’s alarmingly easy to pull off: if the agent visits an attacker’s site or the user clicks a malicious link, the primary authentication token is leaked. With that token in hand, the attacker has full administrative control over the gateway. This vulnerability was patched in version 2026.1.29.
Two further dangerous command injection vulnerabilities (CVE-2026-24763 and CVE-2026-25157) have also been discovered.
Insecure defaults and features
A variety of default settings and implementation quirks make attacking the gateway a walk in the park:
Authentication is disabled by default, so any gateway exposed to the internet accepts commands from anyone.
The server accepts WebSocket connections without verifying their origin.
Localhost connections are implicitly trusted, which is a disaster waiting to happen if the host is running a reverse proxy.
Several tools — including some dangerous ones — are accessible in Guest Mode.
Critical configuration parameters leak across the local network via mDNS broadcast messages.
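One of the defaults above, the missing WebSocket origin check, is straightforward to guard against at the gateway level. Below is a minimal sketch of such a check; the function and allowlist names are illustrative, not OpenClaw’s actual API:

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real gateway would load this from configuration.
ALLOWED_ORIGINS = {"https://gateway.example.internal"}

def origin_allowed(headers: dict) -> bool:
    """Reject WebSocket upgrades whose Origin header is absent or untrusted."""
    origin = headers.get("Origin")
    if origin is None:
        # Browsers send Origin on cross-site WebSocket upgrades; no header means no trust.
        return False
    parsed = urlparse(origin)
    normalized = f"{parsed.scheme}://{parsed.netloc}"
    return normalized in ALLOWED_ORIGINS
```

A check like this blocks the classic drive-by attack where a malicious web page opens a WebSocket to localhost and drives the gateway on the victim’s behalf.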
Secrets in plaintext
OpenClaw’s configuration, “memory”, and chat logs store API keys, passwords, and other credentials for LLMs and integration services in plain text. This is a critical threat — to the extent that versions of the RedLine and Lumma infostealers have already been spotted with OpenClaw file paths added to their must-steal lists.
Malicious skills
OpenClaw’s functionality can be extended with “skills” available in the ClawHub repository. Since anyone can upload a skill, it didn’t take long for threat actors to start “bundling” the AMOS macOS infostealer into their uploads. Within a short time, the number of malicious skills reached the hundreds. This prompted the developers to quickly ink a deal with VirusTotal to ensure all uploaded skills are not only checked against malware databases, but also undergo code and content analysis via LLMs. That said, the authors are very clear: it’s no silver bullet.
Structural flaws in the OpenClaw AI agent
Vulnerabilities can be patched and settings can be hardened, but some of OpenClaw’s issues are fundamental to its design. The product combines several critical features that, when bundled together, are downright dangerous:
OpenClaw has privileged access to sensitive data on the host machine and the owner’s personal accounts.
The assistant is wide open to untrusted data: the agent receives messages via chat apps and email, autonomously browses web pages, etc.
It suffers from the inherent inability of LLMs to reliably separate commands from data, making prompt injection a possibility.
The agent saves key takeaways and artifacts from its tasks to inform future actions. This means a single successful injection can poison the agent’s memory, influencing its behavior long-term.
OpenClaw has the power to talk to the outside world — sending emails, making API calls, and utilizing other methods to exfiltrate internal data.
It’s worth noting that while OpenClaw is a particularly extreme example, this “Terrifying Five” list is actually characteristic of almost all multi-purpose AI agents.
OpenClaw risks for organizations
If an employee installs an agent like this on a corporate device and hooks it into even a basic suite of services (think Slack and SharePoint), the combination of autonomous command execution, broad file system access, and excessive OAuth permissions creates fertile ground for a deep network compromise. In fact, the bot’s habit of hoarding unencrypted secrets and tokens in one place is a disaster waiting to happen — even if the AI agent itself is never compromised.
On top of that, these configurations violate regulatory requirements across multiple countries and industries, leading to potential fines and audit failures. Current regulatory requirements, like those in the EU AI Act or the NIST AI Risk Management Framework, explicitly mandate strict access control for AI agents. OpenClaw’s configuration approach clearly falls short of those standards.
But the real kicker is that even if employees are banned from installing this software on work machines, OpenClaw can still end up on their personal devices. This creates specific risks for the organization as a whole:
Personal devices frequently store access to work systems like corporate VPN configs or browser tokens for email and internal tools. These can be hijacked to gain a foothold in the company’s infrastructure.
Because the agent is controlled via chat apps, it’s not just the employee who becomes a target for social engineering, but also their AI agent: account takeovers and impersonation of the user in chats with colleagues (among other scams) become a reality. Even if work is only occasionally discussed in personal chats, the information in them is ripe for the picking.
If an AI agent on a personal device is hooked into any corporate services (email, messaging, file storage), attackers can manipulate the agent to siphon off data, and this activity would be extremely difficult for corporate monitoring systems to spot.
How to detect OpenClaw
Depending on its monitoring and response capabilities, a SOC team can track OpenClaw gateway connection attempts on personal devices or in the cloud. Additionally, a specific combination of red flags can indicate OpenClaw’s presence on a corporate device:
Look for ~/.openclaw/, ~/clawd/, or ~/.clawdbot directories on host machines.
Scan the network with internal tools, or public ones like Shodan, to identify the HTML fingerprints of Clawdbot control panels.
Monitor for WebSocket traffic on ports 3000 and 18789.
Keep an eye out for mDNS broadcast messages on port 5353 (specifically the _openclaw-gw._tcp service).
Watch for unusual authentication attempts in corporate services, such as new App ID registrations, OAuth Consent events, or User-Agent strings typical of Node.js and other non-standard user agents.
Look for access patterns typical of automated data harvesting: reading massive chunks of data (scraping all files or all emails) or scanning directories at fixed intervals during off-hours.
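The host-level checks above can be partially automated. Here is a minimal sketch that looks for the artifact directories and gateway ports named in this section; treat hits as indicators warranting investigation, not proof of compromise:

```python
import socket
from pathlib import Path

# Directory names and ports associated with OpenClaw deployments (per the list above).
ARTIFACT_DIRS = [".openclaw", "clawd", ".clawdbot"]
GATEWAY_PORTS = [3000, 18789]

def find_artifact_dirs(home: Path) -> list[Path]:
    """Return OpenClaw-style directories present under the given home directory."""
    return [home / d for d in ARTIFACT_DIRS if (home / d).is_dir()]

def find_open_ports(host: str = "127.0.0.1", timeout: float = 0.5) -> list[int]:
    """Return gateway ports accepting TCP connections on the given host."""
    open_ports = []
    for port in GATEWAY_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

In practice, checks like these would be distributed via an EDR or configuration-management tool rather than run ad hoc.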
Controlling shadow AI
A set of security hygiene practices can effectively shrink the footprint of both shadow IT and shadow AI, making it much harder to deploy OpenClaw in an organization:
Use host-level allowlisting to ensure only approved applications and cloud integrations are installed. For products that support extensibility (like Chrome extensions, VS Code plugins, or OpenClaw skills), implement a closed list of vetted add-ons.
Conduct a full security assessment of any product or service, AI agents included, before allowing them to hook into corporate resources.
Treat AI agents with the same rigorous security requirements applied to public-facing servers that process sensitive corporate data.
Implement the principle of least privilege for all users and other identities.
Don’t grant administrative privileges without a critical business need. Require all users with elevated permissions to use them only when performing specific tasks rather than working from privileged accounts all the time.
Configure corporate services so that technical integrations (like apps requesting OAuth access) are granted only the bare minimum permissions.
Periodically audit integrations, OAuth tokens, and permissions granted to third-party apps. Review the need for these with business owners, proactively revoke excessive permissions, and kill off stale integrations.
Secure deployment of agentic AI
If an organization allows AI agents in an experimental capacity — say, for development testing or efficiency pilots — or if specific AI use cases have been greenlit for general staff, robust monitoring, logging, and access control measures should be implemented:
Deploy agents in an isolated subnet with strict ingress and egress rules, limiting communication only to trusted hosts required for the task.
Use short-lived access tokens with a strictly limited scope of privileges. Never hand an agent tokens that grant access to core company servers or services. Ideally, create dedicated service accounts for every individual test.
Wall off the agent from dangerous tools and data sets that aren’t relevant to its specific job. For experimental rollouts, it’s best practice to test the agent using purely synthetic data that mimics the structure of real production data.
Configure detailed logging of the agent’s actions. This should include event logs, command-line parameters, and chain-of-thought artifacts associated with every command it executes.
Set up SIEM to flag abnormal agent activity. The same techniques and rules used to detect LotL attacks are applicable here, though additional efforts to define what normal activity looks like for a specific agent are required.
If MCP servers and additional agent skills are used, scan them with the security tools emerging for these tasks, such as skill-scanner, mcp-scanner, or mcp-scan. Specifically for OpenClaw testing, several companies have already released open-source tools to audit the security of its configurations.
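The short-lived, narrowly scoped tokens recommended above can be illustrated with a minimal issue-and-verify pair. This is a sketch only: a production deployment would use signed tokens (e.g., JWTs) issued by an identity provider, not an in-memory store:

```python
import secrets
import time

# In-memory token store for illustration purposes only.
_TOKENS: dict[str, dict] = {}

def issue_token(scopes: set[str], ttl_seconds: int = 900) -> str:
    """Issue a random token limited to the given scopes, expiring after ttl_seconds."""
    token = secrets.token_urlsafe(32)
    _TOKENS[token] = {"scopes": scopes, "expires": time.time() + ttl_seconds}
    return token

def check_token(token: str, required_scope: str) -> bool:
    """Accept the token only if it is known, unexpired, and carries the required scope."""
    entry = _TOKENS.get(token)
    if entry is None or time.time() >= entry["expires"]:
        return False
    return required_scope in entry["scopes"]
```

The key property is that a leaked token is bounded in both time and privilege: even a fully compromised agent can only do what its scopes permit, and only until expiry.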
Corporate policies and employee training
A flat-out ban on all AI tools is a simple but rarely productive path. Employees usually find workarounds — driving the problem into the shadows where it’s even harder to control. Instead, it’s better to find a sensible balance between productivity and security.
Implement transparent policies on using agentic AI. Define which data categories are okay for external AI services to process, and which are strictly off-limits. Employees need to understand why something is forbidden. A policy of “yes, but with guardrails” is always received better than a blanket “no”.
Train with real-world examples. Abstract warnings about “leakage risks” tend to be futile. It’s better to demonstrate how an agent with email access can forward confidential messages just because a random incoming email asked it to. When the threat feels real, motivation to follow the rules grows too. Ideally, employees should complete a brief crash course on AI security.
Offer secure alternatives. If employees need an AI assistant, provide an approved tool that features centralized management, logging, and OAuth access control.