9 useful car gadgets I’d pay full price for (but some are on sale now)
It’s easy to upgrade your car with these handy gadgets, especially with discounts ahead of Amazon’s Spring Sale.
Latest news – Read More

As defenders have improved their ability to detect malicious code, attackers have adapted by reducing their reliance on bespoke implants. As a result, data exfiltration is no longer primarily driven by custom malware or specialized tooling. Instead, many modern exfiltration operations leverage legitimate, widely deployed utilities already present in enterprise environments, with benign cloud storage services as the exfiltration destination.
This shift significantly complicates detection. Tools and services used for routine business operations can be repurposed to transfer stolen data outside the network without triggering traditional security controls. In many real-world incidents, exfiltration does not rely on novel protocols, custom command-and-control (C2) infrastructure, or overtly malicious binaries. Instead, attackers abuse trusted, allow-listed tools such as cloud command-line interfaces, file synchronization utilities, managed file transfer platforms, and legitimate file storage services. In these scenarios, the distinction between legitimate use and malicious abuse is subtle, contextual, and difficult to identify.
This research originated from a fundamental question: If attackers don’t require malicious software or infrastructure to exfiltrate data, what signals can defenders rely on to detect their behavior?
The Exfiltration Framework was developed to explore this question by systematically analyzing how legitimate tools are abused for data exfiltration. While many existing frameworks catalog the misuse of legitimate tools by platform or technology domain, the Exfiltration Framework takes a cross-platform perspective and categorizes tools by exfiltration tactic rather than environment or implementation.
The goal of this project is not to catalog attack techniques or enumerate tools, but to understand how benign utilities are abused for data exfiltration and which telemetry defenders can realistically rely on for detection. By focusing on observable behavior rather than tool presence, this research aims to support detection strategies that remain effective even when attackers operate entirely within trusted software and permitted infrastructure. A secondary objective is to identify behavioral patterns that recur across tools and can be applied more generically to detect exfiltration activity.
The Exfiltration Framework is a defensive project designed to systematically document how legitimate tools are abused for data exfiltration. Early in its development, the goal was to provide a comparative overview of exfiltration-capable tools, similar in spirit to matrix-style projects that summarize capabilities at a high level. While useful for classification, this approach proved insufficient for capturing the behavioral and forensic details required for detection and investigation.
As a result, the framework evolved toward a structured, feature-oriented model inspired by projects such as LOLBAS, where tool capabilities, behaviors, and artifacts are documented in a consistent and extensible format. This design allows exfiltration-relevant characteristics to be organized clearly and compared across tools without oversimplifying their behavior.
The framework is intentionally scoped to legitimate, widely available tools commonly present in enterprise environments. It does not attempt to catalog all possible exfiltration mechanisms, nor does it analyze custom malware, exploit-based techniques, or novel C2 protocols. Instead, it concentrates on utilities routinely used for legitimate purposes that can naturally blend into normal activity, making their abuse particularly difficult to detect.
The Exfiltration Framework is designed to capture behavioral and forensic characteristics that are directly useful for detection, investigation, and comparative analysis. Rather than documenting tool functionality or attack procedures, each tool is represented using a structured, normalized schema that emphasizes how exfiltration manifests operationally across endpoint, network, and cloud telemetry.
This normalization allows defenders to compare tools with very different implementations based on shared behavioral characteristics, and to reason about exfiltration activity independently of the specific utility involved.
To reflect real-world enterprise environments, tools are grouped into three categories: built-in operating system tools, third-party endpoint tools, and cloud tools.
This categorization illustrates how exfiltration can occur across multiple layers of the environment, from endpoints to cloud infrastructure, while highlighting differing detection trade-offs.
The Exfiltration Framework is designed around the premise that detecting data exfiltration via legitimate tools requires understanding how those tools behave when misused, rather than relying on static identifiers or tool presence alone. Each field in the framework schema was selected to capture signals that defenders can realistically observe and correlate across endpoint, network, and cloud telemetry.
Rather than attempting to model every possible abuse pattern, the framework focuses on a small set of fields that consistently influence detection outcomes across tools and environments.
This design choice reflects the reality that effective detection logic is highly environment-specific. By emphasizing what to look for rather than how to detect it, the framework supports reuse across organizations and avoids coupling the research to a specific detection platform or rule format.
To illustrate how these fields are applied, the following excerpt shows one particular entry in the framework, a normalized representation of the actionable items gleaned from research into the MOVEit Transfer tool.
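As an illustration of the kind of normalized entry the framework uses, the sketch below models a MOVEit Transfer entry as a Python dictionary. Every field name and value here is an assumption chosen for illustration, not the framework's actual schema.

```python
# Illustrative only: field names and values are assumptions for this
# article, not the Exfiltration Framework's published schema.
moveit_entry = {
    "tool": "MOVEit Transfer",
    "category": "third-party",           # managed file transfer platform
    "network_behavior": {
        "protocols": ["HTTPS"],          # encrypted, standard ports
        "ports": [443],
        "pattern": "sustained outbound transfer",
    },
    "endpoint_artifacts": [
        "application and web server logs",
        "local configuration files",
    ],
    "high_value_signals": [
        "destination ownership anomalies",
        "transfer volume relative to baseline",
    ],
}

# A normalized entry like this can be compared field-by-field with
# entries for very different tools (e.g. rclone or the AWS CLI).
print(sorted(moveit_entry.keys()))
```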

This normalized representation enables tools with very different implementations to be compared based on shared behavioral characteristics, supporting consistent analysis across endpoint, network, and cloud contexts. The standardized format also facilitates reuse beyond manual analysis, including integration into automated or AI-assisted detection workflows.
The framework currently analyzes a curated set of legitimate tools observed in real-world exfiltration scenarios and commonly found in enterprise environments.
Built-in operating system tools
Third-party endpoint tools
Cloud tools
This list is not intended to be exhaustive, but represents tools selected based on documented abuse in the wild, prevalence in enterprise environments, and relevance to defensive detection efforts.
Analysis of these tools revealed several recurring patterns with direct implications for detection. Although individual utilities differ in implementation, their abuse for data exfiltration often converges in ways that undermine tool-centric detection approaches. The observations below highlight how attackers leverage legitimate functionality, existing trust relationships, and normal operational patterns to obscure exfiltration activity.
Across a wide range of tools analyzed in this research, outbound network traffic generated during data exfiltration often converges on common, legitimate patterns. Whether the utility is a native command-line tool, a third-party endpoint application, or a cloud client, data transfer typically occurs over standard application-layer protocols such as HTTPS, using expected destination ports and encrypted payloads. This convergence reflects attackers’ preference for tools and services that are already permitted and widely used within enterprise environments.
In practice, exfiltration performed via cloud command-line interfaces, storage clients, or synchronization tools frequently targets trusted cloud platforms or externally hosted infrastructure. Public reporting shows that attackers commonly leverage legitimate cloud services for data theft, resulting in outbound traffic that is difficult to distinguish from authorized business activity at the network layer.
For example, a cloud storage client uploading data to an external bucket and a synchronization utility transferring files to a remote peer may both generate long-lived HTTPS sessions with steady outbound throughput. From a network perspective, these transfers are characterized by encrypted traffic over standard ports, sustained outbound connections, and destinations associated with legitimate cloud storage providers. This combination closely resembles routine backup or synchronization activity and has been observed in multiple ransomware and extortion investigations involving tools such as rclone and cloud storage clients. As a result, flow-log-based detection approaches that rely on protocol identification, destination allow-listing, or port filtering provide limited visibility into exfiltration activity.
Metadata below the application payload, such as Transmission Control Protocol (TCP) flags at Layer 4 or TLS certificate data, may provide additional detection opportunities in some cases, but even there standard attributes were the norm. This convergence highlights the need to correlate network telemetry with execution context, data volume relative to baselines, and destination characteristics, such as ownership or prior association with the organization, rather than relying on network traffic alone.
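To make that correlation concrete, the following minimal sketch scores a single flow record against those signals. The `Flow` fields, the thresholds, and the `ORG_OWNED` inventory are illustrative assumptions; a real deployment would source them from flow logs and asset inventory.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src_host: str
    dst: str           # destination domain or storage endpoint
    dst_port: int
    duration_s: int    # session length in seconds
    bytes_out: int     # outbound payload bytes

# Hypothetical inventory of cloud destinations the organization owns.
ORG_OWNED = {"backups.example-corp.com"}

def suspicious(flow: Flow, baseline_bytes: int) -> bool:
    """Flag long-lived encrypted sessions whose outbound volume is well
    above the host's baseline and whose destination the org does not own."""
    return (
        flow.dst_port == 443                      # standard encrypted port
        and flow.duration_s > 600                 # sustained session
        and flow.bytes_out > 10 * baseline_bytes  # volume vs. baseline
        and flow.dst not in ORG_OWNED             # destination ownership
    )

f = Flow("ws-042", "unknown-bucket.s3.amazonaws.com", 443, 3600, 50_000_000_000)
print(suspicious(f, baseline_bytes=100_000_000))  # large upload to unowned bucket
```

No single condition here is anomalous on its own; it is the combination of duration, volume, and destination ownership that separates this flow from routine backup traffic.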
While network-level behavior often converges across exfiltration tools, the forensic artifacts left on the endpoint can vary significantly depending on the utility and execution method used. Some tools generate a rich and persistent footprint, including configuration files, local state databases, cached credentials, scheduled tasks, or detailed logs. When present, these artifacts provide valuable context for incident response, supporting threat hunting, timeline reconstruction, and attribution, as observed in real-world abuse of tools such as rclone and Syncthing. While tool renaming is common and sometimes difficult to detect in process telemetry, masqueraded tools may still generate many of the same forensic artifacts, leading to additional opportunities for identification.
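Because these artifacts persist even when the binary itself is renamed, a simple sweep of known state locations can surface masqueraded tooling. The sketch below checks a few candidate paths; the list reflects common default locations for rclone and Syncthing, which vary by version and operating system, so treat it as a starting point rather than an authoritative inventory.

```python
from pathlib import Path

# Candidate on-disk state left by sync/transfer tools; defaults vary by
# version and OS, so extend this list for your environment.
CANDIDATE_ARTIFACTS = [
    "~/.config/rclone/rclone.conf",  # rclone remote definitions
    "~/.cache/rclone",               # rclone local cache
    "~/.config/syncthing",           # Syncthing device/folder config
]

def find_artifacts(paths=CANDIDATE_ARTIFACTS):
    """Return artifact paths present on this endpoint, independent of
    whether the tool binary was renamed or has since been deleted."""
    return [p for p in paths if Path(p).expanduser().exists()]

for hit in find_artifacts():
    print("artifact present:", hit)
```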
Other tools operate with a much lighter or more transient footprint. Command-line utilities executed with inline arguments, temporary configurations, or fileless techniques may leave little evidence beyond short-lived process execution, command-line telemetry, and ephemeral network connections. Public reporting shows that PowerShell-based exfiltration can rely almost entirely on execution context and in-memory behavior, leaving few durable artifacts on disk. In these cases, forensic visibility depends heavily on the availability and quality of endpoint logging, including process creation, command-line auditing, and script execution telemetry.
This variability reinforces a key finding of the research: There is no uniform forensic signature for exfiltration using legitimate tools. Effective detection therefore requires correlating endpoint telemetry with network and cloud data, rather than assuming that exfiltration activity will consistently leave persistent artifacts.
Cloud-native tools present a significant detection challenge because they operate within services that are already central to enterprise workflows. Authentication flows, API interactions, and data transfer patterns observed during exfiltration often closely resemble legitimate activity such as backups, deployments, or routine synchronization. Public incident reporting shows that attackers frequently abuse officially supported cloud clients to move data to attacker-controlled storage while maintaining the appearance of normal cloud usage.
In these scenarios, traditional IOCs, such as domains or IP addresses, provide limited value. Cloud platforms rely on large, shared infrastructure, resulting in highly generic service endpoints used by both legitimate users and attackers. As a result, detecting or blocking exfiltration based on network indicators alone is often impractical and risks significant operational disruption.
Compounding this challenge, many behavioral network detections explicitly allow-list major cloud providers to reduce noise from expected business activity. While operationally necessary, this practice further limits visibility into cloud-based exfiltration, enabling attackers to bypass both legacy IOC-based detections and higher-level behavioral controls by operating entirely within trusted cloud services. This is where cloud-native security products can assist with detection: they have visibility into which tenants, subscriptions, and individual storage buckets the organization owns, and into whether the identity initiating a file transfer typically interacts with that cloud resource.
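Those two signals, destination ownership and identity familiarity, can be combined into a simple risk ranking. The sketch below is a toy model: the identities, ARNs, and risk labels are invented for illustration, and a real implementation would derive the access history from audit logs such as CloudTrail.

```python
# Hypothetical access history: identity -> cloud resources it touched
# during the baseline period (e.g. derived from audit-log events).
ACCESS_HISTORY = {
    "ci-deploy@example-corp": {"arn:aws:s3:::corp-artifacts"},
    "j.doe@example-corp":     {"arn:aws:s3:::corp-reports"},
}

# Hypothetical inventory of organization-owned buckets.
ORG_BUCKETS = {"arn:aws:s3:::corp-artifacts", "arn:aws:s3:::corp-reports"}

def transfer_risk(identity: str, bucket: str) -> str:
    """Rank a cloud file transfer by destination ownership, then by
    whether this identity has previously used the destination."""
    if bucket not in ORG_BUCKETS:
        return "high: destination not owned by the organization"
    if bucket not in ACCESS_HISTORY.get(identity, set()):
        return "medium: owned destination, but new for this identity"
    return "low: owned destination with established access pattern"

print(transfer_risk("j.doe@example-corp", "arn:aws:s3:::attacker-dump"))
```

The point of the ranking is that a transfer to an unowned bucket stands out even when the endpoint, the IP range, and the TLS certificate all belong to a trusted cloud provider.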
Masquerading is frequently used to reduce the visibility of data exfiltration by exploiting assumptions about trusted binaries and execution contexts. Rather than introducing unfamiliar tools, attackers often rename legitimate utilities or execute them from locations typically associated with benign software, allowing exfiltration activity to blend into normal endpoint operations and undermining detections based solely on binary names or file paths.
A well-documented example involves rclone, a legitimate cloud synchronization tool repeatedly abused in ransomware and data theft operations. Incident response reporting shows rclone binaries being renamed and staged in trusted locations to evade scrutiny while enabling large-scale data transfers to attacker-controlled cloud storage under the appearance of routine administrative activity.
These cases demonstrate that filename- or path-based trust assumptions are insufficient for detecting exfiltration activity. Effective detection requires correlating execution context, parent process relationships, command-line usage, and network behavior to identify misuse of otherwise legitimate tools, particularly when they are deliberately presented as benign components of the operating environment.
A recurring pattern across multiple exfiltration tools is the use of small, incremental data transfers instead of large, single exfiltration events. This technique, which MITRE tracks as T1030, relies on splitting data into smaller units and transmitting it over extended periods. By doing this, attackers can remain below volume-based detection thresholds and reduce the likelihood of drawing attention. This approach has been observed in real-world data theft and ransomware operations involving legitimate transfer and synchronization tools.
This behavior is not protocol- or tool-specific. File transfer utilities, synchronization tools, cloud clients, and scripting environments can all be configured to transfer data gradually, often closely resembling routine background activity. Public reporting on the abuse of tools like rclone and Syncthing shows how repeated low-volume transfers can collectively result in significant data loss while remaining difficult to distinguish from legitimate use.
Because the timing, size, and frequency of these transfers often align with expected operational patterns — such as business hours or periodic jobs — detection typically requires longitudinal analysis rather than single-event alerts. Without baselining normal usage, low-and-slow exfiltration can persist unnoticed.
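A minimal form of that longitudinal analysis is a rolling per-host baseline of daily outbound volume. The sketch below flags days that drift well above the recent mean; the window size and ratio are illustrative assumptions, not tuned recommendations.

```python
from collections import deque
from statistics import mean

class OutboundBaseline:
    """Track a host's daily outbound byte counts and flag days whose
    volume drifts well above the rolling baseline."""

    def __init__(self, window_days: int = 14, ratio: float = 3.0):
        self.history = deque(maxlen=window_days)  # recent daily totals
        self.ratio = ratio

    def observe(self, bytes_out: int) -> bool:
        alert = False
        if len(self.history) == self.history.maxlen:
            # Daily aggregation lets many small transfers within a day
            # accumulate into a single comparable signal.
            alert = bytes_out > self.ratio * mean(self.history)
        self.history.append(bytes_out)
        return alert

b = OutboundBaseline(window_days=5)
days = [1_000, 1_100, 900, 1_050, 950]   # normal background traffic
alerts = [b.observe(d) for d in days]
alerts.append(b.observe(4_000))          # sustained uptick vs. baseline
print(alerts)
```

Even this toy version shows why single-event alerts miss the pattern: no individual transfer is large, and only the comparison against accumulated history exposes the drift.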
Stealth in exfiltration scenarios often results from organizational policy rather than technical sophistication. Tools that are explicitly permitted or widely deployed frequently operate under relaxed monitoring, creating low-friction paths for data exfiltration.
In living-off-the-land scenarios, attackers deliberately abuse trusted utilities to benefit from existing allow-listing and policy exemptions, rather than advanced evasion techniques. Public reporting shows that legitimate cloud and synchronization tools are often misused precisely because their activity is expected and rarely scrutinized. In some ransomware incidents, attackers do not attempt to minimize volume or hide behavior at all, instead exfiltrating large amounts of data directly using trusted tools and infrastructure, relying on policy trust and limited inspection to avoid detection.
The following table summarizes several of the trends observed in the analysis above by tool category:
| Tool category | Example tools | Network behavior | Abuse patterns | High-value signals |
| --- | --- | --- | --- | --- |
| Native | PowerShell, robocopy | HTTPS, SMB | Low-and-slow transfer, fileless execution | Parent process, encoded params, timing |
| Third-party | rclone, Syncthing | HTTPS, P2P | Masquerading, background sync | Binary rename, config artifacts |
| Cloud-based | AWS CLI, AzCopy | HTTPS, cloud APIs | Legitimate credential abuse | Destination anomalies, account context |
As data exfiltration increasingly relies on legitimate, trusted tools rather than custom malware, defenders must rethink how they approach detection. This research shows that meaningful visibility does not come from identifying tools in isolation, but from understanding the specific behaviors those tools exhibit when misused. By analyzing and normalizing execution patterns, network characteristics, and forensic artifacts across a wide range of benign utilities, the Exfiltration Framework provides a practical foundation for behavior-driven detection grounded in real telemetry. Ultimately, improving exfiltration detection requires not only broader visibility, but a deeper understanding of how trusted tools can be repurposed — and how those behaviors can be observed, contextualized, and detected in practice.
The Exfiltration Framework is intended to evolve with the threat landscape. Contributions from defenders, researchers, and incident responders are welcome, particularly when grounded in real-world observations of how legitimate tools are abused for data exfiltration.
Whether it is documenting additional tools, refining existing entries, or sharing detection-relevant insights, community input helps keep the framework accurate and practically useful. Details on how to contribute are available in the project repository.
We would like to thank the following researchers who have contributed to this project, via research into use of tools for exfiltration, review of the paper and framework, or general guidance and input:
Cisco Talos Blog – Read More
Amazon found evidence that the FMC software vulnerability has been exploited since late January, and found links to Russia.
The post Cisco Firewall Vulnerability Exploited as Zero-Day in Interlock Ransomware Attacks appeared first on SecurityWeek.
SecurityWeek – Read More
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has urged government agencies to apply patches for two security flaws impacting Synacor Zimbra Collaboration Suite (ZCS) and Microsoft Office SharePoint, stating they have been actively exploited in the wild.
The vulnerabilities in question are as follows –
CVE-2025-66376 (CVSS score: 7.2) – A stored cross-site scripting
The Hacker News – Read More
Already sanctioned in the US and the UK, these rulings prohibit companies and a couple of principals from entering or doing business in the European Union.
darkreading – Read More
What do the words bakso, sate, and rendang bring to mind? For many, the answer is “nothing”; foodies will recognize them as Indonesian staples; while those who follow cybersecurity news will remember an attack on the Node Package Manager (npm) ecosystem — the tool that lets developers use prebuilt libraries instead of writing every line of code from scratch.
In mid-November, security researcher Paul McCarty reported the discovery of a spam campaign aimed at cluttering the npm registry. Of course, meaningless packages have appeared in the registry before, but in this case, tens of thousands of modules were found with no useful function. Their sole purpose was to inject completely unnecessary dependencies into projects.
The package names featured randomly inserted Indonesian dish names and culinary terms such as bakso, sate, and rendang, which is how the campaign earned the moniker “IndonesianFoods”. The scale was impressive: at the time of discovery, approximately 86,000 packages had been identified.
Below, we dive into how this happened, and what the attackers were actually after.
At first glance, the IndonesianFoods packages didn’t look like obvious junk. They featured standard structures, valid configuration files, and even well-formatted documentation. According to researchers at Endor Labs, this camouflage allowed the packages to persist in the npm registry for nearly two years.
It’s not as if the attackers were aggressively trying to insert their creations into external projects. Instead, they simply flooded the ecosystem with legitimate-looking code, waiting for someone to make a typo or accidentally pick their library from search results. It’s a bit unclear exactly what you’d have to be searching for to mistake a package name for an Indonesian dish, but the original research notes that at least 11 projects somehow managed to include these packages in their builds.
A small portion of these junk packages had a self-replication mechanism baked in: once installed, they would create and publish new packages to the npm registry every seven seconds. These new modules featured random names (also related to Indonesian cuisine) and version numbers — all published, as you’d expect, using the victim’s credentials.
Other malicious packages integrated with the TEA blockchain platform. The TEA project was designed to reward open-source creators with tokens in proportion to the popularity and usage of their code — theoretically operating on a “Proof of Contribution” model.
A significant portion of these packages contained no actual functionality at all, yet they often carried a dozen dependencies — which, as you might guess, pointed to other spam projects within the same campaign. Thus, if a victim mistakenly includes one of these malicious packages, it pulls in several others, some of which have their own dependencies. The result is a final project cluttered with a massive amount of redundant code.
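The cascade can be sketched with a toy dependency graph; the package names below are invented for illustration and only mimic the campaign's naming style.

```python
# Toy dependency graph: junk packages that only depend on one another.
# Names are invented for illustration.
JUNK_DEPS = {
    "bakso-utils": ["sate-core", "rendang-lib"],
    "sate-core":   ["rendang-lib", "soto-helper"],
    "rendang-lib": [],
    "soto-helper": ["rendang-lib"],
}

def transitive_deps(pkg: str, graph: dict) -> set:
    """Walk the graph the way an installer resolves dependencies:
    installing one package pulls in everything reachable from it."""
    seen, stack = set(), [pkg]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# One mistaken install drags in the rest of the spam cluster.
print(sorted(transitive_deps("bakso-utils", JUNK_DEPS)))
```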
There are two primary theories. The most obvious is that this entire elaborate spam campaign was designed to exploit the aforementioned TEA protocol. Essentially, without making any useful contribution to the open-source community, the attackers earn TEA tokens — which are standard digital assets that can be swapped for other cryptocurrencies on exchanges. By using a web of dependencies and self-replication mechanisms, the attackers pose as legitimate open-source developers to artificially inflate the significance and usage metrics of their packages. In the README files of certain packages, the attackers even boast about their earnings.
However, there’s a more chilling theory. For instance, researcher Garrett Calpouzos suggests that what we’re seeing is merely a proof of concept. The IndonesianFoods campaign could be road-testing a new malware delivery method intended to be sold later to other threat actors.
At first glance, the danger to software development organizations might not be obvious: sure, IndonesianFoods clutters the ecosystem, but it doesn’t seem to carry an immediate threat like ransomware or data breaches. However, redundant dependencies bloat code and waste developers’ system resources. Furthermore, junk packages published under your organization’s name can take a serious toll on your reputation within the developer community.
We also can’t dismiss Calpouzos’s theory. If those spam packages pulled into your software receive an update that introduces truly malicious functionality, they could become a threat not just to your organization, but to your users as well — evolving into a full-blown supply chain attack.
Spam packages don’t just wander into a project on their own; installing them requires a lapse in judgment from a developer. Therefore, we recommend regularly raising awareness among employees — even the tech-savvy ones — about modern cyberthreats. Our interactive training platform, KASAP (Kaspersky Automated Security Awareness Platform), can help with that.
Additionally, you can prevent infection by using a specialized solution for protecting containerized environments. It scans images and third-party dependencies, integrates into the build process, and monitors containers during runtime.
Kaspersky official blog – Read More
It’s free, easy to set up, and lets you avoid upgrading to a mesh system.
Latest news – Read More
It’ll even analyze your lab results to deliver personalized recommendations. But here’s what you need to know first.
Latest news – Read More
The PTZ webcam from Obsbot packs a punch in a tiny form factor. But you’ll have to be willing to pay to play.
Latest news – Read More
Yes, it was a challenge to install. But then GLF OS took me by surprise – in the best possible way.
Latest news – Read More