Defining a new methodology for modeling and tracking compartmentalized threats

  • In the evolving cyberthreat landscape, Cisco Talos is witnessing a significant shift towards compartmentalized attack kill chains, where distinct stages — such as initial compromise and subsequent exploitation — are executed by multiple threat actors. This trend complicates traditional threat modeling and actor profiling, as it requires understanding the intricate relationships and interactions between various groups, as explained in our previous blog.
  • The traditional Diamond Model of Intrusion Analysis’ feature-centered approach (adversary, capability, infrastructure and victim) to pivoting can lead to inaccuracies when analyzing “compartmentalized” attack kill chains that involve multiple distinct threat actors. Without incorporating context of relationships, the model faces challenges in accurately profiling actors and constructing comprehensive threat models.
  • We have identified several methods for analyzing compartmentalized attacks and propose an extended Diamond Model, which adds a “Relationship Layer” to enrich the context of the relationships between the four features.
  • In a collaboration between Cisco Talos and the Vertex Project, a Synapse model update has been published that introduces the entity:relationship element, providing modeling support for this methodology.
  • We illustrate our investigative approach and application of the extended Diamond Model for effective pivoting by examining the ToyMaker campaign, where ToyMaker functioned as a financially-motivated initial access (FIA) group, handing over access to the Cactus ransomware group.

Impacts on defenders


The convergence of multiple threat actors operating within the same overall intrusion creates additional layers of obfuscation, making it difficult to differentiate the activities of one threat actor from another, or to identify when access has been handed off from one to the next. At each point where outsourcing occurs or access is handed off, the Diamond Model of the adversary changes. Likewise, the ability to leverage the output of kill chain analysis for the purpose of pivoting, clustering, and attribution becomes significantly more difficult: analysts may be forced to operate under the assumption that multiple actors are involved unless they can prove otherwise, whereas historically the opposite assumption was likely made.

Additionally, misattributing attacks due to tactics, techniques and procedures (TTPs) present in earlier stages of the intrusion may impact the way in which incident response or investigative activities are conducted post-compromise. They may also create uncertainty around the motivation(s) behind an attack or why an organization is being targeted in some cases. 

Analysis processes and analytical models must be updated to reflect these new changes in the way that adversaries conduct intrusions, as existing methodologies often create more confusion than clarity.

Introduction to threat modeling

NIST SP 800-53 (Rev. 5) defines threat modeling as “a form of risk assessment that models aspects of the attack and defense sides of a logical entity, such as a piece of data, an application, a host, a system, or an environment.”

For many organizations, this involves evaluating their preventative, detective and corrective security controls from an adversarial perspective to identify deficiencies in their ability to prevent, detect or respond to threats based on specific tactics, techniques, and procedures (TTPs). For example, adversary emulation simulates an attack scenario and demonstrates how an organization could reasonably expect their security program to respond if a specific threat is encountered.

Intrusion analysis is the process of analyzing computer intrusion activity. This involves reconstructing intrusion attack timelines, analyzing forensic artifacts and identifying the scope and impact of activity. Intrusion analysis typically results in a better understanding of an attack or adversary, and may also result in the development of a model to reflect what is known about the threat. This model can then be used to support more effective detection content development and threat modeling activities in the future. The symbiotic relationship between intrusion analysis and threat modeling allows organizations to effectively incorporate new knowledge and information about threats and threat actors into their security programs to ensure continued effectiveness.

Over the past several years, different analytical models have been developed to assist with intrusion analysis and threat modeling that provide logical ways to organize contextual details about threats and threat actors so that they can be communicated and incorporated more effectively. Two of the most popular models are the Diamond Model and the Kill Chain Model.

Figure: The Kill Chain Model.

The Kill Chain Model shown above is typically used to break an intrusion down into distinct stages/phases so that the attack can be reconstructed and analyzed. This allows analysts to build a realistic model that reflects the TTPs and other characteristics present during the intrusion. This information can then be shared so that other organizations can determine whether their own security controls would be effective at combatting the same or similar intrusion(s) or whether they have encountered the same threat in the past. 

Figure: The Diamond Model.

The Diamond Model, shown above, is commonly used across the industry for building a profile of a specific threat or threat actor. This model is developed by populating each quadrant based on information about an adversary’s characteristics, capabilities, infrastructure tendencies and typical targeting/victimology. A fully populated Diamond Model creates an extensive profile of a given threat or threat actor.

It is important to note that an analysis may incorporate both (or other) models, and they are not mutually exclusive. There are also several other modeling frameworks that exist for similar purposes that are also often used in concert, such as the MITRE ATT&CK and D3FEND frameworks. For example, in some cases the information used to populate the Diamond Model may be the result of kill chain analyses of multiple intrusions over time that are ultimately attributed to the same threat actor(s). By leveraging the output of multiple kill chain analyses, one can build a more comprehensive model that reflects changes to characteristics or TTPs associated with a threat actor being tracked over time as well as improve overall understanding of the nature of a given threat.

Challenges applying existing models to compartmentalized threats

One of the key strengths of the Diamond Model is its concept of “centered approaches” for analytic pivoting — including victim-, capability-, infrastructure- and adversary-centered methods of investigation. These approaches enable analysts to uncover new malicious activities and reveal how each facet of an intrusion across the Diamond’s four dimensions intersects with others. For instance, in the original Diamond Model paper’s infrastructure-centered example, an analyst might begin with a single IP address seen during an intrusion, then pivot to the domain it resolves to, scrutinize WHOIS registration details, and discover additional domains or IPs registered by the same entity. Further examination may reveal malware connected to or distributed by those domains. In such scenarios, the Diamond Model’s systematic method of traversing from one node to another can rapidly expose an interconnected web of adversaries, capabilities, and victims.


However, the original centered approach can introduce errors when dealing with a “compartmentalized” attack kill chain involving multiple distinct threat actors. In many cases, adversaries are now leveraging various relationships simultaneously while working towards their longer-term mission objectives. This could include the outsourcing of tooling development, rental of infrastructure services for distribution or command and control (C2), or access-sharing agreements leveraged post-compromise to facilitate handoff once initial access (IA), persistence or privilege escalation has been achieved. This compartmentalization has complicated many analytical activities, including attribution, threat modeling and intrusion analysis. Likewise, the modeling methodologies that were initially developed to combat intrusion operations in previous years no longer accurately reflect today’s threat landscape.

To illustrate the complexity of compartmentalization, let’s consider a hypothetical scenario that closely mirrors real-world events. In this scenario, four distinct threat actor groups are involved:

  1. Actor A: A financially motivated threat actor aiming to profit by collecting logs from infostealer malware.
  2. Actor B: A malware developer who creates and sells infostealer malware.
  3. Actor C: A Traffic Distribution Service (TDS) provider.
  4. Actor D: A ransomware group.

In this scenario, a financially-motivated threat actor (Actor A) who is seeking to infect victims with information-stealing malware to steal victims’ sensitive information may outsource the development of their malware to Actor B. They may engage the developer directly or purchase it from a storefront. Likewise, the distribution of the malware itself is conducted by outsourcing it to Actor C, who operates a spam botnet or traffic distribution service (TDS) that is offered for rent for a usage-based fee. Once Actor C has successfully achieved code execution on a system, they may infect it with the malware they initially received from Actor A, who is charged “per-install.” 

Likewise, once Actor A has performed enumeration of the environment, they identify that they have gained access to a high-value target. Rather than simply monetizing information-stealing malware logs, they choose to monetize the access itself by selling it to Actor D, who then leverages that access to deploy ransomware and extort the victim. 

In this hypothetical scenario, Actor C, who would be classified as a financially-motivated initial access (FIA) broker, may also be distributing multiple malware families at any given time and leveraging traffic filtering to manage final payload delivery. They may even host these payloads on the same infrastructure. The nature of the business relationships described in this scenario is shown below.

Figure: Business relationships between Actors A through D in the hypothetical scenario.

While this scenario covers a single attack, it highlights a situation where applying the traditional analytical models poses several challenges. For example, consider the infrastructure used by Actor C, the TDS provider. The infrastructure that facilitates malware distribution is not solely dedicated to Actor A’s operations. This means that other malware found by pivoting on the distribution infrastructure should not be considered capabilities associated with Actor A. In addition, the malware’s targets are closely tied to Actor C’s distribution network and should not be treated as strong evidence of Actor A’s victimology or targeting motivations. In this compartmentalized scenario, the interconnected web of adversaries, capabilities and victims exposed by pivoting with the Diamond Model originates from different threat actors and should not be modeled as part of a single threat actor profile.

In even more complex cases, a threat actor may choose to engage multiple distributors simultaneously or work with different distributors on a weekly basis depending on real-time pricing and service availability. A threat actor conducting ransomware operations may choose to procure access from several initial access brokers (IABs), each with their own characteristics, capabilities and motivations. Likewise, several otherwise unrelated threat actors operating in different capacities throughout the kill chain present complications when attempting to take the result of the analysis and incorporate it into existing attribution data or when attempting to identify overlaps with other clusters of malicious activity. Modeling the IABs themselves also presents complications, as their characteristics and TTPs are often encountered in attacks where they may have only been operating within a subset of the overall phases of the intrusion. 

State-sponsored or -aligned threat actors’ campaigns have been documented using anonymization networks or residential proxies to hide their activities, which creates the same kind of activity overlap described above for TDS usage.

Extending the Diamond Model with the Relationship Layer

To extend the Diamond Model to include the complexities posed by compartmentalized attacks, we propose an extension to the original Diamond Model by integrating a “Relationship Layer.” This additional layer is designed to contextualize the interactions between the four features (adversary, infrastructure, capability and victim) of individual diamonds representing distinct threat actors. By incorporating this layer, threat analysts can construct a nuanced understanding of compartmentalized contexts.

The Relationship Layer allows for the articulation of common relational dynamics such as “purchased from” to indicate a transactional association, “handover from” to reflect a transfer of operational control or resources, and “leaked from” to convey the use of leaked tools. Additionally, it describes the connections between adversarial groups, encompassing a variety of interactions such as “commercial relationship,” “partnership agreements,” “subcontracting arrangements,” “shared operational goals,” and more. 

The integration of the Relationship Layer enables analysts to contextualize the interactions within the Diamond Model’s four features, thereby enhancing their ability to perform logical pivoting and accurate attribution. This refinement offers a more sophisticated framework for analyzing modern, compartmentalized cyberthreats, providing a clearer representation of the complex web of relationships that characterize these operations.
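To make the Relationship Layer concrete, here is a minimal sketch that represents each actor’s diamond and the typed relationships connecting features of different diamonds. The class names, relationship labels and example values are illustrative assumptions, not a formal schema.

```python
# Minimal sketch of an extended Diamond Model with a Relationship Layer.
# Class names, relationship labels and example values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Diamond:
    """One threat actor's diamond: the four core features."""
    adversary: str
    capabilities: set = field(default_factory=set)
    infrastructure: set = field(default_factory=set)
    victims: set = field(default_factory=set)


@dataclass
class Relationship:
    """Relationship Layer edge linking features of two distinct diamonds."""
    source_actor: str    # e.g. "Actor A"
    source_feature: str  # "adversary", "capability", "infrastructure" or "victim"
    target_actor: str
    target_feature: str
    kind: str            # e.g. "purchased from", "handover from", "leaked from"


# The hypothetical Actor A-D scenario described earlier.
actor_a = Diamond("Actor A", capabilities={"infostealer"}, victims={"victim.example"})
actor_b = Diamond("Actor B", capabilities={"infostealer builder"})
actor_c = Diamond("Actor C", infrastructure={"tds.example"})
actor_d = Diamond("Actor D", capabilities={"ransomware"}, victims={"victim.example"})

relationship_layer = [
    Relationship("Actor A", "capability", "Actor B", "capability", "purchased from"),
    Relationship("Actor A", "infrastructure", "Actor C", "infrastructure", "purchased from"),
    Relationship("Actor D", "victim", "Actor A", "victim", "purchased from"),
]

for rel in relationship_layer:
    print(f"{rel.source_actor}:{rel.source_feature} --[{rel.kind}]--> "
          f"{rel.target_actor}:{rel.target_feature}")
```

Pivoting logic can then consult the relationship layer before folding new indicators into a single actor profile.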

Let’s look at the scenario involving Actors A through D again. Figure 4 shows how we can use the extended Diamond Model to describe the relationships between entities involved in the intrusion activity:

Figure 4. Using the extended Diamond Model to describe the relationships between the entities involved in the intrusion activity.

Each of the actors, A through D, possesses their own Diamond Model, reflecting their distinct roles as adversaries with unique capabilities, victims and infrastructures. We have extended each Diamond Model by integrating an additional Relationship Layer to illustrate the contextual relationships between these features. For instance, the infrastructure used by Actor A for Traffic Distribution Services (TDS) is linked to Actor C’s infrastructure through a “purchased from” relationship. Consequently, when performing analytical pivoting, analysts should account for this relationship and not attribute all infostealers distributed via the TDS infrastructure solely to Actor A’s capabilities. Similarly, the victims of those infostealers should not be automatically classified as Actor A’s victims.

Another illustrative case involves the relationship between the victims of Actor A and Actor D. Actor D obtained initial access through a transaction with Actor A, denoted by the “purchased from” relationship within the Relationship Layer. This relationship offers analysts crucial context, allowing them to avoid attributing the tools used in the initial access phase to Actor D’s capabilities.

The Relationship Layer also elucidates the connections between adversaries. On the graph, we denote these inter-adversary connections as “commercial relationships,” providing additional context that aids in actor profiling. This extended understanding allows analysts to discern the nature of interactions between threat actors, facilitating more accurate and insightful profiling efforts.
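Building on that idea, the sketch below shows one way pivoting could consult such relationship context: traversal continues across edges assessed to belong to the same actor, while edges known to cross an actor boundary (for example, a rented TDS domain serving someone else’s payload) are surfaced as new leads instead of being folded into the same profile. The observation graph and edge labels are invented for the example.

```python
# Sketch: pivoting that flags cross-actor edges instead of blindly attributing.
# Graph content, node names and edge labels are illustrative only.
from collections import deque

# Observation graph: node -> list of (neighbor, edge context)
graph = {
    "ip:203.0.113.10":    [("domain:tds.example", "resolves")],
    "domain:tds.example":  [("sha256:infostealer1", "distributes"),
                            ("sha256:unrelated_rat", "distributes")],
    "sha256:infostealer1": [],
    "sha256:unrelated_rat": [],
}

# Relationship Layer: edges known to cross an actor boundary (e.g. rented TDS).
cross_actor_edges = {("domain:tds.example", "sha256:unrelated_rat")}


def pivot(start):
    """Breadth-first pivot separating same-actor finds from cross-actor leads."""
    attributable, leads, seen = [], [], {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, context in graph.get(node, []):
            if neighbor in seen:
                continue
            seen.add(neighbor)
            if (node, neighbor) in cross_actor_edges:
                leads.append((neighbor, context))   # new lead, not the same actor
            else:
                attributable.append((neighbor, context))
                queue.append(neighbor)
    return attributable, leads


same_actor, new_leads = pivot("ip:203.0.113.10")
print("Attributable to the same actor:", same_actor)
print("Cross-actor leads to investigate:", new_leads)
```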

Integrating the Relationship Layer with the Cyber Kill Chain

The Cyber Kill Chain framework serves as a structured approach to analyzing cyberattacks, enabling security professionals to break down intrusions into discrete, sequential stages — from initial reconnaissance to actions on objectives. By organizing attacks in this manner, analysts can pinpoint attacker behaviors, anticipate adversary actions and develop targeted mitigation strategies, significantly enhancing overall threat intelligence.

Integrating the extended Diamond Model into the Cyber Kill Chain framework offers a more comprehensive view of compartmentalized campaigns by illustrating how each adversary contributes to different stages of an attack. This combined perspective enhances understanding by mapping out the intricate web of relationships among multiple threat actors, thereby providing a clearer picture of how resources, capabilities and infrastructure are shared or transferred throughout an attack’s lifecycle. Figure 5 illustrates the integration of the extended Diamond Model with the Cyber Kill Chain using the Actor A–D example.

Figure 5. The extended Diamond Model integrated with the Cyber Kill Chain for the Actor A–D example.

The example above demonstrates the distinct roles that each adversary assumes at various stages of the kill chain in a hypothetical campaign. In this scenario, the victim is initially compromised by an infostealer, which Actor A acquired from Actor B, and subsequently faces a ransomware attack orchestrated by Actor D. To further enrich the analysis, we highlight the “handover” relationship between Actor C and Actor A, emphasizing its significance as both actors’ activities manifest within the targeted environment. This approach provides a more comprehensive view of the attack flow, allowing for a deeper understanding of how adversarial interactions and transitions unfold throughout the campaign.
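As a simple illustration of this combined view, the snippet below annotates each kill chain phase of the hypothetical campaign with the actor assessed to be operating in it. The phase-to-actor mapping is illustrative only.

```python
# Sketch: mapping kill chain phases to the actor assessed to operate in each
# phase of the hypothetical Actor A-D campaign (illustrative, not from a real case).
kill_chain_roles = [
    ("Weaponization",         "Actor B", "develops the infostealer sold to Actor A"),
    ("Delivery",              "Actor C", "TDS rented by Actor A (pay-per-install)"),
    ("Exploitation",          "Actor C", "achieves code execution, installs Actor A's payload"),
    ("Installation",          "Actor A", "infostealer purchased from Actor B"),
    ("Command and Control",   "Actor A", "infostealer C2 and data exfiltration"),
    ("Actions on Objectives", "Actor D", "ransomware deployed with access purchased from Actor A"),
]

for phase, actor, note in kill_chain_roles:
    print(f"{phase:22} {actor:8} {note}")
```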

This enriched view not only clarifies attacker tradecraft but also bolsters actor profiling and attribution efforts. By aligning specific tactics and resources with the threat groups deploying them, analysts can more accurately trace operations back to their origins. This approach also provides insights into adversary motivations, allowing defenders to tailor their response strategies effectively. For instance, understanding that an IAB is financially motivated might suggest a lower immediate threat to certain targets, while recognizing that access has been sold to a state-sponsored actor would escalate the priority of the threat response.

Identifying compartmentalized attacks

Identifying compartmentalization within the scope of an intrusion typically involves trying to determine where positive control is transferred between adversaries, either pre- or post-compromise. It is essential to identify compartmentalization, as this will significantly impact the overall understanding of the adversary (or adversaries) and the capabilities available to them. Indicators of collaboration among distinct threat actors can vary significantly depending on the context and the phase of activity, and these can be categorized based on whether the actions occur before or after the compromise of a system or environment. It is important to note that while there are several examples listed in the following sections, compartmentalization can and does look different across intrusions, and these examples are by no means comprehensive. Likewise, while the elements below are useful indicators that an analyst should investigate a possible transfer of access, they are not necessarily proof that a handoff has occurred. As more of these elements are encountered and evidence is collected, an analyst may be able to strengthen their assessment that compartmentalization has occurred.

Pre-compromise

In the early stages of an intrusion, compartmentalization can often be identified by observing how tooling has been sourced, how malicious content is being delivered to potential victims and the initial/early execution flow of malicious components in the case that code execution has been achieved.

This stage may also be completely independent. In situations where a state-sponsored group is tasked with an espionage operation, it may pass its access on to a ransomware group, making the state-sponsored group an initial access group (IAG). The ransomware group may not even be aware of the nature of its IAG, but simply by conducting its own activity it fulfills the state-sponsored group’s objective of complicating incident analysis and attribution.

Shared tooling

While many of the indicators associated with the use of tooling are often identified in later stages of an intrusion, we characterize this compartmentalization as occurring pre-compromise as development and procurement activities must generally occur before the campaign is launched. It is often useful to identify if the threat actor procured tooling from third parties. This may involve identifying key characteristics of the malicious components being analyzed and searching/monitoring hacking forums and darknet marketplaces (DNMs) to identify whether a seller is advertising a capability matching the one used in the intrusion. Likewise, malware that has historically been used by one threat actor may be transferred to another threat actor, either on purpose or inadvertently in the case of source code leaks. In either case, analysis of contextual information surrounding the use of the tooling can help analysts identify when the tooling doesn’t match the threat actors’ known TTPs.

Shared delivery infrastructure

In the case of email-based delivery, analysis of the infrastructure used to send malicious emails, the content of the message, and the infrastructure used for hosting and delivering payloads may indicate that delivery has been outsourced in some capacity. Likewise, in the case of malvertising campaigns, analysis of the ad campaigns, traffic distribution infrastructure and gating methodologies may suggest the same. In many cases the infrastructure used is often observed distributing multiple distinct, otherwise unrelated malware families over a short period of time as the threat actor operating the delivery infrastructure may conduct business with multiple entities at any point in time. Analyzing activity associated with this infrastructure before, during, and after the intrusion may inform the analysis of whether compartmentalization has occurred.
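One rough way to operationalize this is to flag delivery infrastructure that served several distinct malware families within a short window. In the sketch below, the observation data, the 14-day window and the three-family threshold are arbitrary assumptions for illustration.

```python
# Sketch: flag delivery domains that served several unrelated malware families
# in a short window, a possible sign of rented or shared distribution
# infrastructure. Data, window and threshold are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

observations = [
    # (delivery domain, malware family, first seen)
    ("cdn-payloads.example", "FamilyA", datetime(2025, 3, 1)),
    ("cdn-payloads.example", "FamilyB", datetime(2025, 3, 4)),
    ("cdn-payloads.example", "FamilyC", datetime(2025, 3, 9)),
    ("phish-host.example",   "FamilyA", datetime(2025, 3, 2)),
]

WINDOW = timedelta(days=14)
MIN_FAMILIES = 3

by_domain = defaultdict(list)
for domain, family, first_seen in observations:
    by_domain[domain].append((first_seen, family))

for domain, sightings in by_domain.items():
    sightings.sort()
    # Families observed within WINDOW of the earliest sighting for this domain.
    families = {fam for ts, fam in sightings if ts - sightings[0][0] <= WINDOW}
    if len(families) >= MIN_FAMILIES:
        print(f"{domain}: {len(families)} families within {WINDOW.days} days "
              f"-> possible shared/rented delivery infrastructure")
```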

Shared droppers/downloaders

When analyzing an intrusion, there is often a point at which code execution is achieved. This may be the point in which a malicious script-based component is delivered and executed by a victim. In many cases, these function as downloaders and are solely responsible for retrieving or extracting and executing follow-on payloads that allow an adversary to expand their ability to operate in an environment. Analysis of the dropper/downloader mechanisms used may identify cases where the same mechanism is used to deliver unrelated threats over time, indicating that delivery may have been outsourced. We have categorized this activity as “pre-compromise” to further differentiate it from handoffs that may occur later in the intrusion, once persistence has been achieved, etc.

Post-compromise 

In addition to the aforementioned types of compartmentalization that often occur early in an intrusion, there is another set of handoffs that may occur once an adversary has achieved compromise. These are typically used to transfer control of access from one party to another and may be performed for a variety of purposes, as described in our previous blog. This activity can often be identified by analyzing handoff behaviors, the motivation of the threat actors involved, and monitoring for typical indicators that an IAB is involved.

Handoff behaviors

In some cases, information can be collected about the amount of time that elapsed between an IAB obtaining access to the environment and the beginning of follow-on activity. This may include an IAB gaining access, establishing persistence, collecting information from the environment and exfiltrating it to adversary C2. Following this initial activity, the infection may conduct very little malicious activity aside from periodic C2 polling for an extended period of time. After this lull, additional malicious components may be delivered that establish new C2 connections, and new activity may be observed. This pattern indicates that a handoff of access may have occurred and should be investigated further. Similarly, analysis of the behaviors of the threat actor before and after this handoff may strengthen or weaken an assessment, as completely different TTPs may be observed between the threat actors involved.
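A simplified version of this pattern can be expressed as a hunting heuristic: look for a long lull in host activity followed by a connection to previously unseen C2 infrastructure. The event data, field names and 14-day threshold below are assumptions; real telemetry would also need to tolerate the periodic C2 polling described above.

```python
# Sketch: detect a long lull in host activity followed by a beacon to a
# previously unseen C2 destination. Event data and threshold are assumptions.
from datetime import datetime, timedelta

events = [  # (timestamp, event type, destination)
    (datetime(2025, 2, 1, 10), "c2_beacon",    "198.51.100.7"),
    (datetime(2025, 2, 1, 11), "exfiltration", "198.51.100.7"),
    (datetime(2025, 2, 22, 9), "c2_beacon",    "203.0.113.99"),   # new C2 after ~3 weeks
    (datetime(2025, 2, 23, 8), "lateral_move", "203.0.113.99"),
]

HANDOFF_GAP = timedelta(days=14)

events.sort()
seen_c2 = set()
previous_ts = None
for ts, kind, dest in events:
    long_gap = previous_ts is not None and ts - previous_ts >= HANDOFF_GAP
    new_c2 = kind == "c2_beacon" and dest not in seen_c2
    if long_gap and new_c2:
        print(f"{ts}: new C2 {dest} after a {(ts - previous_ts).days}-day lull "
              f"-> possible access handoff, investigate further")
    if kind == "c2_beacon":
        seen_c2.add(dest)
    previous_ts = ts
```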

The race to domain admin

Another set of characteristics that may strengthen an assessment that a handoff has occurred comes from analyzing the series of actions taken once access has been gained. In the case of FIA groups, for instance, we often observe repeatable processes for attempting to gain domain administrator access as quickly as possible. This makes the access more lucrative for the IAG and more seamlessly enables the deployment of additional malware components, such as ransomware. An FIA group may progress from initial access to domain administrator access in a short period of time with little to no effort spent on identifying high-value targets in the environment. Once domain administrator access has been gained, the intrusion activity may stop while the threat actor attempts to monetize that access and facilitate handoff to the threat actor who ultimately purchases it. State-sponsored initial access (SIA) groups, on the other hand, may take a steadier, stealth-oriented approach, conducting reconnaissance and proliferating throughout the victim enterprise without being detected. In many instances an SIA group might conduct initial exfiltration of restricted data before handing access off to the secondary threat actor.

Dark web tracking

Monitoring hacking forums and darknet marketplaces can be extremely valuable for identifying when an IAB is involved in an intrusion. Since FIA brokers are primarily focused on achieving the maximum profit as quickly as possible, they will often post advertisements selling the access they have achieved. In many cases these advertisements include generic information about the company or organization involved, such as size (number of employees), rounded financial figures based on publicly available sources such as quarterly filings, industry, etc. Locating advertisements that match the profile of the victim of an intrusion can strengthen an assessment that an IAB is involved and open additional intelligence collection avenues that can be pursued to learn more about the IAB, who they typically work with, and more.

C2 analysis

Analysis of the C2 infrastructure involved throughout the intrusion presents another opportunity for identifying any handoffs that have occurred. As previously mentioned, in some cases the handoff is performed by delivering a new payload and establishing a new C2 connection with another threat actor’s infrastructure. In the case of C2 frameworks, analysis of server logs can provide additional information when the same server has been used to administer multiple victims. Administrative panels used to manage malware infections are often useful for informing analysis related to the nature of the threat actors involved and the business models they are working within. Some admin panels are explicitly built to facilitate handoffs; ransomware-as-a-service (RaaS) and C2-as-a-service (C2aaS) platforms are examples of this.
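As a minimal sketch of this kind of log review, the snippet below groups entries from a hypothetical C2 framework log by victim to surface servers that administer more than one victim or operator account. The log format is invented for the example.

```python
# Sketch: group C2 framework server log entries by operator and victim to spot
# servers administering multiple victims. The log format is hypothetical.
from collections import defaultdict

log_lines = [
    "2025-02-01 operator=alpha victim=corp-a action=new_session",
    "2025-02-03 operator=alpha victim=corp-b action=new_session",
    "2025-02-20 operator=bravo victim=corp-a action=new_session",
]

victims_per_operator = defaultdict(set)
operators_per_victim = defaultdict(set)

for line in log_lines:
    # Parse "key=value" pairs after the date field.
    fields = dict(kv.split("=", 1) for kv in line.split()[1:])
    victims_per_operator[fields["operator"]].add(fields["victim"])
    operators_per_victim[fields["victim"]].add(fields["operator"])

for victim, operators in operators_per_victim.items():
    if len(operators) > 1:
        print(f"{victim}: handled by {sorted(operators)} -> possible handoff or shared panel")
```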

Case Study: ToyMaker

During the course of performing threat hunting and incident response, Cisco Talos sometimes encounters scenarios where compartmentalized operations involve multiple attackers participating in the same attack kill chain. Using the ToyMaker campaign as an example, we demonstrate how we identified the participation of various attackers during our investigation and utilized the extended Diamond Model to clarify the distinct activities and roles of these attackers across different stages of the attack kill chain.

APT, Cactus or FIA? 

Talos investigated the ToyMaker campaign in 2023. The attackers conducted operations for six consecutive days, during which they compromised a server of the victim organization, exfiltrated credentials and deployed the proprietary LAGTOY backdoor. We consider this the “first wave” of post-compromise activity. Since we did not find any common financial crime malware in this attack, and the attackers used their own proprietary tools and C2 infrastructure, we considered the possibility that it might be the activity of an APT group. However, the TTPs and indicators of compromise (IOCs) did not overlap with previously observed campaigns, so we did not attribute the campaign early in the investigation.

However, during the investigation, Talos identified TTPs and hands-on-keyboard activity consistent with Cactus ransomware activity appearing in the victim’s network almost 3 weeks after the initial compromise. We consider this the “second wave” of malicious activity. After using various tools for lateral movement within the network, the attackers launched a ransomware attack within a matter of days. At this point, Talos started a more in-depth investigation, including exploring the connections and disparities between the ransomware attack and the initial access. We formulated several hypotheses at this point:

  • Hypothesis A: Both the initial compromise and subsequent activities were conducted by Cactus ransomware, and therefore LAGTOY might be a tool exclusively used by Cactus.
  • Hypothesis B: The initial access might have been carried out by a different attack group and have no relation to Cactus’s activities.
  • Hypothesis C: The initial access might have been carried out by a different attack group, but there is some connection to Cactus.

Hypothesis A was the most intuitive assumption at the beginning of the investigation. However, as the investigation progressed, Talos made the following observations:

  • The initial access actor removed the user account it created: Before the first wave of activity ceased, the attackers deleted the user account they had created.
  • Differences in TTPs: Variations in TTPs were observed between the two attack traces, either through differing approaches to similar TTPs or entirely distinct TTPs. For instance, the operators conducting initial access relied on PuTTY for credential exfiltration, while the secondary activity employed Secure Shell (SSH) alongside other tools. In terms of file packaging, the second wave utilized parameters that preserved file paths (-spf), a method not seen in the first set of actions. Furthermore, the second wave predominantly involved off-the-shelf tools, whereas the first wave featured bespoke tools unique to the attackers.
  • No tool or IOC overlap: We found no common tools or shared infrastructure between the two waves of malicious activity.
  • No use of LAGTOY: We observed that although the first wave deployed LAGTOY, it was never used throughout the course of the intrusion. Why would a threat actor deploy custom-made malware immediately after initial compromise but never use it? It is possible that LAGTOY was intended as a last-resort access channel in case the attackers’ access through compromised credentials was blocked. It is also likely that LAGTOY wasn’t used because it was never meant to be used in the intrusion going forward, i.e., LAGTOY was deployed by a distinct initial access threat actor, different from Cactus. Furthermore, we had no evidence of Cactus developing and using LAGTOY in their operations. Our assessment was now leaning towards Hypothesis B: The initial access might have been carried out by a different attack group and have no relation to Cactus’s activities.
  • Time gap between the first and second waves: There was a gap of approximately 3 weeks with no observed attack activity before the second wave began. For big-game double extortion threat actors, speed is paramount. A successful initial compromise must be capitalized on by performing rapid reconnaissance, endpoint and file enumeration, data exfiltration and ransomware deployment. For operations that tend to focus on a blitz, a gap of weeks with lulls in activity is abnormal. Therefore, we had to consider the possibility that there was a handoff of access between two distinct threat actors conducting the first and second waves of attacks. Furthermore, a gap of 3 weeks suggests that the first threat actor did not have a secondary actor already aligned and available for immediate access; they had to find Cactus. Talos’ assessment was now leaning towards Hypothesis C: The initial access might have been carried out by a different attack group, but there is some connection to Cactus.
  • Shared credentials: Within the first six days of activity, we observed credential harvesting and exfiltration. Three weeks later, the second wave began which we attributed to Cactus. This second wave was kickstarted using the same credentials stolen in the first wave. Therefore, there was indeed a connection between the two waves of activity: the shared stolen credentials.

The totality of patterns and abnormalities collected during our research shifted our assessments toward the hypothesis involving an initial access group, leading us to reanalyze the LAGTOY tool used in the first wave of post-compromise activity. We discovered that this backdoor is the same as HOLERUN, which Mandiant reported as being used by UNC961. This finding, combined with previous public reporting and our observations, allowed us to confirm that the attack involved two distinct attacker groups (ToyMaker, aka UNC961, and Cactus).

Mandiant’s public reporting noted that UNC961’s intrusion activities often preceded the deployment of Maze and Egregor ransomware by distinct follow-on actors. While Egregor is considered a direct successor to Maze, there is no evidence indicating any connection to Cactus. In the campaign we investigated, Cactus used compromised credentials from the first wave of attacks on the victim’s machine. Based on these findings, Talos assesses with high confidence that ToyMaker provided initial access for the Cactus group. Given ToyMaker’s focus on financial gain and their history of selling initial access to ransomware groups, we classify them as an FIA group.

Leveraging the extended Diamond Model for further analysis and defensive strategy

Figure 6. Extended Diamond Models for the ToyMaker and Cactus ransomware groups.

Building on the analysis and context provided, the extended Diamond Model allows Talos to effectively represent the threat actors involved in this campaign, highlighting the intricacies of their collaborative relationships. In Figure 6, we utilize two distinct diamonds to symbolize the ToyMaker group and the Cactus ransomware group. The Relationship Layer plays a crucial role in delineating the connections between ToyMaker’s victims and Cactus’ victims, as well as illustrating the initial access provider-receiver dynamics.

These relationships underscore the importance of carefully reviewing and investigating any capability and infrastructure indicators identified on a victim’s machine that are associated with either threat actor. For example, hosts infected with LAGTOY are potentially at risk of ransomware attacks, and tools discovered on Cactus’ victims might have been left behind by ToyMaker or other initial access groups. 

We can also leverage the relationship information provided by the extended Diamond Model to identify additional potential victims of Cactus ransomware by hunting for hosts infected with the LAGTOY backdoor. Similarly, examining victims associated with ToyMaker can lead to discovering other ransomware attack victims. For defenders, this relationship data is crucial for prioritizing detection efforts and ensuring that the activities of ToyMaker and other initial access groups are not overlooked, as they can serve as precursors to further attacks. By maintaining vigilance and focusing on these initial access indicators, security teams can proactively identify and mitigate threats before they escalate into full-blown ransomware incidents.

Cisco Talos Blog – Read More

The ransomware landscape in 2025 | Kaspersky official blog

May 12 is World Anti-Ransomware Day. On this memorable day, established in 2020 by both INTERPOL and Kaspersky, we want to discuss the trends that can be traced in ransomware incidents and serve as proof that negotiations with attackers and payments in cryptocurrency are becoming an increasingly bad idea.

Low quality of decryptors

When a company’s infrastructure is encrypted as a result of an attack, the first thing a business wants to do is to get back to normal operations by recovering data on workstations and servers as quickly as possible. From the ransom notes, it may seem that, after paying the ransom, the company will receive a decryptor app that will quickly return all the information to its original state and allow resuming work processes almost painlessly. In practice, this almost never happens.

First, some extortionists simply deceive their victims and don’t send a decryptor at all. Such cases became widely known, for example, thanks to the leak of internal correspondence of the Black Basta ransomware group.

Second, the cybercriminals specialize in encryption, not decryption, so they put little effort into their decryptor applications; the result is that they work poorly and slowly. It may turn out that restoring data from a backup copy is much faster than using the attackers’ utility. Their decryptors often crash when encountering exotic file names or access-rights conflicts (or simply for no apparent reason), and they do not have a mechanism for continuing decryption from the point where it was interrupted. Sometimes, due to faulty logic, they simply corrupt files.

Repeated attacks

It’s common knowledge that a blackmailer who gets paid will keep on blackmailing, and ransomware extortion is no different. Cybercriminal gangs communicate with each other, and “affiliates” switch between ransomware-as-a-service providers. In addition, when law enforcement agencies successfully stop a gang, they’re not always able to arrest all of its members, and those who’ve evaded capture take up their old tricks in another group. As a result, information about someone successfully collecting a ransom from a victim becomes known to the new gang, which tries to attack the same organization – often successfully.

Tightening of legislation

Modern attackers not only encrypt, but also steal data, which creates long-term risks for a company. After a ransomware attack, a company has three main options:

  • publicly report the incident and restore operations and data without communicating with the cybercriminals;
  • report the incident, but pay a ransom to restore the data and prevent its publication;
  • conceal the incident by paying a ransom for silence.

The latter option has always been a ticking time bomb – as the cases of Westend Dental and Blackbaud prove. Moreover, many countries are now passing laws that make such actions illegal. For example:

  • the NIS2 (network and information security) directive and DORA (Digital Operational Resilience Act) adopted in the EU require companies in many industries, as well as large and critical businesses, to promptly report cyber incidents, and also impose significant cyber resilience requirements on organizations;
  • a law is being discussed in the UK that would prohibit government organizations and critical infrastructure operators from paying ransoms, and would also require all businesses to promptly report ransomware incidents;
  • the Cybersecurity Act has been updated in Singapore, requiring critical information infrastructure operators to report incidents, including ones related to supply-chain attacks and to any customer service interruptions;
  • in the U.S., a package of federal directives and state laws prohibiting large payments (more than $100,000) to cybercriminals, and also requiring prompt reporting of incidents, is under discussion and has been partially adopted.

Thus, even having successfully recovered from an incident, a company that secretly paid extortionists risks receiving unpleasant consequences for many years to come if the incident becomes public (for example, after the extortionists are arrested).

Lack of guarantees

Often, companies pay not for decryption, but for an assurance that stolen data won’t be published and that the attack will remain confidential. But there’s never any guarantee that this information won’t surface somewhere later. As recent incidents show, disclosure of the attack itself and stolen corporate data can be possible in several scenarios:

  • As a result of an internal conflict among attackers. For example, due to disagreements within a group or an attack by one group on the infrastructure of another. As a result, the victims’ data is published in order to take revenge, or it’s leaked to help in destroying the assets of a competing gang. In 2025, victims’ data appeared in a leak of internal correspondence of the Black Basta gang; another disclosure of victims’ data was made when the DragonForce group destroyed and seized the infrastructure of two rivals, BlackLock and Mamona. On May 7, the Lockbit website was hacked and data from the admin panel was made publicly available – listing and describing in detail all the group’s victims over the past six months.
  • During a raid by law enforcement agencies on a ransomware group. The police, of course, won’t publish the data itself, but the fact that the incident took place will be disclosed. Last year, Lockbit victims became known this way.
  • Due to a mistake made by the ransomware group itself. Ransomware groups’ infrastructure is often not particularly well protected, and the stolen data can be accidentally found by security researchers, competitors, or just random people. The most striking example was a giant collection of data stolen from five large companies by various ransomware gangs, and published in full by the hacktivist collective DDoSecrets.

Ransomware may not be the main problem

Thanks to the activities of law enforcement agencies and the evolution of legislation, the portrait of a “typical ransomware group” has changed dramatically. The activity of large groups typical of incidents in 2020-2023 has decreased, and ransomware-as-a-service schemes have come to the fore, in which the attacking party can be very small teams or even individuals. An important trend has emerged: as the number of encryption incidents has increased, the total amount of ransoms paid has decreased. There are two reasons for this: firstly, victims increasingly refuse to pay, and secondly, many extortionists are forced to attack smaller companies and ask for a smaller ransom. More detailed statistics can be found in our report on Securelist.

But the main change is that there’ve been more cases where attackers have mixed motives; for example, one and the same group conducts espionage campaigns and simultaneously infects the infrastructure with ransomware. Sometimes the ransomware serves only as a smokescreen to disguise espionage, and sometimes the attackers are apparently carrying out someone’s order for information extraction, and using extortion as an additional source of income. For business owners and managers, this means that in the case of a ransomware incident, it’s impossible to fully understand the attacker’s motivation or check its reputation.

How to deal with a ransomware incident

The conclusion is simple: paying money to ransomware operators may not be the solution, but rather a prolongation and deepening of the problem. The key to a quick business recovery is a response plan prepared in advance.

Organizations need to implement detailed plans for IT and infosec departments to respond to a ransomware incident. Special attention should be given to scenarios for isolating hosts and subnets, disabling VPN and remote access, and deactivating accounts (including primary administrative ones), with a transition to backup accounts. Regular training on restoring backups is also a good idea. And don’t forget to store those backups in an isolated system where they cannot be corrupted by an attack.

To implement these measures and be able to respond ASAP while an attack has not yet affected the entire network, it’s necessary to implement a constant deep monitoring process: large companies will benefit from an XDR solution, while smaller businesses can get high-quality monitoring and response by subscribing to an MDR service.

Kaspersky official blog – Read More

Catching a phish with many faces

Here’s a brief dive into the murky waters of shape-shifting attacks that leverage dedicated phishing kits to auto-generate customized login pages on the fly

WeLiveSecurity – Read More

The IT help desk kindly requests you read this newsletter


Welcome to this week’s edition of the Threat Source newsletter. 

Authority bias is one of the many things that shape how we think. Taking the advice of someone with recognized authority is often far easier (and usually leads to a better outcome) than spending time and effort in researching the reasoning and logic behind that advice. Put simply, it’s easier to take your doctor’s advice on health matters than it is to spend years in medical school learning why the advice you received is necessary. 

This tendency to respect and follow authoritative instructions translates into our use of computers, too. If you’re reading this, you’ve likely been the recipient of many questions about computer-related matters from friends and family. However, your trust can be abused, even by someone who seems knowledgeable and respectable. 

Attackers have learned that by impersonating individuals with some form of authority, such as banking staff, tax officials or IT professionals, they can persuade victims to carry out actions against their own interests. In our most recent Incident Response Quarterly Trends update, we describe how ransomware actors masquerade as IT agents when contacting their victims, instructing them to install remote access software. This allows the threat actor to set up long-term access to the device and continue the pursuit of their malicious objectives. 

If someone contacts you out of the blue professing to be an IT or bank/tax expert with urgent or helpful instructions, end the conversation immediately. Follow up with a call to the main contact details of the team or organization that contacted you to verify if the call was genuine. Be aware of the scams that the bad guys are using and spread awareness far and wide. Expect threat actors to attempt to exploit human nature and its vulnerabilities. 

The one big thing 

Threat hunting is an integral part of any cyber security strategy because identifying potential incursions early allows issues to be swiftly resolved before harm is incurred. There are many different approaches to threat hunting, each of which may uncover different threats.

Why do I care? 

As threat actors increasingly use living-off-the-land binaries (LOLBins) — i.e. using either dual-use tools or the tools that they find already in place on compromised systems — detecting the presence of an intruder is no longer a case of simply finding their malware.  

Spotting bad guys is still possible, but requires a slightly different approach: either looking for evidence of the potential techniques they use, or finding evidence that things aren’t quite as they should be. 

So now what? 

Read about the different types of threat hunting strategies the Talos IR team uses and investigate how these can be used within your environment to improve your chances of finding incursions early.

Top security headlines of the week 

MySQL turns 30  
The popular database was first released on May 23, 1995, and is at the heart of many high-traffic applications such as Facebook, Netflix, Uber, Airbnb, Shopify, and Booking.com. (Oracle)

Disney Slack attack wasn’t Russian protesters, just a Cali dude with malware 
A resident of California has pleaded guilty to conducting an attack in which 1.1 TB of data was stolen. The attack was carried out by releasing a trojan masquerading as an AI art generation application. (The Register)

Ransomware Group Claims Attacks on UK Retailers 
The DragonForce ransomware group says it orchestrated the disruptive cyberattacks that hit UK retailers Co-op, Harrods, and Marks & Spencer (M&S). (SecurityWeek)

Attackers Ramp Up Efforts Targeting Developer Secrets 
Attackers are increasingly seeking to steal secret keys or tokens that have been inadvertently exposed in live environments or published in online code repositories. (Dark Reading)

Can’t get enough Talos? 

Spam campaign targeting Brazil abuses Remote Monitoring and Management tools 
A new spam campaign is targeting Brazilian users with a clever twist — abusing the free trial period of trusted remote monitoring tools and the country’s electronic invoice system to spread malicious agents. Read now

Threat Hunting with Talos IR
Talos recently published a blog on the framework behind our Threat Hunting service, featuring this handy video:

Upcoming events where you can find Talos 

Most prevalent malware files from Talos telemetry over the past week 

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507   
MD5: 2915b3f8b703eb744fc54c81f4a9c67f   
VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507/  
Typical Filename: VID001.exe  
Detection Name: Win.Worm.Bitmin-9847045-0  

SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91   
MD5: 7bdbd180c081fa63ca94f9c22c457376   
VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91   
Typical Filename: img001.exe  
Detection Name: Simple_Custom_Detection  

SHA256: 47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca   
MD5: 71fea034b422e4a17ebb06022532fdde   
VirusTotal: https://www.virustotal.com/gui/file/47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca   
Typical Filename: VID001.exe   
Detection Name: Coinminer:MBT.26mw.in14.Talos 

SHA256: c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0   
MD5: 8c69830a50fb85d8a794fa46643493b2 
VirusTotal: https://www.virustotal.com/gui/file/c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0  
Typical Filename: AAct.exe 
Detection Name: W32.File.MalParent 

Cisco Talos Blog – Read More

Spam campaign targeting Brazil abuses Remote Monitoring and Management tools

  • Cisco Talos identified a spam campaign targeting Brazilian users with commercial remote monitoring and management (RMM) tools since at least January 2025. Talos observed the use of PDQ Connect and N-able remote access tools in this campaign. 
  • The spam message uses the Brazilian electronic invoice system, NF-e, as a lure to entice users into clicking hyperlinks and accessing malicious content hosted in Dropbox. 
  • Talos has observed the threat actor abusing RMM tools in order to create and distribute malicious agents to victims. They then use the remote capabilities of these agents to download and install ScreenConnect after the initial compromise.
  • Talos assesses with high confidence that the threat actor is an initial access broker (IAB) abusing the free trial periods of these RMM tools. 


Talos recently observed a spam campaign targeting Portuguese-speaking users in Brazil with the intention of installing commercial remote monitoring and management (RMM) tools. The initial infection occurs via specially crafted spam messages purporting to be from financial institutions or cell phone carriers with an overdue bill or electronic receipt of payment issued as an NF-e (see Figures 1 and 2). 

Figure 1. Spam message purporting to be from a cell phone provider. 
Figure 2. Spam message masquerading as a bill from a financial institution. 

Both messages link to a Dropbox file, which contains the malicious binary installer for the RMM tool. The file names also reference NF-e: 

  • AGENT_NFe_<random>.exe 
  • Boleto_NFe_<random>.exe 
  • Eletronica_NFe_<random>.exe 
  • Nf-e<random>.exe 
  • NFE_<random>.exe 
  • NOTA_FISCAL_NFe_<random>.exe 

Note: <random> means the filename uses a random sequence of letters and numbers in that position. 
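For hunting purposes, the naming patterns above can be approximated with a simple expression. The sketch below treats <random> as a run of alphanumeric characters, which is an assumption; actual file names may vary.

```python
# Sketch: a regex covering the NF-e-themed installer names listed above.
# "<random>" is approximated as alphanumeric characters; real names may differ.
import re

NFE_NAME = re.compile(
    r"^(AGENT_NFe_|Boleto_NFe_|Eletronica_NFe_|Nf-e|NFE_|NOTA_FISCAL_NFe_)"
    r"[A-Za-z0-9]+\.exe$",
    re.IGNORECASE,
)

samples = ["NOTA_FISCAL_NFe_a1b2c3.exe", "Boleto_NFe_9f8e.exe", "invoice.exe"]
for name in samples:
    print(name, "->", bool(NFE_NAME.match(name)))
```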

The victims targeted in this campaign are mostly C-level executives and financial and human resources accounts across several industries, including some educational and government institutions. This assessment is based on the most common recipients found in the messages Talos observed during this campaign. 

Figure 3. Targeted recipients.

Abusing RMM tools for profit 

This campaign’s objective is to lure the victims into installing an RMM tool, which allows the threat actor to take complete control of the target machine. N-able RMM Remote Access, developed by N-able, Inc. (formerly SolarWinds MSP), is the most common tool distributed in this campaign. N-able is aware of this abuse and took action to disable the affected trial accounts. Another tool Talos observed in some cases is PDQ Connect, a similar RMM application. Both provide a 15-day free trial period.

To assess whether these actors were using a trial version rather than stolen credentials to create these accounts, Talos checked samples older than 15 days and confirmed all of them returned errors that the accounts were disabled, while newer samples found in the last 15 days were all active.

Talos also examined the email accounts used to register for the service. They all use free email services such as Gmail or Proton Mail, as well as usernames following the theme of the spam campaign, with few exceptions where the threat actors used personal accounts. These exceptions are potentially compromised accounts which are being abused by the threat actors to create additional trial accounts. Talos did not find any samples in which the registered account was issued by a private company, so we can assess with high confidence these agents were created using trial accounts instead of stolen credentials.


Talos found no evidence of a common post-infection behavior for the affected machines, with most machines staying infected for days before any other malicious activity was executed by the tool. However, in some cases, we observed the threat actor installing an additional RMM tool and removing all security tools from the machine a few days after the initial compromise. This is consistent with actions of initial access broker (IAB) groups. 

An IAB’s main objective is to rapidly create a network of compromised machines and then sell access to the network to third parties. Threat actors commonly use IABs when looking for specific target companies to deploy ransomware on. However, IABs have varied priorities and may sell their services to any threat actors, including state-sponsored actors.  

Adversaries’ abuse of commercial RMM tools has steadily increased in recent years. These tools are of interest to threat actors because they are usually digitally signed by recognized entities and function as fully featured backdoors. They also have little to no cost in software or infrastructure, as all of this is generally provided by the trial version of the application.  

Talos created a trial account to test what features were available for a trial user. In the case of the N-able remote access tool, the trial version offers a full set of features limited only by the 15-day trial period. Talos was able to confirm that by using a trial account, the threat actor has full access to the machine, including remote desktop-like access, remote command execution, screen streaming, keystroke capture and remote shell access. 

Figure 4. N-able management interface showing available remote access tools. 
Figure 5. Administrative shell executed on a remote machine. 

The threat actor also has access to a fully featured file manager to easily read and write files to the remote file system. 

Figure 6. N-able file manager. 

The network traffic these tools generate also blends in with regular traffic: many tools communicate over HTTPS and connect to resources that are part of the infrastructure provided by the application vendor. For example, N-able Remote Access uses domains associated with its management interface, hosted on Amazon Web Services (AWS):

  • hxxps://upload1[.]am[.]remote[.]management/ 
  • hxxps://upload2[.]am[.]remote[.]management/ 
  • hxxps://upload3[.]am[.]remote[.]management/ 
  • hxxps://upload4[.]am[.]remote[.]management/ 

Disclaimer: The URLs above are part of the management infrastructure for the RMM tools described in this blog and are not controlled by the threat actor. Customers must complete an assessment before enabling block signatures for these domains. 
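
For defenders who want to complete that assessment, a reasonable first step is to identify which internal hosts have contacted these management domains at all. The short sketch below is only an illustration: it assumes a CSV export of proxy or DNS logs with src_host and domain columns, which are not part of the Talos research.

import csv
from collections import defaultdict

# Management domains listed above (defanged in the blog; re-fanged here for matching)
RMM_DOMAINS = {
    "upload1.am.remote.management",
    "upload2.am.remote.management",
    "upload3.am.remote.management",
    "upload4.am.remote.management",
}

hits = defaultdict(set)
with open("proxy_log.csv", newline="") as log:  # hypothetical log export
    for row in csv.DictReader(log):
        if row["domain"].lower().rstrip(".") in RMM_DOMAINS:
            hits[row["src_host"]].add(row["domain"])

for host, domains in sorted(hits.items()):
    print(f"{host} contacted: {', '.join(sorted(domains))}")

Hosts with no business reason to run N-able Remote Access can then be prioritized for review before any blocking decision is made.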

The domain the agent uses is the same for any customer using the tool, with only the username and API key differentiating which customer the agent belongs to, as can be seen in Figure 7. This makes it even more difficult to identify the origin of the attacks and perform threat actor attribution.

Figure 7. Example configuration file. 

By extracting the configuration files inside the agent installer files still available on Dropbox, we can see that some of the registered email addresses follow the same theme as the spam emails, using finance-related usernames and domains, while others appear to be compromised accounts used to create trial accounts for N-able Remote Access.
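
As a rough illustration of that triage step (pulling out the registration addresses so they can be grouped by theme), the following sketch assumes the configuration files have already been extracted from the installers into a local folder; the folder name and regular expression are ours, not artifacts from the campaign.

import re
from pathlib import Path

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

emails = set()
for cfg in Path("extracted_configs").rglob("*"):  # hypothetical output folder
    if cfg.is_file():
        try:
            emails.update(EMAIL_RE.findall(cfg.read_text(errors="ignore")))
        except OSError:
            continue  # skip files that cannot be read as text

for address in sorted(emails):
    print(address)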

Because these trial versions are limited only by time and provide full remote-control features at little to no cost to the threat actors, Talos expects these tools to become even more common in attacks.

Cisco Secure Firewall Application control is able to detect the unintended use of RMM tools in customers' networks. Instructions on how to set up Application control can be found in the Cisco Secure Firewall documentation.

Coverage 

Ways our customers can detect and block this threat are listed below. 


Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here. 

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat. 

Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device. 

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products. 

Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent and secure access to the internet, cloud services or private applications no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access. 

Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.  

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.  

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.  

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org

ClamAV detections are also available for this threat:  

Txt.Backdoor.NableRemoteAccessConfig-10044370-0
Txt.Backdoor.NableRemoteAccessConfig-10044371-0 
Txt.Backdoor.NableRemoteAccessConfig-10044372-0 

Indicators of Compromise 

Disclaimer: The URLs below are part of the management infrastructure for the RMM tools described in this blog and are not controlled by the threat actor. An assessment must be done by customers before enabling block signatures for these domains. 

IOCs for this threat can be found on our GitHub repository here. 

Network IOCs 

hxxps://upload1[.]am[.]remote[.]management/ 
hxxps://upload2[.]am[.]remote[.]management/ 
hxxps://upload3[.]am[.]remote[.]management/ 
hxxps://upload4[.]am[.]remote[.]management/ 
198[.]45[.]54[.]34[.]bc[.]googleusercontent[.]com 

RMM Installer – Hashes 

03b5c76ad07987cfa3236eae5f8a5d42cef228dda22b392c40236872b512684e 
0759b628512b4eaabc6c3118012dd29f880e77d2af2feca01127a6fcf2fbbf10 
080e29e52a87d0e0e39eca5591d7185ff024367ddaded3e3fd26d3dbdb096a39 
0de612ea433676f12731da515cb16df0f98817b45b5ebc9bbf121d0b9e59c412 
1182b8e97daf59ad5abd1cb4b514436249dd4d36b4f3589b939d053f1de8fe23 
14c1cb13ffc67b222b42095a2e9ec9476f101e3a57246a1c33912d8fe3297878 
2850a346ecb7aebee3320ed7160f21a744e38f2d1a76c54f44c892ffc5c4ab77 
4787df4eea91d9ceb9e25d9eb7373d79a0df4a5320411d7435f9a6621da2fd6b 
51fa1d7b95831a6263bf260df8044f77812c68a9b720dad7379ae96200b065dd 
527a40f5f73aeb663c7186db6e8236eec6f61fa04923cde560ebcd107911c9ff 
57a90105ad2023b76e357cf42ba01c5ca696d80a82f87b54aea58c4e0db8d683 
63cde9758f9209f15ee4068b11419fead501731b12777169d89ebb34063467ea 
79b041cedef44253fdda8a66b54bdd450605f01bbb77ea87da31450a9b4d2b63 
a2c17f5c7acb05af81d4554e5080f5ed40b10e3988e96b4d05c4ee3e6237c31a 
b53f9c2802a0846fc805c03798b36391c444ab5ea88dc2b36bffc908edc1f589 
c484d3394b32e3c7544414774c717ebc0ce4d04ca75a00e93f4fb04b9b48ecef 
ca11eb7b9341b88da855a536b0741ed3155e80fc1ab60d89600b58a4b80d63a5 
d1efebcca578357ea7af582d3860fa6c357d203e483e6be3d6f9592265f3b41c 
e2171735f02f212c90856e9259ff7abc699c3efb55eeb5b61e72e92bea96f99c 
e34b8c9798b92f6a0e2ca9853adce299b1bf425dedb29f1266254ac3a15c87cd 
ebdefa6f88e459555844d3d9c13a4d7908c272128f65a12df4fb82f1aeab139f 
f52b4d81c73520fd25a2cc9c6e0e364b57396e0bb782187caf7c1e49693bebbf 
f5efd939372f869750e6f929026b7b5d046c5dad2f6bd703ff1b2089738b4d9c 
f68ae2c1d42d1b95e3829f08a516fb1695f75679fcfe0046e3e14890460191cf 
a71e274fc3086de4c22e68ed1a58567ab63790cc47cd2e04367e843408b9a065

Cisco Talos Blog – ​Read More

Safeguarding your browsing history | Kaspersky official blog

In April, the release of version 136 of Google Chrome finally addressed a privacy issue that's been widely known about since 2002 (and which, incidentally, is also present in all other major browsers). This was really bad news for unscrupulous marketers, who'd been exploiting it wholesale for 15 years. From this menacing description, you might be surprised to learn that the threat is a familiar and seemingly harmless convenience: links that your browser highlights in a different color after you visit them.

From a blue sky to purple rain

Changing the color of links to visited sites (by default from blue to purple) was first introduced 32 years ago in the NCSA Mosaic browser. This user-friendly practice was then adopted by almost all browsers in the 1990s, and later became part of the Cascading Style Sheets (CSS) standard, the language used to style web pages. Such recoloring occurs by default in all popular browsers today.

However, as early as 2002, researchers noticed that this feature could be abused by placing hundreds or thousands of invisible links on a page and using JavaScript to detect which of them the browser renders as visited. In this way, a rogue site could partially uncover a user's browsing history.

In 2010, researchers discovered that this technique was being used in the wild by some major sites to snoop on visitors — among them YouPorn, TwinCities, and around 480 other then-popular sites. It was also found that platforms like Tealium and Beencounter were offering history-sniffing services, while the advertising firm Interclick was using the technique for analytics, and even faced legal action over it. Although Interclick won the lawsuit, the major browsers have since modified the way they process links to make it impossible to read whether a link has been visited.

However, advances in web technologies created new workarounds for snooping on browsing history. A 2018 study described four new ways to check the state of links — two of which affected all tested browsers except the Tor Browser. One of the vulnerabilities — CVE-2018-6137 — made it possible to check visited sites at a rate of up to 3000 links per second. Meanwhile, new and increasingly sophisticated attacks to extract browsing history continue to appear.

Why history theft is dangerous

Exposing your browsing history, even partially, poses several threats to users.

Not-so-private life. Knowing what sites you visit (especially if it relates to medical treatment, political parties, dating/gambling/porn sites, and similar sensitive topics), attackers can weaponize this information against you. They can then tailor a scam or bait to your individual case — be it extortion, a fake charity, the promise of new medication, or something else.

Targeted checks. A history-sniffing site could, for example, run through all the websites of the major banks to determine which one you use. Such information can be of use to both cybercriminals (say, for creating a fake payment form to fool you) and legitimate companies (say, for seeing which competitors you’ve looked at).

Profiling and deanonymization. We’ve written many times about how advertising and analytics companies use cookies and fingerprinting to track user movements and clicks across the web. Your browsing history serves as an effective fingerprint, especially when combined with other tracking technologies. If an analytics firm’s site can see what other sites you visited and when, it essentially functions as a super-cookie.

Guarding against browser history theft

Basic protection appeared in 2010 almost simultaneously in the Gecko (Firefox) and WebKit (Chrome and Safari) browser engines. This guarded against using basic code to read the state of links.

Around the same time, Firefox 3.5 introduced the option to completely disable the recoloring of visited links. In the Firefox-based Tor Browser, this option is enabled by default — but the option to save browsing history is disabled. This provides a robust defense against the whole class of attacks but sorely impacts convenience.

Unless you're willing to sacrifice some convenience, however, sophisticated attacks will still be able to sniff your browsing history.

Attempts are underway at Google to significantly change the status quo: starting with version 136, Chrome will have visited link partitioning enabled by default. In brief, it works like this: links are only recolored if they were clicked from the current site; and when attempting a check, a site can only “see” clicks originating from itself.

The database of website visits (and clicked links) is maintained separately for each domain. For example, suppose bank.com embeds a widget showing information from banksupport.com, and this widget contains a link to centralbank.com. If you click the centralbank.com link, it will be marked as visited — but only within the banksupport.com widget displayed on bank.com. If the exact same banksupport.com widget appears on some other site, the centralbank.com link will appear as unvisited. Chrome’s developers are so confident that partitioning is the long-awaited silver bullet that they’re nurturing tentative plans to switch off the 2010 mitigations.
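
A simplified way to picture the partitioning is as a lookup keyed not just by the link URL, but by the combination of link URL, top-level site, and frame origin. The sketch below is our own illustration of that idea, not Chrome's implementation:

# Simplified model of visited-link partitioning (illustrative only)
visited = set()

def record_click(link_url, top_level_site, frame_origin):
    visited.add((link_url, top_level_site, frame_origin))

def is_visited(link_url, top_level_site, frame_origin):
    return (link_url, top_level_site, frame_origin) in visited

# The centralbank.com link is clicked inside the banksupport.com widget on bank.com
record_click("https://centralbank.com/", "bank.com", "https://banksupport.com")

# Same widget on bank.com: rendered as visited
print(is_visited("https://centralbank.com/", "bank.com", "https://banksupport.com"))   # True
# The identical widget embedded on another site: rendered as unvisited
print(is_visited("https://centralbank.com/", "other.com", "https://banksupport.com"))  # False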

What about users?

If you don't use Chrome (which, incidentally, has plenty of other privacy issues), you can take a few simple precautions to ward off the purple menace.

  • Update your browser regularly to stay protected against newly discovered vulnerabilities.
  • Use incognito or private browsing if you don’t want others to know what sites you visit. But read this post first — because private modes are no cure-all.
  • Periodically clear cookies and browsing history in your browser.
  • Disable the recoloring of visited links in the settings.
  • Use tools to block trackers and spyware, such as Private Browsing in Kaspersky Premium, or a specialized browser extension.

To find out how else browsers can snoop on you, check out these blog posts:

Kaspersky official blog – ​Read More

Nitrogen Ransomware Exposed: How ANY.RUN Helps Uncover Threats to Finance 

The financial sector is heavily targeted by cybercriminals. Banks, investment firms, and credit unions are prime victims of attacks aimed at stealing sensitive data or holding it hostage for massive ransoms. One emerging threat in this landscape is Nitrogen Ransomware, a malicious group discovered in September 2024.  

It has since become notorious for several successful attacks, such as the one on SRP Federal Credit Union in South Carolina in December 2024. However, information on the group's TTPs remains scarce, and this gap highlights the value of solutions like ANY.RUN's threat intelligence and malware analysis suite.

Why Financial Sector Is Vulnerable 

The numbers don’t lie: in 2024, 10% of all cyberattacks targeted the financial industry, according to reports.  

From ransomware to financial fraud and cloud infrastructure exploits, banks and credit unions face virtually every kind of threat there is. The stakes are high — cyberattacks now cost organizations up to $2.5 billion per incident, with ransomware attacks alone spiking to 20–25 major incidents daily, a fourfold increase in financial losses since 2017.

Why is the financial sector so attractive? It’s simple: money and data. Financial institutions hold sensitive customer information and control vast sums of capital, which makes them tempting targets for ransomware groups like Nitrogen. Early detection and adversary tactics analysis are critical to minimizing damage, and that’s where services like ANY.RUN’s Interactive Sandbox and Threat Intelligence Lookup come in handy. 

Meet Nitrogen Ransomware 

There are traces of Nitrogen dating back to July 2023, but the consensus is to track it from September 2024. It was initially observed targeting not only finance but also construction, manufacturing, and tech, primarily in the United States, Canada, and the United Kingdom. The routine was to encrypt critical data and demand a ransom to unlock it. One of its confirmed victims, SRP Federal Credit Union, a South Carolina-based institution serving over 195,000 customers, fell prey on December 5, 2024.

Little is known about Nitrogen’s tactics due to limited public data, but a report by StreamScan provides a starting point.  

It offers key indicators of compromise and some insights into the methods. Interestingly, Nitrogen shares similarities with another ransomware strain, LukaLocker, including identical file extensions for encrypted files and similar ransom notes. This overlap raises questions about their origins, but deeper analysis is needed to confirm connections. 

The StreamScan report is the primary source of information on Nitrogen, detailing a few critical IOCs:

  • Ransomware File: A malicious executable with the SHA-256 hash 55f3725ebe01ea19ca14ab14d747a6975f9a6064ca71345219a14c47c18c88be 
  • Mutex: A unique identifier, nvxkjcv7yxctvgsdfjhv6esdvsx, created by the ransomware before encryption. 
  • Vulnerable Driver: truesight.sys, a legitimate but exploitable driver used to disable antivirus and endpoint detection tools. 
  • System Manipulation: Use of bcdedit.exe to disable Windows Safe Boot, hindering system recovery. 

While this report is a good start, it’s light on details. This is where ANY.RUN steps in, offering deeper insights through dynamic analysis and threat intelligence enrichment.  

ANY.RUN’s Threat Intelligence Versus Nitrogen 

Let’s research some of the above-mentioned indicators via Threat Intelligence Lookup to find more IOCs, behavioral data, and technical details on Nitrogen attacks.  

1. Tracking the Mutex 

Before encrypting files, Nitrogen creates a unique mutex (nvxkjcv7yxctvgsdfjhv6esdvsx) to ensure only one instance of the ransomware runs at a time. Using ANY.RUN’s Threat Intelligence Lookup, analysts can search for this mutex and uncover over 20 related samples, with the earliest dating back to September 2, 2024.  

syncObjectName:”nvxkjcv7yxctvgsdfjhv6esdvsx” 

Mutex search results in TI Lookup  

For each sample, an analysis session can be explored to enrich the understanding of the threat and gather additional indicators not featured in public research.
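
Because the mutex name is fixed, it can also double as a quick local triage check outside the sandbox. The following Windows-only sketch is our own example, not ANY.RUN functionality: it simply asks the operating system whether the mutex already exists on the host.

import ctypes

SYNCHRONIZE = 0x00100000
MUTEX_NAME = "nvxkjcv7yxctvgsdfjhv6esdvsx"  # mutex reported for Nitrogen

kernel32 = ctypes.windll.kernel32
handle = kernel32.OpenMutexW(SYNCHRONIZE, False, MUTEX_NAME)
if handle:
    print("Mutex present: a running process may be Nitrogen - investigate immediately.")
    kernel32.CloseHandle(handle)
else:
    print("Mutex not found on this host.")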

All sandbox analyses contain a selection of linked IOCs

ANY.RUN’s analyses also link Nitrogen to LukaLocker, as both share similar code structures and behaviors. By identifying additional IOCs from related tasks, ANY.RUN helps organizations update their detection systems to block Nitrogen before it strikes. 

An analysis session in the sandbox where Luka was detected 



2. Exposing the Vulnerable Driver 

Nitrogen exploits truesight.sys, a legitimate driver from RogueKiller AntiRootkit, to terminate AV/EDR processes and thereby disable antivirus and endpoint detection tools. This driver, listed in the LOLDrivers catalog, is favored by threat actors because it is not inherently malicious and therefore does not trigger standard defenses.

truesight.sys description in LOLDrivers’ catalog

ANY.RUN’s TI Lookup reveals over 50 analyses linked to truesight.sys: 

sha256:”Bfc2ef3b404294fe2fa05a8b71c7f786b58519175b7202a69fe30f45e607ff1c” 

Sandbox sessions featuring the abused driver  

By parsing these analyses, teams see how the driver can be abused, from terminating security processes to evading detection.  

Malicious behavior exposed by a sandbox analysis 

Using the driver's name as a search query with the "commandLine" parameter returns a selection of system events involving the driver:

commandLine:”*truesight.sys” 

System events observed via TI Lookup 

ANY.RUN’s Interactive Sandbox’s ability to detect and flag this activity ensures organizations can block such exploits early. 
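
The same hash can be used for a simple endpoint sweep. The sketch below is a hedged example, not a product feature: it hashes driver files in the default Windows drivers directory and compares them against the flagged truesight.sys build (a complete sweep would check a fuller hash set and more locations).

import hashlib
from pathlib import Path

KNOWN_BAD = "bfc2ef3b404294fe2fa05a8b71c7f786b58519175b7202a69fe30f45e607ff1c"
DRIVER_DIR = Path(r"C:\Windows\System32\drivers")

for sys_file in DRIVER_DIR.glob("*.sys"):
    try:
        digest = hashlib.sha256(sys_file.read_bytes()).hexdigest()
    except OSError:
        continue  # skip files we cannot read
    if digest == KNOWN_BAD:
        print(f"Match: {sys_file} corresponds to the flagged truesight.sys hash")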





3. Catching System Manipulation 

Nitrogen uses the Windows utility bcdedit.exe to disable Safe Boot, a recovery mechanism that could otherwise help restore an infected system. As the StreamScan report says: 

Example from the StreamScan report

ANY.RUN allows analysts to use YARA rules to search for this behavior, identifying samples that tamper with system settings.  

YARA rule from the StreamScan report 

A simple YARA search in ANY.RUN’s TI Lookup returned several files linked to this tactic, each with associated analysis sessions that reveal additional IOCs. 

YARA rule search in TI Lookup 

By integrating these IOCs into SIEM or EDR systems, organizations can detect and block attempts to modify Windows boot settings before encryption begins, stopping Nitrogen in its tracks; a minimal detection sketch follows the checklist below. To defend against threats like Nitrogen, security teams should: 

  • Monitor for unusual use of PowerShell, WMI, and DLL sideloading. 
  • Block known malicious infrastructure and domains. 
  • Educate employees about phishing and social engineering tactics. 
  • Use threat intelligence services to proactively hunt for related IOCs and TTPs
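
Returning to the boot-settings detection mentioned above, Safe Boot tampering can be expressed as a simple check over process-creation command lines. The sketch below assumes a JSON export of such events (for instance from Sysmon Event ID 1 or an EDR); the file name and field name are assumptions, not part of the StreamScan or ANY.RUN data.

import json

SUSPICIOUS_TOKENS = ("bcdedit", "safeboot")

with open("process_events.json") as f:  # hypothetical export of process-creation events
    events = json.load(f)

for event in events:
    cmdline = event.get("CommandLine", "").lower()
    if all(token in cmdline for token in SUSPICIOUS_TOKENS):
        print(f"Possible Safe Boot tampering: {cmdline}")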

Conclusion 

The financial sector's battle against ransomware is far from over, but solutions like ANY.RUN are leveling the playing field. By dissecting Nitrogen Ransomware's tactics — system manipulation, driver exploitation, and mutex creation — ANY.RUN empowers cybersecurity teams to detect, analyze, and respond to the threat faster. Its dynamic analysis capabilities let analysts observe malware in action, from file encryption to system component abuse, in a safe sandbox environment. Meanwhile, its TI Lookup enriches threat data by providing additional indicators and uncovering connections to other attacks, campaigns, and techniques.

Nitrogen is a reminder that today's cyberattacks are not only persistent — they're precise. As Nitrogen and similar groups continue to evolve, staying proactive with dynamic analysis and enriched threat intelligence is key to keeping financial institutions safe from direct capital losses and reputational damage.

About ANY.RUN

ANY.RUN helps more than 500,000 cybersecurity professionals worldwide. Our interactive sandbox simplifies malware analysis of threats that target both Windows and Linux systems. Our threat intelligence products, TI Lookup, YARA Search, and Feeds, help you find IOCs or files to learn more about the threats and respond to incidents faster.

Request free trial of ANY.RUN’s services → 

The post Nitrogen Ransomware Exposed: How ANY.RUN Helps Uncover Threats to Finance  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Mamona: Technical Analysis of a New Ransomware Strain

Editor’s note: The current article is authored by Mauro Eldritch, offensive security expert and threat intelligence analyst. You can find Mauro on X. 

These days, it's easy to come across new ransomware strains. But the ransomware threat landscape is far broader than it seems, especially when you dive into the commodity ransomware scene. This type of ransomware is developed by a group that sells a builder to third-party operators, with no formal agreement or contract between them, unlike the more organized Ransomware-as-a-Service (RaaS) model.

On this side of the fence, we see countless new products appearing on the cybercrime shelf every day. They’re much harder to track, as victims, strains, infrastructure, and builds often have no direct connection to each other. 

Let's take a look at one of them: Mamona Ransomware. Never heard of it? That's probably because it's a new strain, but despite its short lifespan, it has already made some noise. It's been spotted in campaigns run by BlackLock affiliates (who are also linked to Embargo), one of its online builders was exposed and later leaked on the clearnet, and the DragonForce group even stole the main website's .env file, publishing it on their Dedicated Leak Site on Tor under the headline: "Is this your .env file?"

So, let’s find out what this is all about. 

Mamona Ransomware in action

Mamona Ransomware: Key Takeaways 

  • Emerging threat: Mamona is a newly identified commodity ransomware strain. 
  • No external communication: The malware operates entirely offline, with no observed Command and Control (C2) channels or data exfiltration. 
  • Local encryption only: All cryptographic processes are executed locally using custom routines, with no reliance on standard libraries. 
  • Obfuscated delay technique: A ping to 127[.]0.0[.]7 is used as a timing mechanism, followed by a self-deletion command to minimize forensic traces. 
  • False extortion claims: The ransom note threatens data leaks, but analysis confirms there is no actual data exfiltration. 
  • File encryption behavior: User files are encrypted and renamed with the .HAes extension; ransom notes are dropped in multiple directories. 
  • Decryption available: A working decryption tool was identified and successfully tested, enabling file recovery. 
  • Functional, despite poor design: The decrypter features an outdated interface but effectively restores encrypted files. 

This emerging ransomware can be clearly observed within ANY.RUN’s cloud-based sandbox environment. You can explore a full analysis session below for a detailed visual breakdown. 

View analysis session with Mamona ransomware 

Offline and Dangerous: Mamona’s Silent Tactics 

When you hear about ransomware, your first educated guess is usually a threat that comes from the outside, exfiltrates sensitive files, encrypts the local versions, and then demands a ransom. Pretty much the full ransomware cycle. But this one is different: it has no network communication at all, surprisingly acting as a mute ransomware. So far, the only connections it attempts are local, plus one to port 80 (HTTP), where no data is actually sent or received.

A connection to port 80 is attempted, but not established 

This lack of network communication strongly suggests that the encryption key is either generated locally on the fly or hardcoded within the binary itself. In the medium term, this increases the chances of reverse-engineering a working decrypter which, fortunately, we already have in this case. 

A closer look reveals that the encryptor relies entirely on homemade routines. There are no calls to standard cryptographic libraries, no use of the Windows CryptoAPI, and no references to external modules like OpenSSL. Instead, all cryptographic logic is implemented internally using low-level memory manipulation and arithmetic operations. 



One key routine is located at internal offsets such as 0x40E100. This function is repeatedly called after pushing registers and buffer pointers to the stack and exhibits patterns typical of custom symmetric logic. 

Custom encryption logic with no standard crypto 

The symmetric structure reinforces the hypothesis of a static or trivially derived key, making Mamona a strong example of commodity ransomware that prioritises simplicity over cryptographic robustness. 
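
To illustrate why that matters, the toy transform below shows what a trivially reversible "custom" symmetric routine looks like in principle. It is purely hypothetical and is not Mamona's actual algorithm; the constant is arbitrary.

# Hypothetical illustration only - NOT Mamona's routine
KEY = 0x23  # arbitrary fixed constant

def toy_encrypt(data: bytes) -> bytes:
    return bytes((b + KEY) & 0xFF for b in data)

def toy_decrypt(data: bytes) -> bytes:
    return bytes((b - KEY) & 0xFF for b in data)

sample = b"important document"
assert toy_decrypt(toy_encrypt(sample)) == sample
print("Round trip succeeded: a fixed key makes a standalone decrypter straightforward.")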

Still, just because this malware doesn’t communicate with external hosts doesn’t mean it can’t cause serious local damage. Let’s take a closer look. 

How Mamona Executes Its Attack 

The first thing Mamona does is execute a ping command as a crude time delay mechanism, chaining it with a self-deletion routine via cmd.exe. 

The use of ping 127[.]0.0[.]7 is a classic trick in commodity malware: instead of using built-in sleep APIs or timers (which can be flagged by behavioural monitoring), the malware sends ping requests to a loopback IP address, effectively pausing execution.  

Interestingly, it uses 127[.]0.0[.]7 instead of the more common 127[.]0.0[.]1, likely as a basic form of obfuscation. It’s still within the reserved localhost block (127[.]0.0[.]0/8) but may bypass simple detection rules that specifically target 127[.]0.0[.]1. 


A crude yet useful delay mechanism

Once the short delay is complete, the second part of the command attempts to delete the executable from disk using Del /f /q. Since a process can’t delete itself while it’s still running, this whole sequence is executed in a separate shell process. This is a simple but effective form of self-cleanup, aimed at reducing forensic traces post-infection. 
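
The chained command is also a convenient detection point. The sketch below is a hedged illustration (not an ANY.RUN signature) that flags command lines combining a ping to a loopback address with a forced file deletion; the example command line is ours.

import re

DELAY_AND_DELETE = re.compile(r"ping\s+127\.0\.0\.\d+.*&.*del\s+/f\s+/q", re.IGNORECASE)

command_lines = [
    'cmd.exe /c ping 127.0.0.7 -n 10 > nul & Del /f /q "C:\\Users\\victim\\sample.exe"',  # illustrative
    "ping 8.8.8.8 -n 1",
]

for cmdline in command_lines:
    if DELAY_AND_DELETE.search(cmdline):
        print(f"Suspicious delay-then-self-delete chain: {cmdline}")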

Even if the mechanism isn't obvious, ANY.RUN understands the hidden intention and flags the behavior

Mamona begins with a straightforward reconnaissance phase, harvesting basic host data like the system’s name and configured language. It then proceeds to drop a ransom note (README.HAes.txt) not only on the Desktop, but recursively inside multiple folders, increasing the chances the victim will see it. 

Recon routine and ransom note dropping

Following the ransom note deployment, Mamona begins encrypting user files, renaming them with the .HAes extension and making them inaccessible. To reinforce the impact, it changes the system wallpaper to a stark warning: “Your files have been encrypted!” 

Files receive a new extension

The ransom note shares links to a dedicated leak site (DLS) and a victim chat support portal, both on Tor. It also states that "we have stolen a significant amount of your important files from your network" and "Refuse to pay: your stolen data will be published publicly", but as discussed earlier, that does not actually happen. There is literally no network activity, so this appears to be an empty threat meant to coerce the victim into paying the ransom.

“Mamona, R.I.P!”. Ransom note, with a couple of lies

But we have an ace up our sleeve. For this engagement, alongside the malware sample, we also managed to obtain a decrypter thanks to Merlax, a friend and fellow malware researcher. Let’s take a look at how it works. 

Undoing Mamona’s Damage 

We’re dealing with a Ctrl-Z in .exe form, so let’s give it a chance and see how it performs. Visually, it’s a mess: the interface looks like a homemade project built with an older version of Visual Studio. UI elements are poorly rendered, often misaligned or clipping outside window boundaries.  

But the backend does its job far better than the frontend, and the files are back to normal. 

Files on the desktop went back to normal

By analysing the decrypter, we find an interesting internal function at offset 0x40C270. Much like in the ransomware sample, we observe a series of low-level operations: alignment to 4-byte boundaries (and $0xfffffffc, %ecx), fixed memory offsets (add $0x23), and repeated use of instructions such as mov, lea, and arithmetic operations, all indicative of a custom-built symmetric routine. 

Despite the absence of a traditional XOR operation, the logic appears reversible and consistent with homemade encryption mechanisms. 

Disassembly of the decrypter around offset 0x40C270

We have already infected our test machine and vaccinated it, and we are ready for the next stop on our journey: the ATT&CK Matrix. As usual, ANY.RUN takes care of that automagically. 





Mapping the Threat: Mamona via MITRE ATT&CK 

ANY.RUN’s ATT&CK integration makes it easy to understand and track malware behaviour by profiling its events, tactics, and techniques.  

Mamona’s ATT&CK Matrix on ANY.RUN

Let’s take a look at how Mamona’s behaviour fits into this framework: 

  • Discovery: T1012: Query Registry + T1082: System Information Discovery. The reconnaissance routine in which the malware queries the registry and local settings, such as the hostname and configured language. 
  • Execution: T1059.003: Command and Scripting Interpreter: Windows Command Shell. Mamona spawns CMD to invoke ping as a cheap delay mechanism and then moves on to its self-deletion. 
  • Defense Evasion: T1070.004: Indicator Removal: File Deletion. The self-deletion routine attached to the previous ping command. 
  • Impact: T1486: Data Encrypted for Impact. The encryption process where all our files end up having the “.HAes” extension. 

This sums up Mamona’s behavior, which deviates from the usual pattern seen in commodity ransomware. It shows no network activity, no Command and Control channels over Telegram, Discord, or similar platforms. Instead, it relies on a weak, locally executed key generation routine and doesn’t include any form of double extortion, making its threats of data theft and publication purely coercive. 

What it does have is a retro-styled decrypter that, despite its clunky and outdated interface, simply works. 

Mamona Threat Impact 

The Mamona ransomware campaign presents significant risks despite its offline, minimalistic design: 

For end users: Victims face immediate file encryption, system disruption, and psychological pressure through false claims of data theft. The ransom note’s threatening tone adds urgency, even though there’s no actual data exfiltration. 

For organizations: Mamona can interrupt workflows, encrypt shared drives, and complicate incident response, especially in environments lacking offline backups or real-time monitoring. Its simplicity also makes it harder to detect through conventional network-based defenses. 

For security teams: The absence of C2 traffic and use of locally executed logic reduce visibility in traditional detection systems. Its use of basic commands like ping and cmd.exe mimics legitimate activity, requiring deeper behavioral analysis to flag accurately. 

For the broader threat landscape: Mamona exemplifies the rise of easy-to-use, builder-based ransomware that favors simplicity over sophistication. Its leaked builder lowers the entry barrier for attackers, raising concerns about wider adoption by low-skilled threat actors. 

Conclusion 

The analysis of Mamona Ransomware shows how even a quiet, offline threat can cause disruptions.  

This strain highlights a rising trend: ransomware that trades complexity for accessibility. It’s easy to deploy, harder to detect with traditional tools, and still effective enough to encrypt systems and pressure victims into paying. Its leaked builder and low barrier to entry only raise the risk of widespread abuse by less sophisticated attackers. 

By analyzing Mamona in real time using ANY.RUN’s Interactive Sandbox, we were able to capture the full attack chain, from initial execution and system changes to ransom note deployment and encryption logic, all without needing external network traces. 

Here’s how this type of dynamic analysis helps defenders stay ahead: 

  • Detect threats faster: Spot unusual behavior, even in offline-only attacks. 
  • See everything in motion: Monitor local activity, file operations, and persistence techniques as they happen. 
  • Speed up investigations: Gather and interpret IOCs without jumping from one tool to another. 
  • Respond more effectively: Share artifacts and tactics across security teams. 

Experience real-time visibility with ANY.RUN and catch threats others might miss. 

Try ANY.RUN’s Interactive Sandbox today 

IOCs 

SHA256:b6c969551f35c5de1ebc234fd688d7aa11eac01008013914dbc53f3e811c7c77 

SHA256:c5f49c0f566a114b529138f8bd222865c9fa9fa95f96ec1ded50700764a1d4e7 

Ext:.HAes 

File:README.HAes.txt 

References 

https://bazaar.abuse.ch/sample/c5f49c0f566a114b529138f8bd222865c9fa9fa95f96ec1ded50700764a1d4e7

https://bazaar.abuse.ch/sample/b6c969551f35c5de1ebc234fd688d7aa11eac01008013914dbc53f3e811c7c77

https://app.any.run/tasks/cdcc75cd-d1f0-4fae-8924-d1aa44525e7e

The post Mamona: Technical Analysis of a New Ransomware Strain appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Proactive threat hunting with Talos IR


At Cisco Talos, we understand that effective cybersecurity isn’t just about responding to incidents — it’s about preventing them from happening in the first place. One of the most powerful ways we do this is through proactive threat hunting. Our Talos Incident Response (Talos IR) team works closely with organizations to not only address existing threats but to anticipate and mitigate potential future risks. A key component of our threat-hunting approach is the Splunk SURGe team’s PEAK Threat Hunting Framework, which enables us to conduct comprehensive and proactive hunts with precision.

What is the PEAK Threat Hunting Framework?

The PEAK Framework (Prepare, Execute, and Act with Knowledge) offers a structured methodology for conducting effective and focused threat hunts. It ensures that every hunt is aligned with an organization’s specific needs and threat landscape. At the core of the PEAK framework are baseline hunts, which lay the foundation for proactive threat detection, alongside advanced techniques such as hypothesis-driven hunts and model-assisted threat hunts (M-ATH), which further enhance threat detection and mitigation.

Baseline hunts: the foundation of proactive threat hunting

Baseline hunts involve establishing a clear understanding of an organization’s normal operating environment in terms of user activity, network traffic and system processes. By documenting and analyzing this baseline, Talos IR can identify anomalous behavior that may signal malicious activity.

While these hunts can be a reactive measure, it’s important to use them proactively to detect threats trying to blend in with regular operations, such as insider threats, advanced persistent threats (APTs) and even novel attack techniques that might otherwise go undetected.

The key steps in baseline hunts are:

  1. Defining Normal Activity: Understanding what “normal” looks like in your environment, using data from system logs, user behavior, and network traffic.
  2. Anomaly Detection: Proactively hunting for deviations from the baseline that could indicate potential threats (a brief illustrative sketch follows this list).
  3. Refining the Baseline: Continuously improving and updating the baseline to account for emerging threats and changes in your infrastructure.
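
As a simple illustration of steps 1 and 2, the sketch below builds a per-user baseline of daily logon counts and flags days that deviate sharply from it. The data and threshold are illustrative; this is not Talos IR tooling.

from statistics import mean, stdev

# Daily logon counts per user over a baseline period (illustrative data)
baseline = {
    "alice": [12, 14, 11, 13, 12, 15, 13],
    "svc-backup": [2, 2, 1, 2, 2, 2, 2],
}

today = {"alice": 14, "svc-backup": 57}

for user, counts in baseline.items():
    mu, sigma = mean(counts), stdev(counts)
    observed = today.get(user, 0)
    if sigma and abs(observed - mu) > 3 * sigma:
        print(f"{user}: {observed} logons today deviates from the baseline ({mu:.1f} +/- {sigma:.1f})")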

Hypothesis-driven hunts: Testing assumptions about threats

In addition to baseline hunts, Talos IR also uses hypothesis-driven hunts to proactively test assumptions about potential threats. These hunts are guided by specific hypotheses, or educated guesses about what attackers might be doing in a given environment. Rather than relying on a static, one-size-fits-all approach, hypothesis-driven hunts are dynamic, adapting to the specific questions and emerging threats that arise.

For example, a hypothesis-driven hunt might begin with the assumption that a particular group of users is being targeted by a phishing campaign. The hunt would focus on testing this assumption by looking for evidence of malicious emails, unusual login patterns or attempts to collect or exfiltrate data.

The key steps in hypothesis-driven hunts are:

  1. Forming Hypotheses: Based on threat intelligence and past incidents, teams generate specific hypotheses about possible attack vectors or adversary behaviors.
  2. Testing Hypotheses: Using data sources such as endpoint telemetry, authentication logs or network traffic, the hypothesis is tested to see if evidence supports the theory.
  3. Analyzing Results: If the hypothesis is validated, further investigation is done to understand the full scope of the potential threat.

Model-assisted threat hunts: Leveraging machine learning to find hidden threats

Another powerful tool in Talos IR's proactive hunting approach is the model-assisted threat hunt (M-ATH). These hunts leverage machine learning and advanced statistical models to sift through vast amounts of data and identify patterns that may indicate hidden threats. M-ATHs allow our team to detect threats that would be difficult to find using traditional methods.

Machine learning models are trained to detect suspicious behavior across different domains — such as user activity, network traffic or system logs — by looking for deviations from typical patterns. Over time, as these models learn from new data and threat intelligence, they improve in their ability to detect emerging threats.

The key steps in M-ATHs are:

  1. Data Collection: Gathering large datasets from multiple sources, including network traffic, endpoint data, authentication logs, and more.
  2. Model Training: Using machine learning algorithms to identify patterns in normal and malicious behavior.
  3. Anomaly Detection: The trained model helps identify new, previously undetected anomalies or potential threats by looking for deviations from established patterns (see the sketch after this list).
  4. Refinement: The model is refined as new data is collected and analyzed, improving its ability to detect subtle threats.
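
For illustration, the sketch below shows what steps 2 and 3 can look like with an off-the-shelf unsupervised model; the features and data are invented for the example and do not represent Talos IR's production models.

from sklearn.ensemble import IsolationForest

# Features per authentication event: [logon hour, failed attempts, distinct hosts touched]
training_events = [
    [9, 0, 1], [10, 1, 1], [14, 0, 2], [11, 0, 1], [16, 2, 2],
    [9, 0, 1], [13, 1, 1], [15, 0, 2], [10, 0, 1], [17, 1, 2],
]

model = IsolationForest(contamination=0.05, random_state=0).fit(training_events)

new_events = [[10, 0, 1], [3, 9, 14]]  # second event: 3 a.m., many failures, many hosts
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(event, "->", status)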

Empowering threat hunts with Talos Threat Intelligence

A crucial element that enriches and empowers every Talos IR threat hunt is Talos Threat Intelligence. By integrating up-to-date, high-fidelity threat intelligence into our hunts, we enhance the accuracy, relevance, and speed of our investigations. Talos Threat Intelligence provides a continuous stream of data on emerging threats, attack trends and adversary tactics, which helps us refine hypotheses, adjust baselines and improve our machine learning models.

This intelligence is not just a complement to our hunting process; it is embedded in every stage. It helps guide our hypothesis-driven hunts, sharpens our baseline detections and feeds into the models we use for anomaly detection. With Talos Threat Intelligence, we ensure that every hunt is aligned with the latest threat landscape, empowering your team with the knowledge needed to stay one step ahead of attackers.

Proactive engagements for IR Retainer customers

For Talos IR Retainer customers, baseline hunts, hypothesis-driven hunts, and model-assisted threat hunts provide a valuable layer of ongoing, proactive support. These hunts help organizations detect and mitigate threats before they escalate into full-blown incidents. Our expert hunters work directly with your teams, ensuring that you stay ahead of evolving threats.

Some key benefits of these proactive engagements include:

  • Early Detection: Identifying abnormal activities that could signal a breach or malicious action, reducing the risk of an attack spreading.
  • Continuous Improvement: As we refine the baseline and hunting models, your security posture improves over time, allowing for faster and more accurate threat detection.
  • Actionable Insights: Proactive hunts deliver actionable intelligence that helps your teams strengthen their defenses, based on the latest threat trends and attack methods.

Why it matters

The cybersecurity landscape is constantly evolving, and traditional defensive methods alone are no longer sufficient. Threat actors are adept at blending malicious activity with normal operations, making it difficult to spot attacks using conventional means. By conducting baseline hunts, hypothesis-driven hunts and model-assisted threat hunts, Talos IR gives your organization the tools it needs to stay ahead of adversaries.

As new evidence is uncovered during a hunt, our team adapts and refines the investigation in real time — evolving the hypothesis, adjusting the scope or pivoting to new areas of focus based on what the data reveals.

If an active threat, adversary or malicious activity is detected during a hunt, Talos IR can dynamically pivot the engagement and escalate the situation to our 24/7 on-call Incident Response team. This ensures rapid response for containment, mitigation and eradication, effectively minimizing the potential impact of the threat.

Our Talos IR team collaborates seamlessly with the hunting team to deliver real-time support in identifying, isolating and neutralizing active threats. This integrated approach ensures your systems remain secure and prevents the threat from escalating further.

At Talos, our goal is to empower your team with the knowledge and tools to detect threats proactively, before they turn into incidents. Through our IR Retainer services, we provide continuous support to help you improve your security posture and stay one step ahead of emerging threats, all while leveraging the full power of Talos Threat Intelligence.

For more information about this service, download our At-a-Glance:

Cisco Talos Blog – ​Read More

Apple beefs up parental controls: what it means for kids | Kaspersky official blog

Earlier this year, Apple announced a string of new initiatives aimed at creating a safer environment for young kids and teens using the company's devices. Besides making it easier to set up kids' accounts, the company plans to give parents the option of sharing their children's age with app developers, so developers can control what content they show to children.

Apple says these updates will be made available to parents and developers later this year. In this post, we break down the pros and cons of the new measures. We also touch on what Instagram, Facebook (and the rest of Meta) have to do with it, and discuss how the tech giants are trying to pass the buck on young users’ mental health.

Before the updates: how Apple protects kids right now

Before we talk about Apple's future innovations, let's quickly review the parental control status quo on Apple devices. The company introduced its first parental controls way back in June 2009 with the release of iPhone OS 3.0, and has been developing them bit by bit ever since.

As things stand, users under 13 must have a special Child Account. These accounts allow parents to access the parental control features built into Apple’s operating systems. Teenagers can continue using a Child Account until the age of 18, as their parents see fit.

Child Accounts on Apple devices

What Apple’s Child Account management center currently looks like. Source

Now for the new stuff…

The company has announced a series of changes to its Child Account system related to how parental status is verified. Additionally, it’ll soon be possible to edit a child’s age if it was entered incorrectly. Previously, for accounts of users under 13, it wasn’t even an option: Apple suggested waiting “for the account to naturally age up”. In borderline cases (accounts of kids just under 13), you could try a workaround involving changing the birth date — but such tricks won’t be needed for much longer.

But perhaps the most significant innovation relates to simplifying the creation of these Child Accounts. Henceforth, if parents don’t set up a device before their under-13-year-old starts using it, the child can do it themselves. In this case, Apple will automatically apply age-appropriate web content filters and only allow pre-installed apps, such as Notes, Pages, and Keynote.

Upon visiting the App Store for the first time to download an app, the child will be prompted to ask a parent to complete the setup. At the same time, until parental consent is given, neither app developers nor Apple itself can collect data on the child.

At this point, even the least tech-savvy parent might ask the logical question: what if my child enters the wrong age during setup? Say, not 10, but 18. Won’t the deepest, darkest corners of the internet be opened up to them?

How Apple intends to solve the age verification issue

The single most substantial of Apple’s new initiatives announced in early 2025 attempts to address the problem of online age verification. The company proposes the following solution: parents will be able to select an age category and authorize sharing this information with app developers during installation or registration.

This way, instead of relying on young users to enter their date-of-birth honestly, developers will be able to use the new Declared Age Range API. In theory, app creators will also be able to use age information to steer their recommendation algorithms away from inappropriate content.

Through the API, developers will only know a child’s age category — not their exact date of birth. Apple has also stated that parents will be able to revoke permission to share age information at any time.

In practice, access to the age category will become yet another permission that young users will be able to give (or, more likely, not give) to apps — just like permissions to access the camera and microphone, or to track user actions across apps.

This is where the main flaw of the proposed solution lies. At present, Apple has given no guarantee that if a user denies permission for age-category access, they won’t be able to use a downloaded app. This decision rests with app developers, as there are no legal consequences for allowing children access to inappropriate content. Moreover, many companies are actively seeking to grow their young audience, since young kids and teens spend a lot of their time online (more on this below).

Finally, let's mention Apple's latest innovation: it's updating its age-rating system. It will now consist of five categories: 4+, 9+, 13+, 16+, and 18+. In the company's own words, "This will allow users a more granular understanding of an app's appropriateness, and developers a more precise way to rate their apps".

Apple's new age rating system

Apple is updating its age rating system — it will comprise five categories. Source

Apple and Meta disagree over who’s responsible for children’s safety online

The problem of verifying a young person’s age online has long been a hot topic. The idea of showing ID every time you want to use an app is, naturally, hardly a crowd-pleaser.

At the same time, taking all users at their word is asking for trouble. After all, even an 11-year-old can figure out how to edit their age in order to register on TikTok, Instagram, or Facebook.

App developers and app stores are all too eager to lay the responsibility for verifying a child’s age at anyone else’s doorstep but their own. Among app developers, Meta is particularly vocal in advocating that age verification is the duty of app stores. And app stores (especially Apple’s) insist that the buck stops with app developers.

Many view Apple’s new initiatives on this matter as a compromise. Meta itself has this to say:

“Parents tell us they want to have the final say over the apps their teens use, and that’s why we support legislation that requires app stores to verify a child’s age and get a parent’s approval before their child downloads an app”.

All very well on paper — but can it be trusted?

Child safety isn’t the priority: why you shouldn’t trust tech giants

Entrusting kids’ online safety to companies that directly profit from the addictive nature of their products doesn’t seem like the best approach. Leaks from Meta, whose statements on Apple’s solution we cited above, have repeatedly shown that the company targets young users deliberately.

For example, in her book Careless People, Sarah Wynne-Williams, former global public policy director at Facebook (now Meta), recounts how in 2017 she learned that the company was inviting advertisers to target teens aged 13 to 17 across all its platforms, including Instagram.

At the time, Facebook was selling the chance to show ads to youngsters at their most psychologically vulnerable — when they felt “worthless”, “insecure”, “stressed”, “defeated”, “anxious”, “stupid”, “useless”, and/or “like a failure”. In practice, this meant, for example, that the company would track when teenage girls deleted selfies to then show them ads for beauty products.

Another leak revealed that Facebook was actively hiring new employees to develop products aimed at kids as young as six, with the goal of expanding its consumer base. It’s all a bit reminiscent of tobacco companies’ best practices back in the 1960s.

Apple has never particularly prioritized kids’ online safety, either. For a long time its parental controls were quite limited, and kids themselves were quick to exploit holes in them.

It wasn’t until 2024 that Apple finally closed a vulnerability allowing kids to bypass controls just by entering a specific nonsensical phrase in the Safari address bar. That was all it took to disable Screen Time controls for Safari — giving kids access to any website. The vulnerability was first reported back in 2021, yet it took three years for the company to react.

Content control: what really helps parents

Child psychology experts agree that unlimited consumption of digital content is bad for children’s psychological and physical health. In his 2024 book The Anxious Generation, US psychologist Jonathan Haidt describes how smartphone and social media use among teenage girls can lead to depression, anxiety, and even self-harm. As for boys, Haidt points to the dangers of overexposure to video games and pornography during their formative years.

Apple may have taken a step in the right direction, but it’ll be for nothing if third-party app developers decide not to play ball. And as the example of Meta illustrates, relying on their honesty and integrity seems premature.

Therefore, despite Apple’s innovations, if you need a helping hand, you’ll find one… at the end of your own arm. If you want to maintain control over what and how much your child consumes online with minimal interference in their life, look no further than our parental control solution.

Kaspersky Safe Kids lets you view reports detailing your child’s activity in apps and online in general. You can use these to customize restrictions and prevent digital addiction by filtering out inappropriate content in search results and, if necessary, blocking specific sites and apps.

What other online threats do kids face, and how to neutralize them? Essential reading:

Kaspersky official blog – ​Read More