How to implement zero trust: first steps and success factors

This year marks the 15th anniversary of the first guide to implementing the zero trust security concept, which, according to a Gartner survey, almost two-thirds of surveyed organizations have adopted to some extent. Admittedly (in the same Gartner survey), for 58% of them this transition is far from complete, with zero trust covering less than half of infrastructure. Most organizations are still at the stage of piloting solutions and building the necessary infrastructure. To join the vanguard, you need to plan the transition to zero trust with eyes wide open to the obstacles that lie ahead, and to understand how to overcome them.

Zero trust best practices

Zero trust is a security architecture that views all connections, devices, and applications as untrusted and potentially compromised — even if they’re part of the organization’s internal infrastructure. Zero trust solutions deliver continuous adaptive protection by re-verifying every connection and transaction based on a potentially changed security context. This way, companies can mold their information security to the real-world conditions of hybrid cloud infrastructures and remote working.

In addition to the oldest and best-known guidelines, such as Forrester’s first report and Google’s BeyondCorp, the components of zero trust are detailed in NIST SP 800-207 (Zero Trust Architecture), while the separate NIST SP 1800-35B offers implementation recommendations. There are also guidelines that map specific infosec measures and tools to the zero trust methodology, such as CIS Controls v8. CISA offers a handy maturity model, though it’s primarily optimized for government agencies.

In practice, zero trust implementation rarely follows the rule book, and many CISOs end up having to mix and match recommendations from these guidance documents with the guidelines of their key IT suppliers (for example, Microsoft), prioritizing and selecting measures based on their specific situation.

What’s more, all these guides are less than forthcoming in describing the complexities of implementation.

Executive buy-in

Zero trust migration isn’t purely a technical project, and therefore requires substantial support on the administrative and executive levels. In addition to investing in software, hardware, and user training, it demands significant effort from various departments, including HR. Company leadership needs to understand why the changes are needed and what they’ll bring to the business.

To get across the value and importance of a project, the “incident cost” or “value at risk” needs to be clearly communicated on the one hand, as do the new business opportunities on the other. For example, zero trust protection can enable broader use of SaaS services, employee-owned devices, and cost-effective network organization solutions.

Alongside on-topic meetings, this idea should be reinforced through specialized cybersecurity training for executives. Not only does such training instill specific infosec skills, it also allows your company to run through crisis management and other scenarios in a cyberattack situation — often using specially designed business games.

Defining priorities

To understand where and what zero trust measures to apply in your infrastructure, you’ll need a detailed analysis of the network, applications, accounts, identities, and workloads. It’s also crucial to identify critical IT assets. Typically making up just a tiny part of the overall IT fleet, these “crown jewels” either contain sensitive and highly valuable information, or support critical business processes. Consolidating information about IT assets and their value will make it easier to decide which components are most in need of zero trust migration, and which infosec measures will facilitate it. This inventory will also unearth outdated segments of the infrastructure for which migration to zero trust would be impractical or technically infeasible.

You need to plan in advance for the interaction of diverse infrastructure elements, and the coexistence of different infosec measures to protect them. A typical problem goes as follows: a company has already implemented some zero trust components (for example, MFA and network segmentation), but these operate completely independently, and no processes and technologies are planned to enable these components to work together within a unified security scenario.

Phased implementation

Although planning for zero trust architecture is done holistically, its practical implementation should begin with small, specific steps. To win managerial support and to test processes and technologies in a controlled environment, start with measures and processes that are easier to implement and monitor. For example, introduce multi-factor authentication and conditional access just for office computers and the office Wi-Fi. Roll out tools starting with specific departments and their unique IT systems, testing both user scenarios and the performance of infosec tools, all while adjusting settings and policies accordingly.

Which zero trust architecture components are easier to implement, and what will help you achieve the first quick wins depends on your specific organization. But each of these quick wins should be scalable to new departments and infrastructure segments; and where zero trust has already been implemented, additional elements of the zero trust architecture can be piloted.

While a phased implementation may seem to increase the risk of getting stuck at the migration stage and never completing the transition, experience shows that a “big bang” approach — a simultaneous shift of the entire infrastructure and all processes to zero trust — fails in most cases. It creates too many points of failure in IT processes, snowballs the load on IT, alienates users, and makes it impossible to correct any planning and implementation errors in a timely and minimally disruptive manner.

Phased implementation isn’t limited to first steps and pilots. Many companies align the transition to zero trust with adopting new IT projects and opening new offices; they divide the migration of infrastructure into stages — essentially implementing zero trust in short sprints while constantly monitoring performance and process complexity.

Managing identities… and personnel

The cornerstone of zero trust is a mature identity and access management (IAM) system, which needs to be not only technically sound but also supported administratively at all times. Data on employees, their positions, roles, and the resources available to them must be kept constantly up to date, requiring significant support from HR, IT, and the leadership of other key departments. It’s imperative to involve them in building formal processes around identity management, taking care to ensure that they feel personally responsible for these processes. It must be stressed that this isn’t a one-off job — the data needs to be checked and updated frequently to prevent situations such as access creep (when permissions issued to an employee for a one-time project are never revoked).

To improve information security and make zero trust implementation a truly team effort, sometimes it’s even necessary to change the organizational structure and areas of responsibility of employees — breaking down silos that confine people within narrow job descriptions. For example, one large construction company shifted from job titles such as “Network Engineer” and “Server Administrator” to the more generic “Process Engineer” to underscore the interconnectivity of the roles.

Training and feedback

Zero trust migration doesn’t pass unnoticed by employees. They have to adapt to new authentication procedures and MFA tools; learn how to request access to systems that don’t grant it by default; be aware that they might occasionally need to re-authenticate to a system they logged in to just an hour ago; and expect that previously unseen tools like ZTNA, MDM, or EDR (often bundled in a single agent, but sometimes separate) may suddenly appear on their computers. All this requires training and practice.

For each phase of implementation, it’s worth forming a “focus group” of business users. These users will be the first to undergo training and can help refine training materials in terms of language and content, as well as provide feedback on how the new processes and tools are working. Communication with users should be a two-way street: it’s important to convey the value of the new approach, while actively listening to complaints and recommendations to adjust policies (both technical and administrative), address shortcomings, and improve the user experience.

Kaspersky official blog – Read More

Microsoft Patch Tuesday for May 2025 — Snort rules and prominent vulnerabilities

Microsoft has released its monthly security update for May 2025, which includes 78 vulnerabilities affecting a range of products, including 11 that Microsoft marked as “critical”.  

Microsoft noted five vulnerabilities that have been observed to be exploited in the wild. CVE-2025-30397 is a remote code execution vulnerability in the Microsoft Scripting Engine. There were also four elevation of privilege vulnerabilities being actively exploited: CVE-2025-32709, CVE-2025-30400, CVE-2025-32701, and CVE-2025-32706, affecting the Ancillary Function Driver for WinSock, the DWM Core Library, and the Windows Common Log File System Driver.  

The eleven “critical” entries consist of five remote code execution (RCE) vulnerabilities, four elevation of privilege vulnerabilities, one information disclosure vulnerability, and one spoofing vulnerability. Three of the critical vulnerabilities have been marked as “Exploitation more likely”: CVE-2025-30386, a Microsoft Office RCE vulnerability; CVE-2025-30390, an Azure ML Compute elevation of privilege vulnerability; and CVE-2025-30398, a Nuance PowerScribe 360 information disclosure vulnerability.  

The most notable of the “critical” vulnerabilities listed affect Microsoft Office. CVE-2025-30386 is an RCE vulnerability with a CVSS 3.1 base score of 8.3. To successfully exploit CVE-2025-30386, an attacker could send a victim an email and, without the victim clicking the link, viewing, or interacting with the email, trigger a use-after-free scenario, allowing arbitrary code to be executed. Microsoft has assessed the attack complexity as “Low” and exploitation as “More likely”. Another RCE vulnerability affecting Microsoft Office, CVE-2025-30377, has a CVSS 3.1 base score of 8.4 and has also been assessed an attack complexity of “Low”, but exploitation is considered “Less Likely”. 

Two RCE vulnerabilities affect the Remote Desktop Client. CVE-2025-29966 and CVE-2025-29967 are both heap-based buffer overflow vulnerabilities with CVSS 3.1 base scores of 8.8, “Low” attack complexity, and exploitation assessed as “Less Likely”. An attacker controlling a Remote Desktop Server could trigger the buffer overflow when a vulnerable Remote Desktop Client connects to that server. 

CVE-2025-29833 is an RCE vulnerability affecting the Virtual Machine Bus. This is a Time-of-check Time-of-use (TOCTOU) race condition that has been assessed an attack complexity of “High”, with exploitation considered “Less Likely”. 

Talos would also like to highlight the following “important” vulnerabilities as Microsoft has determined that exploitation is “More likely”: 

  • CVE-2025-24063 – Kernel Streaming Service Driver Elevation of Privilege Vulnerability 
  • CVE-2025-29841 – Universal Print Management Service Elevation of Privilege Vulnerability 
  • CVE-2025-29971 – Web Threat Defense (WTD.sys) Denial of Service Vulnerability 
  • CVE-2025-29976 – Microsoft SharePoint Server Elevation of Privilege Vulnerability 
  • CVE-2025-30382 – Microsoft SharePoint Server Remote Code Execution Vulnerability 
  • CVE-2025-30385 – Windows Common Log File System Driver Elevation of Privilege Vulnerability 
  • CVE-2025-30388 – Windows Graphics Component Remote Code Execution Vulnerability

A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page.  

In response to these vulnerability disclosures, Talos is releasing a new Snort rule set that detects attempts to exploit some of them. Please note that additional rules may be released at a future date and current rules are subject to change pending additional information. Cisco Secure Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. 

The rules included in this release that protect against the exploitation of many of these vulnerabilities are 64848-64867. There are also these Snort 3 rules: 64852-64853, 301192-301200, and 301203. 

Cisco Talos Blog – Read More

How phishing emails are sent from no-reply@accounts.google.com | Kaspersky official blog

Imagine receiving an email that says Google has received a subpoena to release the contents of your account. The email looks perfectly “Googley”, and the sender’s address appears legitimate too: no-reply@accounts.google.com. A little unnerving (or maybe panic-inducing?) to say the least, right?

And what luck — the email contains a link to a Google support page that has all the details about what’s happening. The domain name in the link looks legit, too, and seems to belong to Google…

Regular readers of our blog have probably already guessed that we’re talking here about a new phishing scheme. And they’d be right. This time, the scammers are exploiting several genuine Google services to fool their victims and make the emails look as convincing as possible. Here’s how it works…

How the phishing email mimics an official Google notification

The screenshot below shows the email that kicks off the attack; and it does a really credible job of pretending to be an alert from Google’s security system. The message informs the user that the company has received a subpoena requesting access to the data in their Google account.

The “from” field contains a genuine Google address: no-reply@accounts.google.com. This is the exact same address Google’s security notifications come from. The email also contains a few details that reinforce the illusion of authenticity: a Google Account ID, a support ticket number, and a link to the case. And, most importantly, the email tells the recipient that if they want to learn more about the case materials or contest the subpoena, they can do so by clicking a link.

The link itself looks quite plausible, too. The address includes the official Google domain and the support ticket number mentioned above. And it takes a savvy user to spot the catch: Google support pages are located at support.google.com, but this link leads to sites.google.com instead. The scammers are, of course, counting on users who either don’t understand such technicalities or don’t notice the word substitution.

If the user isn’t logged in, clicking the link takes them to a genuine Google account login page. After authorizing, they land on a page at sites.google.com, which quite convincingly mimics the official Google support site.

Fake Google Support page created with Google Sites

This is what a fake Google Support page linked in the email looks like. Source

Now, it just so happens that the sites.google.com domain belongs to the legitimate Google Sites service. Launched back in 2008, it’s a fairly unsophisticated website builder — nothing out of the ordinary. The important nuance about Google Sites is that all websites created within the platform are automatically hosted on a google.com subdomain: sites.google.com.

Attackers can use such an address to both lull victims’ vigilance and circumvent various security systems, as both users and security solutions tend to trust the Google domain. It’s little wonder that scammers have increasingly been using Google Sites to create phishing pages.

Spotting fakes: the devil’s in the (email) details

We’ve already described the first sign of a dodgy email: the address of the fake support page located at sites.google.com. Look to the email header for more red flags:

Phishing disguised as an official Google email: note the "to" and "mailed-by" fields

Spot the fake: look at the “to” and “mailed-by” fields in the header. Source

The fields to pay attention to are “from”, “to”, and “mailed-by”. The “from” field seems fine: the sender is the official Google address, no-reply@accounts.google.com.

But lo and behold, the “to” field just below it reveals the actual recipient address, and this one sure looks phishy: me[@]googl-mail-smtp-out-198-142-125-38-prod[.]net. The address is trying hard to imitate some technical Google address, but the typo in the company domain name is a dead giveaway. Moreover, it has absolutely no business being there — this field is supposed to contain the recipient’s email.

As we keep examining the header, another suspicious address pops up in the “mailed-by” field. Now, this one is clearly nowhere near Google territory: fwd-04-1.fwd.privateemail[.]com. Yet again, nonsense like this has no place in an authentic email. For reference, here’s what these fields look like in a real Google security alert:

Genuine Google security alert

The “to” and “mailed-by” fields in a genuine Google security alert
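The header checks described above are easy to automate. Below is a minimal Python sketch using the standard-library email parser; the sample headers and the victim address are illustrative, not taken from a real message:

```python
# Minimal sketch: flag a message whose "To" header doesn't match the mailbox
# that actually received it -- one of the red flags described above.
from email import message_from_string

raw = """\
From: Google <no-reply@accounts.google.com>
To: me@googl-mail-smtp-out-198-142-125-38-prod.net
Subject: Security alert

body
"""

msg = message_from_string(raw)
my_address = "victim@gmail.com"  # the mailbox this message landed in (assumed)

# A legitimate notification is addressed to the recipient's own mailbox; here
# the "To" field points at an unrelated look-alike domain instead.
suspicious = my_address not in msg.get("To", "")
print(suspicious)  # True
```

The same approach extends to the “mailed-by”/`Received` chain: any hop through a domain unrelated to Google is worth a closer look.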

Unsurprisingly, these subtle signs would likely be lost on the average user — especially when they’re already freaked out by the looming legal trouble. Adding to the confusion is the fact that the fake email is actually signed by Google: the “signed-by” field shows accounts.google.com. In the next part of this post, we explain how the criminals managed to achieve this, and then we’ll talk about how to avoid becoming a victim.

Reconstructing the attack step by step

To figure out exactly how the scammers managed to send such an email and what they were after, cybersecurity researchers reenacted the attack. Their investigation revealed that the attackers used Namecheap to register the (now-revoked) googl-mail-smtp-out-198-142-125-38-prod[.]net domain.

Next, they used the same service again to set up a free email account on this domain: me[@]googl-mail-smtp-out-198-142-125-38-prod[.]net. In addition, the criminals registered a free trial version of Google Workspace on the same domain. After that the scammers registered their own web application in the Google OAuth system, and granted it access to their Google Workspace account.

Google OAuth is a technology that allows third-party web applications to use Google account data to authenticate users with their permission. You’ve likely encountered Google OAuth as a way to authenticate for third-party services: it’s the system you use every time you click a “Sign in with Google” button. Besides that, applications can use Google OAuth to obtain permission to, for example, save files to your Google Drive.

But let’s get back to our scammers. After a Google OAuth application is registered, the service allows sending a notification to the email address associated with the verified domain. Interestingly enough, the administrator of the web application is free to manually enter any text as the “App name” — which seems to be what the criminals exploited.

In the screenshot below, researchers demonstrate this by registering an app with the name “Any Phishing Email Text Inject Here with phishing URLs…”.

Google OAuth allows setting a completely arbitrary web app name, and scammers are taking advantage of this

Registering a web app with an arbitrary name in Google OAuth: the text of a scam email with a phishing link can be entered as a name. Source

Google then sends a security alert containing this phishing text from its official address. This email goes to the scammers’ address on the domain registered through Namecheap, whose email service allows forwarding received messages to other addresses. All the scammers need to do is set up a forwarding rule and specify the email addresses of potential victims.

How scammers set up a forwarding rule to deliver a phishing email that appears like it's coming from Google

Setting up a forwarding rule that allows sending the fake email to multiple recipients. Source

How to protect yourself from phishing attacks like this one

It’s not entirely clear what the attackers were hoping to achieve with this phishing campaign. Using Google OAuth to authenticate doesn’t mean the victim’s Google account credentials are shared with the scammers. The process generates a token that only provides limited access to the user’s account data — depending on the permissions the user authorized and the settings configured by the scammers.

The fake Google Support page the deceived user lands on suggested that the goal was to convince them to download some “legal documents” supposedly related to their case. The nature of these documents is unknown, but chances are they contained malicious code.

The researchers reported this phishing campaign to Google. The company acknowledged this as a potential risk for users and is currently working on a fix for the OAuth vulnerability. However, how long it will take to resolve the issue remains unknown.

In the meantime, here’s some advice to help you avoid becoming a victim of this and other intricate phishing schemes.

  • Stay calm if you get an email like this. Begin by carefully examining all the email header fields and comparing them to legitimate emails from Google — you likely have some in your inbox. If you see any discrepancies, don’t hesitate to hit “Delete”.
  • Be wary of websites on the google.com domain that are created with Google Sites. Lately, scammers have been increasingly exploiting it for a wide range of phishing schemes.
  • As a general rule, avoid clicking links in emails.
  • Use a robust security solution that will provide timely warnings about danger and block phishing links.

Follow the links below to read about five more examples of out-of-the-ordinary phishing.

Kaspersky official blog – Read More

Evolution of Tycoon 2FA Defense Evasion Mechanisms: Analysis and Timeline

Attackers keep improving ways to avoid being caught, making it harder to detect and investigate their attacks. The Tycoon 2FA phishing kit is a clear example, as its creators regularly add new tricks to bypass detection systems. 

In this study, we’ll take a closer look at how Tycoon 2FA’s anti-detection methods have changed over the past several months and suggest ways to spot them effectively. 

This article will discuss: 

  • A review of old and new anti-detection techniques. 
  • How the new tricks compare to the old ones. 
  • Tips for spotting these techniques early. 

Knowing how attackers dodge detection and keeping your detection rules up to date are key to fighting these anti-detection methods. 

What is Tycoon 2FA 

Tycoon 2FA is a modern Phishing-as-a-Service (PhaaS) platform designed to bypass two-factor authentication (2FA) for Microsoft 365 and Gmail. It was first identified by Sekoia analysts in October 2023, though the Saad Tycoon group, which promotes this tool through private Telegram channels, has been active since August 2023. 

Tycoon 2FA uses an Adversary-in-the-Middle (AiTM) approach, where attackers set up a phishing page through a reverse proxy server. After a user enters their credentials and completes the 2FA process, the server captures session cookies, allowing attackers to reuse the session and bypass security measures. 

Currently, Tycoon 2FA is highly popular and widely used by cybercriminals, including the Saad Tycoon group. The platform offers ready-made phishing pages and an easy-to-use admin panel, making it accessible even to less technically skilled attackers. 

Discover the latest examples of Tycoon 2FA attacks using this search query in ANY.RUN’s Threat Intelligence Lookup:

threatName:"Tycoon" 

In 2024, an updated version of Tycoon 2FA was released, featuring enhanced evasion techniques, including dynamic code generation, obfuscation, and traffic filtering to block bots. Phishing emails are now frequently sent from legitimate, potentially compromised email addresses. 

The evolution of this phishing kit continues, with ANY.RUN researchers noting regular updates and new evasion mechanisms in its malicious software. This article aims to investigate and provide technical details on how Tycoon 2FA has evolved, is evolving, and may continue to evolve. 

Before We Begin

Tune in to ANY.RUN’s live webinar on Wednesday, May 14 | 3:00 PM GMT. We welcome heads of SOC teams, managers, and security specialists of different tiers who want to: 

  • Solve common security issues
  • Optimize their work processes 
  • Find out how to save their company’s resources 

Analysis of a Tycoon 2FA Attack from October 1, 2024 

Let’s begin the analysis with a typical Tycoon 2FA attack observed in October 2024. The attack begins with a malicious URL and employs multiple evasion techniques to avoid detection. Below, we’ll break down each stage of the attack, highlighting its protective mechanisms and their purposes. 

View sandbox session 

Stage 1: Initial Attack Mechanisms 

The attack starts with a request to the following URL: 

hxxps://stellarnetwork[.]sucileton[.]com/EQn1RAKa/ 

Evasion Mechanism #1: Basic Code Obfuscation 

The page’s source code is obfuscated, making it difficult for automated systems or analysts to interpret its functionality.  

Figure 1: Evasion Mechanisms in Stage 1 of Tycoon 2FA

This is a foundational defense to hinder initial analysis. 

Evasion Mechanism #2: “Nomatch” Check 

The code compares the URL (part of the attacker’s infrastructure) against a “nomatch” value. This check appears to be a decoy or placeholder, as the comparison always returns False. It may serve as a flag for services like Cloudflare. 

Figure 2: “Nomatch” Evasion Mechanism 

Evasion Mechanism #3: Domain Comparison 

The code verifies whether the current page’s domain matches the attacker’s designated infrastructure domain. If the domains match, the attack proceeds to load Stage 2 (the malicious payload) into the Document Object Model (DOM). If not, a fake error page is displayed. 

Figure 3: Domain Comparison Evasion Mechanism

Evasion Mechanism #4: Redirect to Fake Litespeed 404 Page on Failed Checks 

If the domain check fails, the user is redirected to a fake “Litespeed 404” error page.  

Figure 4: Fake 404 Page Redirect 

The page is designed to appear legitimate and deter further investigation. 

Figure 5: Example of Fake Litespeed 404 Page 

Purpose of Stage 1 Evasion Mechanisms 

Figure 6: Flowchart of Stage 1 Protective Checks in Tycoon 2FA 

These mechanisms are designed to prevent the malicious code from executing or revealing its behavior in isolated scenarios, such as: 

  • Malware analysis sandboxes. 
  • Offline inspection of saved HTML files. 

By ensuring the code only runs in the attacker’s controlled environment, these checks reduce the likelihood of detection. 

If all Stage 1 checks are passed, the malicious payload (Stage 2) is injected into the page’s DOM, advancing the attack to its next phase. 
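The Stage 1 gate can be sketched as follows. This is an illustrative re-implementation in Python for clarity (the real check runs in obfuscated browser JavaScript), and the domain name is just the one from the sample URL above:

```python
# Illustrative sketch of the Stage 1 domain gate: the payload is only
# delivered when the page is served from the attacker's own infrastructure.
EXPECTED_DOMAIN = "stellarnetwork.sucileton.com"  # attacker infrastructure

def stage1_next_step(current_domain: str) -> str:
    # Matching domain -> inject Stage 2 into the DOM; anything else (a saved
    # HTML copy, a sandbox replay) gets the decoy "Litespeed 404" page.
    if current_domain == EXPECTED_DOMAIN:
        return "load_stage_2"
    return "fake_litespeed_404"

print(stage1_next_step("stellarnetwork.sucileton.com"))  # load_stage_2
print(stage1_next_step("sandbox.local"))                 # fake_litespeed_404
```

This is why offline inspection of a saved page shows nothing malicious: outside the expected domain, the code path never reaches the payload.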

Stage 2: Main Evasion Mechanisms 

Evasion Mechanism #5: Cloudflare Turnstile CAPTCHA  

Before loading the main content, Tycoon 2FA requires users to pass a Cloudflare Turnstile CAPTCHA. This protects the malicious page from web crawlers, Safebrowsing services, or automated systems that could capture and analyze the page’s content. 

Figure 7: Cloudflare CAPTCHA in Tycoon 2FA 

Evasion Mechanism #6: Debugger Timing Check 

During Stage 2, the code also measures the time taken to launch a debugger, a technique used to detect whether the page is running in a real browser or a sandboxed environment. In this sample, the check is rudimentary, and the timing result is not actively used, suggesting it may be a placeholder or incomplete feature. 

Figure 8: Debugger Timing Check Mechanism 
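The idea behind the timing check can be shown with a toy example. The sketch below is Python rather than the kit’s browser JavaScript; in the original, a `debugger;` statement sits between the two timestamps, so an attached DevTools session pauses execution and inflates the measured delta:

```python
# Toy illustration of the debugger timing check (mechanism #6).
import time

t0 = time.perf_counter()
# (in the JavaScript original, a `debugger;` breakpoint sits here)
elapsed = time.perf_counter() - t0

# A generous threshold: a human stepping through a breakpoint takes far
# longer than normal straight-line execution.
debugger_suspected = elapsed > 0.1
print(debugger_suspected)  # False when nothing pauses execution
```

As the article notes, in this sample the result isn’t actually acted on, so the check behaves like a placeholder.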

Evasion Mechanism #7: C2 Server Queries 

Tycoon 2FA sends a series of requests to the attacker’s Command-and-Control (C2) servers to determine whether to proceed to Stage 3. This process involves two steps: 

  • GET Request to Secondary C2 Domain
    A GET request is sent to another C2 domain, expecting a single-byte response: ‘0’ or ‘1’.  
    • If ‘1’ is received, the attack halts, and the user is redirected to a legitimate page.  
    • If ‘0’ is received, the attack continues. 
Figure 10: Code Fragment for Stage 3 Validation Checks 
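The single-byte verdict logic can be sketched as below. The transport details are simplified (the real kit issues the request from browser JavaScript); only the ‘0’/‘1’ branching is taken from the observed behavior:

```python
# Sketch of the Stage 2 kill-switch: the kit asks a secondary C2 domain for
# a single byte and either proceeds or bails out to a legitimate page.
def handle_c2_verdict(body: bytes) -> str:
    if body == b"1":
        # Operator pulled the plug (or flagged this visitor): abort.
        return "redirect_to_legitimate_page"
    if body == b"0":
        # Green light: reload the page and fetch the Stage 3 payload.
        return "load_stage_3"
    return "unknown"

print(handle_c2_verdict(b"0"))  # load_stage_3
print(handle_c2_verdict(b"1"))  # redirect_to_legitimate_page
```

For defenders, this server-side switch means a page that looked benign during one analysis session may deliver the payload in the next.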

If all Stage 2 checks are successfully passed, the page reloads, and the Stage 3 payload, the core malicious component, is retrieved and executed. 

Stage 3: Payload Unpacking 

Evasion Mechanism #8: Base64 + XOR Obfuscation 

The payload in Stage 3 is obfuscated using a combination of Base64 encoding and XOR encryption with a predefined key. This protects the malicious code from being easily analyzed or detected. 

Figure 11: Code for XOR-Base64 Deobfuscation

After deobfuscation (use this CyberChef recipe), the next stage is revealed, advancing the attack. 
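The Base64 + XOR scheme is straightforward to reverse once the key is recovered. A minimal Python deobfuscator, with an illustrative key and payload (real samples use their own predefined key, as in the CyberChef recipe):

```python
# Minimal deobfuscator for a Base64-then-XOR scheme with a repeating key.
import base64
from itertools import cycle

def xor_b64_decode(blob: str, key: bytes) -> bytes:
    # Reverse the obfuscation: Base64-decode, then XOR with the cycled key.
    raw = base64.b64decode(blob)
    return bytes(b ^ k for b, k in zip(raw, cycle(key)))

def xor_b64_encode(data: bytes, key: bytes) -> str:
    # Helper used only to build a demo blob for this sketch.
    return base64.b64encode(bytes(b ^ k for b, k in zip(data, cycle(key)))).decode()

key = b"k3y"  # illustrative key, not from a real sample
blob = xor_b64_encode(b"window.location='stage3';", key)
print(xor_b64_decode(blob, key))  # b"window.location='stage3';"
```

Since XOR is its own inverse, the same routine works in both directions, which is why one short function suffices for analysis.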

Stage 4: Dynamic Payload Retrieval 

During the payload retrieval, a POST request is sent to the attacker’s C2 server. The request body contains data derived from the initial phishing URL, with logic that varies based on whether the victim’s email is included in the URL. 

Figure 12: Code for Sending POST Request 

Evasion Mechanism #9: Encrypted Payload Delivery

The C2 server responds with a JSON file containing ciphertext and decryption parameters. The specific data received depends on the contents of the POST request. The payload is decrypted to reveal the URL for the next stage. 

Figure 13: Code for Decrypting POST Request Response

To see the sample of the retrieved payload and its decryption, visit JSFiddle. The result of this action is the URL for Stage 5. 

Stage 5: Fake Login Page Delivery 

The content in Stage 5 is mostly unobfuscated, presenting a fake Microsoft Outlook login page designed to deceive the victim. It includes SVG assets and a stylesheet to mimic the legitimate interface. 

Figure 14: Loading of Fake MS Outlook Login Page

At the end of the page’s source code, an additional JavaScript script reuses the Base64 + XOR obfuscation technique (previously seen in Stage 3) to hide further malicious code. 

Figure 15: Base64/XOR Obfuscation in Stage 5

Deobfuscating this script reveals the next stage of the attack. 

Stage 6: Fake Authorization and Data Exfiltration 

The frontend mimics a Microsoft Outlook login page, designed to trick victims into entering their credentials. 

Figure 16: Loaded Fake MS Outlook Login Page 

At the end of the source code, a JavaScript script implements several new protective and operational mechanisms: 

Evasion Mechanism #10: Browser Detection 

The script identifies the victim’s browser to tailor the attack or detect analysis environments (e.g., sandboxes).  

Figure 17: Code for Browser Detection 
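Browser detection of this kind usually boils down to user-agent substring tests. A simplified Python illustration (the real check runs in browser JavaScript, and these substrings are common conventions rather than the kit’s exact logic):

```python
# Simplified user-agent sniffing in the spirit of mechanism #10.
def detect_browser(ua: str) -> str:
    ua = ua.lower()
    # Order matters: Edge UAs also contain "chrome/", and Chrome UAs also
    # contain "safari/", so the most specific token is tested first.
    if "edg/" in ua:
        return "edge"
    if "chrome/" in ua:
        return "chrome"
    if "firefox/" in ua:
        return "firefox"
    if "safari/" in ua:
        return "safari"
    return "unknown"

print(detect_browser("Mozilla/5.0 ... Chrome/124.0 Safari/537.36"))  # chrome
```

An “unknown” or headless-looking result is exactly the kind of signal a kit can use to suspect a sandbox rather than a real victim.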

Evasion Mechanism #11: Clipboard Manipulation 

The script replaces the clipboard contents with junk data to interfere with analysis or debugging attempts.  

Figure 18: Code for Clipboard Manipulation 

Evasion Mechanism #9 (Reused): Payload Encryption/Decryption 

The script encrypts and decrypts the payload using hardcoded keys and initialization vectors (IVs), protecting data sent to and received from the C2 server.  

Figure 19: Code for Payload Encryption/Decryption 

Evasion Mechanism #12: C2 Routing with Dynamic URLs 

A randomly generated URL for data exfiltration is created using the RandExp library, following a pattern determined by the Tycoon 2FA operation mode (e.g., checkmail, checkpass, twofaselected). This ensures varied C2 communication paths, complicating detection.  

Figure 20: Code for Generating Random Exfiltration URLs 
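
RandExp produces a random string matching a supplied regular expression. A minimal stand-in without the library, assuming a hypothetical pattern of an operation-mode segment followed by a random alphanumeric token:

```javascript
// Stand-in for RandExp-style URL generation. The pattern here
// (/<mode>/<8 random alphanumerics>) is hypothetical; real samples
// derive it from the Tycoon 2FA operation mode (checkmail, checkpass...).
function randomExfilPath(mode, len = 8) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let token = '';
  for (let i = 0; i < len; i++) {
    token += chars[Math.floor(Math.random() * chars.length)];
  }
  return `/${mode}/${token}`;
}
```

Because the path differs on every call, static URL blocklists and simple signatures never see the same exfiltration endpoint twice.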

Evasion Mechanism #13: Redirect API Validation 

The script checks the validity of a redirect API, likely used by Tycoon 2FA operators to monitor client status or subscription activity. 

Figure 21: Code for Redirect API Validation

Finally, the stolen data (e.g., credentials) is exfiltrated to a third C2 domain in the attack chain. The response from the C2 server dictates the phishing page’s behavior, such as prompting for 2FA or updating the account status.  

Figure 22: Code for Data Exfiltration

A JSFiddle snippet demonstrates the encryption/decryption of sent/received data.  

At the end of Stage 6, there is a link to another script, which loads the next stage of the attack. 

Figure 23: Link to Next Tycoon 2FA Stage 

Stage 7: Phishing Framework Enhancements 

After deobfuscation, Stage 7 reveals additional functionality for the Phishing-as-a-Service (PhaaS) framework, defining critical operations for the phishing interface. 

The code includes logic for:  

  • Managing the user interface behavior.  
  • Handling transitions between frames.  
  • Rendering core page elements.  
  • Implementing a state machine for the phishing page.  
  • Validating user inputs (e.g., email, password, OTP). 
Figure 24: Code fragment for phishing page State Machine 

Execution Chain Summary 

The complete execution chain, combining all mechanisms from Stages 1–7, is visualized below: 

Detailed breakdown of Tycoon2FA’s attack

With a comprehensive understanding of Tycoon 2FA’s attack flow, we can now analyze newer samples and compare them to this baseline to identify changes or additions to its anti-detection mechanisms. 

New Tycoon2FA Evasion Mechanisms: Timeline 

As we’ve discussed, Tycoon 2FA is steadily evolving, with its developers rolling out more sophisticated anti-detection mechanisms.  

Let’s now examine the latest evasion methods that have emerged in attacks since October. 

Attack Detected on 6 December 2024  

This sample introduces new anti-detection mechanisms in Stage 2, enhancing the malicious payload’s ability to avoid analysis and debugging environments. The following mechanisms were observed: 

Evasion Mechanism #14: Debug Environment Detection 

The script checks if the page is loaded in a legitimate browser rather than a debugging environment, such as Selenium (WebDriver), PhantomJS, or Burp Suite. If a debugging runtime is detected, the attack stops, and the user is redirected to about:blank.   

Figure 25: Code for Detecting Debugging Runtime
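
Checks of this kind usually probe for well-known automation artifacts. A sketch follows, with the window object passed in as a parameter so the logic can run outside a browser; the property names are the standard automation tells, and the sample's exact list may differ:

```javascript
// Sketch of automation/debug-environment detection. The window-like
// object is a parameter so the check is testable outside a browser.
function isDebugEnvironment(win) {
  const nav = win.navigator || {};
  return Boolean(
    nav.webdriver ||                          // Selenium / WebDriver flag
    win.callPhantom || win._phantom ||        // PhantomJS globals
    /HeadlessChrome/.test(nav.userAgent || '')
  );
}

// On detection, samples stop the attack:
// if (isDebugEnvironment(window)) window.location = 'about:blank';
```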

Evasion Mechanism #15: Keystroke Interception 

The code intercepts keyboard shortcuts associated with opening browser developer tools or other debugging functions, preventing their default actions. This hinders manual analysis by users or researchers.  

Figure 26: Code for Keystroke Interception 

The intercepted shortcuts include:   

  • F12: Opens DevTools (generic).  
  • Ctrl + U: Displays page source code.  
  • Ctrl + Shift + I: Opens DevTools (generic).  
  • Ctrl + Shift + C: Opens the DevTools element picker.  
  • Ctrl + Shift + J: Opens browser console (Chrome).  
  • Ctrl + Shift + K: Opens web console (Firefox); duplicates current tab (Edge).  
  • Ctrl + H: Opens browser history (generic).  
  • Meta + Alt + I: Opens DevTools (macOS).  
  • Meta + Alt + C: Opens the element inspector (macOS).  
  • Meta + U: Shows page source (macOS Firefox). 
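
The interception pattern is a capture-phase keydown listener that cancels matching events. A sketch with the shortcut test factored into a pure predicate (the exact key list varies by sample):

```javascript
// Pure predicate over a keydown-event-like object; the key list is a
// representative subset of the shortcuts enumerated above.
function isBlockedShortcut(e) {
  const key = (e.key || '').toLowerCase();
  if (e.key === 'F12') return true;
  if (e.ctrlKey && !e.shiftKey && ['u', 'h'].includes(key)) return true;
  if (e.ctrlKey && e.shiftKey && ['i', 'c', 'j', 'k'].includes(key)) return true;
  return false;
}

// Browser wiring used by the samples (capture phase beats page handlers):
// document.addEventListener('keydown', e => {
//   if (isBlockedShortcut(e)) e.preventDefault();
// }, true);
```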

Evasion Mechanism #16: Context Menu Blocking 

The script disables the right-click context menu, preventing access to browser tools or page source inspection. 

Figure 27: Code for Disabling Context Menu 

Improved Evasion Mechanism #6: Debugger Timing Check

Building on the rudimentary version in earlier samples, this implementation fully measures the time taken to launch a debugger. If the timing is abnormally long (suggesting a sandbox environment), the script redirects to a legitimate page, halting execution. 

Figure 28: Enhanced Debugger Timing Check Implementation 
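
The improved check exploits the fact that the debugger statement only pauses execution when a debugger is attached, so wall-clock time around it reveals an observer. A minimal sketch (the threshold value is arbitrary):

```javascript
// The `debugger` statement is a no-op unless a debugger is attached,
// so a long elapsed time implies someone is single-stepping the code.
function debuggerAttached(thresholdMs = 100) {
  const t0 = Date.now();
  debugger; // pauses here only when DevTools/inspector is attached
  return Date.now() - t0 > thresholdMs;
}

// Samples redirect to a legitimate page when this returns true.
```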

Attack Detected on 17 December 2024  

In another attack from December, the threat actors introduced a new capability to enhance the phishing page’s authenticity, making it more convincing to victims. 

Evasion Mechanism #17: Dynamic Multimedia via Legitimate CDN 

Specifically, the phishing page dynamically loads a logo and custom background tailored to the domain of the victim’s email address, increasing its visual credibility.  

Figure 29: Phishing page with custom background

The multimedia content is delivered through Microsoft’s legitimate AADCDN network, leveraging trusted infrastructure to evade detection and reduce suspicion. 

Figure 30: Use of AADCDN for loading custom logos/backgrounds 

Attack Detected on 3 April 2025 

This sample introduces multiple new evasion mechanisms across various stages, reflecting Tycoon 2FA’s continued evolution in obfuscation, redirection, and anti-analysis techniques. 

Stage 1: Enhanced Obfuscation 

Evasion Mechanism #18: Complex JavaScript Code 

The payload uses Base64 obfuscation for JavaScript keywords, and method calls (e.g., document.write()) are invoked via object property access, complicating static analysis.  

The next stage’s content involves URL-encoding/decoding, further obscuring the code.  

Figure 31: More sophisticated code in Stage 1
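
The combination means neither the method name nor the call syntax appears literally in the source. A condensed illustration of the pattern (decoded here with Buffer; browsers use atob):

```javascript
// Method name stored Base64-encoded, then invoked via computed property
// access so "document.write" never appears as a literal string.
const encoded = 'd3JpdGU=';                               // btoa('write')
const method = Buffer.from(encoded, 'base64').toString(); // -> 'write'

// In the browser the call becomes:
// window['document'][method]('<p>...</p>');
```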

Stage 2: New Evasion Techniques 

When Stage 2 code is deobfuscated, we can observe new evasion methods. 

Evasion Mechanism #19: Invisible Obfuscation 

The code employs whitespace-based “invisible” obfuscation, using proxy object calls and getter methods to retrieve and execute code via eval(). This technique makes the code harder to read and analyze. 

Figure 32: Invisible obfuscation code #1 
Figure 33: Invisible obfuscation code #2 
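
One way to implement "invisible" whitespace encoding is to store each bit of the payload as a tab or space, which renders as blank text in an editor. This sketch shows the idea only; the real samples layer Proxy objects and getter methods in front of the final eval(), and the summary table later refers to Hangul filler characters as an alternative invisible alphabet:

```javascript
// Encode each byte of the payload as 8 whitespace characters:
// tab = 1, space = 0. The result looks like empty lines in an editor.
function toInvisible(code) {
  return [...Buffer.from(code, 'utf8')]
    .map(b => b.toString(2).padStart(8, '0'))
    .join('')
    .replace(/1/g, '\t')
    .replace(/0/g, ' ');
}

function fromInvisible(ws) {
  const bits = [...ws].map(c => (c === '\t' ? '1' : '0')).join('');
  const bytes = bits.match(/.{8}/g).map(b => parseInt(b, 2));
  return Buffer.from(bytes).toString('utf8');
}

// A sample would then execute the recovered code: eval(fromInvisible(hiddenBlock));
```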

The form sent during the transition from Stage 2 to Stage 3 is now created as a FormData object, replacing the previous HTML <form> element approach, reducing detectability. 

Figure 34: Old HTML form declaration 
Figure 35: New FormData declaration 

Evasion Mechanism #20: Custom Fake Page Redirect 

Unlike earlier samples that redirected to legitimate sites (e.g., eBay) upon failing checks, this revision redirects to a custom fake HTML page, enhancing deception and avoiding reliance on external domains. 

Figure 36: Example of custom fake page

Evasion Mechanism #21: Custom CAPTCHA 

A custom CAPTCHA replaces the previously used Cloudflare Turnstile, likely to complicate signature-based and behavioral analysis and mitigate potential issues with Cloudflare’s security services.  

Figure 37: Custom CAPTCHA Code Fragment 
Figure 37: Custom CAPTCHA 

Stage 5: Clipboard Protection 

Evasion Mechanism #22: Disabling Clipboard Copying 

In addition to filling the clipboard with junk data (as seen in earlier samples), this revision prevents copying from the login form’s input fields, further hindering analysis.  

Figure 38: Code for Disabling Clipboard Copying 

Stage 6: Enhanced Data Exfiltration 

Evasion Mechanism #23: Custom Binary Encoding 

Data exfiltration now uses binary encoding for payloads, adding an additional layer of obfuscation.  

Figure 39: Code for binary encoding of payload 

An example payload can be decoded using CyberChef; the result is the decrypted data.  

The decryption key and IV are both the hardcoded string 1234567890123456. 

Figure 40: Example of decrypted payload

Attack Detected on 14 April 2025 

This sample introduces a more complex method for launching the Stage 1 payload, leveraging redirect chains to obscure the attack’s entry point. 

Evasion Mechanism #24: Extended Redirect Chain  

Clicking the initial phishing link triggers a redirect to Google Ads, followed by another redirect to a malicious URL that uses the following format:  

hxxps://<domain>/?<2nd_domain>=<base64_payload> 

A script then extracts the Base64 payload from location.search, decodes it, and constructs the URL for the Stage 1 payload.  

This extended redirect chain makes it harder to trace the attack’s origin. 

Figure 41: Code for calculating phishing page URL
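
The URL computation can be sketched as follows. How the decoded payload is combined with the domain is an assumption here; the sample reads the query string from location.search in the browser:

```javascript
// Extract '<2nd_domain>=<base64_payload>' from the query string and
// rebuild the Stage 1 URL. The final URL shape is an assumption.
function stage1Url(search) {
  const q = search.replace(/^\?/, '');
  const sep = q.indexOf('=');          // first '=' splits domain from payload
  const domain = q.slice(0, sep);
  const payload = Buffer.from(q.slice(sep + 1), 'base64').toString('utf8');
  return `https://${domain}/${payload}`;
}
```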

The full redirect process for reaching Stage 1 is as follows: 

Figure 42: New redirect chain to Stage 1

Additionally, in POST requests, the cf-turnstile-response field (previously used for Cloudflare validation) is now filled with a placeholder value (qweqwe), confirming Tycoon 2FA’s shift away from Cloudflare. 

Evasion Mechanism #25: Rotating CAPTCHAs 

This revised version replaces the previously used custom CAPTCHA with Google reCAPTCHA.  

Figure 43: Use of Google reCAPTCHA in Tycoon 2FA 

Historical data shows Tycoon 2FA has cycled through different CAPTCHAs, such as IconCaptcha (observed in a submission on April 7, 2025).  

Figure 44: Example of IconCaptcha in Tycoon 2FA  

The use of varying CAPTCHAs complicates signature-based detection. 

Attack Detected on 23 April 2025 

Around this period, Tycoon 2FA introduced a new anti-detection mechanism focused on browser fingerprinting to detect sandbox environments and bot activity. 


Evasion Mechanism #26: Browser Fingerprinting

After opening the phishing link, a page loads that requests an image element and, if the request fails, executes a Base64-encoded script via the element’s onerror handler.  

Figure 45: Suspicious onerror handler in image element 



After decoding with CyberChef, the script reveals functionality for:  

  • Collecting browser environment details: screen parameters, browser/platform name, URL, host, protocol, console properties, and document body.  
Figure 46: Code for collecting execution environment data 
  • Gathering additional data: timezone, JavaScript runtime internals, iframe checks, and graphical interface properties.  
Figure 47: Code for collecting browser properties 
  • Collecting miscellaneous technical information. 

The collected data is formatted as JSON, inserted into an invisible form, and sent to the attacker’s server via a POST request.  

Figure 48: Invisible form for sending fingerprint data 
Figure 49: Example of sent fingerprint data
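
The collection step amounts to serializing these properties into one JSON object. A sketch with a window-like object passed in so the collector can run outside a browser; the field names are illustrative, and Figures 46-47 show the real routine:

```javascript
// Illustrative fingerprint collector; real samples read browser globals
// directly and gather many more properties.
function collectFingerprint(win) {
  return JSON.stringify({
    screen: { w: win.screen.width, h: win.screen.height },
    platform: win.navigator.platform,
    userAgent: win.navigator.userAgent,
    url: win.location.href,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    inIframe: win.self !== win.top,  // iframe check
  });
}

// The JSON is then written into a hidden form field and POSTed to the C2.
```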

The server analyzes the fingerprint data and returns a response with a Location header, leading to one of two outcomes:  

  • Redirect to a Legitimate Page: If checks fail (e.g., sandbox detected), the user is redirected to a legitimate site like Emirates, Tesla, or SpaceX. 
Figure 50: Example of failed C2 check redirect 
  • Redirect to Stage 1 Payload: If checks pass, the user is directed to the Tycoon 2FA Stage 1 phishing page. 
Figure 51: Example of successful transition to phishing page 

This mechanism also allows the attacker to geographically restrict the operation of the malware, enabling Tycoon2FA to launch in certain regions while terminating the attack process in others. 

Attack Detected on 6 May 2025 

Evasion Mechanism #27: Obfuscation through Encryption 

In this sample, we can observe that the Tycoon2FA operator began using AES encryption for payload obfuscation, not just for uploading/downloading stolen and service data in the final stages of execution.

Figure 52: Code for obfuscation via encryption

In all other parts, the execution chain of the new samples remains similar to the original.  

All Tycoon2FA Evasion Mechanisms 

#  Description  Sample  Date Observed 
1  Basic Obfuscation  https://app.any.run/tasks/7a87388b-8e07-4944-8d65-1422f56d303f  1 October 2024 
2  Nomatch Check 
3  Current Page Location Check 
4  Redirect to Fake Litespeed 404 Page on Failed Checks 
5  Cloudflare Turnstile 
6  Debugger Timing Check 
7  C2 Server Authorization for Payload Execution 
8  Base64/XOR Obfuscation 
9  Encryption of C2 Control/Exfiltrated Data 
10  Victim Browser Detection 
11  Clipboard Content Manipulation 
12  C2 Request Routing 
13  Redirect API Validation 
14  Debug Environment Detection (Selenium, etc.)  https://app.any.run/tasks/57f31060-cc3e-4a65-9fa9-f460ede5f39c  6 December 2024 
15  Keystroke Interception 
16  Context Menu Blocking 
17  Use of Legitimate CDN for Corporate Logos/Backgrounds  https://app.any.run/tasks/9700f36a-d506-4e5e-8f96-cdddc83e37a0  17 December 2024 
18  Complex JavaScript Code  https://app.any.run/tasks/d40e75ba-e4e8-4b51-b4a5-6614c8be7891  3 April 2025 
19  Invisible (Hangul) Obfuscation 
20  Redirect to Custom Fake Page on Failed Checks 
21  Use of Custom CAPTCHA Instead of Cloudflare 
22  Disabling Clipboard Copying 
23  Binary Encoding for Exfiltrated Data 
24  Extended Redirect Chain Before Payload Execution  https://app.any.run/tasks/3bb9892b-4c3d-4c5e-a44d-d569cab8578e  7 April 2025 – 14 April 2025 
25  Use of Different CAPTCHAs (reCAPTCHA, IconCaptcha, etc.) 
26  Browser Fingerprinting  https://app.any.run/tasks/7c54c46d-285f-491c-ab50-6de1b7d3b376  23 April 2025 
27  Obfuscation via Encryption  https://app.any.run/tasks/c43d00a5-60d9-433a-8aee-d359eaadf0ab  6 May 2025 

Conclusion 

The operators and developers of the Tycoon 2FA Phishing-as-a-Service (PhaaS) framework continue to actively enhance their product, focusing on complicating analysis of the malicious software. 

Tycoon 2FA is adopting increasingly sophisticated anti-bot techniques, such as rotating CAPTCHAs (e.g., Google reCAPTCHA, IconCaptcha, custom CAPTCHAs) and browser fingerprinting, to protect its infrastructure from crawlers and Safe Browsing solutions. 

The analysis indicates that there are several different versions or types of Tycoon 2FA active at the same time. This is evident because the methods used to avoid detection vary across different samples and time periods. Some techniques show up, disappear, and come back later.  

Alongside the primary focus on Microsoft Outlook authentication phishing, variants targeting Google account authentication have been observed:  

https://app.any.run/browses/b9c0b778-df32-4073-a580-18d7fc330518

https://app.any.run/tasks/a487cada-21b9-48e2-a7f3-470e3eddab0d

Despite the addition of new evasion techniques, some methods lack sophistication and remain relatively easy to bypass: 

  • Obfuscation: Most obfuscation relies on public tools like obfuscate.io, which can be reversed using deobfuscate.io.  
  • Limited JavaScript Exploitation: Tycoon 2FA does not fully leverage advanced JavaScript runtime capabilities, such as prototype manipulation, reflection mechanisms, or other dynamic code restructuring techniques. 

In certain aspects, Tycoon 2FA’s evasion mechanisms seem quite amateur. For example, across all observed samples, C2 payloads and exfiltrated data are encrypted/decrypted using hardcoded keys and initialization vectors (1234567890123456 for both key and IV). Ideally, unique keys should be generated per session to enhance security. 

The core architecture of Tycoon 2FA remains unchanged, relying on three domains: 

  • Primary phishing domain: Hosts the phishing page.  
  • Controller domain: Authorizes or denies further execution based on protective checks.  
  • Exfiltration domain: Receives stolen data. 

Similarly, the execution chain of the framework has remained consistent, enabling detection through behavioral analysis despite the introduction of new evasion mechanisms. 

Recommendations for Detecting Tycoon 2FA 

Given the constant changes in the source code of Tycoon 2FA phishing pages, signature-based analysis is largely ineffective, and behavioral analysis is essential for reliable detection.  

Tycoon 2FA employs a “triangle” of Command-and-Control (C2) domains drawn from a specific pool of top-level domains (TLDs), including .ru, .es, .su, .com, .net, and .org. It also consistently loads a predictable set of JavaScript libraries, CSS stylesheets, and other web content, which can be leveraged for detection: 

Libraries: 

Okta CSS: 

Misc hyperlinks/web-content: 

To detect Tycoon 2FA, security teams can implement a heuristic based on the following behavioral patterns observed in a single session: 

  • C2 Domain Triangle: Communication with a set of domains from the TLD pool (e.g., .ru, .es, .su, .com, .net, .org).    
  • Resource Loading: Retrieval of the specific JavaScript libraries, CSS stylesheets, or web content listed above.    
  • Session Redirect: A redirect to the official Microsoft authentication page at the end of the session. 

If all three patterns are observed in a single session, there is a high probability that the activity involves Tycoon 2FA phishing. 
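
As a sketch, the heuristic can be expressed as a predicate over a single session's observations. The TLD pool and the Microsoft redirect come from the patterns above; the session/event shape and the exact thresholds are assumptions for illustration:

```javascript
// Illustrative three-part heuristic. `session.knownSet` stands in for
// the predictable resource list referenced above.
const TLD_POOL = ['.ru', '.es', '.su', '.com', '.net', '.org'];

function looksLikeTycoon2FA(session) {
  // 1. C2 "triangle": at least three distinct domains from the TLD pool
  const c2Triangle =
    new Set(session.domains.filter(d => TLD_POOL.some(t => d.endsWith(t)))).size >= 3;
  // 2. Retrieval of known Tycoon 2FA resources
  const knownResources = session.resources.some(r => session.knownSet.has(r));
  // 3. Final redirect to the official Microsoft authentication page
  const msRedirect = session.finalRedirect.startsWith('https://login.microsoftonline.com');
  return c2Triangle && knownResources && msRedirect;
}
```

Scoring the three signals separately (rather than requiring all of them) would trade some precision for recall; the conjunction shown here matches the "all patterns in one session" rule.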

The post Evolution of Tycoon 2FA Defense Evasion Mechanisms: Analysis and Timeline appeared first on ANY.RUN’s Cybersecurity Blog.


Redefining IABs: Impacts of compartmentalization on threat tracking and modeling

  • Cisco Talos has observed a growing trend of attack kill chains being split into two stages — initial compromise and subsequent exploitation — executed by separate threat actors. This compartmentalization increases the complexity and difficulty of performing threat modeling and actor profiling.
  • Initial access groups now include both traditional initial access brokers (IABs) as well as opportunistic and state-sponsored threat actors, whose characteristics, motivations and objectives differ significantly.
  • In response to these evolving threats, we have refined the definitions of initial access groups to include subcategories such as financially-motivated initial access (FIA), state-sponsored initial access (SIA), and opportunistic initial access (OIA). 
  • We provide several examples of publicly-known threat groups to explain our methodology and the differentiation between them. Understanding the motivations of initial access groups is crucial for analyzing compartmentalized threats. In the forthcoming blog, we will explain how to model attack kill chains that involve multiple attackers.

What is initial access?


The term “initial access” refers to the initial foothold or entry point that threat actors establish within a target network or system. It is the stage in the cyber attack kill chain in which an attacker has the opportunity to begin working towards their longer-term mission objectives, whatever those may be. Initial access can be gained through a variety of methods, including exploitation of software or hardware vulnerabilities, employment of social engineering tactics to obtain credentials, or delivery of malicious components that, if opened or executed by victims, grant this ability automatically. 

In recent years, we have observed the emergence of threat actors who specialize in gaining initial access to computer networks. These threat actors, also referred to as initial access brokers (IABs), traditionally monetize the access they gain by selling it to other threat actors, who may then utilize the provided access for espionage or financial purposes. In short, IABs play a pivotal role in the overall cybercrime ecosystem, as they enable other malicious actors to quickly and efficiently execute their attacks without requiring them to obtain access themselves.

This distinction between IABs and the threat actors they may transfer network/system access to is extremely important. It directly impacts organizational risk assessment and threat modeling activities, as well as how incident response may be conducted if an intrusion occurs. It also complicates intrusion analysis, as it is often difficult to determine when a potential “handoff” of access occurs between threat actors when analyzing log data collected during an active intrusion.

Additionally, the term “initial access” is sometimes misused to refer to infrastructure leveraged by threat actors, such as operational relay box (ORB) networks and those offered as Infrastructure as a Service (IaaS). In this context, “initial access” specifically refers to access to the target’s network, not a network leveraged by threat actors merely as infrastructure for their campaign.

What are the challenges?

One of the primary challenges in modern intrusion analysis is the ability to correctly identify whether an observed adversary is an IAB. This distinction is operationally critical: when the actor responsible for the intrusion focuses solely on initial access, defenders must anticipate and prepare for the likely involvement of secondary actors who may carry out the core objectives of the attack. However, distinguishing IABs from full-spectrum threat actors has become increasingly difficult, as many initial access operations now exhibit the same level of sophistication, targeting and tooling as those conducted by targeted attackers or advanced persistent threats (APT). This overlap in tradecraft significantly complicates attribution, especially in cases where multiple actors interact across different phases of the intrusion.

Another challenge stems from the fact that compartmentalization is no longer exclusive to financially-motivated cybercriminals. In recent years, state-sponsored threat actors have adopted similar operational models, performing initial access and subsequently handing off to other state-sponsored groups within the same state apparatus (e.g., between military or intelligence units). In some cases, state-sponsored initial access groups even transfer access to financially-motivated ransomware operators. These handoffs may be strategic or opportunistic in nature, but they introduce a key problem for defenders: the appropriate preventative, detective and responsive strategies employed must consider not only the threat actor who obtains initial access, but also any other threat actors that may operate during later stages of an intrusion. Likewise, the hunting and containment strategies employed to defend against financially-motivated IABs may not be suitable against state-sponsored initial access groups, whose access operations are typically more stealthy, targeted, and persistent.

Given this evolution across the threat landscape, we argue that a more granular taxonomy of initial access groups is necessary. Specifically, differentiating initial access groups (IAGs) based on threat actor’s perceived motivation for obtaining initial access is essential for accurate actor profiling, campaign tracking, and threat modeling. This refined categorization enables defenders and analysts to better predict follow-on activity, align response strategies with threat actor intent, and improve long-term attribution and understanding of the threat landscape.

Redefining IABs

As previously mentioned, the concept of obtaining access to protected systems or networks and then transferring that access to third parties is not specific to either financially-motivated or state-sponsored/-aligned threat actors. In response to this shift, we propose expanding the definition of IABs to include several types of initial access groups (IAGs) that reflect a broader range of threat actor motivations and affiliations (since not all groups specializing in gaining initial access are “brokers”, we replace “broker” with “group”). As such, we define an IAG not strictly by the technical stage of the intrusion in which they operate, but based on their primary operational intent: to obtain and then hand over access to another group. Although initial access groups primarily focus on gaining entry into target environments and may not be heavily involved in later operations within the kill chain, they might have the sophisticated skills necessary for lateral movement, privilege escalation, and other advanced techniques. Being classified as an initial access group does not imply a lack of sophistication in terms of their tactics, techniques and procedures (TTPs) and capabilities. It is also worth noting that while gaining initial access, many IAGs may also maintain persistence on the compromised host or network to ensure the access remains available throughout the handover process. 

The determination as to whether a threat actor should be considered an IAG is based on consistent observable behavioral patterns. If a group routinely hands over access, regardless of whether it also performs lateral movement, data staging, or limited post-compromise activity prior to the transfer of access, it should still be considered an IAG, as long as the end goal is delegation to another threat actor.

Rather than treating IAGs as a homogenous category, we further distinguish between actors based on their primary drivers and organizational alignment. Specifically, we introduce a new taxonomy comprising:

  • Financially-motivated initial access (FIA)
  • State-sponsored initial access (SIA)
  • Opportunistic initial access (OIA)

Financially-motivated initial access (FIA)

Financially-motivated initial access (FIA) groups are typified by their focus on compromising systems for financial gain, which is more aligned with the conventional definition of an IAB. Their main objective is the maximization of profits derived by monetizing the access they achieve. These groups may occasionally sell access to state-sponsored actors, either with or without full awareness of the buyer’s identity, but their sole driving force remains financial gain. The motivations behind their transactions are not influenced by political objectives or tasking, but rather by the potential for profit, making them distinct from state-sponsored initial access (SIA) or opportunistic initial access (OIA) groups. This singular focus on financial outcomes guides their operations, regardless of the end use of the access they provide.

ToyMaker, also known as UNC961, is one example of an FIA group. ToyMaker typically exploits known vulnerable internet-facing servers and has used custom implants such as LAGTOY to gain initial access to high-value targets, including critical infrastructure organizations. The group has been observed transferring access to multiple ransomware groups including Maze, Egregor and Cactus.

Another example is TA571, which is a threat actor that has been associated with the operation of spam botnets for malware distribution, as well as the use of the 404TDS which is sometimes incorporated into the spam emails. Prior reporting indicates that TA571 operates as an FIA group, and has been observed distributing a variety of malware families, including those associated with threat actors such as TA866/Asylum Ambuscade, a threat actor that has historically been associated with both financially-motivated and espionage operations. In addition, TA571 has been associated with the distribution of other malware families, including variants of IcedID, NetSupportRAT, DarkGate and others. In the context of the categorization described previously, we would characterize TA571 as an FIA group, as their primary motivation is likely financial in nature.

State-sponsored initial access (SIA)

State-sponsored initial access (SIA) groups are typically embedded within a nation’s military cyber units, intelligence agencies, or state-affiliated contractors. These groups focus on gaining a foothold in high-value targets, often government, critical infrastructure or strategic industries, to help the state-sponsored groups achieve their broader operational goals. This type of handoff is often conducted for the purpose of providing isolation between the different phases of a typical attack kill chain. By insulating each phase from the others, the threat actor can lower the risk of exposure of stage-specific tooling and TTPs, making attribution of attacks significantly more difficult.

It’s important to note that for an actor to be classified as a SIA group, the focus should primarily be on securing initial access rather than executing the entire attack campaign. Even if an actor has the capability to complete the full attack kill chain, a SIA group’s defining characteristic is its regular practice of handing over initial access to affiliated groups. This deliberate handoff differentiates SIA groups from conventional APT groups, underscoring their specialized role within the broader context of state-sponsored cyber operations.

One example of an SIA group is ShroudedSnooper, also known as UNC1860, Scarred Manticore and Storm-0861. ShroudedSnooper is widely considered an IAG and attributed to Iran by industry vendors. Talos assesses with high confidence that ShroudedSnooper is an SIA group. ShroudedSnooper is associated with the Iranian government, mainly tasked with gaining initial access and then deploying webshells and passive implants such as HTTPSnoop, PipeSnoop and more. These implants are later instrumented to transfer access to other threat groups working under the Iranian APT machinery. Once ShroudedSnooper has established persistent access, subsequent threat actors (for example, Storm-0842) may use the access for data exfiltration and espionage, financial gain via ransomware deployment or disrupting victim operations by deploying wipers.

Opportunistic initial access (OIA)

Opportunistic initial access (OIA) groups often straddle the line between the two previously described categories. OIA groups may be financially-motivated and possess the means to monetize their access by selling it to either financially-motivated or state-sponsored threat actors. They may also operate in different capacities at different times. For example, actors like government contractors may operate as an SIA group as part of their normal means of employment while operating as an FIA group to generate additional income. Once the state-sponsored actor’s operation has been conducted, the initial access may then be re-sold under the pretext of “financial gain” while providing plausible deniability and forensic confusion once the access is reused.

One example of an OIA group is UNC5174. The persona tied to this group, uetus, is suspected to be a former member of the Chinese hacktivist group 騰蛇 (Teng Snake), aka 晓骑营 (Xiaoqiying)/Genesis Day, which research suggests is an IAB for nation-state groups. The Teng Snake team was reported selling Personally Identifiable Information (PII) and initial access to the South Korean health department in an underground forum in 2022. In 2023, UNC5174 obtained access to entities that are deemed to be of high interest to espionage groups, primarily targeting organizations in North America, the U.K., Australia and Southeast Asia. Initial access is obtained by exploiting known vulnerabilities in services exposed to the internet and the subsequent deployment of bespoke or open-source tooling to maintain persistent access to victims. This access is subsequently monetized by UNC5174 and transferred to state-sponsored groups who then undertake a more comprehensive set of tasks to conduct long-term espionage operations within the victim enterprise.

FIA and SIA groups: Similarities and distinctions

While many IAG characteristics (motivation, objectives, etc.) differ significantly when comparing FIA and SIA groups, many of the TTPs, toolsets and infrastructure employed by FIA and SIA groups are often very similar, making differentiation challenging. For instance, both FIA and SIA groups commonly utilize spear-phishing emails, exploitation of known vulnerabilities and proprietary malware. Despite these similarities, several distinct characteristics observed during our investigations help indicate the potential motivation behind attacks. While these characteristics alone may not definitively confirm motivation, they serve as valuable indicators.

Target selection

SIA groups primarily focus on targets aligned with a nation-state’s strategic interests (e.g. government, critical infrastructure or industries of strategic importance to the tasking organization). Even if SIA groups eventually transfer access to financially-motivated actors, their main objective remains fulfilling the nation-state’s geopolitical goals.

Although FIA groups can also target entities of interest to nation-states, this is typically coincidental, as they are often more opportunistic and generally have a broader targeting scope with potentially higher volume operations.

Data exfiltration practices

FIA groups typically prioritize rapid credential exfiltration rather than spending significant time and effort locating, staging, and exfiltrating strategically important data from compromised environments. From the perspective of the FIA group, authentication data like credentials is one of the primary ways that access can be monetized. For example, during the initial phase of the ToyMaker campaign, despite targeting high-value entities, we observed no attempts to locate data of significant importance, and no data other than credentials was exfiltrated from the environment, supporting the hypothesis that the actor was likely financially-motivated. On the other hand, SIA groups that collaborate with APT groups might also perform data exfiltration after gaining initial access. For example, ShroudedSnooper (Storm-0861) was reported to have exfiltrated mail from the victim’s network after gaining initial access.

Handover process

FIA groups often sell access through dark web forums or underground marketplaces. Monitoring these platforms can aid in identifying compromised organizations and preventing subsequent attacks. On the other hand, SIA groups transfer access discreetly, usually without public advertisement and often within controlled channels or partnerships.

Handover pattern consistency

When collaborating with APT groups, SIA actors typically exhibit a more structured and consistent handover process due to repeated collaboration with the same or similar threat actors. For example, in ShroudedSnooper’s operation, access is often provided to the recipient group via a webshell and is typically leveraged by the recipient (Storm-0842 in many cases) shortly after the webshell is dropped on the system. This tighter coordination produces more predictable handover patterns: because the same threat actors collaborate repeatedly, the same toolsets and behaviors tend to recur across campaigns.

For FIA groups, although the threat actors usually try to sell the access quickly, the handover timing (the time when the buyer starts using the access) can vary significantly due to market transaction processes and the operational timeline of the buyer. However, FIA groups closely aligned with dedicated ransomware gangs may exhibit faster and more predictable handovers as they are accustomed to working with the same threat actor repeatedly over time.

Dwell time

FIA groups generally exhibit shorter dwell times because they aim to monetize their initial access quickly to maintain its value. An SIA group may maintain a longer dwell time and prioritize stealth until tasked to transfer access. This differentiates SIA groups from FIA groups because in most cases the access achieved is used operationally rather than monetized quickly. Operational cadence may contribute to longer periods between when the access was gained and when it is operationalized. For example, in the ShroudedSnooper campaign that reportedly targeted victims in Israel, the handover for one victim occurred over a year after initial access was gained.

Relationship consistency

SIA groups typically operate within closed ecosystems or in close coordination with state structures, and often collaborate with the same APT groups consistently over time. In contrast, FIA groups may monetize initial access by advertising on darknet markets and work with various types of threat actors, including ransomware groups, data theft criminals, malware operators and so on. Analysis of persistent relationships between IAGs and the entities they repeatedly transfer access to may help an analyst determine whether the IAG is FIA or SIA. Threat actor involvement in intrusions where handover has occurred with both financially- and state-sponsored threat actors may indicate an OIA operation.

Relationship characteristics

No redefinition would be complete without a look into the way these IAGs may interact with each other. In order to do so, we have mapped these interactions in two dimensions.

Level of collaboration: Indicates how directly the IAG coordinates or hands off access to the counterpart group (the group receiving access to the compromised environment). Some relationships are transient, one-off handoffs, while others are tightly integrated, repeated collaborations between threat actors.

Level of knowledge: This dimension illustrates how much the IAG knows about the identity or role of the recipient group (or vice versa). This ranges from anonymous or transactional exchanges to full organizational or operational awareness.

The quadrant below contains examples that illustrate such interactions, with IAGs mapped according to their category. Each group’s positioning is explained in the sections ahead.

[Figure: IAG interactions mapped by level of collaboration and level of knowledge]

Q1 (High Collaboration, Low Knowledge): In the first quadrant, we find IAGs that work closely with clients on tasking and targeting, but without necessarily having full knowledge of their recipient’s identity or motivations. For example, a state-sponsored group might regularly acquire access to a specific victim from TA571 without revealing its own true identity to them.

Q2 (High Collaboration, High Knowledge): IAGs inside a larger organization may be tasked with obtaining access to a specific target and then handing the access to another group inside the same organization. These actors operate in tightly integrated ecosystems, often within state-sponsored command structures (SIA). Here, IAGs coordinate directly with known entities, such as intelligence units, under clear tasking or operational alignment.

For example, an SIA like ShroudedSnooper operates under the directive of the state, and access obtained by ShroudedSnooper is typically handed over to another state-sponsored group.

Q3 (Low Collaboration, Low Knowledge): These actors operate independently and rarely interact with or have knowledge of the entities to which they are transferring access. When handoffs do occur, they tend to be infrequent and opportunistic, often through anonymous channels. FIA groups often fall into this category.

Q4 (Low Collaboration, High Knowledge): This is where an IAG passes on access to other groups without intentional collaboration but with knowledge of who they are supplying the access to. For example, an espionage group may transfer access to a ransomware group in the hope that this group’s activities hinder forensic reconstruction and analysis of earlier malicious activities. 

Another example might be an FIA group partnering with a ransomware group. While the FIA group might be aware of the ransomware group’s identity, the two might have limited collaboration. This is what we have observed in previous ToyMaker intrusion activity: ToyMaker transfers access only to ransomware groups, but we have not observed any evidence of direct interaction on compromised hosts.
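The two-dimensional mapping above can be expressed as a simple classifier. The scores, threshold and example placements below are illustrative assumptions for demonstration, not Talos-assessed values:

```python
# Hypothetical sketch: placing IAG interactions into the four quadrants
# described above. Scores and placements are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Interaction:
    iag: str
    collaboration: float  # 0.0 (transient handoff) .. 1.0 (tightly integrated)
    knowledge: float      # 0.0 (anonymous sale)    .. 1.0 (full awareness)

def quadrant(ix: Interaction, threshold: float = 0.5) -> str:
    """Return the quadrant (Q1-Q4) for an IAG/recipient interaction."""
    high_collab = ix.collaboration >= threshold
    high_knowledge = ix.knowledge >= threshold
    if high_collab and not high_knowledge:
        return "Q1"  # close tasking, recipient identity concealed
    if high_collab and high_knowledge:
        return "Q2"  # e.g., SIA handoff inside a state command structure
    if not high_collab and not high_knowledge:
        return "Q3"  # e.g., opportunistic FIA sale through anonymous channels
    return "Q4"      # known recipient, but no intentional coordination

examples = [
    Interaction("TA571 client purchase", collaboration=0.8, knowledge=0.2),
    Interaction("ShroudedSnooper", collaboration=0.9, knowledge=0.9),
    Interaction("opportunistic FIA", collaboration=0.1, knowledge=0.1),
    Interaction("ToyMaker", collaboration=0.2, knowledge=0.8),
]
placements = {e.iag: quadrant(e) for e in examples}
```

The threshold of 0.5 is arbitrary; in practice, an analyst would assign these dimensions qualitatively based on observed tradecraft rather than numeric scores.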

Conclusion

Depending on the type of initial access, the role an IAB plays in an attack, the TTPs used to obtain initial access, the activities conducted directly following initial access, and the timeframe and means by which handoff occurs may differ drastically. As such, it is important to understand both the types of IAGs and the threat actors they maintain business relationships with. When analyzing intrusion activity, one should understand these business relationships, while also differentiating between the threat actor(s) who gained initial access and the threat actor(s) operating at later stages of the intrusion, particularly if some evidence of handoff is observed.

By distinguishing between FIA, SIA and OIA groups, we offer a clearer definition for understanding how these groups operate and interact within the broader threat landscape. In the next blog, we will demonstrate how Talos adjusts diamond models used in intrusion analysis and threat modeling to effectively incorporate the nuances of compartmentalized attacks, allowing for more precise threat analysis and improved attribution of complex intrusion campaigns.


Defining a new methodology for modeling and tracking compartmentalized threats

  • In the evolving cyberthreat landscape, Cisco Talos is witnessing a significant shift towards compartmentalized attack kill chains, where distinct stages — such as initial compromise and subsequent exploitation — are executed by multiple threat actors. This trend complicates traditional threat modeling and actor profiling, as it requires understanding the intricate relationships and interactions between various groups, as explained in the previous blog.
  • The traditional Diamond Model of Intrusion Analysis takes a feature-centered approach (adversary, capability, infrastructure and victim) to pivoting, which can lead to inaccuracies when analyzing “compartmentalized” attack kill chains that involve multiple distinct threat actors. Without incorporating the context of relationships, the model faces challenges in accurately profiling actors and constructing comprehensive threat models.
  • We have identified several methods for analyzing compartmentalized attacks and propose an extended Diamond Model, which adds a “Relationship Layer” to enrich the context of the relationships between the four features.
  • In a collaboration between Cisco Talos and the Vertex Project, a Synapse model update has just been published which introduces the entity:relationship form, providing modeling support for this methodology.
  • We illustrate our investigative approach and application of the extended Diamond Model for effective pivoting by examining the ToyMaker campaign, where ToyMaker functioned as a financially-motivated initial access (FIA) group, handing over access to the Cactus ransomware group.

Impacts on defenders


The convergence of multiple threat actors operating within the same overall intrusion creates additional layers of obfuscation, making it difficult to differentiate the activities of one threat actor from another, or to identify when access has been handed off from one to the next. At each point where outsourcing occurs or access is handed off, the Diamond Model of the adversary changes. Likewise, leveraging the output of kill chain analysis for pivoting, clustering and attribution becomes significantly more difficult, as analysts may be forced to operate under the assumption that multiple actors are involved unless they can prove otherwise, where historically the opposite assumption was likely made.

Additionally, misattributing attacks due to tactics, techniques and procedures (TTPs) present in earlier stages of the intrusion may impact the way in which incident response or investigative activities are conducted post-compromise. They may also create uncertainty around the motivation(s) behind an attack or why an organization is being targeted in some cases. 

Analysis processes and analytical models must be updated to reflect these new changes in the way that adversaries conduct intrusions, as existing methodologies often create more confusion than clarity.

Introduction to threat modeling

NIST SP 800-53 (Rev. 5) defines threat modeling as “a form of risk assessment that models aspects of the attack and defense sides of a logical entity, such as a piece of data, an application, a host, a system, or an environment.”

For many organizations, this involves evaluating their preventative, detective and corrective security controls from an adversarial perspective to identify deficiencies in their ability to prevent, detect or respond to threats based on specific tactics, techniques, and procedures (TTPs). For example, adversary emulation simulates an attack scenario and demonstrates how an organization could reasonably expect their security program to respond if a specific threat is encountered.

Intrusion analysis is the process of analyzing computer intrusion activity. This involves reconstructing intrusion attack timelines, analyzing forensic artifacts and identifying the scope and impact of activity. Intrusion analysis typically results in a better understanding of an attack or adversary, and may also result in the development of a model to reflect what is known about the threat. This model can then be used to support more effective detection content development and threat modeling activities in the future. The symbiotic relationship between intrusion analysis and threat modeling allows organizations to effectively incorporate new knowledge and information about threats and threat actors into their security programs to ensure continued effectiveness.

Over the past several years, different analytical models have been developed to assist with intrusion analysis and threat modeling that provide logical ways to organize contextual details about threats and threat actors so that they can be communicated and incorporated more effectively. Two of the most popular models are the Diamond Model and the Kill Chain Model.

[Figure: The Cyber Kill Chain Model]

The Kill Chain Model shown above is typically used to break an intrusion down into distinct stages/phases so that the attack can be reconstructed and analyzed. This allows analysts to build a realistic model that reflects the TTPs and other characteristics present during the intrusion. This information can then be shared so that other organizations can determine whether their own security controls would be effective at combatting the same or similar intrusion(s) or whether they have encountered the same threat in the past. 

[Figure: The Diamond Model of Intrusion Analysis]

The Diamond Model, shown above, is commonly used across the industry for building a profile of a specific threat or threat actor. This model is developed by populating each quadrant based on information about an adversary’s characteristics, capabilities, infrastructure tendencies and typical targeting/victimology. A fully populated Diamond Model creates an extensive profile of a given threat or threat actor.

It is important to note that an analysis may incorporate both (or other) models, and they are not mutually exclusive. There are also several other modeling frameworks that exist for similar purposes that are also often used in concert, such as the MITRE ATT&CK and D3FEND frameworks. For example, in some cases the information used to populate the Diamond Model may be the result of kill chain analyses of multiple intrusions over time that are ultimately attributed to the same threat actor(s). By leveraging the output of multiple kill chain analyses, one can build a more comprehensive model that reflects changes to characteristics or TTPs associated with a threat actor being tracked over time as well as improve overall understanding of the nature of a given threat.

Challenges applying existing models to compartmentalized threats

One of the key strengths of the Diamond Model is its concept of “centered approaches” for analytic pivoting — including victim-, capability-, infrastructure- and adversary-centered methods of investigation. These approaches enable analysts to uncover new malicious activities and reveal how each facet of an intrusion across the Diamond’s four dimensions intersects with others. For instance, in the paper’s infrastructure-centered example, an analyst might begin with a single IP address seen during an intrusion, then pivot to the domain it resolves to, scrutinize WHOIS registration details, and discover additional domains or IPs registered by the same entity. Further examination may reveal malware connected to or distributed by those domains. In such scenarios, the Diamond Model’s systematic method of traversing from one node to another can rapidly expose an interconnected web of adversaries, capabilities, and victims.
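The pivot chain just described can be sketched as a breadth-first traversal over an intelligence graph. The indicators and edges below are invented for illustration; a real graph would be populated from passive DNS, WHOIS records and malware telemetry:

```python
# Illustrative sketch of infrastructure-centered pivoting as breadth-first
# traversal. All nodes and edges are hypothetical demonstration data.
from collections import deque

# observable -> related observables discovered by one pivot step
graph = {
    "203.0.113.7": ["evil-domain.example"],            # IP resolves to domain
    "evil-domain.example": ["registrant@example.net"], # WHOIS registrant
    "registrant@example.net": ["evil2.example", "evil3.example"],
    "evil2.example": ["malware_sha256_aaa"],           # payload served
    "evil3.example": [],
    "malware_sha256_aaa": [],
}

def pivot(start: str) -> set:
    """Return every observable reachable from a starting indicator."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

related = pivot("203.0.113.7")
```

Starting from the single IP address, the traversal surfaces the resolved domain, the registrant, two sibling domains and a served payload — the interconnected web the original paper describes.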

[Figure: Centered approaches to analytic pivoting in the Diamond Model]

However, the original centered approach can introduce errors when dealing with a “compartmentalized” attack kill chain involving multiple distinct threat actors. In many cases, adversaries are now leveraging various relationships simultaneously while working towards their longer-term mission objectives. This could include the outsourcing of tooling development, rental of infrastructure services for distribution or command and control (C2), or access-sharing agreements leveraged post-compromise to facilitate hand-off once initial access (IA), persistence or privilege escalation has been achieved. This compartmentalization has complicated many analytical activities including attribution, threat modeling, and intrusion analysis. Likewise, the modeling methodologies that were initially developed to combat intrusion operations in previous years no longer accurately reflect today’s threat landscape.

To illustrate the complexity of compartmentalization, let’s consider a hypothetical scenario that closely mirrors real-world events. In this scenario, four distinct threat actor groups are involved:

  1. Actor A: A financially motivated threat actor aiming to profit by collecting logs from infostealer malware.
  2. Actor B: A malware developer who creates and sells infostealer malware.
  3. Actor C: A Traffic Distribution Service (TDS) provider.
  4. Actor D: A ransomware group.

In this scenario, a financially-motivated threat actor (Actor A) seeking to steal victims’ sensitive information with an infostealer may outsource the development of their malware to Actor B, either by engaging the developer directly or by purchasing it from a storefront. Likewise, distribution of the malware itself is outsourced to Actor C, who operates a spam botnet or traffic distribution service (TDS) offered for rent for a usage-based fee. Once Actor C has successfully achieved code execution on a system, they may infect it with the malware they initially received from Actor A, who is charged “per-install.”

Likewise, once Actor A has successfully performed enumeration of the environment, they identify that they have gained access to a high-value target. Rather than simply monetizing information-stealing malware logs, they choose to monetize the access itself by selling it to Actor D, who then leverages that access to deploy ransomware and extort the victim.

In this hypothetical scenario, Actor C, who would be classified as a financially-motivated initial access (FIA) broker, may also be distributing multiple malware families at any given time and leverage traffic filtering to manage final payload delivery. They may even host these payloads on the same infrastructure. The nature of the business relationships described in this scenario is shown below.

[Figure: Business relationships between Actors A–D]

While this scenario covers a single attack, it highlights a situation where applying the traditional analytical models poses several challenges. For example, consider the infrastructure used by Actor C, the TDS provider. The infrastructure that facilitates malware distribution is not solely dedicated to Actor A’s operations. This means that other malware found by pivoting the distribution infrastructure should not be considered as capabilities associated with Actor A. In addition, the malware’s targets are largely a product of Actor C’s distribution network and should not be treated as reflecting Actor A’s victimology. In this compartmentalized scenario, the interconnected web of adversaries, capabilities and victims exposed by pivoting with the Diamond Model should not be associated with each other, as the elements originate from different threat actors and should not be modeled as part of a single threat actor profile.

In even more complex cases, a threat actor may choose to engage multiple distributors simultaneously or work with different distributors on a weekly basis depending on real-time pricing and service availability. A threat actor conducting ransomware operations may choose to procure access from several initial access brokers (IABs), each with their own characteristics, capabilities and motivations. Likewise, several otherwise unrelated threat actors operating in different capacities throughout the kill chain present complications when attempting to take the result of the analysis and incorporate it into existing attribution data or when attempting to identify overlaps with other clusters of malicious activity. Modeling the IABs themselves also presents complications, as their characteristics and TTPs are often encountered in attacks where they may have only been operating within a subset of the overall phases of the intrusion. 

State-sponsored or -aligned threat actors’ campaigns have been documented using anonymization networks or residential proxies to hide their activities. This will create the same kind of activity overlap described by the usage of a TDS.

Extending the Diamond Model with the Relationship Layer

To extend the Diamond Model to include the complexities posed by compartmentalized attacks, we propose an extension to the original Diamond Model by integrating a “Relationship Layer.” This additional layer is designed to contextualize the interactions between the four features (adversary, infrastructure, capability and victim) of individual diamonds representing distinct threat actors. By incorporating this layer, threat analysts can construct a nuanced understanding of compartmentalized contexts.

The Relationship Layer allows for the articulation of common relational dynamics such as “purchased from” to indicate a transactional association, “handover from” to reflect a transfer of operational control or resources, and “leaked from” to convey the use of leaked tools. Additionally, it describes the connections between adversarial groups, encompassing a variety of interactions such as “commercial relationship,” “partnership agreements,” “subcontracting arrangements,” “shared operational goals,” and more. 

The integration of the Relationship Layer enables analysts to contextualize the interactions within the Diamond Model’s four features, thereby enhancing their ability to perform logical pivoting and accurate attribution. This refinement offers a more sophisticated framework for analyzing modern, compartmentalized cyberthreats, providing a clearer representation of the complex web of relationships that characterize these operations.

Let’s look at the scenario involving Actors A through D again. Figure 4 shows how we can use the extended Diamond Model to describe the relationships between entities involved in the intrusion activity:

[Figure 4: The extended Diamond Model applied to the Actor A–D scenario]

Each of the actors, A through D, possesses their own Diamond Model, reflecting their distinct roles as adversaries with unique capabilities, victims and infrastructures. We have extended each Diamond Model by integrating an additional Relationship Layer to illustrate the contextual relationships between these features. For instance, the infrastructure used by Actor A for Traffic Distribution Services (TDS) is linked to Actor C’s infrastructure through a “purchased from” relationship. Consequently, when performing analytical pivoting, analysts should account for this relationship and not attribute all infostealers distributed via the TDS infrastructure solely to Actor A’s capabilities. Similarly, the victims of those infostealers should not be automatically classified as Actor A’s victims.

Another illustrative case involves the relationship between the victims of Actor A and Actor D. Actor D obtained initial access through a transaction with Actor A, denoted by the “purchased from” relationship within the Relationship Layer. This relationship offers analysts crucial context, allowing them to avoid attributing the tools used in the initial access phase to Actor D’s capabilities.
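As a sketch of how relationship-aware pivoting might work in practice, traversal can stop at edges carrying transactional labels such as “purchased from,” so observables on the far side are not folded into the same actor’s profile. All node and edge names below are hypothetical:

```python
# Hedged sketch: pivoting that consults a Relationship Layer. Edges labeled
# with transactional relationships mark boundaries where discovered nodes
# belong to a different actor's diamond. All names are invented.
from collections import deque

# (source, target, relationship) triples; None = same-actor association
edges = [
    ("actorA_tds_infra", "actorC_infra", "purchased from"),
    ("actorC_infra", "other_infostealer", None),  # unrelated family on shared TDS
    ("actorA_tds_infra", "actorA_stealer", None),
    ("actorA_stealer", "victim1", None),
]

def pivot_same_actor(start: str) -> set:
    """Traverse only edges with no transactional relationship label."""
    adj = {}
    for src, dst, rel in edges:
        adj.setdefault(src, []).append((dst, rel))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for dst, rel in adj.get(node, []):
            if rel is not None:
                continue  # boundary: dst sits in another actor's diamond
            if dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

reachable = pivot_same_actor("actorA_tds_infra")
```

Unlike the naive traversal, this walk reaches Actor A’s own stealer and victim but does not absorb the unrelated infostealer served through Actor C’s rented infrastructure into Actor A’s capability set.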

The Relationship Layer also elucidates the connections between adversaries. On the graph, we denote these inter-adversary connections as “commercial relationships,” providing additional context that aids in actor profiling. This extended understanding allows analysts to discern the nature of interactions between threat actors, facilitating more accurate and insightful profiling efforts.

Integrating the Relationship Layer with the Cyber Kill Chain

The Cyber Kill Chain framework serves as a structured approach to analyzing cyberattacks, enabling security professionals to break down intrusions into discrete, sequential stages — from initial reconnaissance to actions on objectives. By organizing attacks in this manner, analysts can pinpoint attacker behaviors, anticipate adversary actions and develop targeted mitigation strategies, significantly enhancing overall threat intelligence.

Integrating the extended Diamond Model into the Cyber Kill Chain framework offers a more comprehensive view of compartmentalized campaigns by illustrating how each adversary contributes to different stages of an attack. This combined perspective enhances understanding by mapping out the intricate web of relationships among multiple threat actors, thereby providing a clearer picture of how resources, capabilities and infrastructure are shared or transferred throughout an attack’s lifecycle. Figure 5 illustrates the integration of the extended Diamond Model with the Cyber Kill Chain using the Actor A–D example.

[Figure 5: The extended Diamond Model integrated with the Cyber Kill Chain]

The example above demonstrates the distinct roles that each adversary assumes at various stages of the kill chain in a hypothetical campaign. In this scenario, the victim is initially compromised by an infostealer, which Actor A acquired from Actor B, and subsequently faces a ransomware attack orchestrated by Actor D. To further enrich the analysis, we highlight the “handover” relationship between Actor C and Actor A, emphasizing its significance as both actors’ activities manifest within the targeted environment. This approach provides a more comprehensive view of the attack flow, allowing for a deeper understanding of how adversarial interactions and transitions unfold throughout the campaign.

This enriched view not only clarifies attacker tradecraft but also bolsters actor profiling and attribution efforts. By aligning specific tactics and resources with the threat groups deploying them, analysts can more accurately trace operations back to their origins. This approach also provides insights into adversary motivations, allowing defenders to tailor their response strategies effectively. For instance, understanding that an IAB is financially motivated might suggest a lower immediate threat to certain targets, while recognizing that access has been sold to a state-sponsored actor would escalate the priority of the threat response.

Identifying compartmentalized attacks

Identifying compartmentalization within the scope of an intrusion typically involves trying to determine where positive control is transferred between adversaries either pre- or post-compromise. It is essential to identify compartmentalization as this will significantly impact the overall understanding of the adversar(ies) and the capabilities available to them. Indicators of collaboration among distinct threat actors can vary significantly depending on the context and the phase of activity, and these can be categorized based on whether the actions occur before or after the compromise of a system or environment. It is important to note that while there are several examples listed in the following sections, compartmentalization can and does look different across intrusions and these are by no means comprehensive. Likewise, while the below elements are useful indicators that an analyst should investigate possible transfer of access, they are not necessarily indicative that a handoff has occurred. As more of these elements are encountered and evidence collected, an analyst may be able to strengthen their assessment that compartmentalization has occurred.
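Since no single indicator confirms a handoff, one way to think about the assessment is as accumulating weak, independent evidence. The indicator names and weights below are assumptions for illustration, not a Talos scoring scheme:

```python
# Illustrative sketch: each observed indicator weakly supports a handoff
# hypothesis; confidence grows as independent indicators accumulate.
# Indicator names and weights are invented for demonstration.
INDICATOR_WEIGHTS = {
    "tooling_matches_dnm_listing": 0.2,    # capability advertised on a marketplace
    "shared_delivery_infrastructure": 0.2, # infra serves unrelated families
    "dormancy_then_new_c2": 0.3,           # long quiet gap, then fresh C2
    "ttp_shift_mid_intrusion": 0.3,        # different tradecraft after a point
}

def handoff_confidence(observed: set) -> float:
    """Combine indicator weights as independent evidence (noisy-OR)."""
    confidence = 0.0
    for name in observed:
        weight = INDICATOR_WEIGHTS.get(name, 0.0)
        confidence = confidence + weight * (1.0 - confidence)
    return round(confidence, 3)
```

The noisy-OR combination mirrors the text: any one indicator leaves the assessment tentative, while several together push it toward a stronger conclusion without ever reaching certainty.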

Pre-compromise

In the early stages of an intrusion, compartmentalization can often be identified by observing how tooling has been sourced, how malicious content is being delivered to potential victims and the initial/early execution flow of malicious components in the case that code execution has been achieved.

This stage may also be completely independent. In situations where a state-sponsored group tasked with an espionage operation passes its access on to a ransomware group, the state-sponsored group becomes an IAG. The ransomware group may not be aware of the nature of its IAG, but simply by carrying out its own activity it fulfills the state-sponsored group’s objective of complicating incident analysis and attribution.

Shared tooling

While many of the indicators associated with the use of tooling are often identified in later stages of an intrusion, we characterize this compartmentalization as occurring pre-compromise as development and procurement activities must generally occur before the campaign is launched. It is often useful to identify if the threat actor procured tooling from third parties. This may involve identifying key characteristics of the malicious components being analyzed and searching/monitoring hacking forums and darknet marketplaces (DNMs) to identify whether a seller is advertising a capability matching the one used in the intrusion. Likewise, malware that has historically been used by one threat actor may be transferred to another threat actor, either on purpose or inadvertently in the case of source code leaks. In either case, analysis of contextual information surrounding the use of the tooling can help analysts identify when the tooling doesn’t match the threat actors’ known TTPs.

Shared delivery infrastructure

In the case of email-based delivery, analysis of the infrastructure used to send malicious emails, the content of the message, and the infrastructure used for hosting and delivering payloads may indicate that delivery has been outsourced in some capacity. Likewise, in the case of malvertising campaigns, analysis of the ad campaigns, traffic distribution infrastructure and gating methodologies may suggest the same. In many cases the infrastructure used is often observed distributing multiple distinct, otherwise unrelated malware families over a short period of time as the threat actor operating the delivery infrastructure may conduct business with multiple entities at any point in time. Analyzing activity associated with this infrastructure before, during, and after the intrusion may inform the analysis of whether compartmentalization has occurred.
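One heuristic that follows from this observation is to flag infrastructure seen serving several unrelated malware families within a short window. The sighting data below is invented for illustration:

```python
# Hypothetical sketch: flag delivery infrastructure serving multiple
# unrelated malware families in a short window -- a possible sign that
# distribution is rented out. All sightings are invented examples.
from collections import defaultdict
from datetime import date

# (infrastructure, malware_family, first_seen)
sightings = [
    ("tds.example", "stealer_x", date(2025, 3, 1)),
    ("tds.example", "loader_y",  date(2025, 3, 4)),
    ("tds.example", "rat_z",     date(2025, 3, 9)),
    ("c2.example",  "rat_z",     date(2025, 3, 2)),
]

def shared_infra(rows, min_families: int = 3, window_days: int = 14) -> list:
    """Return infrastructure serving >= min_families within window_days."""
    by_infra = defaultdict(list)
    for infra, family, seen in rows:
        by_infra[infra].append((seen, family))
    flagged = []
    for infra, obs in by_infra.items():
        obs.sort()
        dates = [d for d, _ in obs]
        families = {f for _, f in obs}
        if len(families) >= min_families and (dates[-1] - dates[0]).days <= window_days:
            flagged.append(infra)
    return flagged
```

A flag from this heuristic does not prove outsourcing on its own; it marks infrastructure whose activity before, during, and after the intrusion deserves the closer analysis described above.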

Shared droppers/downloaders

When analyzing an intrusion, there is often a point at which code execution is achieved. This may be the point in which a malicious script-based component is delivered and executed by a victim. In many cases, these function as downloaders and are solely responsible for retrieving or extracting and executing follow-on payloads that allow an adversary to expand their ability to operate in an environment. Analysis of the dropper/downloader mechanisms used may identify cases where the same mechanism is used to deliver unrelated threats over time, indicating that delivery may have been outsourced. We have categorized this activity as “pre-compromise” to further differentiate it from handoffs that may occur later in the intrusion, once persistence has been achieved, etc.

Post-compromise 

In addition to the aforementioned types of compartmentalization that often occur early in an intrusion, there is another set of handoffs that may occur once an adversary has achieved compromise. These are typically used to transfer control of access from one party to another and may be performed for a variety of purposes, as described in our previous blog. This activity can often be identified by analyzing handoff behaviors, the motivation of the threat actors involved, and monitoring for typical indicators that an IAB is involved.

Handoff behaviors

In some cases, information can be collected about the time elapsed between an IAB obtaining access to the environment and the beginning of follow-on activity. A typical pattern: the IAB gains access, establishes persistence, collects information from the environment, and exfiltrates it to adversary C2. Following this initial activity, very little malicious activity may occur on the system for an extended period, aside from periodic C2 polling. After that lull, additional malicious components may be delivered that establish new C2 connections, and new activity may be observed. This pattern indicates that a handoff of access may have occurred and should be investigated further. Similarly, comparing the threat actor’s behaviors before and after the suspected handoff may strengthen or weaken the assessment, as completely different TTPs may be observed between the threat actors involved.
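The timing pattern described above can be checked programmatically. A minimal sketch, using invented timestamps, event labels, and a threshold chosen only for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical per-host event timeline (timestamp, event type). Routine C2
# polling is assumed to be filtered out already; we only compare
# substantive actions.
events = [
    (datetime(2025, 1, 1, 9, 0), "initial_access"),
    (datetime(2025, 1, 1, 11, 0), "credential_exfil"),
    (datetime(2025, 1, 22, 14, 0), "new_c2_connection"),   # ~3 weeks later
    (datetime(2025, 1, 23, 8, 0), "lateral_movement"),
]

def dormancy_gaps(timeline, threshold=timedelta(days=7)):
    """Flag long lulls between substantive actions; such gaps can indicate
    a handoff of access between distinct threat actors."""
    gaps = []
    for (t1, e1), (t2, e2) in zip(timeline, timeline[1:]):
        if t2 - t1 >= threshold:
            gaps.append((e1, e2, t2 - t1))
    return gaps

for before, after, gap in dormancy_gaps(events):
    print(f"{gap.days}-day lull between '{before}' and '{after}'")
```

In this toy timeline the 21-day gap between credential exfiltration and a new C2 connection is exactly the kind of lull worth investigating.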

The race to domain admin

Another set of characteristics that may strengthen an assessment that a handoff has occurred comes from analyzing the series of actions taken once access has been gained. In the case of FIA, for instance, we often observe repeatable processes for gaining domain administrator access as quickly as possible. This makes the access more lucrative for the broker and more seamlessly enables the deployment of additional malware components, such as ransomware. An FIA group may progress from initial access to domain administrator access in a short period of time, with little to no effort spent on identifying high-value targets in the environment. Once domain administrator access has been gained, the intrusion activity may stop while the threat actor monetizes that access and facilitates the handoff to the threat actor who ultimately purchases it. SIA groups, on the other hand, may take a steadier, stealth-oriented approach, conducting reconnaissance and proliferating throughout the victim enterprise without being detected. In many instances, an SIA group might conduct initial exfiltration of restricted data before handing access off to the secondary threat actor.

Dark web tracking

Monitoring hacking forums and darknet marketplaces can be extremely valuable for identifying when an IAB is involved in an intrusion. Since FIA brokers are primarily focused on maximizing profit as quickly as possible, they often post advertisements for the access they have obtained. In many cases these advertisements include generic information about the company or organization involved, such as size (number of employees), rounded financial figures based on publicly available sources such as quarterly filings, and industry. Locating advertisements that match the profile of an intrusion victim can strengthen an assessment that an IAB is involved, and it opens additional intelligence collection avenues for learning more about the IAB, who they typically work with, and more.
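One way to operationalize this matching is a crude similarity score between a scraped advertisement and a victim profile. The field names, values, and tolerance below are assumptions for the sketch, not a real schema:

```python
# Hypothetical IAB advertisement scraped from a forum, and the profile of
# an intrusion victim. All fields are illustrative.
ad = {"industry": "healthcare", "employees": 5000, "revenue_musd": 900, "country": "US"}
victim = {"industry": "healthcare", "employees": 5210, "revenue_musd": 870, "country": "US"}

def match_score(ad, victim, tolerance=0.15):
    """Crude similarity score: exact match on categorical fields, relative
    closeness on numeric ones (IABs usually round figures taken from
    public sources such as quarterly filings)."""
    score, checks = 0, 0
    for key in ("industry", "country"):
        checks += 1
        score += ad[key] == victim[key]   # bool adds as 0 or 1
    for key in ("employees", "revenue_musd"):
        checks += 1
        if abs(ad[key] - victim[key]) <= tolerance * victim[key]:
            score += 1
    return score / checks

print(f"match score: {match_score(ad, victim):.2f}")
```

A high score across many ads is still only circumstantial; the value is in prioritizing which advertisements merit manual review.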

C2 analysis

Analysis of the C2 infrastructure involved throughout the intrusion presents another opportunity to identify any handoffs that have occurred. As previously mentioned, in some cases the handoff is performed by delivering a new payload and establishing a new C2 connection with another threat actor’s infrastructure. In the case of C2 frameworks, analysis of server logs can provide additional information where the same server has been used to administer multiple victims. Administrative panels used to manage malware infections are often useful for informing analysis of the nature of the threat actors involved and the business models they operate within. Some admin panels are explicitly built to facilitate handoffs; RaaS and C2aaS platforms are examples of this.
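A simple illustration of the server-log angle: given recovered panel log lines, count the distinct victims each C2 server administers. The log format here is invented; real framework logs vary widely:

```python
from collections import defaultdict

# Hypothetical framework server log lines: "timestamp server_id victim_id action".
log_lines = [
    "2025-03-01T10:00Z srv-1 victim-A checkin",
    "2025-03-01T10:05Z srv-1 victim-B checkin",
    "2025-03-02T09:30Z srv-1 victim-C task",
    "2025-03-02T11:00Z srv-2 victim-D checkin",
]

def victims_per_server(lines):
    """Map each C2 server to the distinct victims it administers. A server
    managing many unrelated victims hints at a shared panel or an
    as-a-service offering rather than a single dedicated operator."""
    seen = defaultdict(set)
    for line in lines:
        _, server, victim, _ = line.split()
        seen[server].add(victim)
    return {s: sorted(v) for s, v in seen.items()}

print(victims_per_server(log_lines))
```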

Case Study: ToyMaker

During the course of performing threat hunting and incident response, Cisco Talos sometimes encounters scenarios where compartmentalized operations involve multiple attackers participating in the same attack kill chain. Using the ToyMaker campaign as an example, we demonstrate how we identified the participation of various attackers during our investigation and utilized the extended Diamond Model to clarify the distinct activities and roles of these attackers across different stages of the attack kill chain.

APT, Cactus or FIA? 

Talos investigated the ToyMaker campaign in 2023. The attackers conducted operations for six consecutive days, during which they compromised a server of the victim organization, exfiltrated credentials, and deployed the proprietary LAGTOY backdoor. We consider this the “first wave” of post-compromise activity. Since we did not find any common financially motivated malware in this attack, and the attackers used proprietary tools and C2 infrastructure, we considered the possibility that it might be the activity of an APT group. However, the TTPs and indicators of compromise (IOCs) did not overlap with previously observed campaigns, so we did not attribute the campaign early in the investigation.

Later in the investigation, Talos identified TTPs and hands-on-keyboard activity consistent with Cactus ransomware appearing in the victim’s network almost three weeks after the initial compromise. We consider this the “second wave” of malicious activity. After using various tools for lateral movement within the network, the attackers launched a ransomware attack within a matter of days. At this point, Talos started a more in-depth investigation, exploring the connections and disparities between the ransomware attack and the initial access. We formulated several hypotheses:

  • Hypothesis A: Both the initial compromise and subsequent activities were conducted by Cactus ransomware, and therefore LAGTOY might be a tool exclusively used by Cactus.
  • Hypothesis B: The initial access might have been carried out by a different attack group and have no relation to Cactus’s activities.
  • Hypothesis C: The initial access might have been carried out by a different attack group, but there is some connection to Cactus.

Hypothesis A was the most intuitive assumption at the beginning of the investigation. However, as the investigation progressed, Talos made the following observations:

  • Removal of the created user account: Before the actions following the initial access ceased, the attackers deleted the user account they had created.
  • Differences in TTPs: Variations in TTPs were observed between the two attack traces, either through differing approaches to similar TTPs or entirely distinct TTPs. For instance, the operators conducting initial access relied on PuTTY for credential exfiltration, while the secondary activity employed Secure Shell (SSH) alongside other tools. In terms of file packaging, the second wave utilized parameters that preserved file paths (-spf), a method not seen in the first set of actions. Furthermore, the second wave predominantly involved off-the-shelf tools, whereas the first wave featured bespoke tools unique to the attackers.
  • No tools and IoC overlapping: We found no common tools and shared infrastructure between the two waves of malicious activity.
  • No use of LAGTOY: Although the first wave deployed LAGTOY, it was never used throughout the course of the intrusion. Why would a threat actor deploy custom-made malware immediately after initial compromise but never use it? LAGTOY might have been designated as a last-resort access channel in case the attackers’ access through compromised credentials was blocked. It is also likely that LAGTOY was never meant to be used as the intrusion progressed; that is, it was deployed by a distinct initial access threat actor, different from Cactus. Furthermore, we had no evidence of Cactus developing or using LAGTOY in their operations. Our assessment was now leaning towards Hypothesis B: the initial access might have been carried out by a different attack group with no relation to Cactus’s activities.
  • Time gap between the first and second waves: There was a gap of approximately three weeks with no observed attack activity before the second wave began. For big-game double-extortion threat actors, speed is paramount: a successful initial compromise must be capitalized on through rapid reconnaissance, endpoint and file enumeration, data exfiltration, and ransomware deployment. For operations that tend to rely on a blitz, a weeks-long lull in activity is abnormal. Therefore, we had to consider the possibility of a handoff of access between two distinct threat actors conducting the first and second waves. Furthermore, a gap of three weeks suggests that the first threat actor did not have a secondary actor already aligned and available for immediate access; they had to find Cactus. Talos’ assessment was now leaning towards Hypothesis C: the initial access might have been carried out by a different attack group, but with some connection to Cactus.
  • Shared credentials: Within the first six days of activity, we observed credential harvesting and exfiltration. Three weeks later, the second wave began, which we attributed to Cactus. This second wave was kickstarted using the same credentials stolen in the first wave. There was therefore indeed a connection between the two waves of activity: the shared stolen credentials.

The totality of patterns and abnormalities collected during our research shifted our assessment toward the hypothesis involving an initial access group, leading us to reanalyze the LAGTOY tool used in the first wave of post-compromise activity. We discovered that this backdoor is the same as HOLERUN, which Mandiant reported as being used by UNC961. This finding, combined with previous public reporting and our observations, allowed us to confirm that the attack involved two distinct attacker groups (ToyMaker, aka UNC961, and Cactus).

Mandiant’s public reporting noted that UNC961’s intrusion activities often preceded the deployment of Maze and Egregor ransomware by distinct follow-on actors. While Egregor is considered a direct successor to Maze, there is no evidence indicating any connection to Cactus. In the campaign we investigated, Cactus used compromised credentials from the first wave of attacks on the victim’s machine. Based on these findings, Talos assesses with high confidence that ToyMaker provided initial access for the Cactus group. Given ToyMaker’s focus on financial gain and their history of selling initial access to ransomware groups, we classify them as an FIA group.

Leveraging the extended Diamond Model for further analysis and defensive strategy


Building on the analysis and context provided, the extended Diamond Model allows Talos to effectively represent the threat actors involved in this campaign, highlighting the intricacies of their collaborative relationships. In Figure 6, we utilize two distinct diamonds to symbolize the ToyMaker group and the Cactus ransomware group. The Relationship Layer plays a crucial role in delineating the connections between ToyMaker’s victims and Cactus’ victims, as well as illustrating the initial access provider-receiver dynamics.

These relationships underscore the importance of carefully reviewing and investigating any capabilities and infrastructure indicators identified on the victim’s machine associated with either threat actor. For example, the hosts infected by LAGTOY are potentially at risk of ransomware attacks, or tools discovered on Cactus’ victims might be from LAGTOY or potentially other initial access groups. 

We can also leverage the relationship information provided by the extended Diamond Model to identify additional potential victims of Cactus ransomware by hunting for hosts infected with the LAGTOY backdoor. Similarly, examining victims associated with ToyMaker can lead to discovering other ransomware attack victims. For defenders, this relationship data is crucial for prioritizing detection efforts and ensuring that the activities of ToyMaker and other initial access groups are not overlooked, as they can serve as precursors to further attacks. By maintaining vigilance and focusing on these initial access indicators, security teams can proactively identify and mitigate threats before they escalate into full-blown ransomware incidents.

Cisco Talos Blog – ​Read More

The ransomware landscape in 2025 | Kaspersky official blog

May 12 is World Anti-Ransomware Day, established in 2020 by INTERPOL and Kaspersky. To mark the occasion, we discuss the trends that can be traced in ransomware incidents, and which serve as proof that negotiating with attackers and paying in cryptocurrency are becoming an increasingly bad idea.

Low quality of decryptors

When a company’s infrastructure is encrypted as a result of an attack, the first thing a business wants to do is to get back to normal operations by recovering data on workstations and servers as quickly as possible. From the ransom notes, it may seem that, after paying the ransom, the company will receive a decryptor app that will quickly return all the information to its original state and allow resuming work processes almost painlessly. In practice, this almost never happens.

First, some extortionists simply deceive their victims and don’t send a decryptor at all. Such cases became widely known, for example, thanks to the leak of internal correspondence of the Black Basta ransomware group.

Second, cybercriminals specialize in encryption, not decryption, so they put little effort into their decryptor applications; the result is that these work poorly and slowly. It may turn out that restoring data from a backup copy is much faster than using the attackers’ utility. Their decryptors often crash when encountering exotic file names or access-rights conflicts (or simply for no apparent reason), and they have no mechanism for resuming decryption from the point where it was interrupted. Sometimes, due to faulty logic, they simply corrupt files.

Repeated attacks

It’s common knowledge that a blackmailer rarely stops at a single demand, and ransomware extortion is no different. Cybercriminal gangs communicate with each other, and “affiliates” switch between ransomware-as-a-service providers. In addition, when law enforcement agencies successfully stop a gang, they’re not always able to arrest all of its members, and those who’ve evaded capture take up their old tricks in another group. As a result, information about someone successfully collecting a ransom from a victim becomes known to the new gang, which tries to attack the same organization – often successfully.

Tightening of legislation

Modern attackers not only encrypt, but also steal data, which creates long-term risks for a company. After a ransomware attack, a company has three main options:

  • publicly report the incident and restore operations and data without communicating with the cybercriminals;
  • report the incident, but pay a ransom to restore the data and prevent its publication;
  • conceal the incident by paying a ransom for silence.

The latter option has always been a ticking time bomb – as the cases of Westend Dental and Blackbaud prove. Moreover, many countries are now passing laws that make such actions illegal. For example:

  • the NIS2 (network and information security) directive and DORA (Digital Operational Resilience Act) adopted in the EU require companies in many industries, as well as large and critical businesses, to promptly report cyber incidents, and also impose significant cyber resilience requirements on organizations;
  • a law is being discussed in the UK that would prohibit government organizations and critical infrastructure operators from paying ransoms, and would also require all businesses to promptly report ransomware incidents;
  • the Cybersecurity Act has been updated in Singapore, requiring critical information infrastructure operators to report incidents, including ones related to supply-chain attacks and to any customer service interruptions;
  • in the U.S., a package of federal directives and state laws prohibiting large payments (more than $100,000) to cybercriminals and requiring prompt reporting of incidents is under discussion and has been partially adopted.

Thus, even after successfully recovering from an incident, a company that secretly paid extortionists risks facing unpleasant consequences for many years to come if the incident becomes public (for example, after the extortionists are arrested).

Lack of guarantees

Often, companies pay not for decryption, but for an assurance that stolen data won’t be published and that the attack will remain confidential. But there’s never any guarantee that this information won’t surface somewhere later. As recent incidents show, disclosure of the attack itself and stolen corporate data can be possible in several scenarios:

  • As a result of an internal conflict among attackers. For example, due to disagreements within a group or an attack by one group on the infrastructure of another, victims’ data may be published out of revenge, or leaked to help destroy the assets of a competing gang. In 2025, victims’ data appeared in a leak of internal correspondence of the Black Basta gang; more victim data was disclosed when the DragonForce group seized and destroyed the infrastructure of two rivals, BlackLock and Mamona. On May 7, the Lockbit website was hacked and data from the admin panel was made publicly available, listing and describing in detail all the group’s victims over the past six months.
  • During a raid by law enforcement agencies on a ransomware group. The police, of course, won’t publish the data itself, but the fact that the incident took place will be disclosed. This is how Lockbit victims became known last year.
  • Due to a mistake made by the ransomware group itself. Ransomware groups’ infrastructure is often not particularly well protected, and the stolen data can be accidentally found by security researchers, competitors, or just random people. The most striking example was a giant collection of data stolen from five large companies by various ransomware gangs, and published in full by the hacktivist collective DDoSecrets.

Ransomware may not be the main problem

Thanks to the activities of law enforcement agencies and the evolution of legislation, the portrait of a “typical ransomware group” has changed dramatically. The activity of large groups typical of incidents in 2020-2023 has decreased, and ransomware-as-a-service schemes have come to the fore, in which the attacking party can be very small teams or even individuals. An important trend has emerged: as the number of encryption incidents has increased, the total amount of ransoms paid has decreased. There are two reasons for this: firstly, victims increasingly refuse to pay, and secondly, many extortionists are forced to attack smaller companies and ask for a smaller ransom. More detailed statistics can be found in our report on Securelist.

But the main change is that there are more cases where attackers have mixed motives: for example, the same group conducts espionage campaigns while also infecting infrastructure with ransomware. Sometimes the ransomware serves only as a smokescreen to disguise espionage, and sometimes the attackers are apparently fulfilling someone’s order for information extraction, using extortion as an additional source of income. For business owners and managers, this means that in a ransomware incident it’s impossible to fully understand the attacker’s motivation or verify their reputation.

How to deal with a ransomware incident

The conclusion is simple: paying money to ransomware operators may not be the solution, but rather a prolongation and deepening of the problem. The key to a quick business recovery is a response plan prepared in advance.

Organizations need to implement detailed plans for IT and infosec departments to respond to a ransomware incident. Special attention should be given to scenarios for isolating hosts and subnets, disabling VPN and remote access, and deactivating accounts (including primary administrative ones), with a transition to backup accounts. Regular training on restoring backups is also a good idea. And don’t forget to store those backups in an isolated system where they cannot be corrupted by an attack.

To put these measures in place and be able to respond quickly, before an attack affects the entire network, a continuous deep-monitoring process is necessary: large companies will benefit from an XDR solution, while smaller businesses can get high-quality monitoring and response by subscribing to an MDR service.

Kaspersky official blog – ​Read More

Catching a phish with many faces

Here’s a brief dive into the murky waters of shape-shifting attacks that leverage dedicated phishing kits to auto-generate customized login pages on the fly

WeLiveSecurity – ​Read More

The IT help desk kindly requests you read this newsletter


Welcome to this week’s edition of the Threat Source newsletter. 

Authority bias is one of the many things that shape how we think. Taking the advice of someone with recognized authority is often far easier (and usually leads to a better outcome) than spending time and effort in researching the reasoning and logic behind that advice. Put simply, it’s easier to take your doctor’s advice on health matters than it is to spend years in medical school learning why the advice you received is necessary. 

This tendency to respect and follow authoritative instructions translates into our use of computers, too. If you’re reading this, you’ve likely been the recipient of many questions about computer-related matters from friends and family. However, your trust can be abused, even by someone who seems knowledgeable and respectable. 

Attackers have learned that by impersonating individuals with some form of authority, such as banking staff, tax officials or IT professionals, they can persuade victims to carry out actions against their own interests. In our most recent Incident Response Quarterly Trends update, we describe how ransomware actors masquerade as IT agents when contacting their victims, instructing them to install remote access software. This allows the threat actor to set up long-term access to the device and continue the pursuit of their malicious objectives. 

If someone contacts you out of the blue professing to be an IT, bank, or tax expert with urgent or helpful instructions, end the conversation immediately. Follow up with a call to the main contact details of the team or organization that supposedly contacted you to verify whether the call was genuine. Learn about the scams the bad guys are using, and spread that awareness far and wide. Expect threat actors to keep exploiting the vulnerabilities of human nature itself.

The one big thing 

Threat hunting is an integral part of any cyber security strategy because identifying potential incursions early allows issues to be swiftly resolved before harm is incurred. There are many different approaches to threat hunting, each of which may uncover different threats.

Why do I care? 

As threat actors increasingly use living-off-the-land binaries (LOLBins) — i.e. using either dual-use tools or the tools that they find already in place on compromised systems — detecting the presence of an intruder is no longer a case of simply finding their malware.  

Spotting bad guys is still possible, but requires a slightly different approach: either looking for evidence of the potential techniques they use, or finding evidence that things aren’t quite as they should be. 

So now what? 

Read about the different types of threat hunting strategies the Talos IR team uses and investigate how these can be used within your environment to improve your chances of finding incursions early.

Top security headlines of the week 

MySQL turns 30  
The popular database was first released on May 23, 1995, and is at the heart of many high-traffic applications such as Facebook, Netflix, Uber, Airbnb, Shopify, and Booking.com. (Oracle)

Disney Slack attack wasn’t Russian protesters, just a Cali dude with malware 
A resident of California has pleaded guilty to conducting an attack in which 1.1 TB of data was stolen. The attack was conducted by releasing a trojan masquerading as an AI art generation application. (The Register)

Ransomware Group Claims Attacks on UK Retailers 
The DragonForce ransomware group says it orchestrated the disruptive cyberattacks that hit UK retailers Co-op, Harrods, and Marks & Spencer (M&S). (Security Week)

Attackers Ramp Up Efforts Targeting Developer Secrets 
Attackers are increasingly seeking to steal secret keys or tokens that have been inadvertently exposed in live environments or published in online code repositories. (Dark Reading)

Can’t get enough Talos? 

Spam campaign targeting Brazil abuses Remote Monitoring and Management tools 
A new spam campaign is targeting Brazilian users with a clever twist — abusing the free trial period of trusted remote monitoring tools and the country’s electronic invoice system to spread malicious agents. Read now

Threat Hunting with Talos IR
Talos recently published a blog on the framework behind our Threat Hunting service, featuring this handy video:

Upcoming events where you can find Talos 

Most prevalent malware files from Talos telemetry over the past week 

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507   
MD5: 2915b3f8b703eb744fc54c81f4a9c67f   
VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507/  
Typical Filename: VID001.exe  
Detection Name: Win.Worm.Bitmin-9847045-0  

SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91   
MD5: 7bdbd180c081fa63ca94f9c22c457376   
VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91   
Typical Filename: img001.exe  
Detection Name: Simple_Custom_Detection  

SHA256: 47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca   
MD5: 71fea034b422e4a17ebb06022532fdde   
VirusTotal: https://www.virustotal.com/gui/file/47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca   
Typical Filename: VID001.exe   
Detection Name: Coinminer:MBT.26mw.in14.Talos 

SHA256: c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0   
MD5: 8c69830a50fb85d8a794fa46643493b2 
VirusTotal: https://www.virustotal.com/gui/file/c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0  
Typical Filename: AAct.exe 
Detection Name: W32.File.MalParent 

Cisco Talos Blog – ​Read More

Spam campaign targeting Brazil abuses Remote Monitoring and Management tools

  • Cisco Talos identified a spam campaign targeting Brazilian users with commercial remote monitoring and management (RMM) tools since at least January 2025. Talos observed the use of PDQ Connect and N-able remote access tools in this campaign. 
  • The spam message uses the Brazilian electronic invoice system, NF-e, as a lure to entice users into clicking hyperlinks and accessing malicious content hosted in Dropbox. 
  • Talos has observed the threat actor abusing RMM tools in order to create and distribute malicious agents to victims. They then use the remote capabilities of these agents to download and install Screen Connect after the initial compromise.
  • Talos assesses with high confidence that the threat actor is an initial access broker (IAB) abusing the free trial periods of these RMM tools. 


Talos recently observed a spam campaign targeting Portuguese-speaking users in Brazil with the intention of installing commercial remote monitoring and management (RMM) tools. The initial infection occurs via specially crafted spam messages purporting to be from financial institutions or cell phone carriers with an overdue bill or electronic receipt of payment issued as an NF-e (see Figures 1 and 2). 

Figure 1. Spam message purporting to be from a cell phone provider. 
Figure 2. Spam message masquerading as a bill from a financial institution. 

Both messages link to a Dropbox file containing the malicious binary installer for the RMM tool. The file names also reference NF-e: 

  • AGENT_NFe_<random>.exe 
  • Boleto_NFe_<random>.exe 
  • Eletronica_NFe_<random>.exe 
  • Nf-e<random>.exe 
  • NFE_<random>.exe 
  • NOTA_FISCAL_NFe_<random>.exe 

Note: <random> means the filename uses a random sequence of letters and numbers in that position. 
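The naming convention above lends itself to a simple detection pattern. The regular expression below is our own approximation built from the observed names, not an official signature:

```python
import re

# Pattern derived from the lure filenames listed above; the <random>
# portion is assumed to be alphanumeric.
NFE_LURE = re.compile(
    r"^(AGENT_NFe|Boleto_NFe|Eletronica_NFe|Nf-e|NFE|NOTA_FISCAL_NFe)"
    r"_?[A-Za-z0-9]+\.exe$",
    re.IGNORECASE,
)

# Illustrative samples: the first, second, and fourth follow the lure
# convention; the third is a benign filename.
samples = ["NFE_8f3a2.exe", "Nf-e91bd.exe", "invoice.pdf", "NOTA_FISCAL_NFe_x1.exe"]
for name in samples:
    print(name, "->", bool(NFE_LURE.match(name)))
```

As with any filename-based rule, this should feed a hunt or an alert for triage rather than an automatic block, since legitimate NF-e software could plausibly use similar names.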

The victims targeted in this campaign are mostly C-level executives and financial and human resources accounts across several industries, including some educational and government institutions. This assessment is based on the most common recipients found in the messages Talos observed during this campaign. 

Figure 3. Targeted recipients.

Abusing RMM tools for profit 

This campaign’s objective is to lure victims into installing an RMM tool, which allows the threat actor to take complete control of the target machine. N-able RMM Remote Access is the most common tool distributed in this campaign; it is developed by N-able, Inc., formerly SolarWinds MSP. N-able is aware of this abuse and took action to disable the affected trial accounts. Another tool Talos observed in some cases is PDQ Connect, a similar RMM application. Both provide a 15-day free trial period.

To assess whether these actors were using trial accounts rather than stolen credentials, Talos checked samples older than 15 days and confirmed that all of them returned errors indicating the accounts were disabled, while samples newer than 15 days were all active.
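That age-based check can be expressed as a small heuristic. The function below is a simplified sketch of the reasoning, with an assumed trial window and invented dates:

```python
from datetime import date

TRIAL_DAYS = 15  # free-trial window offered by the RMM tools discussed above

def likely_trial_account(first_seen: date, still_active: bool, today: date) -> bool:
    """Heuristic from the sample analysis above: an agent older than the
    trial window should be dead if it was registered on a free trial, so
    one that stays active past the window suggests a paid (possibly
    stolen) account instead."""
    age = (today - first_seen).days
    if age <= TRIAL_DAYS:
        return True          # can't tell yet; consistent with a trial
    return not still_active  # expired trial implies a disabled account

today = date(2025, 5, 1)
print(likely_trial_account(date(2025, 4, 1), still_active=False, today=today))  # True
print(likely_trial_account(date(2025, 4, 1), still_active=True, today=today))   # False
```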

Talos also examined the email accounts used to register for the service. They all use free email services such as Gmail or Proton Mail, with usernames following the theme of the spam campaign, with a few exceptions where the threat actors used personal accounts. These exceptions are potentially compromised accounts being abused by the threat actors to create additional trial accounts. Talos did not find any samples in which the registered account was issued by a private company, so we can assess with high confidence that these agents were created using trial accounts rather than stolen credentials.


Talos found no evidence of common post-infection behavior across the affected machines, with most staying infected for days before any further malicious activity was carried out through the tool. However, in some cases, we observed the threat actor installing an additional RMM tool and removing all security tools from the machine a few days after the initial compromise. This is consistent with the actions of initial access broker (IAB) groups. 

An IAB’s main objective is to rapidly create a network of compromised machines and then sell access to the network to third parties. Threat actors commonly use IABs when looking for specific target companies to deploy ransomware on. However, IABs have varied priorities and may sell their services to any threat actors, including state-sponsored actors.  

Adversaries’ abuse of commercial RMM tools has steadily increased in recent years. These tools are of interest to threat actors because they are usually digitally signed by recognized entities and function as fully featured backdoors. They also have little to no cost in software or infrastructure, as both are generally provided by the trial version of the application.

Talos created a trial account to test what features were available to a trial user. In the case of the N-able remote access tool, the trial version offers a full set of features, limited only by the 15-day trial period. Talos was able to confirm that by using a trial account, the threat actor has full access to the machine, including remote desktop-style access, remote command execution, screen streaming, keystroke capture, and remote shell access.

Figure 4. N-able management interface showing available remote access tools. 
Figure 5. Administrative shell executed on a remote machine. 

The threat actor also has access to a fully featured file manager to easily read and write files to the remote file system. 

Figure 6. N-able file manager. 

The network traffic these tools create is also disguised as regular traffic, with many tools using communication over HTTPS and connecting to resources which are part of the infrastructure provided by the application provider. For example, N-able Remote Access uses a domain associated with its management interface, hosted on Amazon Web Services (AWS): 

  • hxxps://upload1[.]am[.]remote[.]management/ 
  • hxxps://upload2[.]am[.]remote[.]management/ 
  • hxxps://upload3[.]am[.]remote[.]management/ 
  • hxxps://upload4[.]am[.]remote[.]management/ 

Disclaimer: The URLs above are part of the management infrastructure for the RMM tools described in this blog and are not controlled by the threat actor. Customers must complete an assessment before enabling block signatures for these domains. 
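Before enabling any blocks, proxy or DNS logs can be screened for connections to these management domains to gauge whether legitimate RMM usage exists in the environment. The snippet below is a minimal illustrative sketch of such a screen, not an official detection; it simply matches log lines against the domains listed above (restored from their defanged form).

```python
import re

# Management domains observed in this campaign. These belong to legitimate
# N-able infrastructure, so a hit indicates RMM agent traffic, not
# necessarily malicious activity -- review before blocking.
RMM_DOMAINS = [
    "upload1.am.remote.management",
    "upload2.am.remote.management",
    "upload3.am.remote.management",
    "upload4.am.remote.management",
]

_pattern = re.compile("|".join(re.escape(d) for d in RMM_DOMAINS))

def find_rmm_connections(log_lines):
    """Return the log lines that reference the RMM management domains."""
    return [line for line in log_lines if _pattern.search(line)]
```

Any matches should be correlated with asset inventory to determine whether the agent was deployed intentionally by IT or by an unauthorized party.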

The domain the agent uses is the same for any customer using the tool, with only the username and API key differentiating which customer the agent belongs to, as can be seen in Figure 7. This makes it even more difficult to identify the origin of the attacks and perform threat actor attribution.

Figure 7. Example configuration file. 

By extracting the configuration files from the agent installers still available on Dropbox, we can see that some email addresses follow the same theme as the spam emails, using finance-related usernames and domains, while others are potentially compromised accounts being used to create trial accounts for N-able Remote Access.
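Harvesting registration addresses from extracted configuration dumps can be scripted. The sketch below is a generic illustration only: the actual field names in the N-able agent configuration are not reproduced here, so it simply collects any email-like strings from the text rather than parsing the real config schema.

```python
import re

# Generic email pattern; this harvests any address-like string from a
# configuration dump rather than relying on (unpublished) field names.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_registration_emails(config_text):
    """Return the unique, sorted email addresses found in a config dump."""
    return sorted(set(EMAIL_RE.findall(config_text)))
```

Addresses recovered this way can then be checked against the spam-campaign naming theme or against known-compromised account lists.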

With these trial versions being limited only by time and providing full remote-control features with little to no cost to the threat actors, Talos expects these tools to become even more common in attacks. 

Cisco Secure Firewall Application Control can detect unintended usage of RMM tools in customers’ networks. Instructions on how to set up Application Control can be found in the Cisco Secure Firewall documentation.

Coverage 

Ways our customers can detect and block this threat are listed below. 


Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here. 

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat. 

Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device. 

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products. 

Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent, and secure access to the internet, cloud services, or private applications no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access.

Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.  

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.  

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.  

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.

ClamAV detections are also available for this threat:  

Txt.Backdoor.NableRemoteAccessConfig-10044370-0
Txt.Backdoor.NableRemoteAccessConfig-10044371-0 
Txt.Backdoor.NableRemoteAccessConfig-10044372-0 

Indicators of Compromise 

Disclaimer: The URLs below are part of the management infrastructure for the RMM tools described in this blog and are not controlled by the threat actor. Customers must complete an assessment before enabling block signatures for these domains. 

IOCs for this threat can be found on our GitHub repository here. 

Network IOCs 

hxxps://upload1[.]am[.]remote[.]management/ 
hxxps://upload2[.]am[.]remote[.]management/ 
hxxps://upload3[.]am[.]remote[.]management/ 
hxxps://upload4[.]am[.]remote[.]management/ 
198[.]45[.]54[.]34[.]bc[.]googleusercontent[.]com 

RMM Installer – Hashes 

03b5c76ad07987cfa3236eae5f8a5d42cef228dda22b392c40236872b512684e 
0759b628512b4eaabc6c3118012dd29f880e77d2af2feca01127a6fcf2fbbf10 
080e29e52a87d0e0e39eca5591d7185ff024367ddaded3e3fd26d3dbdb096a39 
0de612ea433676f12731da515cb16df0f98817b45b5ebc9bbf121d0b9e59c412 
1182b8e97daf59ad5abd1cb4b514436249dd4d36b4f3589b939d053f1de8fe23 
14c1cb13ffc67b222b42095a2e9ec9476f101e3a57246a1c33912d8fe3297878 
2850a346ecb7aebee3320ed7160f21a744e38f2d1a76c54f44c892ffc5c4ab77 
4787df4eea91d9ceb9e25d9eb7373d79a0df4a5320411d7435f9a6621da2fd6b 
51fa1d7b95831a6263bf260df8044f77812c68a9b720dad7379ae96200b065dd 
527a40f5f73aeb663c7186db6e8236eec6f61fa04923cde560ebcd107911c9ff 
57a90105ad2023b76e357cf42ba01c5ca696d80a82f87b54aea58c4e0db8d683 
63cde9758f9209f15ee4068b11419fead501731b12777169d89ebb34063467ea 
79b041cedef44253fdda8a66b54bdd450605f01bbb77ea87da31450a9b4d2b63 
a2c17f5c7acb05af81d4554e5080f5ed40b10e3988e96b4d05c4ee3e6237c31a 
b53f9c2802a0846fc805c03798b36391c444ab5ea88dc2b36bffc908edc1f589 
c484d3394b32e3c7544414774c717ebc0ce4d04ca75a00e93f4fb04b9b48ecef 
ca11eb7b9341b88da855a536b0741ed3155e80fc1ab60d89600b58a4b80d63a5 
d1efebcca578357ea7af582d3860fa6c357d203e483e6be3d6f9592265f3b41c 
e2171735f02f212c90856e9259ff7abc699c3efb55eeb5b61e72e92bea96f99c 
e34b8c9798b92f6a0e2ca9853adce299b1bf425dedb29f1266254ac3a15c87cd 
ebdefa6f88e459555844d3d9c13a4d7908c272128f65a12df4fb82f1aeab139f 
f52b4d81c73520fd25a2cc9c6e0e364b57396e0bb782187caf7c1e49693bebbf 
f5efd939372f869750e6f929026b7b5d046c5dad2f6bd703ff1b2089738b4d9c 
f68ae2c1d42d1b95e3829f08a516fb1695f75679fcfe0046e3e14890460191cf 
a71e274fc3086de4c22e68ed1a58567ab63790cc47cd2e04367e843408b9a065
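
Installer hashes like those above can be swept against a directory of files with a short script. The sketch below is a minimal illustration, seeded with only the first two hashes from the list for brevity; the full set is available in the GitHub repository linked above.

```python
import hashlib

# Subset of the SHA-256 IOC hashes from this post (abbreviated for
# illustration; load the full list from the Talos GitHub repository).
KNOWN_BAD = {
    "03b5c76ad07987cfa3236eae5f8a5d42cef228dda22b392c40236872b512684e",
    "0759b628512b4eaabc6c3118012dd29f880e77d2af2feca01127a6fcf2fbbf10",
}

def sha256_of(path):
    """Compute the SHA-256 digest of a file, streaming in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    """Yield the paths whose SHA-256 matches a known installer hash."""
    for p in paths:
        if sha256_of(p) in KNOWN_BAD:
            yield p
```

Hash matching only catches the exact installer samples listed; behavioral detections (such as the ClamAV signatures and Application Control rules above) remain necessary for repacked variants.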
