Hunter Plan: Fast and Private Threat Analysis for Solo Malware Researchers 

Editor’s note: This article was originally published on November 26, 2020, and updated on August 12, 2025.

If you’re an independent malware analyst or threat researcher, you need a solution that works as hard as you do: one that’s flexible, private, and built for deep, hands-on investigations. 

Hunter puts that power in your hands. With 70% of ANY.RUN’s Interactive Sandbox capabilities, you can dive into advanced investigations, expose hidden threats, and keep every detail locked down. 

Let’s look at why so many solo analysts make Hunter their plan of choice. 

Keep Your Analyses Secure 

Hunter plan advantages you can’t miss 

The Hunter plan gives analysts the privacy they need to work with sensitive samples confidently.  

You decide who can access your submissions, whether you want to keep them completely private, share with a trusted contact, or display them in a controlled presentation mode. 

Benefits of private sandbox analyses 

This control is backed by strong security measures that protect your data at every stage: 

  • Our SOC 2 Type 1 certification is backed by independent assessments, verifying that we have robust controls in place to protect user data, private malware analyses, and system integrity. 
  • Data is encrypted at rest with AES-256, ensuring stored files remain secure against unauthorized access. 

Detect threats faster with ANY.RUN’s Interactive Sandbox
See the full attack chain in seconds for immediate response 



Contact us for a quote or trial


Identify Malicious Files and URLs Faster 

Hunter enables rapid, controlled analysis of suspicious files and URLs across a range of environments, from Windows 7, 10, and 11 to Linux and Android. In most investigations, the sandbox delivers a reliable verdict in under 40 seconds, allowing analysts to act without delay. 

By fully detonating each attack and interacting with it at every stage, you can observe its complete execution chain, including the steps designed to evade automated tools. Detonation actions and environment fine-tuning work together to make threat identification both precise and efficient, even when dealing with multi-layered or highly evasive malware. 

The intuitive interface makes it easy to navigate complex analyses, while helping analysts of all experience levels deepen their expertise with every investigation. 

Real-World Example of a Phishing Attack 

One real-world case shows exactly why this capability matters. 

Real Case Analysis: From Phishing Email to AsyncRAT 

Fake document with malicious PDF displayed inside ANY.RUN sandbox 

A phishing email arrived with an SVG attachment and a password hidden in the message body. Opening the SVG in the sandbox revealed a fake document containing a link to download a PDF. Clicking that link triggered the download of a ZIP archive; one that could only be extracted by manually entering the earlier password. 

Entering password hidden in the message body 

Inside was an executable file. When it ran, ANY.RUN immediately flagged it as AsyncRAT — a remote access trojan capable of spying on and controlling infected systems. 

AsyncRAT detected by ANY.RUN sandbox 

Without interactivity, this chain would have remained hidden. A fully automated tool wouldn’t have clicked the link, copied the password, or opened the archive, leaving the threat undetected.  

Here, the AI Assistant also stepped in to summarize the full chain of actions, making it easier for a junior analyst to quickly understand the threat without manually piecing together every detail. 

Threat summary by AI Assistant  

In this case, the ANY.RUN sandbox provided: 

  • Network activity visibility, enabling the team to block C2 communications before data exfiltration 

Gain Better Visibility into Threat Behavior 

Hunter helps you understand exactly how malware operates, so you can respond with precision. 

Inside the analysis session, you can view MITRE ATT&CK®-mapped TTPs to see which tactics and techniques the threat uses. This makes it easier to assess the attack’s sophistication, connect it to known threat actors, and prioritize the right defensive actions. 

You can also explore attack patterns through the process graph and triggered rules, visualizing every step of the execution chain. This helps analysts quickly grasp complex behaviors, uncover hidden stages, and spot anomalies that might otherwise be missed. 

When the investigation is complete, you can generate detailed reports with IOCs, ready for sharing with colleagues, integrating into SIEM or EDR systems, or using to update detection rules. This ensures your findings don’t just stay in the lab but actively strengthen defenses. 
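As a rough illustration of that hand-off, the sketch below turns an IOC report into a flat blocklist ready for a firewall or SIEM watchlist. The JSON structure and field names here are invented for the example; real ANY.RUN report schemas may differ.

```python
import json

# Hypothetical IOC report extract (field names invented for this example;
# an actual sandbox report will have its own schema).
report_json = """
{
  "verdict": "malicious",
  "iocs": {
    "domains": ["evil.example", "c2.example"],
    "sha256": ["<file-hash>"],
    "ips": ["203.0.113.7"]
  }
}
"""

report = json.loads(report_json)

# Merge network indicators into one deduplicated, sorted blocklist.
blocklist = sorted(set(report["iocs"]["domains"]) | set(report["iocs"]["ips"]))
print("\n".join(blocklist))  # one indicator per line, ready for ingestion
```

In practice the same few lines slot into whatever update mechanism your SIEM or EDR exposes; the point is that a structured report makes findings machine-consumable rather than lab-bound.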

Real-World Example: Gootloader Infection Chain 

A live Gootloader case in the Hunter sandbox begins with a user landing on a compromised website while searching for something business-related, such as a contract template. The site delivers a ZIP file containing a trojanized JavaScript file disguised as a common library like jQuery. Once opened, the script runs via wscript.exe, launching a heavily obfuscated payload. 

Real Case Analysis: Contract Template Search Leads to Gootloader 

Analysis of the Gootloader Node.js malware inside ANY.RUN’s Interactive Sandbox 

The process graph shows the full attack chain: the first-stage payload drops a second-stage JavaScript file, creates a scheduled task for persistence, and hands execution from wscript.exe to cscript.exe, which then spawns a PowerShell process. 

ANY.RUN’s process graph with full attack chain 

Mapped TTPs in the MITRE ATT&CK® section reveal multiple techniques, including system reconnaissance, persistence via scheduled tasks, and data exfiltration through HTTP headers.

PID 7828 with its exposed techniques and tactics inside ANY.RUN sandbox 

At the end of the investigation, a detailed report with IOCs is generated, containing domains, file hashes, and registry keys. These can be shared instantly with your team or imported into security tools to block future attacks. 

Well-structured report generated by ANY.RUN sandbox 

Uncover Malware Designed to Evade Detection 

Some threats are designed to stay hidden, activating only under specific system conditions, locales, or network environments. Hunter equips you with the tools to expose them. 

You can dissect samples in depth by inspecting network traffic, registry modifications, and running processes, giving you a complete picture of the malware’s activity and persistence mechanisms. This visibility is critical for detecting hidden payloads and spotting malicious behavior that traditional scanners might miss. 

Hunter also lets you gather unique IOCs directly from malware configurations and Suricata IDS detections. These high-confidence indicators can be used to update detection rules, block malicious infrastructure, and improve threat-hunting accuracy across your environment. 

Finally, you can investigate in-depth by customizing the OS, installed tools, and network settings. Switch locales, adjust keyboard languages, or route traffic through specific regions using a residential proxy to bypass geofencing. This flexibility enables you to trigger and observe behaviors that would otherwise remain dormant, ensuring no evasion technique goes unnoticed. 

Revealing Geofenced Malware with Locale and Network Routing 

Some malware is geofenced, checking the geolocation of the infected host before delivering a payload. If the system isn’t in a target country, the attack simply won’t proceed. 

With Hunter, you can bypass these restrictions by changing the system locale and routing traffic through another region, either via TOR or a residential proxy. 

In this case, a malicious document with an Italian-language template was analyzed in a default en-US environment. The Regsvr32.exe process launched but didn’t receive any payload, terminating shortly after. Restarting the analysis with the locale set to it-IT and routing traffic through Italy via TOR revealed the hidden threat: Ursnif (Gozi) was successfully downloaded as a payload. 

Detection of Ursnif (Gozi) malware using TOR inside ANY.RUN sandbox 

This combination of environment customization and network rerouting allows analysts to uncover full attack chains, capture critical IOCs, and study malware that would otherwise remain invisible in automated or default setups. 

Scale Early Threat Detection, Reduce Business Risks with Enterprise Plan 

ANY.RUN’s Enterprise plan is a comprehensive solution for SOC teams 

Built for SMBs, large enterprises, MSSPs, and government agencies, the Enterprise plan gives SOC teams the full power of ANY.RUN’s Interactive Sandbox, with advanced capabilities for security, automation, and collaboration. 

☝ Key ANY.RUN stats
  • +36% average detection rate improvement in SOC environments
  • 20% workload reduction for Tier 1 analysts through automated triage
  • 21 minutes faster MTTR per case, boosting overall SOC efficiency
  • Up to 3x overall SOC performance gains when scaling across large teams
  • 30% fewer escalations from Tier 1 to Tier 2, thanks to skill-building through interactive analysis
  • Trusted by 15,000+ organizations across finance, telecom, retail, government, and healthcare

Enterprise is designed for teams that need to investigate faster, work together seamlessly, and stay ahead of evolving threats. 

ANY.RUN’s Enterprise plan provides teamwork functionality 

With Enterprise, you can: 

  • Slash business risk with early threat detection to prevent costly damage to your infrastructure and reputation. 
  • Cut MTTR through quick triage and clear threat insights that speed up decisive threat response. 
  • Increase detection rate by analyzing all types of Windows, Linux (including ARM), and Android files to identify more threats faster. 
  • Enhance productivity by automating routine tasks to help teams focus on critical incidents with less fatigue. 
  • Develop analyst expertise through hands-on, guided analysis that doubles as real-world training and saves resources on onboarding. 
  • Protect sensitive data with private analyses, compliance with strict security frameworks, and isolated working environments. 
  • Collaborate seamlessly with shared investigations, role-based permissions, and productivity tracking for the whole SOC. 

ANY.RUN’s Enterprise plan provides customizable integration with your security stack 

Enterprise provides API/SDK access that lets SOC teams use ANY.RUN’s connectors for popular security solutions such as SIEM, XDR, and TIP systems to streamline workflows and increase response speed even further. 


ANY.RUN cloud interactive sandbox interface

Sandbox for Businesses

Boost performance of your SOC with the Enterprise plan designed for SMBs, MSSPs, enterprise companies, and government organizations.



Case Study: Expertware Cuts Investigation Time by 50% 

Challenge: 
Expertware, a leading European MSSP, needed to accelerate malware investigations, cut down on manual processes, and deliver faster, higher-quality results to its clients. 

Result: 
By adopting ANY.RUN Enterprise, Expertware reduced investigation turnaround time by 50%, boosted SOC efficiency with real-time collaborative analysis and shared reports, and gained complete visibility into multi-stage and fileless attacks, from initial macro execution to C2 communications. These improvements allowed them to deliver clearer, more actionable reports, enabling clients to respond before threats escalated. 

“ANY.RUN’s interactive approach was critical in dissecting a complex multi-stage XLoader campaign and swiftly mitigating its impact across our network.” 
— Expertware, Leading European MSSP 

Ready to Get Started? 

Whether you need the agility of Hunter or the full-scale power of Enterprise, ANY.RUN gives you the solutions to detect, investigate, and stop threats faster. 

Contact us for a trial or a personalized quote today. 

About ANY.RUN  

Designed to accelerate threat detection and improve response times, ANY.RUN equips teams with interactive malware analysis capabilities and real-time threat intelligence.  

ANY.RUN’s cloud-based sandbox supports investigations across Windows, Linux, and Android environments. Combined with Threat Intelligence Lookup and Feeds, our solutions give security teams full behavioral visibility, context-rich IOCs, and automation-ready outputs, all with zero infrastructure overhead.  

Ready to see how ANY.RUN’s services can power your SOC?    

Start your 14-day trial now → 

The post Hunter Plan: Fast and Private Threat Analysis for Solo Malware Researchers  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – Read More

Sleepwalk: a sophisticated way to steal encryption keys | Kaspersky official blog

Information security has multiple layers of complexity. Effective yet technically simple attacks through phishing emails and social engineering are well known. We also often post about sophisticated targeted attacks that exploit vulnerabilities in enterprise software and services. And among the most sophisticated are attacks that exploit fundamental hardware features. Although such attacks aren’t cheap, the cost doesn’t deter all threat actors. Or at least researchers.

Researchers at two US universities recently published a paper with a fascinating example of an attack on hardware. Using the standard operating system feature for switching between tasks, the researchers developed an attack they named Sleepwalk, which can crack a cutting-edge data encryption algorithm.

Side-channeling — sleep-walking

Sleepwalk is a type of side-channel attack. In this context, “side channel” typically refers to any method of stealing secret information by indirect observation. For example, imagine someone is typing a password on a keyboard. You can’t see the letters/symbols, but you can hear the keys being pressed. This is a feasible attack in which the sound of the keystrokes — the side channel — reveals what text is being typed. A classic example of a side channel is monitoring changes in the power consumption of a computer system.

Why does power consumption vary? Simple: different computing tasks require different resources. Serious number crunching will max out the load on the CPU and RAM, while typing in a text editor will see the computer mostly idle. In some cases, changes in power consumption give away sensitive information, such as private keys for data encryption. This is similar to how a few barely audible clicks can reveal the correct rotor positions to pick the combination lock on a safe.

Why are these attacks sophisticated? Because a computer performs multiple tasks simultaneously. And all of them affect power consumption in one way or another. Extracting useful information from this noise is a highly complex job. Even when analyzing the simplest devices such as smart card readers, researchers take hundreds of thousands of measurements in a short period, repeating them tens or hundreds of times, then apply sophisticated signal-processing methods to confirm or refute the possibility of a side-channel attack. Sleepwalk in a sense simplifies this work: the researchers were able to extract useful information by measuring the pattern of power consumption just once, during a so-called context switch.
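The noise-averaging step described above can be illustrated with a toy simulation. This is not the researchers’ actual measurement pipeline — the trace length, noise level, and “leak” position below are all made up — but it shows why averaging many traces makes a tiny signal emerge from noise.

```python
import random

def measure_trace(n=64, noise=3.0, seed=None):
    """Simulate one noisy power trace: a fixed 'leak' buried in Gaussian noise."""
    rng = random.Random(seed)
    leak = [1.0 if i == 40 else 0.0 for i in range(n)]  # true spike at sample 40
    return [v + rng.gauss(0, noise) for v in leak]

def average_traces(k):
    """Average k traces point-wise; noise shrinks roughly as 1/sqrt(k)."""
    n = 64
    acc = [0.0] * n
    for s in range(k):
        trace = measure_trace(n, seed=s)
        acc = [a + v for a, v in zip(acc, trace)]
    return [a / k for a in acc]

avg = average_traces(10_000)
# After enough averaging, the leak at index 40 dominates the residual noise.
print(max(range(64), key=lambda i: avg[i]))
```

A single simulated trace here has a signal-to-noise ratio far too low to read off the spike; ten thousand averaged traces reveal it reliably — which is exactly the laborious process Sleepwalk sidesteps by measuring a single, well-timed context switch.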

Voltage fluctuations during CPU context switching. Source

Context switching

We’re all used to switching between programs on a computer or smartphone. At a deeper level, such multitasking is enabled by various mechanisms behind the scenes, one of which is context switching: the state of one program is saved, while data from another is loaded into the CPU. The decision on which program to give priority to, and when, is made by the operating system. That said, there’s a simple way for a programmer to force a context switch: adding a sleep instruction to the program code. The operating system then sees that the program doesn’t require CPU power for the time being, and switches to another task. A context switch, especially one triggered by a sleep call, is an energy-consuming operation — it requires saving one program’s state and loading another’s into the CPU — and the screenshot above shows a spike in the measured voltage during such a switch.
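Forcing such a switch is trivial from code. The sketch below (a Python stand-in, not the researchers’ setup; `busy_work` is an invented placeholder for the victim computation) shows the pattern: run part of a computation, voluntarily yield to the scheduler with a sleep call, then continue. The yield is the moment an attacker with a power probe would line up their measurement.

```python
import time

def busy_work(n=100_000):
    """Stand-in for the victim computation (e.g., one round of encryption)."""
    acc = 0
    for i in range(n):
        acc = (acc + i * i) % 2**32
    return acc

start = time.perf_counter()
busy_work()
time.sleep(0)   # voluntary yield: the OS saves this task's state and may
                # schedule another task -- the context switch whose power
                # spike Sleepwalk measures
busy_work()
elapsed = time.perf_counter() - start
print(f"two work phases separated by a forced yield took {elapsed:.4f}s")
```

The attacker, of course, doesn’t modify the victim program; the point is only that a sleep call hands control back to the OS at a precisely chosen instant.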

As it turns out, the nature of this power spike is determined both by the task that was running before and by the data being processed. Essentially, the researchers hugely simplified implementing a side-channel attack in which the system’s energy consumption is measured. Instead of measuring over a long period, a single spike is analyzed at a predetermined time. This serves up indirect data of two types: what program was running before the switch, and what data was being processed. All that remains is to carry out the attack according to the scheme below:

Outline of the Sleepwalk attack. Source

Sleepwalk attack in the real world

The researchers did their experiments on a single-board Raspberry Pi 4, demonstrating first of all that the power spike produced by different computing tasks during context switching has a unique fingerprint. Let’s suppose that this computer is performing data encryption. We can feed any text to the encryption algorithm as input, but we don’t know the key for encrypting the data.

What if we trigger a context switch at a specific point in the encryption algorithm’s operation? The operating system will save the state of the program, causing a spike in power consumption. Using an oscilloscope to repeatedly measure the nature of this spike, the researchers were able to extract the secret key!

That was just one of many important findings from the experiment. The researchers also succeeded in fully reconstructing a SIKE private key. SIKE is a fairly new encryption algorithm proposed as a replacement for traditional algorithms to protect data even in the quantum age. Yet despite its apparent innovativeness, questions are already being asked about the algorithm’s strength. Moreover, to extract the secret key, the researchers didn’t just carry out a Sleepwalk attack, but also exploited a weakness in the algorithm itself.

The Sleepwalk attack was unable to fully crack the traditional and reliable (but not post-quantum) AES-128 algorithm. But the team was able to reconstruct 10 of the 16 bytes of the private key — and this in itself is an achievement since Sleepwalk is somewhat simpler than other side-channel attack methods.
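To put that partial AES-128 result in perspective: recovering 10 of the 16 key bytes leaves 6 unknown bytes, shrinking the brute-force search space from 2^128 keys to 2^48 — a quick back-of-the-envelope check:

```python
total_bytes = 16   # AES-128 key length in bytes
recovered = 10     # key bytes reconstructed via Sleepwalk
remaining = total_bytes - recovered

full_space = 2 ** (8 * total_bytes)    # 2**128 candidate keys originally
reduced_space = 2 ** (8 * remaining)   # 2**48 candidate keys left to try

print(f"search space shrinks from 2^{8 * total_bytes} to 2^{8 * remaining}")
print(f"that is a reduction by a factor of 2^{8 * recovered}")
```

A 2^48 search is still substantial, but it sits at the edge of what dedicated hardware can exhaust — a very different proposition from the infeasible 2^128.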

Sure, there’s no talk yet of deploying Sleepwalk in practice. The researchers merely wanted to demonstrate that power spikes during context switching can reveal secret information. Which they did. But bad guys one day might be able to develop the attack so as to steal real secrets — be they from a computer, secure flash drive, or crypto wallet.

As a result of this research, existing and in-development encryption algorithms should become a little more reliable. Not only that, the Sleepwalk attack indirectly highlights a key aspect of implementing cryptographic systems. Future algorithms will need to be resistant to analysis using quantum computing (so-called “post-quantum cryptography”); but no less vitally, this will need to be done correctly. Otherwise, a new, theoretically more secure algorithm may turn out to be more vulnerable to traditional attacks than a pre-quantum one.

Kaspersky official blog – Read More

WinRAR zero-day exploited in espionage attacks against high-value targets

The attacks used spearphishing campaigns to target financial, manufacturing, defense, and logistics companies in Europe and Canada, ESET research finds

WeLiveSecurity – Read More

Update WinRAR tools now: RomCom and others exploiting zero-day vulnerability

ESET Research discovered a zero-day vulnerability in WinRAR being exploited in the wild in the guise of job application documents; the weaponized archives exploited a path traversal flaw to compromise their targets

WeLiveSecurity – Read More

How to implement a blameless approach to cybersecurity | Kaspersky official blog

Even companies with a mature cybersecurity posture and significant investments into data protection aren’t immune to cyber-incidents. Attackers can exploit zero-day vulnerabilities or compromise a supply chain. Employees can fall victim to sophisticated scams designed to breach the company’s defenses. The cybersecurity team itself can make a mistake while configuring security tools, or during an incident response procedure. However, each of these incidents represents an opportunity to improve processes and systems, making your defenses even more effective. This isn’t just a rallying call; it’s a practical approach that has proven successful in other fields, such as aviation safety.

In the aviation industry, almost everyone — from aircraft design engineers to flight attendants — is required to share information to prevent incidents. This isn’t limited to crashes or system failures; the industry also reports potential problems. These reports are constantly analyzed, and safety measures are adjusted based on the findings. According to Allianz Commercial’s statistics, this continuous implementation of new measures and technologies has led to a significant reduction in fatal incidents — from 40 per million flights in 1959 to 0.1 in 2015.

Still in aviation, it was recognized long ago that this model simply won’t work if people are afraid to report procedure violations, quality issues, and other causes of incidents. That’s why aviation standards include requirements for non-punitive reporting and a just culture, meaning that reporting problems and violations shouldn’t lead to punishment. DevOps engineers have a similar principle they call a blameless culture, which they use when analyzing major incidents. This approach is also essential in cybersecurity.

Does every mistake have a name?

The opposite of a blameless culture is the idea that “every mistake has a name”, meaning a specific person is to blame. Under this approach, every mistake can lead to disciplinary action, including termination. This principle is harmful and doesn’t lead to better security:

  • Employees fear accountability and tend to distort facts during incident investigations — or even destroy evidence.
  • Distorted or partially destroyed evidence complicates the response and worsens the overall outcome because security teams can’t quickly and properly assess the scope of a given incident.
  • Zeroing in on one person to blame during an incident review prevents the team from focusing on how to change the system to prevent similar incidents from happening again.
  • Employees are afraid to report violations of IT and security policies, causing the company to miss opportunities to fix security flaws before they lead to a critical incident.
  • Employees have no motivation to discuss cybersecurity issues, coach one another, or correct their coworkers’ mistakes.

To truly enable every employee to contribute to your company’s security, you need a different approach.

The core principles of a just culture

Call it “non-punitive reporting” or a “blameless culture” — the core principles are the same:

  • Everyone makes mistakes. We learn from our mistakes; we don’t punish them. However, it’s crucial to distinguish between an honest mistake and a malicious violation.
  • When analyzing security incidents, the overall context, the employee’s intent, and any systemic issues that may have contributed to the situation all need considering. For example, if a high turnover of seasonal retail employees prevents them from being granted individual accounts, they might resort to sharing a single login for a point-of-sale terminal. Is the store administrator at fault? Probably not.
  • Beyond just reviewing technical data and logs, you must have in-depth conversations with everyone involved in an incident. For this you should create a productive and safe environment where people feel comfortable sharing their perspectives.
  • The goal of an incident review should be to improve behavior, technology, and processes in the future. For serious incidents, the process should be split in two: an immediate response to mitigate the damage, and a postmortem analysis to improve your systems and procedures.
  • Most importantly, be open and transparent. Employees need to know how reports of issues and incidents are handled, and how decisions are made. They should know exactly who to turn to if they see or even suspect a security problem. They need to know that both their supervisors and security specialists will support them.
  • Confidentiality and protection. Reporting a security issue should not create problems for the person who reported it or for the person who may have caused it — as long as both acted in good faith.

How to implement these principles in your security culture

Secure leadership buy-in. A security culture doesn’t require massive direct investment, but it does need consistent support from the HR, information security, and internal communications teams. Employees also need to see that top management actively endorses this approach.

Document your approach. The blameless culture philosophy should be captured in your company’s official documents — from detailed security policies to a simple, short guide that every employee will actually read and understand. This document should clearly state the company’s position on the difference between a mistake and a malicious violation. It should formally state that employees won’t be held personally responsible for honest errors, and that the collective priority is to improve the company’s security, and prevent future recurrences.

Create channels for reporting issues. Offer several ways for employees to report problems: a dedicated section on the intranet, a specific email address, or the option to simply tell their immediate supervisor. Ideally, you should also have an anonymous hotline for reporting concerns without fear.

Train employees. Training helps employees recognize insecure processes and behaviors. Use real-world examples of problems they should report, and walk them through different incident scenarios. You can use our online Kaspersky Automated Security Awareness Platform to organize these cybersecurity-awareness training sessions. Motivate employees to not only report incidents, but also to suggest improvements and think about how to prevent security problems in their day-to-day work.

Educate your leadership. Every manager needs to understand how to respond to reports from their team. They need to know how and where to forward a report, and how to avoid creating blame-focused islands in a sea of just culture. Teach leaders to respond in a way that makes their coworkers feel supported and protected. Their reactions to incidents and error reports need to be constructive. Leaders should also encourage discussions of security issues in team meetings to normalize the topic.

Develop a fair review procedure for incidents and security-issue reports. You’ll need to assemble a diverse group of employees from various teams to form a “no-blame review board”. It will be responsible for promptly processing reports, making decisions, and creating action plans for each case.

Reward proactivity. Publicly praise and reward employees who report spearphishing attempts or newly discovered flaws in policies or configurations, or who simply complete awareness training better and faster than others on their team. Mention these proactive employees in regular IT and security communications such as newsletters.

Integrate findings into your security management processes. The conclusions and suggestions from the review board should be prioritized and incorporated into the company’s cyber-resilience plan. Some findings may simply influence risk assessments, while others could directly lead to changes in company policies, or implementation of new technical security controls or reconfiguration of existing ones.

Use mistakes as learning opportunities. Your security awareness program will be more effective if it uses real-life examples from your own organization. You don’t need to name specific individuals, but you can mention teams and systems, and describe attack scenarios.

Measure performance. To ensure this process is working and delivering results, you need to use information security metrics as well as HR and communications KPIs. Track the MTTR for identified issues, the percentage of issues discovered through employee reports, employee satisfaction levels, the number and nature of security issues identified, and the number of employees engaged in suggesting improvements.
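As a rough illustration, two of the metrics above — MTTR and the share of issues surfaced by employee reports — reduce to a few lines of arithmetic over your incident records. The record format below is invented for the example:

```python
from statistics import mean

# Hypothetical incident records: (hours_to_resolve, found_via_employee_report)
incidents = [
    (4.0, True), (12.5, False), (2.0, True), (8.0, True), (30.0, False),
]

# Mean time to resolution across all recorded incidents.
mttr_hours = mean(h for h, _ in incidents)

# Fraction of incidents first surfaced by an employee report.
report_share = sum(1 for _, via in incidents if via) / len(incidents)

print(f"MTTR: {mttr_hours:.1f} h")
print(f"issues found via employee reports: {report_share:.0%}")
```

Tracking these two numbers over time is what tells you whether the reporting culture is actually taking hold — a rising report share alongside a falling MTTR is the trend you want.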

Important exceptions

A security culture or blameless culture doesn’t mean that no one is ever held accountable. Aviation safety documents on non-punitive reporting, for example, include crucial exceptions. Protection doesn’t apply when someone knowingly and maliciously deviates from the regulations. This exception prevents an insider who has leaked data to competitors from enjoying complete impunity after confessing.

The second exception is when national or industry regulations require individual employees to be held personally accountable for incidents and violations. Even with this kind of regulation, it’s vital to maintain balance. The focus should remain on improving processes and preventing future incidents — not on finding who’s to blame. You can still build a culture of trust if investigations are objective and accountability is only applied where it’s truly necessary and justified.

Kaspersky official blog – Read More

ReVault! When your SoC turns against you… deep dive edition

For a high-level overview of this research, you can refer to our Vulnerability Spotlight. This is the in-depth version that shares many more technical details. In this post, we’ll be covering the entire research process as well as providing technical explanations of the exploits behind the attack scenarios.

Dell ControlVault is “a hardware-based security solution that provides a secure bank that stores your passwords, biometric templates, and security codes within the firmware.” A daughter board provides this functionality and performs these security features in firmware. Dell refers to the daughter board as a Unified Security Hub (USH), as it is used as a hub to run ControlVault (CV), connecting various security peripherals such as a fingerprint reader, smart card reader and NFC reader.

Why target ControlVault3?

Hindsight is 20/20 and in retrospect, there are plenty of valid reasons to look at it:

  • There is no public research on this device.
  • It is used for security and enhanced logins and thus is used for sensitive functions.
  • It is found in countless Dell laptops and, in particular, places that seek this extra layer of security (e.g., finance, healthcare, government, etc.) are more likely to have it in their environment.

But what really kickstarted this research project was spotting this target that seemed “promising.” What first caught our attention is that most of the Windows services involved with ControlVault3 are not Address Space Layout Randomization (ASLR)-enabled. This means easier exploitation, and possible technical debt in the codebase. Further, the setup bundle comes with multiple drivers and what appears to be a mix of clear text and encrypted firmware. This makes for an exciting challenge that calls for further investigation.
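The ASLR observation is easy to reproduce: a PE binary opts into ASLR via the `IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE` bit in its optional header. Below is a minimal, stdlib-only sketch of that check — demonstrated on a synthetic header rather than the actual Dell service binaries, so the offsets are the interesting part, not the data:

```python
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR-compatible flag

def has_aslr(pe_bytes: bytes) -> bool:
    """Check the DYNAMIC_BASE bit in a PE file's optional header.

    Offset 0x3C of the DOS header holds e_lfanew (the PE signature offset);
    DllCharacteristics sits 70 bytes into the optional header, the same
    offset for both PE32 and PE32+.
    """
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    assert pe_bytes[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
    opt_header = e_lfanew + 4 + 20  # skip PE signature + COFF header
    (dll_chars,) = struct.unpack_from("<H", pe_bytes, opt_header + 70)
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)

# Synthetic header for demonstration: e_lfanew -> 0x80, flag left unset.
fake = bytearray(0x200)
struct.pack_into("<I", fake, 0x3C, 0x80)
fake[0x80:0x84] = b"PE\x00\x00"
print(has_aslr(bytes(fake)))  # no DYNAMIC_BASE bit -> False
```

Run against real on-disk binaries (or the equivalent check in a tool like `dumpbin /headers`), a `False` here is precisely the “easier exploitation” signal that drew attention to the ControlVault3 services.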

Making a plan

When starting a vulnerability research project, it is good to have some ideas of what we’re trying to achieve. Let’s make a plan that will act as our North Star and guide our steps along the way:

  1. The main application is encrypted, and we want to see what this firmware hides. One of our first tasks should be to find a way to decrypt the application firmware.
  2. This is a vulnerability research project and, as such, we need to understand how to interact with ControlVault, understand its attack surface, and look for vulnerabilities.
  3. The Windows services run without ASLR and have SYSTEM privileges. Those could be standalone targets for local escalation of privilege (EoP) and/or may have interesting exploitation paths.

Gathering information

Information gathering occurred throughout the project. However, to clarify this discussion, we’ll now summarize some of the early findings.

ControlVault is made by Broadcom and leverages their 5820X chip series. Technically, we are only talking about ControlVault3 (or ControlVault3+), but there was a ControlVault2 and a ControlVault (1 being implied) that were using different hardware. The first mentions of ControlVault date back to 2009-2011.

Online research for the BCM5820X chip series yields minimal results, with this NIST certification being the only notable finding. This document clarifies the security posture of the chip and gives some insight into the operations of its cryptographic module.

Other useful resources are forum posts where power users talk about ControlVault, particularly when they discuss making it work on Linux. One post eventually led to a repository providing official (but limited) Linux support. It is worth noting that one of the shared objects in this repository, “libfprint-2-tod-1-broadcom.so”, ships with debug symbols. This can help when reversing the ControlVault ecosystem.

Finally, for a physical representation, the USH board that connects to the laptop and runs the ControlVault firmware is shown below:

Figure 1: Picture of a USH Board running ControlVault.

When connected inside the laptop, it looks like this (battery removed to show the board):

Figure 2: USH board (highlighted in orange) inside a Dell Latitude laptop.

Interesting files in ControlVault3 bundle

ControlVault comes with a lot of files. We cannot look at all of them at once, but there are a few that stick out, mainly the “bin” and “firmware” folders. The former contains the main services used to communicate with ControlVault and the associated shared objects, while the latter is used to push data to the device.

Figure 3: Bin and firmware folders from the ControlVault3 installer.

The firmware folder is also particularly interesting as it contains what we can presume is the code running on the ControlVault device. If we look at the content of these files by running the “strings” command or by opening them in a hex editor, we find that the ones with “SBI” in their names are in plaintext, while the ones named “bcmCitadelXXX” appear to be either compressed or encrypted. From the information we gathered earlier, we know that “SBI” stands for “Secure Boot Image” and is part of the early stage of the device’s boot process; we can then guess the “bcmCitadelXXX” files are the main application firmware that gets started by the SBI.

Reversing the bootloader

As the SBI files are in plaintext and we know from the Broadcom’s documentation that they are ARM code, we can have a look at one of them in our favorite disassembler/decompiler, which might help us figure out how to handle the application firmware itself.

Identifying the SBI load address

The usual first step is to identify the load address of this blob of data which, in our case, is 0x2400CC00. The data starts with a 0x400-byte header, thus leading to a more reasonable 0x2400D000 base address for the actual start of the code.
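As a quick sanity check on those numbers:

```python
# Arithmetic check of the SBI layout described above.
SBI_LOAD_ADDR = 0x2400CC00   # where the raw blob is loaded
HEADER_SIZE = 0x400          # header preceding the actual code
CODE_START = SBI_LOAD_ADDR + HEADER_SIZE
assert CODE_START == 0x2400D000
```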

To find this value, the trick is to first load the code at an arbitrary address and then look for absolute addresses (e.g., pointers to strings, addresses of functions, etc.) and play the guessing game while rebasing the firmware until everything lines up. The SBI firmware includes a lot of strings, so it’s fairly easy to spot when they are referenced properly. Alternatively, function pointers can also be useful and, conveniently, some can be found close to the start of the code, as an ARM vector table is placed there. This gives away the load address.

Figure 4: Vector table and beginning of the code inside the SBI.
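The rebasing guessing game can be partially automated. Below is a rough heuristic sketch (the names are ours, not from the firmware): load the blob at a candidate base and count how many aligned 32-bit words would then be valid pointers into the image. The correct base tends to maximize that count.

```python
import struct

def score_base(blob: bytes, base: int, lo=None, hi=None) -> int:
    """Rebasing heuristic: count aligned 32-bit words that would be valid
    pointers into [lo, hi) if the blob were loaded at `base`. By default
    the window is the image itself: [base, base + len(blob))."""
    lo = base if lo is None else lo
    hi = base + len(blob) if hi is None else hi
    hits = 0
    for off in range(0, len(blob) - 3, 4):
        (word,) = struct.unpack_from("<I", blob, off)
        if lo <= word < hi:
            hits += 1
    return hits
```

In practice, string cross-references and the vector table give a much stronger signal; this is only a coarse filter to narrow down candidate bases.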

Determining the software architecture

Here, we need to make a choice of what to focus on first. We can either try to map out the general architecture of the SBI and understand how it works or instead keep our eyes on the ball and look for how the application firmware is being decrypted. In practice, we did the latter, but let’s provide a few spoilers to make this easier to follow.

Functions and parameters names

The firmware relies heavily on logging, which can leak function names, variables and some details about the logic of the code itself.

The firmware appears to be running a custom real-time operating system (RTOS) called Citadel RTOS. We can also find debug strings referring to OpenRTOS, which was likely used as a base for Citadel RTOS.

And as mentioned previously, the Linux implementation comes with debug symbols for the host API, which provides lots of data structures and enum values used by ControlVault.

Communication with the firmware

Before going too far into reversing the SBI, let’s have a high-level overview of how communication occurs between host (Windows) and firmware.

Essentially, the USH board is connected to the laptop’s motherboard and appears as a USB device in the device manager. A driver, “cvusbdrv.sys”, creates a device file that can be opened from userland. Various DeviceIoControl commands can be used to manage and communicate with the device:

{
  IOCTL_SUBMIT = 0x5500E004,     // Sends CV command
  IOCTL_RESULT = 0x5500E008,     // Result from CV command
  IOCTL_HSREQ = 0x5500E00C,      // Host Storage Request, used by bcmHostStorageService
  IOCTL_HCREQ = 0x5500E01C,      // Host Control Request, used by bcmHostControlService
  IOCTL_FPREQ = 0x5500E024,      // Fingerprint Request
  IOCTL_CACHE_VER = 0x5500E028,  // Returns cached version string
  IOCTL_CLREQ = 0x5500E030,      // Contactless Request (NFC)
};

Communicating with the driver can be made easier by using userland APIs. In particular, the “bcmbipdll.dll” file implements more than 160 high-level functions that can be used to send specific commands to the firmware. These functions are prefixed with “cv_” (e.g., “cv_open”, “cv_close”, “cv_create_object”, etc.) and are referenced as “CV Commands”. Behind the scenes, when invoking one of these commands, IOCTL_SUBMIT / IOCTL_RESULT is issued, and the relevant data is sent over USB to the firmware.

Upon receiving data from the USB endpoints, the firmware will process the data packets and route them to dedicated code paths. For CV commands, the data is passed to a function called “CvManager/CvManager_SBI” that dispatches the command to the function implementing it.

Example: Manual communication with ControlVault

A simple Python script can be used to load “bcmbipdll.dll” and invoke its functions.

For instance, the following will retrieve the version string of the firmware:

Figure 5: Python snippet to retrieve CV’s version string.

The return value:

Figure 6: Version string obtained from cv_get_ush_ver.
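A minimal stand-alone version of such a script might look as follows. Note that the cv_get_ush_ver(out_buf, out_len) prototype is an assumption on our part; the debug symbols shipped with the Linux libfprint module can be used to confirm the real signature before relying on it.

```python
import ctypes
import sys

def get_ush_version(dll_path: str = "bcmbipdll.dll", buf_size: int = 64) -> str:
    """Query ControlVault's firmware version string through Broadcom's host API.

    The cv_get_ush_ver(out_buf, out_len) prototype used here is an
    assumption, as is CV_SUCCESS == 0; adjust once the signature has been
    confirmed against the Linux debug symbols.
    """
    if sys.platform != "win32":
        raise OSError("bcmbipdll.dll is only available on Windows hosts")
    bip = ctypes.WinDLL(dll_path)                # load the Broadcom host API DLL
    buf = ctypes.create_string_buffer(buf_size)  # out-buffer for the version string
    rc = bip.cv_get_ush_ver(buf, ctypes.c_uint32(buf_size))
    if rc != 0:                                  # assuming CV_SUCCESS == 0
        raise OSError(f"cv_get_ush_ver failed with CV status {rc:#x}")
    return buf.value.decode("ascii", errors="replace")
```

On a Windows machine with the ControlVault stack installed, calling print(get_ush_version()) should produce a version string similar to the one shown in figure 6.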

As a reminder, the Linux implementation of the host APIs (libfprint-2-tod1-broadcom/usr/lib/x86_64-linux-gnu/libfprint-2/tod-1/libfprint-2-tod-1-broadcom.so) comes with debug symbols and thus can be used to identify the various structures and parameters involved in the invocation of each CV command.

We will revisit the communication mechanism in the “Exploiting a SYSTEM service” section, but for now, we can return to our original goal of figuring out how to decrypt the application firmware.

Finding the firmware decryption mechanism

We can search the strings inside the SBI to see if anything mentions decryption:

Figure 7: Strings from the SBI firmware mentioning decryption.

As seen in the screenshot above, the USH_UPGRADE functionality mentions decryption failures. And indeed, this functionality is related to application firmware decryption. The USH_UPGRADE functionality is implemented by three CV commands:

  • CV_CMD_FW_UPGRADE_START
  • CV_CMD_FW_UPGRADE_UPDATE
  • CV_CMD_FW_UPGRADE_COMPLETE

Those commands are issued by the “cv_firmware_upgrade” function in “bcmbipdll.dll”.

The firmware update process is a little convoluted:

  1. The host will first flash a file called “bcm_cv_clear_scd.otp”, solely composed of “0123456789abcdef” repeated many times. For that, it will use the “cv_flash_update” function.
  2. The host will call “cv_reboot_to_sbi” to restart in SBI mode.
  3. The host will send the CV_CMD_FW_UPGRADE_START command, handled in the SBI by “ushFieldUpgradeStart”:
    1. The SBI will try to load from flash something called a Secure Code Descriptor (SCD) that contains key material (e.g., decryption key, IV, and RSA public key) but will revert to hardcoded defaults if no SCD is available. This is what got flashed/erased during step 1.

Figure 8: Using hardcoded defaults during ushFieldUpgradeStart.
Figure 9: Calling the decryption function.

    2. The host will send the first 0x2B0 bytes of the encrypted application firmware. This is an encrypted header defining the parameters of the soon-to-be-installed firmware.
    3. The SBI will try to decrypt (AES-CBC), validate, and cryptographically verify the header using key material from the SCD or the hardcoded defaults.
    4. Upon success, the SBI will generate new key material to be stored in a different section of the SCD and used to store the firmware in an encrypted form. This is because the SoC used by ControlVault can execute encrypted code in place (XIP) thanks to its Secure Memory Access Unit (SMAU).
  4. Then, the host will send the rest of the firmware, split into chunks of 0x100 bytes, via the CV_CMD_FW_UPGRADE_UPDATE command, handled in the SBI by the “ushFieldUpgradeUpdate” function:
    1. The firmware chunks are decrypted using the same method but, instead of using a default IV, the code relies on a custom function from the SMAU device to generate an IV based upon the address of the memory block being decrypted. Note: The base address of this application firmware can be guessed from reversing and is 0x63030000.

Figure 10: Computation of address-based IV.

    2. A rolling hash (SHA-256) of the decrypted blocks is kept for further validation.
  5. When done sending the encrypted firmware, the host will send the CV_CMD_FW_UPGRADE_COMPLETE command, handled in the SBI by the “ushFieldUpgradeComplete” function:
    1. The SBI will verify the signature of the received firmware based upon the already-verified header and the rolling hash that was computed while decrypting firmware pages.
    2. Upon success, the new SCD will be encrypted and committed to flash using a per-device AES key stored in the chip’s OTP fuses.

Luckily, the hardcoded keys in the “bcmsbiCitadelA0_1.otp” file are the ones that were used to encrypt the application firmware. By re-implementing the algorithm described above, we can successfully decrypt the application firmware and move on to our second objective: looking for vulnerabilities.
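As a sketch, the decryption loop can be modeled like this. The AES-CBC primitive and the SMAU's real IV derivation are abstracted behind stand-in callables (the iv_for_block shown here is a placeholder of our own, not Broadcom's actual algorithm); what the sketch captures is the chunking, the address-based IV, and the rolling hash:

```python
import hashlib

CHUNK = 0x100               # the host streams the firmware in 0x100-byte chunks
BASE_ADDR = 0x63030000      # load address of the application firmware

def iv_for_block(addr: int) -> bytes:
    """Placeholder for the SMAU's address-based IV derivation.

    The real derivation is performed by the SMAU hardware; we only model
    the fact that the IV is a deterministic function of the block address.
    """
    return addr.to_bytes(4, "little") * 4   # 16 bytes, one AES block

def decrypt_firmware(encrypted: bytes, decrypt_block):
    """Replay the SBI's update loop: per-chunk decryption with an
    address-derived IV, plus the rolling SHA-256 that is checked against
    the signed header at FW_UPGRADE_COMPLETE time.

    decrypt_block(ciphertext, iv) stands in for AES-CBC with the key
    material from the SCD (or the hardcoded defaults).
    """
    rolling = hashlib.sha256()
    plaintext = bytearray()
    for off in range(0, len(encrypted), CHUNK):
        block = decrypt_block(encrypted[off:off + CHUNK],
                              iv_for_block(BASE_ADDR + off))
        rolling.update(block)      # rolling hash over the decrypted blocks
        plaintext.extend(block)
    return bytes(plaintext), rolling.hexdigest()
```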

Attack surface mapping and vulnerability research

With a freshly decrypted firmware image, it’s easy to jump the gun and start reversing everything, but before we get into the deep end, we should stop and strategize. So, let’s have a look at the architecture of the system and the potential areas of interest:

Figure 11: System architecture.

There are a few angles we can consider:

  • From the host, could we send malicious data to corrupt the application firmware or the SBI code?
  • Could we tamper with the firmware itself and make it misbehave?
  • Could a malicious firmware image compromise the host?
  • What about the hardware peripherals? Could they be compromised or used to compromise the firmware?

The research we’ve conducted explores the first three questions. The fourth one is a potential future research project. Answering the first question will help achieve the next two, so let’s start with this first one.

Finding vulnerabilities in the application firmware

The application firmware accepts more than 150 CV commands. This is a massive attack surface, and there is a lot to look at. Most of these commands expect a “session” to be already established using the “cv_open” command. When the interaction is over, the “cv_close” function is used to terminate the session. Let’s look at how these two operate.

cv_open and cv_close

The prototype of “cv_open” is as such:

int __fastcall cv_open(cv_session_flags a1, cv_app_id *appId, cv_user_id *userId, int *pHandle)

Its implementation is below:

Figure 12: Call to cv_open.

We can see that memory is allocated (line 29), then a tag “SeSs” is written (line 36) as the first four bytes of the session object. After some more processing, the pointer to the session is returned as a handle (line 44) back to the host. The choice of using a pointer as a handle is already a little questionable as it leaks a heap address to the host, but let’s continue.

The prototype for “cv_close” is as follows:

int __fastcall cv_close(cv_session *)

The function takes the pointer to the session we’ve obtained from “cv_open” and attempts to close it by doing the following:

  1. Validate the session (see below)
  2. Erase the “SeSs” tag
  3. Free the memory

Figure 13: Implementation of cv_close.

Meanwhile, the “validate_session” function will:

  1. Verify the pointer provided is within the CV_HEAP
  2. Verify the first 4 bytes match the “SeSs” tag
  3. Perform extra checks that are irrelevant for our purposes

Figure 14: Session validation.

This is particularly interesting because, assuming one can place some arbitrary data on the heap, it then becomes easy to forge a fake session and free it, corrupting heap memory in the process. This issue was reported as CVE-2025-25215.

As expected, it is indeed possible to place attacker-controlled data on the heap using functions like “cv_create_object” or “cv_set_object”. Locating said data is a little trickier, as the handles returned by “cv_create_object” are random rather than heap addresses. However, it is possible to create a “session oracle” to help locate real and forged sessions alike. To do so, we can leverage one of the many CV functions that require a session handle but will return a unique error code if the session is invalid. For instance, “cv_get_random” can be used as such:

Figure 15: Implementation of cv_get_random.

If the session fails the “validate_session” check, “cv_get_random” will return CV_INVALID_HANDLE, otherwise it will either return CV_SUCCESS or CV_INVALID_OUTPUT_PARAMETER_LENGTH. This gives a way to identify valid-looking sessions without any side effect.
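A sketch of the resulting oracle, with placeholder status values (the names come from the host API, but the real constants should be taken from the Linux debug symbols):

```python
# CV status codes; the numeric values below are placeholders, not the
# real constants from the host API.
CV_SUCCESS = 0x00
CV_INVALID_HANDLE = 0x0A
CV_INVALID_OUTPUT_PARAMETER_LENGTH = 0x0B

def looks_like_session(handle: int, cv_get_random) -> bool:
    """Side-effect-free session oracle built on cv_get_random.

    Any status other than CV_INVALID_HANDLE means the handle passed
    validate_session, i.e. it points at a real (or forged) session.
    """
    return cv_get_random(handle) != CV_INVALID_HANDLE

def scan_heap(candidates, cv_get_random):
    """Probe candidate heap addresses and keep those that look like sessions."""
    return [h for h in candidates if looks_like_session(h, cv_get_random)]
```

Here cv_get_random is whatever callable wraps the real CV command invocation (e.g., via “bcmbipdll.dll”); the oracle logic itself is independent of the transport.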

Debug strings in the application firmware indicate that the heap implementation used is “heap_4.c” from OpenRTOS. At this point, we could use standard heap exploitation techniques to try and corrupt memory, but instead, we chose to look for more vulnerabilities that may be easier to exploit.

securebio_identify

This function is one of the few that does not have “cv” in its name but is called via “cv_identify_feature_set_authenticated”. It is part of the implementation of the WinBio framework used by Windows Hello during fingerprint handling.

The function expects a handle to an object, retrieves it, and copies some of its content:

Figure 16: “securebio_identify” retrieves and copies object content.

The data is copied from one of the object’s properties into the “data2” stack buffer. To copy the data, “memcpy” uses the property’s size, assuming it will fit inside the “data2” buffer. This is a dangerous assumption: if this property were larger than expected, it would lead to a stack overflow.

In practice, objects allocated with “cv_create_object” cannot be used this way as there are checks in place to limit the size of this property. However, because we can corrupt heap data, it is possible to forge a malicious object that will trigger the bug. Alternatively, there might be other legitimate avenues to load a malicious object. For instance, “cv_import_object” is a good candidate. Due to the complexity of the function, we focused on the heap corruption approach instead. Regardless, this bug was reported as CVE-2025-24922.

The general approach to exploiting “securebio_identify” is as follows:

  1. Create a large object on the heap containing fake heap metadata followed by the “SeSs” tag.
  2. Locate the fake session and free it using “cv_close”. This will mark a chunk of heap memory as freed even though it is still in use by the large object we’ve created.
  3. Allocate a smaller object that will end up inside the hole we’ve poked in the large object from step 1.
  4. Use “cv_set_object” to modify the data of the large object and thus corrupt the fields of the small object.
  5. Use the corrupted small object to trigger the stack overflow inside “securebio_identify”. Because the firmware doesn’t have ASLR, it’s easy to find gadgets to fully exploit the function and gain arbitrary code execution inside the firmware.
  6. Optional: Use the large object as an output buffer to store data produced by our exploit and retrieve its content from the host.

Figure 17: Overlapping two objects to exploit securebio_identify.

An example of this attack will be used in the next section.

More firmware vulnerabilities

While looking at the application firmware, we also found an out-of-bounds read in “cv_send_blockdata” and an out-of-bounds write in “cv_upgrade_sensor_firmware”. These were reported as CVE-2025-24311 and CVE-2025-25050, respectively. We did not use these vulnerabilities for further exploitation.

Arbitrary code execution in the application firmware: What’s next?

If we circle back to our list of goals, now that we have gained code execution in the firmware, we can try to attack the host from this vantage point. To have a stronger and more meaningful case, it would be more interesting to first find a way to permanently modify the firmware. So, let’s do that!

Figure 18: Secure boot process.

This diagram showcases the ControlVault boot process:

  1. The Bootrom verifies the SBIs.
  2. The SBI retrieves keys from the OTP memory to decrypt the SCD.
  3. From the decrypted SCD, the SBI loads the required key material and sets up the SMAU to execute the encrypted firmware in place.
  4. The application firmware is executed.

Surprisingly, at boot time, there’s no cryptographic verification of the application firmware. Cryptographic verification only happens during the firmware update process. The security of the application firmware mostly relies on the security of the OTP keys and the key material stored in the SCD. But now that we have code execution on the firmware, can we leak this key material?

sotp_read_key

“sotp_read_key” is an internal (i.e., non-CV) function that can be used to read key material from the OTP memory of the Broadcom chip. In particular, it is possible to retrieve the AES and HMAC keys that are used to encrypt and authenticate the SCD:



Figure 19: Demo of dumping OTP keys.

By obtaining the device OTP keys, it becomes possible to decrypt its SCD blob and/or forge a new one. This is particularly interesting as we can write an arbitrary SCD blob to flash using the “cv_flash_update” function.

We can create our own RSA public/private keypair and replace the SCD’s public key with the one we’ve just created. Upon firmware update, the new RSA public key will be used for firmware verification. This way, we can modify a firmware file and install it on our device.

To confirm the process works, we modify a firmware image to make it send an arbitrary message when Windows requests its USB descriptor:

Figure 20: Malicious USB descriptor returned by a tampered ControlVault.

Firmware modification

Patching “cv_fingerprint_identify”

With the ability to tamper with the firmware, a new attack vector is unlocked: we can now modify the behavior of certain functions. In particular, “cv_fingerprint_identify” is used by Windows Hello when a user tries to log in with their fingerprint. The host will send a list of handles to check whether any of the CV-stored fingerprint templates match the fingerprint currently touching the reader. This pseudo matching-in-sensor is done to avoid storing fingerprint templates on the host itself, as that could lead to privacy concerns. This creates an interesting opportunity: what if “cv_fingerprint_identify” were to always return true, and thus make Windows Hello accept any fingerprint?



Figure 21: Demo of bypassing Windows Hello.

Exploiting a SYSTEM service

Another consequence of being able to modify the firmware running on our device is that now we can explore the question of whether a malicious ControlVault device can compromise its host.

Primer on host-FW communication

Let’s consider what happens when calling one of the CV Commands, for example “cv_get_random”:

Figure 22: Calling cv_get_random.

  1. The “InitParam_List” function is called to populate two separate arrays of objects: “out_param_list_entry” and “in_param_list_entry”. The former specifies the arguments going to the firmware, while the latter prepares for the return values expected from the command.
    1. The first parameter of “InitParam_List” is the type of data encapsulation:

cv_param_type_e::CV_ENCAP_STRUC = 0x0,
cv_param_type_e::CV_ENCAP_LENVAL_STRUC = 0x1,
cv_param_type_e::CV_ENCAP_LENVAL_PAIR = 0x2,
cv_param_type_e::CV_ENCAP_INOUT_LENVAL_PAIR = 0x3

    2. Depending on the encapsulation type, the parameters will be encapsulated/decapsulated slightly differently:
    • STRUC will result in a regular buffer being decapsulated
    • LENVAL_STRUC will result in a length-prefixed buffer (i.e., the first four bytes are the size of the data, followed by the actual data)
    • LENVAL_PAIR will be decapsulated as two separate parameters (size and buffer)
    • INOUT_LENVAL_PAIR will be initialized without data but will get decapsulated as two parameters, like LENVAL_PAIR
  2. “cvhManagerCVAPICall” is called to perform the command and retrieve its result.
    1. From a high-level perspective, when this function is called, the data we are sending gets serialized in the appropriate format and an IOCTL_SUBMIT call is issued; the data is eventually sent to the firmware.
    2. Once the execution of the command is complete, data is returned and deserialized into the “in_param_list_entry” array that was prepared in the previous step.
  3. Finally, the “cvhSaveReturnValues” function parses the “in_param_list_entry” array and extracts these values into a caller-provided array of objects.
    1. For instance, in the screenshot above (figure 22), there is one parameter in “in_param_list_entry” and its type is CV_ENCAP_INOUT_LENVAL_PAIR. As such, upon calling “cvhSaveReturnValues”, two parameters will be produced: the first being the size of the data returned by “cv_get_random” and the second being the actual data.
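The decapsulation rules above can be sketched as follows; the function name and layout are our own simplification of the host-side logic:

```python
import struct

CV_ENCAP_STRUC = 0x0
CV_ENCAP_LENVAL_STRUC = 0x1
CV_ENCAP_LENVAL_PAIR = 0x2
CV_ENCAP_INOUT_LENVAL_PAIR = 0x3

def decapsulate(encap_type: int, payload: bytes) -> tuple:
    """Unpack one returned parameter according to its encapsulation type.

    STRUC: a regular buffer; LENVAL_STRUC: a length-prefixed buffer kept
    as one parameter; (INOUT_)LENVAL_PAIR: split into a (size, buffer) pair.
    """
    if encap_type == CV_ENCAP_STRUC:
        return (payload,)
    if encap_type == CV_ENCAP_LENVAL_STRUC:
        (length,) = struct.unpack_from("<I", payload)
        return (payload[:4 + length],)
    if encap_type in (CV_ENCAP_LENVAL_PAIR, CV_ENCAP_INOUT_LENVAL_PAIR):
        (length,) = struct.unpack_from("<I", payload)
        return (length, payload[4:4 + length])
    raise ValueError(f"unknown encapsulation type {encap_type:#x}")
```

The key design point, relevant to what follows, is that the host decides where each decapsulated parameter lands before the firmware's response arrives.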

On the firmware side, when handling the CV commands, the return values are re-defined, which is surprising:

Figure 23: CvManager handling the cv_get_random command (firmware-side).

It turns out that the way this data is processed leads to an unsafe deserialization. We cover the root-cause analysis of this issue in CVE-2025-24919. In short, the firmware-side redefinition of the firmware-to-host parameters can lead to invalid decapsulation of the data on the host. For instance, if a malicious firmware image were to change the return type of “cv_get_random” to CV_ENCAP_STRUC instead of CV_ENCAP_INOUT_LENVAL_PAIR, the “pLen” argument that is meant to receive the size of the produced data would instead be filled with the data itself. In figure 22, the “pLen” variable is a stack variable meant to receive a size value as an integer; any data larger than four bytes would thus overflow the stack, possibly leading to arbitrary code execution.

Exploitation constraints

The “bcmbipdll.dll” file and some of the ControlVault services are not ASLR-enabled, which makes exploitation much easier: offsets can be hardcoded, removing the need for an information leak that a malicious ControlVault device would otherwise have to find. Data Execution Prevention (DEP) is in place, so a ROP chain is still necessary for further exploitation. Surprisingly, another common mitigation is only partially implemented: stack canaries are only occasionally present in the ControlVault services and DLLs. For instance, in the case of “cv_get_random”, even though “pLen” is a stack variable, no stack cookie protects this function. This leads to the side quest of identifying CV commands that are easy to exploit but also used in a high-privilege context.

In practice, our ideal CV command to target has these constraints:

  • It needs to be used (directly or inside a call chain) by a high-privilege service.
  • One of the variables fed to the CV command needs to be a stack variable that can be corrupted using the bug reported in CVE-2025-24919 (e.g., the “pLen” variable in “cv_get_random”).
  • No stack cookie must be present between the to-be-corrupted stack variable and the return address targeted by the stack overflow.

Finding what to target

The “cv_get_random” function would be an ideal candidate, but unfortunately it’s hard to find code that uses it reliably.

After investigating most of the CV commands, we found the following:

Figure 24: WBFUSH_ExistsCVObject calling CSS_GetObject.

The first argument to this function, “cvHandle”, is a handle to an object. It is passed to “CSS_GetObject”, which will populate the stack variable “objHeader” with the header of the object tied to this handle. Down the call stack, “cv_get_object” is called with both the “cvHandle” and the “objHeader” variables. Due to these functions’ stack layout, it is possible to leverage CVE-2025-24919 to corrupt the “objHeader” variable and trigger a stack overflow in its parent function.

Exploitation details

The “WBFUSH_ExistsCVObject” function is used by “BCMStorageAdapter.dll” to verify whether an object handle is tied to a real object stored in the ControlVault firmware. “BCMStorageAdapter” is part of Broadcom’s implementation of the adapters required to interface with the Windows Biometric Framework (WBF). These adapters are necessary to interface a fingerprint reader with WBF so it can be used with Windows Hello (fingerprint login) or other biometric-enabled scenarios. Here is the call stack to reach the vulnerable function:

StorageAdapterControlUnit
  -> WBFUSH_ExistsCVObject
    -> CSS_GetObject
      -> cv_get_object

The “StorageAdapterControlUnit” function can be reached by a regular user opening the proper adapter with “WinBioOpenSession” and then issuing a “WinBioControlUnit” command with the “WINBIO_COMPONENT_STORAGE” component.

Figure 25: WinBioControlUnit prototype.

The “ControlCode” parameter specifies which adapter function to invoke.

Figure 26: StorageAdapterControlUnit with ControlCode=2.

By reversing “BCMStorageAdapter!StorageAdapterControlUnit”, we find that using “ControlCode=2” leads to calling “WBFUSH_ExistsCVObject” with a caller-provided handle. Specifically, the first four bytes of the “SendBuffer” argument passed to “WinBioControlUnit” are cast into the expected object handle.

With this in mind, the exploitation process is as follows:

  1. Achieve code execution on the firmware to leak the device keys and gain the ability to forge a firmware file that will be accepted by this specific device.
  2. Forge a malicious firmware update with a modified “cv_get_object” function.
    1. The “cv_get_object” function will be backdoored: if the object handle matches a specific magic value (e.g., 0x1337), it will return the stack-overflow payload and tamper with the encapsulation parameters to trigger CVE-2025-24919. If the handle is not 0x1337, “cv_get_object” will execute normally to avoid unintended side effects from the backdoor.
    2. The stack-overflow payload will be a ROP chain that eventually leads to the execution of a reverse shell.
  3. Install the malicious firmware update.
  4. Invoke the “WinBioControlUnit” function with “ControlCode=2” and b"\x37\x13\x00\x00" as the “SendBuffer” (little-endian representation of 0x1337 as a DWORD).
  5. Connect to the reverse shell and observe having obtained SYSTEM privileges.
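For step 4, building the SendBuffer is a one-liner:

```python
import struct

MAGIC_HANDLE = 0x1337  # the backdoor trigger value chosen in step 2

# WinBioControlUnit casts the first four bytes of SendBuffer to the object
# handle, so a little-endian DWORD is all that is needed.
send_buffer = struct.pack("<I", MAGIC_HANDLE)
assert send_buffer == b"\x37\x13\x00\x00"
```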


Figure 27: Demo of the SYSTEM service exploit.

Going further

Implant

The process described above could be seen as one of the most convoluted ways to perform an elevation of privileges to SYSTEM on Windows. However, this should be considered in context. There are other services and functions that could be used instead; in our example, we picked functions that were easy to build a demo with. In practice, other functions could be leveraged so that no user interaction would be required to trigger the vulnerabilities. This would make sense for a standalone implant that could lie dormant and trigger from time to time to call home. Development of a weaponized implant is, of course, beyond the scope of our research.

Physical attacks

Another promising angle that we have yet to mention is physical access. The USH board is an internal USB device. It is possible to open the laptop that contains the board and connect to it directly, provided the proper connector. There are mitigations against physical access (e.g., chassis intrusion alerts), but those are generally opt-in. As such, an attacker with 10-20 minutes of physical access could perform the same attacks described in this deep dive, but without any of the other requirements (e.g., no need to be able to log in as a user; disk encryption would not protect against this, etc.).

The following video is a short demo of the feasibility of connecting directly to a USH board over USB.



Figure 28: Demo of the physical attack.

Please note that in the video above, a ControlVault device is already present but disabled. This is because the machine used already had a ControlVault device built in; the relevant driver/DLLs were also already installed. Upon connecting the USB cable, a new ControlVault device pops up, and this is the one being interacted with.

Impact

Attack scenario

The risks we’ve explored in this article can be summarized in the following diagram:

Figure 29: Attack scenarios.

The ability to modify the firmware running on one of the USH boards can be used by a local attacker to gain privileges, bypass fingerprint login, and/or compromise Windows. A threat actor could also leverage this in a post-compromise situation: if a user’s workstation is compromised, one could tamper with the ControlVault firmware running on their machine to act as an implant that could remain present even after a full system reinstall.

Detection

Detecting a compromised ControlVault device can be tricky, as an implant could simply ignore new firmware updates. This is why verifying that a legitimate firmware update can be successfully installed, and then returns the expected version string, is a good first check to perform.

This can be done with the Python code provided at the beginning of this article (figure 5). Alternatively, a second way is to look at the properties of the ControlVault device in the device manager. The “Versioning” panel will show the ControlVault firmware version as reported by the device.

Indications of local exploitation of the ControlVault device can be detected by monitoring unexpected processes loading “bcmbipdll.dll” or trying to open a handle to the ControlVault device itself. The path for the device may depend on the laptop model and its internal USB connections. The full path can be retrieved using “SetupDiGetClassDevsW”/“SetupDiEnumDeviceInterfaces” with the InterfaceGuid {79D2E5E9-8883-4E9D-91CB-A14D2B145A41}.

Finally, unexpected crashes in “WinBioSvc”, “bcmHostStorageService”, “bcmHostControlService” or “bcmUshUpgradeService” could be signs of something being amiss.

    Conclusion

    ControlVault is a surprisingly complex attack surface spanning the whole gamut from hardware to firmware and software, with multiple peripherals, frameworks and drivers involved. Its legacy codebase can be traced back to the early 2010s, and various first-party software has interacted with it over the years. This deep dive has barely scratched the surface of ControlVault’s complexity, and yet we showed how far-reaching the consequences of a compromise could be. Perhaps most surprising is that our work appears to be the first public research on the subject. Firmware security isn’t a new topic, but still, how many other ControlVault-like devices are yet to be found and assessed for the unexpected risks they may bring?

    Cisco Talos Blog – ​Read More

    Android adware: What is it, and how do I get it off my device?

    Is your phone suddenly flooded with aggressive ads, slowing down performance or leading to unusual app behavior? Here’s what to do.

    WeLiveSecurity – ​Read More

    Black Hat USA 2025: Is a high cyber insurance premium about your risk, or your insurer’s?

    A sky-high premium may not always reflect your company’s security posture

    WeLiveSecurity – ​Read More

    Black Hat USA 2025: Policy compliance and the myth of the silver bullet

    Who’s to blame when the AI tool managing a company’s compliance status gets it wrong?

    WeLiveSecurity – ​Read More

    The Efimer Trojan steals cryptocurrency via malicious torrent files and WordPress websites | Kaspersky official blog

    If you’re an active cryptocurrency user but you’re still downloading torrent files and aren’t sure how to safely store your seed phrases, we have some bad news for you. We’ve discovered a new Trojan, Efimer, that replaces crypto wallet addresses right in your clipboard. One click is all it takes for your money to end up in a hacker’s wallet.

    Here’s what you need to do to keep your crypto safe.

    How Efimer spreads

    One of Efimer’s main distribution channels is WordPress websites. It doesn’t help that WordPress is a free content-management system for websites — or that it’s the world’s most popular. Everyone from small-time bloggers and businesses to major media outlets and corporations uses it. Scammers exploit poorly secured sites and publish posts with infected torrent files.

    This is what a hacked WordPress website infected with Efimer looks like

    When a user downloads a torrent file from an infected site, they get a small folder that contains what looks like a movie file with the .xmpeg extension. You can’t open a file in that format without a “special media player”, which is conveniently included in the folder. In reality, the “player” is a Trojan installer.

    The torrent folder with the malicious files inside

    Recently, Efimer has also started spreading through phishing emails. Website and domain owners receive emails, purportedly from lawyers, falsely claiming copyright infringement and demanding content removal. The emails say all the details are in the attachment… which is actually where the Trojan is lurking. Even if you don’t own a website yourself, you can still receive spam messages with Efimer attached: threat actors collect user email addresses from WordPress sites they’ve previously compromised. So if you get an email like this, whatever you do, don’t open the attachment.

    How Efimer steals your crypto

    Once Efimer infects a device, one of its scripts adds itself to the Windows Defender exclusion list — provided the user has administrator privileges. The malware then installs a Tor client to communicate with its command-and-control server.

    Efimer accesses the clipboard and searches for a seed phrase, which is a unique sequence of words that allows access to a crypto wallet. The Trojan saves this phrase and sends it to the attackers’ server. If it also finds a crypto wallet address in the clipboard, Efimer discreetly swaps it out for a fake one. To avoid raising suspicion, the fake address is often very similar to the original. The end result is that cryptocurrency is silently transferred to the cybercrooks.
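The swap works precisely because people tend to check only the first and last few characters of a long address. The sketch below (not Efimer’s actual code; the addresses and function names are made up for illustration) shows how a lookalike address passes that casual eyeball check while a full-string comparison catches it.

```python
# Illustration of why clipboard address swaps are easy to miss: the fake
# address shares its first and last characters with the real one, which is
# all most people verify before sending. The addresses below are synthetic
# examples, not real wallets.

def looks_similar(a: str, b: str, edge: int = 4) -> bool:
    """True if two addresses share their first and last `edge` characters,
    i.e. they pass a casual visual check."""
    return a[:edge] == b[:edge] and a[-edge:] == b[-edge:]

def paste_is_safe(intended: str, pasted: str) -> bool:
    """Only an exact match is safe; anything else is a red flag."""
    return intended == pasted

# A synthetic "real" address and a lookalike with the same prefix/suffix
intended = "bc1q" + "xy2kgdygjrsqtzq2n0yrf2493p83kkf" + "0wlh"
lookalike = "bc1q" + "a" * 31 + "0wlh"  # passes the eyeball check, but differs
```

Here `looks_similar(intended, lookalike)` is `True` while `paste_is_safe(intended, lookalike)` is `False`, which is exactly the gap the malware exploits: always compare the full pasted address against the one you intended to use.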

    Wallets containing Bitcoin, Ethereum, Monero, Tron, or Solana are primarily at risk, but owners of other cryptocurrencies shouldn’t let their guard down. The developers of Efimer regularly update the malware by adding new scripts and extending support for more crypto wallets. You can find out more about Efimer’s capabilities in our analysis on Securelist.

    Who’s at risk?

    The Trojan is attacking Windows users all over the world. Currently, the malware is most active in Brazil, Russia, India, Spain, Germany, and Italy, but the scope of these attacks could easily expand to your country, if it’s not already on the list. Users of crypto wallets, owners of WordPress sites, and those who frequently download movies, games, and torrent files from the internet should be especially vigilant.

    How to protect yourself from Efimer

    The Efimer Trojan is a real jack-of-all-trades. It’s capable of stealing seed phrases and swapping crypto wallet addresses, and it poses a serious threat to both individuals and organizations. It can use scripts to hack WordPress sites, and is able to spread on its own. However, in every case, a device can only be infected if the potential victim downloads and opens a malicious file themselves. This means that a little vigilance and a healthy dose of caution — ignoring files from suspicious sources at the very least — are your best defense against Efimer.

    Here are our recommendations for home users:

    • Use a robust security solution that can scan files for malware and warn you against opening phishing links.
    • Create unique and strong passwords. And no, storing them in your notes app is not a good idea. Make sure you use a password manager.
    • Use two-factor authentication to sign in to crypto wallets and websites.
    • Avoid downloading movies or games from unverified sites. Pirated content is often crawling with all kinds of Trojans. Even if you choose to take that risk, pay close attention to the file extensions. A regular video file definitely won’t have an .exe or .xmpeg extension.
    • Don’t store your seed phrases in plain text files. Trust a password manager. Read this article to learn more about how to protect your cryptocurrency assets.
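The extension advice above can be sketched as a simple pre-download check. This is a minimal illustration, not a substitute for a security solution; the extension lists are examples and far from exhaustive.

```python
# Minimal sketch of the extension check suggested above: a real video file
# won't carry an executable or made-up extension such as .exe or .xmpeg.
# The extension sets below are illustrative, not exhaustive.
from pathlib import Path

VIDEO_EXTENSIONS = {".mp4", ".mkv", ".avi", ".mov", ".webm"}
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".xmpeg"}

def looks_suspicious(filename: str) -> bool:
    """Flag a 'movie' file whose extension is executable or unrecognized
    rather than a common video container."""
    ext = Path(filename).suffix.lower()
    return ext in SUSPICIOUS_EXTENSIONS or ext not in VIDEO_EXTENSIONS
```

With this check, the `.xmpeg` “movie” and the bundled “special media player” from the Efimer torrent folder would both be flagged, while an ordinary `.mp4` would not.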

    Kaspersky official blog – ​Read More