Even companies with a mature cybersecurity posture and significant investments in data protection aren’t immune to cyber-incidents. Attackers can exploit zero-day vulnerabilities or compromise a supply chain. Employees can fall victim to sophisticated scams designed to breach the company’s defenses. The cybersecurity team itself can make a mistake while configuring security tools, or during an incident response procedure. However, each of these incidents represents an opportunity to improve processes and systems, making your defenses even more effective. This isn’t just a rallying call; it’s a practical approach that has proved successful in other fields such as aviation safety.
Almost everyone in the aviation industry, from aircraft design engineers to flight attendants, is required to share information to prevent incidents. This isn’t limited to crashes or system failures; the industry also reports potential problems. These reports are constantly analyzed, and safety measures are adjusted based on the findings. According to Allianz Commercial’s statistics, this continuous implementation of new measures and technologies has led to a significant reduction in fatal incidents: from 40 per million flights in 1959 to 0.1 in 2015.
Still in aviation, it was recognized long ago that this model simply won’t work if people are afraid to report procedure violations, quality issues, and other causes of incidents. That’s why aviation standards include requirements for non-punitive reporting and a just culture, meaning that reporting problems and violations shouldn’t lead to punishment. DevOps engineers have a similar principle they call a blameless culture, which they use when analyzing major incidents. This approach is also essential in cybersecurity.
Does every mistake have a name?
The opposite of a blameless culture is the idea that “every mistake has a name”, meaning a specific person is always to blame. Under this approach, any mistake can lead to disciplinary action, up to and including termination. This principle is considered harmful, and it fails to improve security for several reasons:
Employees who fear punishment tend to conceal incidents, leaving evidence distorted or partially destroyed. This complicates the response and worsens the overall outcome, because security teams can’t quickly and properly assess the scope of a given incident.
Zeroing in on one person to blame during an incident review prevents the team from focusing on how to change the system to prevent similar incidents from happening again.
Employees are afraid to report violations of IT and security policies, causing the company to miss opportunities to fix security flaws before they lead to a critical incident.
Employees have no motivation to discuss cybersecurity issues, coach one another, or correct their coworkers’ mistakes.
To truly enable every employee to contribute to your company’s security, you need a different approach.
The core principles of a just culture
Call it “non-punitive reporting” or a “blameless culture” — the core principles are the same:
Everyone makes mistakes. We learn from our mistakes; we don’t punish them. However, it’s crucial to distinguish between an honest mistake and a malicious violation.
When analyzing security incidents, the overall context, the employee’s intent, and any systemic issues that may have contributed to the situation all need to be considered. For example, if a high turnover of seasonal retail employees prevents them from being granted individual accounts, they might resort to sharing a single login for a point-of-sale terminal. Is the store administrator at fault? Probably not.
Beyond just reviewing technical data and logs, you must have in-depth conversations with everyone involved in an incident. For this, create a productive and safe environment where people feel comfortable sharing their perspectives.
The goal of an incident review should be to improve behavior, technology, and processes going forward. For serious incidents, the review should be split in two: an immediate response to mitigate the damage, and a postmortem analysis to improve your systems and procedures.
Most importantly, be open and transparent. Employees need to know how reports of issues and incidents are handled, and how decisions are made. They should know exactly who to turn to if they see or even suspect a security problem. They need to know that both their supervisors and security specialists will support them.
Confidentiality and protection. Reporting a security issue should not create problems for the person who reported it or for the person who may have caused it — as long as both acted in good faith.
How to implement these principles in your security culture
Secure leadership buy-in. A security culture doesn’t require massive direct investment, but it does need consistent support from the HR, information security, and internal communications teams. Employees also need to see that top management actively endorses this approach.
Document your approach. The blameless culture philosophy should be captured in your company’s official documents — from detailed security policies to a simple, short guide that every employee will actually read and understand. This document should clearly state the company’s position on the difference between a mistake and a malicious violation. It should formally state that employees won’t be held personally responsible for honest errors, and that the collective priority is to improve the company’s security, and prevent future recurrences.
Create channels for reporting issues. Offer several ways for employees to report problems: a dedicated section on the intranet, a specific email address, or the option to simply tell their immediate supervisor. Ideally, you should also have an anonymous hotline for reporting concerns without fear.
Train employees. Training helps employees recognize insecure processes and behaviors. Use real-world examples of problems they should report, and walk them through different incident scenarios. You can use our online Kaspersky Automated Security Awareness Platform to organize these cybersecurity-awareness training sessions. Motivate employees to not only report incidents, but also to suggest improvements and think about how to prevent security problems in their day-to-day work.
Educate your leadership. Every manager needs to understand how to respond to reports from their team. They need to know how and where to forward a report, and how to avoid creating blame-focused islands in a sea of just culture. Teach leaders to respond in a way that makes their coworkers feel supported and protected. Their reactions to incidents and error reports need to be constructive. Leaders should also encourage discussions of security issues in team meetings to normalize the topic.
Develop a fair review procedure for incidents and security-issue reports. You’ll need to assemble a diverse group of employees from various teams to form a “no-blame review board”. It will be responsible for promptly processing reports, making decisions, and creating action plans for each case.
Reward proactivity. Publicly praise and reward employees who report spearphishing attempts or newly discovered flaws in policies or configurations, or who simply complete awareness training better and faster than others on their team. Mention these proactive employees in regular IT and security communications such as newsletters.
Integrate findings into your security management processes. The conclusions and suggestions from the review board should be prioritized and incorporated into the company’s cyber-resilience plan. Some findings may simply influence risk assessments, while others could directly lead to changes in company policies, or implementation of new technical security controls or reconfiguration of existing ones.
Use mistakes as learning opportunities. Your security awareness program will be more effective if it uses real-life examples from your own organization. You don’t need to name specific individuals, but you can mention teams and systems, and describe attack scenarios.
Measure performance. To ensure this process is working and delivering results, you need to use information security metrics as well as HR and communications KPIs. Track the mean time to resolution (MTTR) for identified issues, the percentage of issues discovered through employee reports, employee satisfaction levels, the number and nature of security issues identified, and the number of employees engaged in suggesting improvements.
Important exceptions
A security culture or blameless culture doesn’t mean that no one is ever held accountable. Aviation safety documents on non-punitive reporting, for example, include crucial exceptions. Protection doesn’t apply when someone knowingly and maliciously deviates from the regulations. This exception prevents an insider who has leaked data to competitors from enjoying complete impunity after confessing.
The second exception is when national or industry regulations require individual employees to be held personally accountable for incidents and violations. Even with this kind of regulation, it’s vital to maintain balance. The focus should remain on improving processes and preventing future incidents — not on finding who’s to blame. You can still build a culture of trust if investigations are objective and accountability is only applied where it’s truly necessary and justified.
How to implement a blameless approach to cybersecurity | Kaspersky official blog
For a high-level overview of this research, you can refer to our Vulnerability Spotlight. This is the in-depth version that shares many more technical details. In this post, we’ll be covering the entire research process as well as providing technical explanations of the exploits behind the attack scenarios.
Dell ControlVault is “a hardware-based security solution that provides a secure bank that stores your passwords, biometric templates, and security codes within the firmware.” A daughter board provides this functionality and performs these security features in firmware. Dell refers to the daughter board as a Unified Security Hub (USH), as it is used as a hub to run ControlVault (CV), connecting various security peripherals such as a fingerprint reader, smart card reader and NFC reader.
Why target ControlVault3?
Hindsight is 20/20, and in retrospect there are plenty of valid reasons to look at it:
There is no public research on this device.
It is used for security and enhanced logins, and is thus involved in sensitive functions.
It is found in countless Dell laptops and, in particular, places that seek this extra layer of security (e.g., finance, healthcare, government, etc.) are more likely to have it in their environment.
But what really kickstarted this research project was spotting this target that seemed “promising.” What first caught our attention is that most of the Windows services involved with ControlVault3 are not Address Space Layout Randomization (ASLR)-enabled. This means easier exploitation, and possible technical debt in the codebase. Further, the setup bundle comes with multiple drivers and what appears to be a mix of clear text and encrypted firmware. This makes for an exciting challenge that calls for further investigation.
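The ASLR observation is easy to reproduce: a PE image opts in to ASLR via the DYNAMIC_BASE bit of DllCharacteristics in its optional header. Below is a minimal stdlib-only checker, a sketch we wrote for illustration, not tooling from the original research:

```python
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in flag

def has_aslr(pe_bytes: bytes) -> bool:
    """Return True if the PE image opts in to ASLR (DYNAMIC_BASE set)."""
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE file")
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature.
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    if pe_bytes[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("bad PE signature")
    # The optional header starts after the 4-byte signature and the
    # 20-byte COFF header; DllCharacteristics sits at offset 70 in
    # both PE32 and PE32+ layouts.
    opt = e_lfanew + 4 + 20
    (dll_chars,) = struct.unpack_from("<H", pe_bytes, opt + 70)
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
```

Running this over the ControlVault service binaries on disk is how one would confirm the missing DYNAMIC_BASE bit.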
Making a plan
When starting a vulnerability research project, it is good to have some ideas of what we’re trying to achieve. Let’s make a plan that will act as our North Star and guide our steps along the way:
The main application is encrypted, and we want to see what this firmware hides. One of our first tasks should be to find a way to decrypt the application firmware.
This is a vulnerability research project and, as such, we need to understand how to interact with ControlVault, understand its attack surface, and look for vulnerabilities.
The Windows services run without ASLR and have SYSTEM privileges. Those could be standalone targets for local escalation of privilege (EoP) and/or may have interesting exploitation paths.
Gathering information
Information gathering occurred throughout the project. However, to clarify this discussion, we’ll now summarize some of the early findings.
ControlVault is made by Broadcom and leverages their 5820X chip series. Technically, we are only talking about ControlVault3 (or ControlVault3+); the earlier ControlVault2 and ControlVault (1 being implied) used different hardware. The first mentions of ControlVault date back to 2009-2011.
Online research for the BCM5820X chip series yields minimal results, with this NIST certification being the only notable finding. This document clarifies the security posture of the chip and gives some insight into the operations of its cryptographic module.
Other useful resources are forum posts where power users discuss ControlVault, particularly getting it to work on Linux. One post eventually led to a repository providing official (but limited) Linux support. It is worth noting that one of the shared objects in this repository, “libfprint-2-tod-1-broadcom.so”, ships with debug symbols. This can help when reversing the ControlVault ecosystem.
Finally, for a physical representation, the USH board that connects to the laptop and runs the ControlVault firmware is shown below:
Figure 1: Picture of a USH Board running ControlVault.
When connected inside the laptop, it looks like this (battery removed to show the board):
Figure 2: USH board (highlighted in orange) inside a Dell Latitude laptop.
Interesting files in ControlVault3 bundle
ControlVault comes with a lot of files. We cannot look at all of them at once, but there are a few that stick out, mainly the “bin” and “firmware” folders. The former contains the main services used to communicate with ControlVault and the associated shared objects, while the latter is used to push data to the device.
Figure 3: Bin and firmware folders from the ControlVault3 installer.
The firmware folder is also particularly interesting as it contains what we can presume is the code running on the ControlVault device. If we look at the content of these files by running the “strings” command or by opening them in a hex editor, we find that the ones with “SBI” in their names are in plaintext, while the ones named “bcmCitadelXXX” appear to be either compressed or encrypted. From the information we gathered earlier, we know that “SBI” stands for “Secure Boot Image” and is part of the early stage of the device’s boot process; we can then guess the “bcmCitadelXXX” files are the main application firmware that gets started by the SBI.
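A quick way to back up the plaintext-versus-encrypted guess is to measure byte entropy: encrypted or well-compressed data sits near 8 bits per byte, while code and strings score noticeably lower. A small sketch (our own helper, not part of the ControlVault tooling):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: close to 8.0 for encrypted or
    compressed data, noticeably lower for code and text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Applied to the firmware folder, this would be expected to separate the “SBI” files from the “bcmCitadelXXX” images, though the exact scores depend on the files.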
Reversing the bootloader
As the SBI files are in plaintext and we know from the Broadcom’s documentation that they are ARM code, we can have a look at one of them in our favorite disassembler/decompiler, which might help us figure out how to handle the application firmware itself.
Identifying the SBI load address
The usual first step is to identify the load address of this blob of data, which, in our case, is 0x2400CC00. The data starts with a 0x400-byte header, thus leading to a more reasonable 0x2400D000 base address for the actual start of the code.
To find this value, the trick is to first load the code at an arbitrary address and then look for absolute addresses (e.g., pointers to strings, addresses of functions, etc.) and play the guessing game while rebasing the firmware until everything lines up. The SBI firmware includes a lot of strings, so it’s fairly easy to spot when they are referenced properly. Alternatively, function pointers can also be useful and, conveniently, some can be found close to the start of the code, as an ARM vector table is placed there. This gives away the load address.
Figure 4: Vector table and beginning of the code inside the SBI.
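The rebasing guesswork described above can be partly automated: for each candidate base, count how many aligned 32-bit words in the blob, read as absolute little-endian addresses, land exactly on the start of a string. The highest-scoring candidate is a strong hint. A simplified sketch (our own heuristic; details such as the minimum string length are arbitrary):

```python
import struct

def string_starts(blob: bytes, min_len: int = 6) -> set:
    """Offsets where a printable, NUL-terminated ASCII run begins."""
    starts, run = set(), None
    for i, b in enumerate(blob):
        if 0x20 <= b < 0x7F:
            if run is None:
                run = i
        else:
            if run is not None and b == 0 and i - run >= min_len:
                starts.add(run)
            run = None
    return starts

def score_base(blob: bytes, base: int) -> int:
    """Count aligned words that, under this base, point at a string start."""
    targets = {base + off for off in string_starts(blob)}
    hits = 0
    for off in range(0, len(blob) - 3, 4):
        (word,) = struct.unpack_from("<I", blob, off)
        if word in targets:
            hits += 1
    return hits

def guess_base(blob: bytes, candidates) -> int:
    return max(candidates, key=lambda b: score_base(blob, b))
```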
Determining the software architecture
Here, we need to make a choice of what to focus on first. We can either try to map out the general architecture of the SBI and understand how it works or instead keep our eyes on the ball and look for how the application firmware is being decrypted. In practice, we did the latter, but let’s provide a few spoilers to make this easier to follow.
Functions and parameters names
The firmware relies heavily on logging, which can leak function names, variables and some details about the logic of the code itself.
The firmware appears to be running a custom real-time operating system (RTOS) called Citadel RTOS. We can also find debug strings referring to OpenRTOS, which was likely used as a base for Citadel RTOS.
And as mentioned previously, the Linux implementation comes with debug symbols for the host API, which provides lots of data structures and enum values used by ControlVault.
Communication with the firmware
Before going too far into reversing the SBI, let’s have a high-level overview of how communication occurs between host (Windows) and firmware.
Essentially, the USH board is connected to the laptop’s motherboard and appears as a USB device in the device manager. A driver, “cvusbdrv.sys”, creates a device file that can be opened from userland. Various DeviceIoControl commands can be used to manage and communicate with the device:
{
IOCTL_SUBMIT = 0x5500E004, // sends CV Command
IOCTL_RESULT = 0x5500E008, // result from CV Command
IOCTL_HSREQ = 0x5500E00C, // Host Storage Request, used by bcmHostStorageService
IOCTL_HCREQ = 0x5500E01C, // Host Control Request, used by bcmHostControlService
IOCTL_FPREQ = 0x5500E024, // Fingerprint Request
IOCTL_CACHE_VER = 0x5500E028, // Returned cached version string
IOCTL_CLREQ = 0x5500E030, // Contactless Request (NFC)
};
Communicating with the driver can be made easier by using userland APIs. In particular, the “bcmbipdll.dll” file implements more than 160 high-level functions that can be used to send specific commands to the firmware. These functions are prefixed with “cv_” (e.g., “cv_open”, “cv_close”, “cv_create_object”, etc.) and are referenced as “CV Commands”. Behind the scenes, when invoking one of these commands, IOCTL_SUBMIT / IOCTL_RESULT is issued, and the relevant data is sent over USB to the firmware.
Upon receiving data from the USB endpoints, the firmware will process the data packets and route them to dedicated code paths. For CV commands, the data is passed to a function called “CvManager” / “CvManager_SBI” that dispatches the command to the function implementing it.
Example: Manual communication with ControlVault
A simple Python script can be used to load “bcmbipdll.dll” and invoke its functions.
For instance, the following will retrieve the version string of the firmware:
Figure 5: Python snippet to retrieve CV’s version string.
The return value:
Figure 6: Version string obtained from cv_get_ush_ver.
As a reminder, the Linux implementation of the host APIs (libfprint-2-tod1-broadcom/usr/lib/x86_64-linux-gnu/libfprint-2/tod-1/libfprint-2-tod-1-broadcom.so) comes with debug symbols and thus can be used to identify the various structures and parameters involved in the invocation of each CV command.
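Since only a screenshot of the snippet exists in the original post, here is a reconstruction of what such a script can look like. The prototype of “cv_get_ush_ver” shown is our assumption for illustration (the real structures can be recovered from the Linux debug symbols), and the DLL call itself will only succeed on a machine with the ControlVault stack installed:

```python
import ctypes

def decode_version(buf: bytes) -> str:
    """Strip everything from the first NUL onward in the raw buffer."""
    return buf.split(b"\x00", 1)[0].decode("ascii", errors="replace")

def get_ush_version(dll_path: str = "bcmbipdll.dll") -> str:
    # Assumed prototype: cv_get_ush_ver(char *out, int out_len) -> int status.
    # This is a guess for illustration; check the Linux debug symbols for
    # the real structures and calling convention.
    bip = ctypes.WinDLL(dll_path)
    out = ctypes.create_string_buffer(256)
    status = bip.cv_get_ush_ver(out, ctypes.c_int(len(out)))
    if status != 0:
        raise RuntimeError(f"cv_get_ush_ver failed: {status:#x}")
    return decode_version(out.raw)
```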
We will revisit the communication mechanism in the “Exploiting a SYSTEM service” section, but for now, we can return to our original goal of figuring out how to decrypt the application firmware.
Finding the firmware decryption mechanism
We can search the strings inside the SBI to see if anything mentions decryption:
Figure 7: Strings from the SBI firmware mentioning decryption.
As seen in the screenshot above, the USH_UPGRADE functionality mentions decryption failures. And indeed, this functionality is related to application firmware decryption. The USH_UPGRADE functionality is implemented by three CV commands:
CV_CMD_FW_UPGRADE_START
CV_CMD_FW_UPGRADE_UPDATE
CV_CMD_FW_UPGRADE_COMPLETE
Those commands are issued by the “cv_firmware_upgrade” function in “bcmbipdll.dll”.
The firmware update process is a little convoluted:
The host will first flash a file called “bcm_cv_clear_scd.otp” solely composed of “0123456789abcdef” repeated many times. For that, it will use the “cv_flash_update” function.
The host will call “cv_reboot_to_sbi” to restart in SBI mode.
The host will send the CV_CMD_FW_UPGRADE_START command handled in the SBI by “ushFieldUpgradeStart”:
The SBI will try to load from flash something called a Secure Code Descriptor (SCD) that contains key material (e.g., decryption key, IV, and RSA public key) but will revert to hardcoded defaults if no SCD is available. This is what got flashed/erased during step 1.
Figure 8: Using hardcoded defaults during ushFieldUpgradeStart.
Figure 9: Calling the decryption function.
The host will send the first 0x2B0 bytes of the encrypted application firmware. This is an encrypted header defining the parameters of the soon-to-be installed firmware.
The SBI will try to decrypt (AES-CBC), validate, and cryptographically verify the header using key material from the SCD or the hardcoded defaults.
Upon success, the SBI will generate new key material to be stored in a different section of the SCD and used to store the firmware in an encrypted form. This is because the SoC used by ControlVault can execute in place (XIP) encrypted code thanks to its Secure Memory Access Unit (SMAU).
Then, the host will send the rest of the firmware split into chunks of 0x100 bytes via the CV_CMD_FW_UPGRADE_UPDATE command handled in the SBI by the “ushFieldUpgradeUpdate” function.
The firmware chunks are decrypted using the same method, but instead of using a default IV, the code relies on a custom function from the SMAU device to generate an IV based upon the address of the memory block being decrypted. Note: The base address of this application firmware can be guessed from reversing and is 0x63030000.
Figure 10: Computation of the address-based IV.
A rolling hash (SHA256) of the decrypted blocks is kept for further validation.
When done sending the encrypted firmware, the host will send the CV_CMD_FW_UPGRADE_COMPLETE command handled in the SBI by the “ushFieldUpgradeComplete” function.
The SBI will verify the signature of the received firmware based upon the already verified header and the rolling hash that was computed while decrypting firmware pages.
Upon success, the new SCD will be encrypted and committed to flash using a per-device AES key stored in the chip’s OTP fuses.
Luckily, the hardcoded keys in the “bcmsbiCitadelA0_1.otp” file are the ones that were used to encrypt the application firmware, and by re-implementing the algorithm described above, we can successfully decrypt the application firmware and move on to our second objective: looking for vulnerabilities.
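Putting the update-side pieces together, the chunked decryption with its rolling hash can be sketched as below. This is a simplified model, not the re-implementation used in the research: “decrypt_block” stands in for AES-CBC keyed from the SCD material, and deriving the IV as a hash of the block address is purely illustrative of “the IV depends on the address”, not the SMAU’s real derivation:

```python
import hashlib

APP_BASE = 0x63030000   # guessed application firmware load address
CHUNK = 0x100           # chunk size used by CV_CMD_FW_UPGRADE_UPDATE

def address_iv(addr: int) -> bytes:
    # Placeholder for the SMAU's IV-from-address function; the real
    # derivation is hardware-specific and not modeled here.
    return hashlib.sha256(addr.to_bytes(4, "little")).digest()[:16]

def decrypt_block(chunk: bytes, iv: bytes) -> bytes:
    # Stand-in for AES-CBC decryption; XOR keeps the sketch self-contained.
    return bytes(c ^ iv[i % len(iv)] for i, c in enumerate(chunk))

def process_update(encrypted: bytes):
    """Decrypt chunk by chunk, keeping a rolling SHA-256 over the
    plaintext that ushFieldUpgradeComplete later checks against the
    signed header."""
    rolling = hashlib.sha256()
    plain = bytearray()
    for off in range(0, len(encrypted), CHUNK):
        chunk = encrypted[off:off + CHUNK]
        decrypted = decrypt_block(chunk, address_iv(APP_BASE + off))
        rolling.update(decrypted)
        plain += decrypted
    return bytes(plain), rolling.digest()
```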
Attack surface mapping and vulnerability research
With a freshly decrypted firmware image, it’s easy to jump the gun and start reversing everything, but before we get into the deep end, we should stop and strategize. So, let’s have a look at the architecture of the system and the potential areas of interest:
Figure 11: System architecture.
There are a few angles we can consider:
From the host, could we send malicious data to corrupt the application firmware or the SBI code?
Could we tamper with the firmware itself and make it misbehave?
Could a malicious firmware image compromise the host?
What about the hardware peripherals? Could they be compromised or used to compromise the firmware?
The research we’ve conducted explores the first three questions. The fourth one is a potential future research project. Answering the first question will help achieve the next two, so let’s start with this first one.
Finding vulnerabilities in the application firmware
The application firmware accepts more than 150 CV commands. This is a massive attack surface, and there is a lot to look at. Most of these commands expect a “session” to be already established using the “cv_open” command. When the interaction is over, the “cv_close” function is used to terminate the session. Let’s look at how these two operate.
cv_open and cv_close
The prototype of “cv_open” is as such:
int __fastcall cv_open(cv_session_flags a1, cv_app_id *appId, cv_user_id *userId, int *pHandle)
Its implementation is below:
Figure 12: Call to cv_open.
We can see that memory is allocated (line 29), then a tag “SeSs” is written (line 36) as the first four bytes of the session object. After some more processing, the pointer to the session is returned as a handle (line 44) back to the host. The choice of using a pointer as a handle is already a little questionable as it leaks a heap address to the host, but let’s continue.
The prototype for “cv_close” is as follows:
int __fastcall cv_close(cv_session *)
The function takes the pointer to the session we’ve obtained from “cv_open” and attempts to close it by doing the following:
Validate the session (see below)
Erase the “SeSs” tag
Free the memory
Figure 13: Implementation of cv_close.
Meanwhile, the “validate_session” function will:
Verify the pointer provided is within the CV_HEAP
Verify the first 4 bytes match the “SeSs” tag
Extra checks irrelevant for us
Figure 14: Session validation.
This is particularly interesting because, assuming one can place some arbitrary data on the heap, it then becomes easy to forge a fake session and free it, corrupting heap memory in the process. This issue was reported as CVE-2025-25215.
As expected, it is indeed possible to place attacker-controlled data on the heap using functions like “cv_create_object” or “cv_set_object”. Locating said data is a little trickier, as the handles returned by “cv_create_object” are random rather than heap addresses. However, it is possible to create a “session oracle” to help locate real and forged sessions alike. To do so, we can leverage one of the many CV functions that require a session handle but will return a unique error code if the session is invalid. For instance, “cv_get_random” can be used as such:
Figure 15: Implementation of cv_get_random.
If the session fails the “validate_session” check, “cv_get_random” will return CV_INVALID_HANDLE, otherwise it will either return CV_SUCCESS or CV_INVALID_OUTPUT_PARAMETER_LENGTH. This gives a way to identify valid-looking sessions without any side effect.
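The check and the resulting oracle are easy to model. The toy reimplementation below is ours, not firmware code: the heap is a flat buffer, handles are absolute addresses, and anything inside the heap range carrying the “SeSs” tag passes validation, so a brute-force sweep finds real and forged sessions alike:

```python
HEAP_BASE, HEAP_SIZE = 0x24100000, 0x10000  # illustrative heap range
SESSION_TAG = b"SeSs"

heap = bytearray(HEAP_SIZE)

def validate_session(handle: int) -> bool:
    """Mirror of the firmware check: pointer inside CV_HEAP + tag match."""
    off = handle - HEAP_BASE
    return 0 <= off <= HEAP_SIZE - 4 and heap[off:off + 4] == SESSION_TAG

def cv_get_random_oracle(handle: int) -> str:
    # A unique error code for a bad handle makes this a
    # side-effect-free oracle.
    return "CV_SUCCESS" if validate_session(handle) else "CV_INVALID_HANDLE"

def find_sessions() -> list:
    """Sweep candidate handles the way a host-side brute force would."""
    return [HEAP_BASE + off for off in range(0, HEAP_SIZE - 3, 4)
            if cv_get_random_oracle(HEAP_BASE + off) == "CV_SUCCESS"]
```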
Debug strings in the application firmware indicate that the heap implementation used is “heap_4.c” from OpenRTOS. At this point, we could use standard heap exploitation techniques to try and corrupt memory, but instead, we chose to look for more vulnerabilities that may be easier to exploit.
securebio_identify
This function is one of the few that does not have “cv” in its name but is called via “cv_identify_feature_set_authenticated”. It is part of the implementation of the WinBio framework used by Windows Hello during fingerprint handling.
The function expects a handle to an object, retrieves it, and copies some of its content:
Figure 16: ”securebio_identify” retrieves and copies object content.
The data is copied from one of the object’s properties into the “data2” stack buffer. To copy the data, “memcpy” uses the property’s size, assuming it will fit inside the “data2” buffer. This is a dangerous assumption: if this property were larger than expected, it would lead to a stack overflow.
In practice, objects allocated with “cv_create_object” cannot be used this way as there are checks in place to limit the size of this property. However, because we can corrupt heap data, it is possible to forge a malicious object that will trigger the bug. Alternatively, there might be other legitimate avenues to load a malicious object. For instance, “cv_import_object” is a good candidate. Due to the complexity of the function, we focused on the heap corruption approach instead. Regardless, this bug was reported as CVE-2025-24922.
The general approach to exploiting “securebio_identify” is as follows:
Create a large object on the heap containing fake heap metadata followed by the “SeSs” tag.
Locate the fake session and free it using “cv_close”. This will mark a chunk of heap memory as freed even though it is still being used by the large object we’ve created.
Allocate a smaller object that will end up being allocated inside the hole we’ve poked inside the large object from step 1.
Use “cv_set_object” to modify the data of the large object and thus corrupt the fields of the small object.
Use the corrupted small object to trigger the stack overflow inside “securebio_identify”. Because the firmware doesn’t have ASLR, it’s easy to find gadgets to fully exploit the function and gain arbitrary code execution inside the firmware.
Optional: Use the large object as an output buffer to store data produced by our exploit and retrieve its content from the host.
Figure 17. Overlapping two objects to exploit securebio_identify.
An example of this attack will be used in the next section.
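The overlap at the heart of steps 1-4 can be replayed against a toy first-fit allocator. This is a deliberately simplified model of our own, not heap_4.c: there is no real chunk metadata or coalescing, and the point is only to show how freeing a forged chunk lets a later allocation land inside a live object:

```python
class ToyHeap:
    """Minimal first-fit allocator: a free list of (offset, size) holes."""
    def __init__(self, size: int):
        self.mem = bytearray(size)
        self.free_list = [(0, size)]

    def alloc(self, size: int) -> int:
        for i, (off, sz) in enumerate(self.free_list):
            if sz >= size:
                self.free_list[i] = (off + size, sz - size)
                return off
        raise MemoryError

    def free_chunk(self, off: int, size: int):
        # Like the firmware, the "metadata" (here, the size) is trusted,
        # so a forged chunk frees just fine.
        self.free_list.insert(0, (off, size))

heap = ToyHeap(0x1000)
# 1. A large object whose body contains a fake chunk + the "SeSs" tag.
large = heap.alloc(0x200)
fake_off, fake_size = large + 0x80, 0x40
heap.mem[fake_off:fake_off + 4] = b"SeSs"
# 2. cv_close on the forged session frees memory still owned by `large`.
heap.free_chunk(fake_off, fake_size)
# 3. A small object lands inside the hole poked into the large object.
small = heap.alloc(0x20)
# 4. Writing the large object's data (cv_set_object) now corrupts the
#    small object's fields.
heap.mem[small:small + 4] = b"\xef\xbe\xad\xde"
```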
More firmware vulnerabilities
While looking at the application firmware, we also found an out-of-bounds read in “cv_send_blockdata” and an out-of-bounds write in “cv_upgrade_sensor_firmware”. Those were reported as CVE-2025-24311 and CVE-2025-25050, respectively. We did not use these vulnerabilities for further exploitation.
Arbitrary code execution in the application firmware: What’s next?
If we circle back to our list of goals, now that we have gained code execution in the firmware, we can try to attack the host from this vantage point. To have a stronger and more meaningful case, it would be more interesting to first find a way to permanently modify the firmware. So, let’s do that!
Figure 18: Secure boot process.
This diagram showcases the ControlVault boot process:
The Bootrom verifies the SBIs.
The SBI retrieves keys from the OTP memory to decrypt the SCD.
From the decrypted SCD, the SBI loads the required key material and sets up the SMAU to execute the encrypted firmware in place.
The application firmware is executed.
Surprisingly, at boot time, there’s no cryptographic verification of the application firmware. Cryptographic verification only happens during the firmware update process. The security of the application firmware mostly relies on the security of the OTP keys and the key material stored in the SCD. But now that we have code execution on the firmware, can we leak this key material?
sotp_read_key
“sotp_read_key” is an internal (i.e., non-CV) function that can be used to read key material from the OTP memory of the Broadcom chip. In particular, it is possible to retrieve the AES and HMAC keys that are used to encrypt and authenticate the SCD:
Figure 19: Demo of dumping OTP keys.
By obtaining the device OTP keys, it becomes possible to decrypt its SCD blob and/or forge a new one. This is particularly interesting as we can write an arbitrary SCD blob to flash using the “cv_flash_update” function.
We can create our own RSA public/private keypair and replace the SCD’s public key with the one we’ve just created. Upon firmware update, the new RSA public key will be used for firmware verification. This way, we can modify a firmware file and install it on our device.
To confirm the process works, we modify a firmware to make it send an arbitrary message when Windows requests its USB descriptor:
Figure 20: Malicious USB descriptor returned by a tampered ControlVault.
Firmware modification
Patching “cv_fingerprint_identify”
With the ability to tamper with the firmware, a new attack vector gets unlocked: we can now modify the behavior of certain functions. In particular, “cv_fingerprint_identify” is used by Windows Hello when a user tries to log in with their fingerprint. The host sends a list of handles to check whether any of the CV-stored fingerprint templates match the fingerprint currently touching the reader. This pseudo matching-in-sensor is done to avoid storing fingerprint templates on the host itself, as that could lead to privacy concerns. This creates an interesting opportunity: what if “cv_fingerprint_identify” were to always return true, thus making Windows Hello accept any fingerprint?
Figure 21: Demo of bypassing Windows Hello.
Exploiting a SYSTEM service
Another consequence of being able to modify the firmware running on our device is that we can now explore whether a malicious ControlVault device can compromise its host.
Primer on host-FW communication
Let’s consider what happens when calling one of the CV Commands, for example “cv_get_random”:
Figure 22: Calling cv_get_random.
The “InitParam_List” function is called to populate two separate arrays of objects: “out_param_list_entry” and “in_param_list_entry”. The former is used to specify the arguments going to the firmware while the latter is used to prepare for the return values expected from the command.
The first parameter of “InitParam_List” is the type of data encapsulation:
Depending on the encapsulation type, the parameters will be encapsulated/decapsulated slightly differently:
- STRUC will result in a regular buffer being decapsulated.
- LENVAL_STRUC will result in a length-prefixed buffer (i.e., the first four bytes are the size of the data, followed by the actual data).
- LENVAL_PAIR will be decapsulated as two separate parameters (size and buffer).
- INOUT_LENVAL_PAIR will be initialized without data but will be decapsulated as two parameters, like LENVAL_PAIR.
“cvhManagerCVAPICall” is called to perform the command and retrieve its result.
From a high-level perspective, when this function is called, the data we are sending gets serialized in the appropriate format and an IOCTL_SUBMIT call is issued; the data is eventually sent to the firmware. Once the execution of the command is complete, the returned data is deserialized and populated into the “in_param_list_entry” array that was prepared in the previous step. Finally, the function “cvhSaveReturnValues” parses the “in_param_list_entry” array and extracts these values into a caller-provided array of objects.
For instance, in the screenshot above (figure 22), there is one parameter in “in_param_list_entry” and its type is CV_ENCAP_INOUT_LENVAL_PAIR. As such, upon calling “cvhSaveReturnValues”, two parameters will be produced: the first being the size of the data returned by “cv_get_random”, and the second being the actual data.
On the firmware side, when handling the CV commands, the return values are re-defined, which is surprising:
Figure 23: CvManager handling the cv_get_random command (firmware-side).
It turns out that the way this data is processed leads to an unsafe deserialization. We cover the root-cause analysis of this issue in CVE-2025-24919. In short, the firmware-side redefinition of the firmware-to-host parameters can lead to invalid decapsulation of the data on the host. For instance, if a malicious firmware image were to change the return type of “cv_get_random” to CV_ENCAP_STRUC instead of CV_ENCAP_INOUT_LENVAL_PAIR, the “pLen” argument that is meant to receive the size of the produced data would instead be filled with the data itself. In figure 22, the “pLen” variable is a stack variable meant to receive a size value as an integer; any data larger than four bytes would thus overflow the stack, possibly leading to arbitrary code execution.
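To make the decapsulation mismatch concrete, here is a hypothetical Python sketch (the function names and framing are ours, not Broadcom's): a host that copies a STRUC payload into a four-byte length slot overruns that slot when the firmware lies about the encapsulation type.

```python
import struct

HOST_LEN_SLOT = 4  # pLen is a 4-byte stack variable on the host

def host_decapsulate(encap_type: str, payload: bytes) -> dict:
    """Toy model of host-side decapsulation (illustrative only)."""
    if encap_type == "LENVAL_PAIR":
        # Honest path: first 4 bytes are the size, rest is the data.
        size = struct.unpack("<I", payload[:4])[0]
        return {"pLen": payload[:4], "data": payload[4:4 + size], "overflow": 0}
    if encap_type == "STRUC":
        # Buggy path: the whole buffer is copied into the 4-byte pLen slot.
        return {"pLen": payload[:HOST_LEN_SLOT],
                "data": b"",
                "overflow": max(0, len(payload) - HOST_LEN_SLOT)}
    raise ValueError("unknown encapsulation type")

# Honest firmware: LENVAL_PAIR with 8 bytes of random data.
ok = host_decapsulate("LENVAL_PAIR", struct.pack("<I", 8) + b"A" * 8)
assert ok["overflow"] == 0

# Malicious firmware: re-declares the return as STRUC and ships 64 bytes;
# everything past the 4-byte slot would smash the host stack.
evil = host_decapsulate("STRUC", b"B" * 64)
print(evil["overflow"])  # → 60 bytes spilling past pLen
```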
Exploitation constraints
The “bcmbipdll.dll” file and some of the ControlVault services are not ASLR-enabled, which makes exploitation much easier: offsets can be hardcoded, removing the need for an information leak that a malicious ControlVault device would otherwise have to find. Data Execution Prevention (DEP) is in place, so a ROP chain is still necessary for further exploitation. Surprisingly, another common mitigation is only partially implemented; stack canaries are only occasionally present in the ControlVault services and DLLs. For instance, in the case of “cv_get_random”, even though “pLen” is a stack variable, no stack cookie protects this function. This leads to the side quest of identifying CV commands that are easy to exploit but also used in a high-privilege context.
In practice, we have these constraints for our ideal CV command to target:
It needs to be used (directly or inside a call-chain) by a high-privilege service.
One of the variables fed to the CV Command needs to be a stack variable that can be corrupted using the bug reported in CVE-2025-24919 (e.g., like the “pLen” variable in “cv_get_random”).
There must be no stack cookie between the to-be-corrupted stack variable and the return address targeted by the stack overflow.
Finding what to target
The “cv_get_random” function would be an ideal candidate, but unfortunately it’s hard to find code that reliably uses this function.
After investigating most of the CV commands, we found the following:
The first argument to this function, “cvHandle”, is a handle to an object. It is passed to “CSS_GetObject”, which will populate the stack variable “objHeader” with the header of the object tied to this handle. Down the call-stack, “cv_get_object” is called with both the “cvHandle” and the “objHeader” variables. Due to these functions’ stack layout, it is possible to leverage CVE-2025-24919 to corrupt the “objHeader” variable and trigger a stack overflow in its parent function.
Exploitation details
The “WBFUSH_ExistCVObject” function is used by the “BCMStorageAdapter.dll” to verify if an object handle is tied to a real object stored in the ControlVault firmware. Meanwhile, “BCMStorageAdapter” is part of Broadcom’s implementation of the adapters required to interface with the Windows Biometric Framework (WBF). These adapters are necessary to interface a fingerprint reader with WBF to be used with Windows Hello (fingerprint login) or other biometric-enabled scenarios. Here is the call stack to reach the vulnerable function:
The “StorageAdapterControlUnit” function can be reached by a regular user opening the proper adapter with “WinBioOpenSession” and then issuing a “WinBioControlUnit” command with the “WINBIO_COMPONENT_STORAGE” component.
Figure 25: WinBioControlUnit prototype.
The “ControlCode” parameter specifies which adapter’s function to invoke.
Figure 26: StorageAdapterControlUnit with ControlCode=2.
By reversing “BCMStorageAdapter!StorageAdapterControlUnit”, we find that using “ControlCode=2” leads to calling “WBFUSH_ExistsCVObject” with a caller-provided handle. Specifically, the first four bytes of the “SendBuffer” argument passed to “WinBioControlUnit” are cast into the expected object handle.
With this in mind, the exploitation process is as follows:
Achieve code execution on the firmware to leak the device keys and gain the ability to forge a firmware file that will be accepted by this specific device.
Forge a malicious firmware update with a modified “cv_get_object” function.
The “cv_get_object” function will be backdoored: if the object handle matches a specific magic value (e.g., 0x1337) it will return the stack-overflow payload and tamper with the encapsulation parameters to trigger CVE-2025-24919. If the handle is not 0x1337, “cv_get_object” will execute normally to avoid unintended side-effects from the backdoor.
The stack-overflow payload will be a ROP chain that will eventually lead to the execution of a reverse-shell.
Install the malicious firmware update.
Invoke the “WinBioControlUnit” function with “ControlCode=2” and b"\x37\x13\x00\x00" as the “SendBuffer” (the little-endian representation of 0x1337 as a DWORD).
Connect to the reverse shell and observe having obtained SYSTEM privileges.
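The little-endian encoding of the magic handle passed to “WinBioControlUnit” can be reproduced with a one-liner (a sketch; the surrounding WinBio call itself is Windows-only and not shown):

```python
import struct

# 0x1337 as a little-endian DWORD, as expected in the first four
# bytes of the SendBuffer passed to WinBioControlUnit.
send_buffer = struct.pack("<I", 0x1337)
print(send_buffer)  # → b'7\x13\x00\x00' (0x37 is the printable '7')
```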
Figure 27: Demo SYSTEM service exploit.
Going further
Implant
The process described above could be seen as one of the most convoluted ways to perform an elevation of privileges to SYSTEM on Windows. However, this should be considered in context. There are other services and functions that could be used instead. In our example here, we picked functions that were easy to build a demo with. In practice, other functions could be leveraged so that no user interaction would be required to trigger the vulnerabilities. This would make sense for a standalone implant that could lay dormant and trigger from time to time in order to call home. Development of a weaponized implant is of course beyond the scope of our research.
Physical attacks
Another promising angle that we have yet to mention is physical access. The USH board is an internal USB device. It is possible to open the laptop that contains the board and connect to it directly, given the proper connector. There are mitigations against physical access (e.g., chassis intrusion alerts), but those are generally opt-in. As such, an attacker with 10-20 minutes of physical access could perform the same attacks described in this deep dive, but without any of the other requirements (e.g., no need to be able to log in as a user, and disk encryption would not protect against this).
The following video is a short demo of the feasibility of connecting directly to a USH board over USB.
Figure 28: Demo physical attack.
Please note that in the video above, a ControlVault device is already present but disabled, because the machine used already had a ControlVault device built in; the relevant driver/DLL were also already installed. Upon connecting the USB cable, a new ControlVault device pops up, and this is the one being interacted with.
Impact
Attack scenario
The risks we’ve explored in this article can be summarized in the following diagram:
Figure 29: Attack scenarios.
The ability to modify the firmware running on one of the USH boards can be used by a local attacker to either gain privileges, bypass fingerprint login and/or compromise Windows. A threat actor could also leverage this in a post-compromise situation. If a user’s workstation is compromised, one could tamper with the ControlVault firmware running on their machine to act as an implant that could remain present even after a full system reinstall.
Detection
Detecting a compromised ControlVault device can be tricky. An implant could ignore new firmware updates. This is why verifying that a legitimate firmware update can be successfully installed and then returns the expected version string is a good first check to perform.
This can be done with the Python code provided at the beginning of this article (figure 5). Alternatively, you can look at the properties of the ControlVault device in Device Manager: the “Versioning” panel shows the ControlVault firmware version as reported by the device.
Local exploitation of the ControlVault device can be detected by monitoring for unexpected processes loading “bcmbipdll.dll” or trying to open a handle to the ControlVault device itself. The path for the device may depend on the laptop model and its internal USB connections. The full path can be retrieved using “SetupDiGetClassDevsW” / “SetupDiEnumDeviceInterfaces” with the InterfaceGuid {79D2E5E9-8883-4E9D-91CB-A14D2B145A41}.
Finally, unexpected crashes in “WinBioSvc”, “bcmHostStorageService”, “bcmHostControlService” or “bcmUshUpgradeService” could be signs of something being amiss.
Conclusion
ControlVault is a surprisingly complex attack surface spanning the whole gamut from hardware to firmware and software, with multiple peripherals, frameworks, and drivers involved. It has a legacy codebase that can be traced back to the early 2010s, and various first-party software has interacted with it over the years. This deep dive has barely scratched the surface of ControlVault’s complexity, and yet we showed how far-reaching the consequences of a compromise can be. Perhaps most surprising is that our work appears to be the first public research on the subject. Firmware security isn’t a new topic, but still: how many other ControlVault-like devices are yet to be found and assessed for the unexpected risk they may bring?
ReVault! When your SoC turns against you… deep dive edition — 2025-08-09
If you’re an active cryptocurrency user but you’re still downloading torrent files and aren’t sure how to safely store your seed phrases, we’ve some bad news for you. We’ve discovered a new Trojan, Efimer, that replaces crypto wallet addresses right in your clipboard. One click is all it takes for your money to end up in a hacker’s wallet.
Here’s what you need to do to keep your crypto safe.
How Efimer spreads
One of Efimer’s main distribution channels is WordPress websites. It doesn’t help that WordPress is a free content-management system for websites — or that it’s the world’s most popular. Everyone from small-time bloggers and businesses to major media outlets and corporations uses it. Scammers exploit poorly secured sites and publish posts with infected torrent files.
This is what a hacked WordPress website infected with Efimer looks like
When a user downloads a torrent file from an infected site, they get a small folder that contains what looks like a movie file with the .xmpeg extension. You can’t open a file in that format without a “special media player”, which is conveniently included in the folder. In reality, the “player” is a Trojan installer.
The torrent folder with the malicious files inside
Recently, Efimer has also started spreading through phishing emails. Website and domain owners receive emails, purportedly from lawyers, falsely claiming copyright infringement and demanding content removal. The emails say all the details are in the attachment… which is actually where the Trojan is lurking. Even if you don’t own a website yourself, you can still receive spam email messages with Efimer attached: threat actors collect user email addresses from WordPress sites they’ve previously compromised. So, if you get an email like this, whatever you do, don’t open the attachment.
How Efimer steals your crypto
Once Efimer infects a device, one of its scripts adds itself to the Windows Defender exclusion list — provided the user has administrator privileges. The malware then installs a Tor client to communicate with its command-and-control server.
Efimer accesses the clipboard and searches for a seed phrase, which is a unique sequence of words that allows access to a crypto wallet. The Trojan saves this phrase and sends it to the attackers’ server. If it also finds a crypto wallet address in the clipboard, Efimer discreetly swaps it out for a fake one. To avoid raising suspicion, the fake address is often very similar to the original. The end result is that cryptocurrency is silently transferred to the cybercrooks.
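To illustrate the address-swapping technique described above (this is a simplified sketch of the general clipper pattern, not Efimer's actual code, and the attacker address is made up), a clipper only needs a regex match and a replace:

```python
import re

# Naive legacy (base58) Bitcoin-address pattern, for illustration only.
BTC_RE = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

ATTACKER_ADDR = "1AttackerFakeAddressXXXXXXXXXXXXXX"  # hypothetical

def clip_swap(clipboard_text: str) -> str:
    """What a clipper does: silently replace any wallet address it finds."""
    return BTC_RE.sub(ATTACKER_ADDR, clipboard_text)

victim = "send to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2 thanks"
swapped = clip_swap(victim)
assert ATTACKER_ADDR in swapped and "1BvBMSEY" not in swapped
```

A defensive takeaway from the same sketch: always re-read the pasted address and compare it character by character against the intended one before sending funds.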
Wallets containing Bitcoin, Ethereum, Monero, Tron, or Solana are primarily at risk, but owners of other cryptocurrencies shouldn’t let their guard down. The developers of Efimer regularly update the malware by adding new scripts and extending support for more crypto wallets. You can find out more about Efimer’s capabilities in our analysis on Securelist.
Who’s at risk?
The Trojan is attacking Windows users all over the world. Currently the malware is most active in Brazil, Russia, India, Spain, Germany, and Italy, but the scope of these attacks could easily expand to your country, if it’s not already on the list. Users of crypto wallets, owners of WordPress sites, and those who frequently download movies, games, and torrent files from the internet should be especially vigilant.
How to protect yourself from Efimer
The Efimer Trojan is a real jack-of-all-trades: it steals cryptocurrency, swaps crypto wallet addresses, and poses a serious threat to both individuals and organizations. It can use scripts to hack WordPress sites, and is able to spread on its own. However, in every case, a device can only be infected if the potential victim downloads and opens a malicious file themselves. This means that a little vigilance and a healthy dose of caution — ignoring files from suspicious sources at the very least — is your best defense against Efimer.
Here are our recommendations for home users:
Use a robust security solution that can scan files for malware and warn you against opening phishing links.
Create unique and strong passwords. And no, storing them in your notes app is not a good idea. Make sure you use a password manager.
Avoid downloading movies or games from unverified sites. Pirated content is often crawling with all kinds of Trojans. Even if you choose to take that risk, pay close attention to the file extensions. A regular video file definitely won’t have an .exe or .xmpeg extension.
Don’t store your seed phrases in plain text files. Trust a password manager. Read this article to learn more about how to protect your cryptocurrency assets.
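The extension check recommended above can be sketched in a few lines of Python (the allow-list is our own illustrative choice; extend it as needed):

```python
VIDEO_EXTS = {".mp4", ".mkv", ".avi", ".mov", ".webm"}

def looks_like_video(filename: str) -> bool:
    """True only if the final extension is a known video format."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in VIDEO_EXTS)

print(looks_like_video("movie.mkv"))      # → True
print(looks_like_video("movie.xmpeg"))    # → False
print(looks_like_video("movie.mp4.exe"))  # → False (double-extension trick)
```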
The Efimer Trojan steals cryptocurrency via malicious torrent files and WordPress websites | Kaspersky official blog — 2025-08-08
Welcome to this week’s edition of the Threat Source newsletter.
Vulnerabilities within software are a persistent challenge. Software engineers tend to make the same mistakes repeatedly, with the same entries appearing in the top 25 list of Common Weakness Enumerations year after year.
The truth is, writing software is difficult. Software engineering is a craft that demands concentration, knowledge, and time, coupled with extensive testing. Even the most skilled software engineer can get distracted or have a bad day, leading to a hidden vulnerability inadvertently making its way into a production codebase.
Identifying vulnerabilities early in the software development process is one of the promises of AI. The idea being that an AI agent would write perfect code under the direction of a software engineer or verify and correct code written by a human.
Last weekend, I decided to put this premise to the test. As a somewhat rusty software engineer, I resolved to see if AI could assist me with a personal software project. Initially, I was impressed: the AI agent offered an engaging discussion about high-level architecture and the trade-offs of various approaches. I was amazed at the lines of code that the AI generated on request. All the software for my project, written at the press of a button!
Then came the testing. Although the code looked convincing, it failed to interface with the required libraries: parameters were incorrect, and it tried to call fictional functions. It seemed that the way the AI imagined the libraries to work didn’t reflect reality or the available documentation. Similarly, there were fewer sanity checks and less verification of variable values than I was comfortable with, especially since many of these values were derived from external inputs.
To be fair, the AI code resolved a tricky threading issue that had defeated me, and the ‘boilerplate’ code necessary to form the skeleton structure of the software was flawless. I felt that I achieved a productivity boost from the AI’s exposure to ‘frequently encountered’ coding issues. However, when it came to more esoteric APIs with which I was moderately familiar, the AI was unable to generate functional code or correctly diagnose reported errors.
After some debugging and manual rewriting, I managed to create a working prototype. The code is clearly not bulletproof, but then again, I hadn’t explicitly asked for code that was secured against all potential hacks. Like many software engineers, my AI assistant and I focused on quickly delivering the desired functionality rather than considering the long-term operation of the code in a potentially hostile environment.
I remain optimistic that AI-assisted coding is a pathway to a future free of software vulnerabilities. However, my recent, limited personal experience leads me to think that we still have a considerable journey ahead before we can definitively resolve software vulnerabilities for good.
I hope you all have a tremendous time at Summer Camp, see a lot of old friends and make new ones and most importantly that you shower and use deodorant. Conference season is a marathon, it’s long, it’s arduous, it’s sweaty – be the hygienic change you want to see in the world.
The one big thing
Continuing the AI theme, Guilherme describes how AI LLM models can be used to assist in the reverse engineering of malware. Used correctly, LLMs can provide valuable insights and facilitate the analysis of malware.
Why do I care?
Reverse engineering malware is the often time-consuming task of identifying the execution path of malicious software. Frequently malware writers obfuscate their code to make it difficult to understand and follow what their code is doing. Advances in technology that can speed up this process make fighting malware easier.
So now what?
Investigate if the tools and approaches described in the blog can be used to improve your reverse engineering process, or as a means to begin learning about reverse engineering.
Top security headlines of the week
As ransomware gangs threaten physical harm, ‘I am afraid of what’s next,’ ex-negotiator says
In an effort to increase the pressure on victims, ransomware gangs are now using threats of physical violence. (The Register)
‘Shadow AI’ increases cost of data breaches, report finds
Unmanaged and unsecured use of AI is leading to data breaches. (Cybersecurity Dive)
Enough to drive a cybersecurity officer mad: one rule here, a different rule there
Chief information security officers call for less fragmentation in global cybersecurity regulations. (ASPI)
UK Online Safety Act promotes insecurity
The implementation of the UK Online Safety Act requiring age verification for content deemed harmful to children introduces some security quandaries. (Tech HQ)
Can’t get enough Talos?
Cyber Analyst Series: Cybersecurity Overview and the Role of the Cybersecurity Analyst
A series of videos on the profession of cybersecurity analysts made in conjunction with the Ministry of Digital Transformation of Ukraine for Diia.Education (available in English and Ukrainian languages). Watch here.
Tales from the Frontlines
Join the Cisco Talos Incident Response team to hear real-world stories from the frontlines of cyber defense. Reserve your spot.
Vulnerability roundup
Cisco Talos’ Vulnerability Discovery & Research team recently disclosed seven vulnerabilities in WWBN AVideo, four in MedDream, and one in an Eclipse ThreadX module. Read more.
Talos Takes
Hazel is joined by threat intelligence researcher James Nutland to discuss Cisco Talos’ latest findings on the newly emerged Chaos ransomware group. Listen here.
Upcoming events where you can find Talos
It’s the summer. We’ll be on the beach.
Most prevalent malware files from Talos telemetry over the past week
AI wrote my code and all I got was this broken prototype — 2025-08-07
Today’s cyberattackers are masters of disguise — working hard to make their malicious activities look like normal processes. They use legitimate tools, communicate with command-and-control servers through public services, and mask the launch of malicious code as regular user actions. This kind of activity is almost invisible to traditional security solutions; however, certain anomalies can be uncovered by analyzing the behavior of specific users, service accounts, or other entities. This is the core concept behind a threat detection method called UEBA, short for “user and entity behavior analytics”. And this is exactly what we’ve implemented in the latest version of our SIEM system — Kaspersky Unified Monitoring and Analysis Platform.
How UEBA works within a SIEM system
By definition, UEBA is a cybersecurity technology that identifies threats by analyzing the behavior of users, devices, applications, and other objects in an information system. While in principle this technology can be used with any security solution, we believe it’s most effective when integrated into a SIEM platform. By using machine learning to establish a normal baseline for a user or object’s behavior (whether it’s a computer, service, or another entity), a SIEM system equipped with UEBA detection rules can analyze deviations from typical behavior. This allows for the timely detection of APTs, targeted attacks, and insider threats.
This is why we’ve equipped our SIEM system with a UEBA rule package — designed specifically to detect anomalies in authentication processes, network activity, and the execution of processes on Windows-based workstations and servers. This makes our system smarter at finding novel attacks that are difficult to spot with regular correlation rules, signatures, or indicators of compromise. Every rule in the UEBA package is based on profiling the behavior of users and objects. The rules fall into two main categories:
Statistical rules, which use the interquartile range to identify anomalies based on current behavior data.
Rules that detect deviations from normal behavior, which is determined by analyzing an account or object’s past activity.
When a deviation from a historical norm or statistical expectation is found, the system generates an alert and increases the risk score of the relevant object (user or host). (Read this article to learn more about how our SIEM solution uses AI for risk scoring.)
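A minimal sketch of the interquartile-range approach mentioned above (our own illustration, not Kaspersky's implementation): a data point is flagged when it falls outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR].

```python
from statistics import quantiles

def iqr_outliers(samples, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(samples, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in samples if x < lo or x > hi]

# Daily login counts for a user; the burst of 40 is the anomaly.
logins = [3, 4, 5, 4, 3, 5, 4, 40]
print(iqr_outliers(logins))  # → [40]
```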
Structure of the UEBA rule package
For this rule package, we focused on the areas where UEBA technology works best — such as account protection, network activity monitoring, and secure authentication. Our UEBA rule package currently features the following sections:
Authentication and permission control
These rules detect unusual login methods, sudden spikes in authentication errors, accounts being added to local groups on different computers, and authentication attempts outside normal business hours. Each of these deviations is flagged, and increases the user’s risk score.
DNS profiling
This section is dedicated to analyzing the DNS queries made by computers on the corporate network. The rules in this section collect historical data to identify anomalies like queries for unknown record types, excessively long domain names, unusual zones, or atypical query frequencies. They also monitor the volume of data returned via DNS. Any such deviations are considered potential threats, and thus increase the host’s risk score.
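The kinds of DNS checks described above can be sketched in a few lines (the heuristics and thresholds here are our own illustrative choices, not the product's):

```python
def dns_query_flags(qname: str, qtype: str,
                    known_types=("A", "AAAA", "MX", "TXT", "CNAME"),
                    max_len=60):
    """Return a list of reasons a DNS query looks anomalous."""
    flags = []
    if qtype not in known_types:
        flags.append("unknown-record-type")
    if len(qname) > max_len:
        flags.append("excessively-long-name")  # possible DNS tunneling
    labels = qname.rstrip(".").split(".")
    if any(len(label) > 40 for label in labels):
        flags.append("oversized-label")
    return flags

print(dns_query_flags("example.com", "A"))  # → []
long_name = "a" * 50 + ".tunnel.example.com"
print(dns_query_flags(long_name, "NULL"))
# → ['unknown-record-type', 'excessively-long-name', 'oversized-label']
```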
Network activity profiling
These rules track connections between computers, both within the network and to external resources. They flag first-time connections to new ports, contacts with previously unknown hosts, unusual volumes of outgoing traffic, and access to management services. All actions that deviate from normal behavior generate alerts and raise the risk score.
Process profiling
This section monitors programs launched from Windows system folders. If a new executable runs for the first time from the System32 or SysWOW64 directories on a specific computer, it’s flagged as an anomaly. This raises the risk score for the user who initiated the process.
PowerShell profiling
This section tracks the source of PowerShell script executions. If a script runs for the first time from a non-standard directory — one that isn’t Program Files, Windows, or another common location — the action is marked as suspicious and increases the user’s risk score.
VPN monitoring
This flags a variety of events as risky — including logins from countries not previously associated with the user’s profile, geographically impossible travel, unusual traffic volumes over a VPN, VPN client changes, and multiple failed login attempts. Each of these events results in a higher risk score for the user’s account.
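Geographically impossible travel, one of the VPN checks listed above, reduces to a speed estimate between consecutive logins. A sketch under our own assumptions (the 900 km/h threshold is illustrative, roughly airliner speed):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(loc1, loc2, hours_between, max_kmh=900):
    """Flag two logins whose implied speed exceeds a plausible flight."""
    dist = haversine_km(*loc1, *loc2)
    return dist / max(hours_between, 1e-9) > max_kmh

# Login from Berlin, then from Tokyo one hour later: ~8,900 km apart.
print(impossible_travel((52.52, 13.40), (35.68, 139.69), hours_between=1.0))  # → True
```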
Using these UEBA rules helps us detect sophisticated attacks and reduce false positives by analyzing behavioral context. This significantly improves the accuracy of our analysis and lowers the workload of security analysts. Using UEBA and AI to assign a risk score to an object speeds up and improves each analyst’s response time by allowing them to prioritize incidents more accurately. Combined with the automatic creation of typical behavioral baselines, this significantly boosts the overall efficiency of security teams. It frees them from routine tasks, and provides richer, more accurate behavioral context for threat detection and response.
We’re constantly improving the usability of our SIEM system. Stay tuned for updates to the Kaspersky Unified Monitoring and Analysis Platform on its official product page.
UEBA rules in Kaspersky SIEM | Kaspersky official blog — 2025-08-07
Cisco Talos’ Vulnerability Discovery & Research team recently disclosed seven vulnerabilities in WWBN AVideo, four in MedDream, and one in an Eclipse ThreadX module.
For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.
A specially crafted HTTP request can lead to arbitrary JavaScript execution in all five cases. An attacker must get a user to visit a webpage to trigger these vulnerabilities.
Additionally, Talos identified two vulnerabilities that, when chained together, can lead to arbitrary code execution:
TALOS-2025-2212 (CVE-2025-25214) A race condition vulnerability exists in the aVideoEncoder.json.php unzip functionality of WWBN AVideo 14.4 and dev master commit 8a8954ff. A series of specially crafted HTTP requests can lead to arbitrary code execution.
TALOS-2025-2213 (CVE-2025-48732) An incomplete blacklist exists in the .htaccess sample of WWBN AVideo 14.4 and dev master commit 8a8954ff. A specially crafted HTTP request can lead to arbitrary code execution. An attacker can request a .phar file to trigger this vulnerability.
MedDream
Discovered by Emmanuel Tacheau and Marcin Noga of Cisco Talos.
MedDream PACS Premium is a DICOM 3.0 compliant picture archiving and communication system for the medical industry. The PACS server provides connectivity to all DICOM modalities (CR, DX, CT, MR, US, XA, etc.).
Talos found four unique MedDream PACS Premium vulnerabilities.
TALOS-2025-2154 (CVE-2025-26469) is an incorrect default permissions vulnerability in the CServerSettings::SetRegistryValues functionality of MedDream PACS Premium 7.3.3.840. A specially crafted application can decrypt credentials stored in a configuration-related registry key. An attacker can execute a malicious script or application to exploit this vulnerability.
TALOS-2025-2156 (CVE-2025-27724) is a privilege escalation vulnerability in the login.php functionality of MedDream PACS Premium 7.3.3.840. A specially crafted .php file can lead to elevated capabilities. An attacker can upload a malicious file to trigger this vulnerability.
TALOS-2025-2176 (CVE-2025-32731) is a reflected XSS vulnerability in the radiationDoseReport.php functionality of MedDream PACS Premium 7.3.5.860. A specially crafted malicious URL can lead to arbitrary JavaScript code execution. An attacker can provide a crafted URL to trigger this vulnerability.
TALOS-2025-2177 (CVE-2025-24485) is a server-side request forgery (SSRF) vulnerability in the cecho.php functionality of MedDream PACS Premium 7.3.5.860. A specially crafted HTTP request can lead to SSRF. An attacker can make an unauthenticated HTTP request to trigger this vulnerability.
Eclipse ThreadX is an embedded development suite for an advanced real-time operating system (RTOS) that provides efficient performance for resource-constrained devices.
TALOS-2024-2088 is a buffer overflow vulnerability in the FileX RAM disk driver functionality of Eclipse ThreadX FileX git commit 1b85eb2. A specially crafted set of network packets can lead to code execution. An attacker can send a sequence of requests to trigger this vulnerability.