How extensions from Open VSX were used to steal cryptocurrency

Our researchers have uncovered several malicious fake extensions targeting Solidity developers in the Open VSX marketplace. At least one company has fallen victim to the attackers distributing these extensions — losing approximately US$500 000 in crypto assets.

Threats associated with malware distribution in open-source repositories have been known for a long time. Despite this, users of AI-powered code editors like Cursor AI and Windsurf are forced to use the open-source extension marketplace Open VSX, as they have no other source for the extensions these platforms need.

However, extensions on Open VSX do not undergo the same rigorous checks as those on the Visual Studio Marketplace. This loophole allows attackers to distribute malicious software disguised as legitimate solutions. In this post, we dive into the details of the malicious Open VSX extensions investigated by our experts, and explain how to prevent similar incidents within your organization.

Risks for users of Open VSX extensions

In June 2025, a blockchain developer who had just lost approximately US$500 000 in crypto assets to attackers reached out to our experts and requested an incident investigation. While examining a disk image from the compromised system, our researchers noticed a suspicious component of an extension named Solidity Language for the Cursor AI development environment. The component was executing a PowerShell script — a sure sign of malicious activity.

The description of the Solidity Language extension published on the Open VSX marketplace

The extension was installed from the Open VSX marketplace, where it had tens of thousands of downloads (presumably inflated by bot activity). The description claimed to optimize development of smart contract code written in the Solidity language. However, analysis of the extension revealed it had no useful functionality whatsoever. The developers who installed it mistook the lack of advertised features for a bug, didn’t immediately investigate, and just continued their work.

The extension wasn’t actually faulty; it was fake. Once installed, it contacted a command-and-control server to download and run a malicious script. This script then installed ScreenConnect — a remote access application — on the victim’s computer.

The attackers used ScreenConnect to upload additional malicious payloads. In the incident our experts investigated, these tools specifically allowed the attackers to steal passphrases for the developer’s crypto wallets and then syphon off cryptocurrency. A detailed technical description of the attack, along with indicators of compromise, is available in a Securelist blog post.

Manipulating search: how attackers promote malicious extensions

A look into the Open VSX marketplace revealed a concerning trend: a fake extension, deceptively named “Solidity Language”, ranked fourth in search results, while the legitimate extension, simply called solidity, appeared all the way down at eighth. It’s no surprise then that the developer downloaded the counterfeit instead of the genuine article.

Search results for “solidity”: the malicious extension (red) vs. the legitimate one (green)

This ranking is quite surprising, especially considering that at the time of the search, the legitimate extension had more downloads: 61 000 compared to the fake’s 54 000.

The key lies in Open VSX’s ranking algorithm. It doesn’t solely rely on download counts to determine relevance; it also considers other factors like verification status, ratings, and recency. This is exactly how the attackers managed to outrank the genuine extension in search results: the fake one had a more recent update date.
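
To make the mechanics concrete, here is a minimal, hypothetical sketch of a recency-weighted relevance score. The weights and formula are invented for illustration only; they are not Open VSX's actual algorithm.

def relevance(downloads, days_since_update, rating, verified):
    # Hypothetical weights: none of these values come from Open VSX
    score = downloads / 100_000                     # normalized download count
    score += rating / 5                             # 0..1 from average rating
    score += 1.0 if verified else 0.0               # verification bonus
    score += max(0.0, 1 - days_since_update / 365)  # freshness bonus
    return score

# Legitimate extension: more downloads, but not updated for a year
print(relevance(61_000, 365, 4.5, False))  # ~1.51
# Fake extension: fewer downloads, updated three days ago
print(relevance(54_000, 3, 4.5, False))    # ~2.43 -> ranks higher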

The fake plugin was removed from the Open VSX marketplace on July 2, 2025, right after the cryptocurrency heist. However, the very next day, we found another malicious package with the same name as the original extension, “solidity”, and the same harmful functionality as Solidity Language.

Additionally, our researchers used an open-source component-monitoring tool to discover yet another malicious package in Open VSX. Several details link this package to the same cybercriminals.

Why do developers have to rely on the Open VSX marketplace?

The Visual Studio Marketplace, Microsoft’s official store, has long been the primary industry source for extensions. It includes automatic scanning for malicious code, sandboxed execution of extensions for behavioral analysis, monitoring for anomalies in extension usage, and a number of other features to help identify harmful extensions. However, its licensing agreement dictates that only solutions for use with Visual Studio products can be published in the Visual Studio Marketplace.

Consequently, users of increasingly popular AI-powered code editors like Cursor AI and Windsurf must install extensions from an alternative store: Open VSX. The problem is that this platform has less stringent extension vetting, which makes it easier to distribute malicious packages compared to Microsoft’s official marketplace.

To be fair, attackers sometimes manage to publish malicious extensions even in the more secure Visual Studio Marketplace. For instance, this spring, experts found three malicious extensions there with an infection scheme very similar to the one described in this post, also targeting Solidity developers.

How to stay safe?

No matter where you’re installing extensions from, we recommend the following:

  • Be careful when searching marketplaces.
  • Always take note of who the developer of an extension is.
  • Check the code and behavior of extensions you install (a quick package audit is sketched after this list).
  • Use an XDR solution to monitor any suspicious activity inside the corporate network.
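
As a quick first line of defense, note that an extension package (VSIX) is just a ZIP archive, so you can list its contents and flag bundled script or binary payloads before installing. A minimal sketch; the suspicious-extension list is illustrative, and the file name is hypothetical:

import zipfile

SUSPICIOUS = (".ps1", ".bat", ".cmd", ".vbs", ".exe", ".dll")

def audit_vsix(path):
    # A VSIX package is a ZIP archive; inspect its entries before installing
    with zipfile.ZipFile(path) as pkg:
        for name in pkg.namelist():
            if name.lower().endswith(SUSPICIOUS):
                print(f"review before installing: {name}")

audit_vsix("solidity-language.vsix")  # hypothetical file name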

Kaspersky official blog – Read More

Asus and Adobe vulnerabilities

Cisco Talos’ Vulnerability Discovery & Research team recently disclosed two vulnerabilities each in Asus Armoury Crate and Adobe Acrobat products.  

The vulnerabilities mentioned in this blog post have been patched by their respective vendors, all in adherence to Cisco’s third-party vulnerability disclosure policy.    

For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.     

Asus Armoury Crate stack-based buffer overflow and authorization bypass  vulnerabilities

Discovered by Marcin 'Icewall' Noga of Cisco Talos.   

These vulnerabilities were recently covered in a deep-dive post, Decrement by one to rule them all.

Asus Armoury Crate is a software utility used to manage Asus and ROG lighting, performance, and updates.

TALOS-2025-2144 (CVE-2025-1533) is a stack-based buffer overflow vulnerability in the AsIO3.sys kernel driver of Asus Armoury Crate 5.9.13.0. A specially crafted I/O request packet (IRP) can lead to stack-based buffer overflow. An unprivileged attacker can run a program from user mode to trigger this vulnerability.

TALOS-2025-2150 (CVE-2025-3464) is an authorization bypass vulnerability in the AsIO3.sys functionality of Asus Armoury Crate 5.9.13.0. A specially crafted hard link can lead to an authorization bypass. An attacker can create a hard link to trigger this vulnerability.

Adobe Acrobat Reader out-of-bounds read and use-after-free vulnerabilities 

Discovered by Kamlapati Choubey of Cisco Talos.   

Adobe Acrobat Reader is one of the most popular PDF readers currently available. Talos found an out-of-bounds read vulnerability, TALOS-2025-2159 (CVE-2025-43578), in the Font functionality of Adobe Acrobat Reader 2025.001.20435. A specially crafted font file embedded into a PDF can trigger this vulnerability, which can lead to disclosure of sensitive information.

TALOS-2025-2170 (CVE-2025-43576) is a use-after-free vulnerability in the annotation object processing functionality of Adobe Acrobat Reader 2025.001.20435. Specially crafted JavaScript code inside a malicious PDF document can trigger reuse of a previously freed object, which can lead to memory corruption and could result in arbitrary code execution.

An attacker needs to trick the user into opening the malicious file to trigger either of these vulnerabilities.

Cisco Talos Blog – Read More

Is a Gemini AI update about to kill privacy on your Android device? | Kaspersky official blog

On July 7, 2025, Google rolled out a Gemini update that gives its AI-powered assistant access to Phone, Messages, WhatsApp, and Utilities data on Android devices. The company announced this update via an email to the users of its chatbot — essentially presenting them with a fait accompli. “We’ve made it easier for Gemini to interact with your device”, the email read. “Gemini will soon be able to help you use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off”.

According to Google, the update improves privacy because users can now use Gemini’s features without having to enable Gemini Apps Activity. Pretty convenient, right?

The update applies regardless of whether the Gemini Apps Activity feature is enabled or not. Google pushed the update to all Android versions that support Gemini, starting with Android 10. So, although the company warned users, it clearly failed to ask for their explicit consent. Google has used subtle coercion to push its features before: just a month ago, Gemini was integrated into the Gmail client without any warning.

The email itself contained neither clear instructions for how to disable the new features, nor detailed explanations as to what exactly Gemini would do with the collected data. Users received the email just two weeks before the update was launched.

As you’d expect, the tech community was on the verge of panic. Previously, users who wanted to integrate Gemini with their apps had to explicitly enable Gemini Apps Activity. This allowed Gemini to store and use their data long-term, and potentially gave developers access to it – of course, “only for the purpose of improving Google AI”.

Warning prompt when launching Gemini in the browser for the first time

Google isn’t alone in this. OpenAI, Anthropic, and other AI companies are guilty of the same “improving service quality” excuse. At least Google gives users the illusion of choice. What makes this case different is that, even with Gemini Apps Activity turned off, Google will still retain your conversations with the AI assistant for up to 72 hours — all for the same purposes of safety, security, and feedback.

We won’t debate whether this is good or bad — we’ll just show you how to completely block Gemini’s access to your apps and data. Grab your phone, and let’s go!

How to disable Gemini via the app?

  1. Open Gemini on your Android device.
  2. Tap your profile picture or initials in the top-right corner.
  3. Select Gemini Apps Activity.
  4. Tap Turn off, or select Turn off and delete activity.
Disabling Gemini via the app

How to disable Gemini via the web interface?

  1. Open Gemini in a browser.
  2. Click the hamburger menu in the top-left corner.
  3. Select Activity, or Settings & help → Activity.
  4. Tap Turn off, or select Turn off and delete activity.

Alternatively, you can reach that option directly to turn off Gemini Apps Activity right there.

Disabling Gemini via the web interface

How to block Gemini from accessing individual apps and services?

If rather than disabling the AI assistant altogether you want to restrict Gemini’s access to data only from certain services like your email or photos, you can customize which apps it can work with and which it cannot.

Disabling Gemini’s access to individual services via the app:

  1. Open the Gemini app.
  2. Go to your profile and select Apps.
  3. Turn off the toggle next to each app or service whose data you don’t want to share with Gemini.
Disabling Gemini’s access to individual services via the app

Disabling Gemini’s access to individual services via the web interface:

  1. Open Gemini in a browser.
  2. Click the hamburger menu in the top-left corner.
  3. Select Settings & help → Apps.
  4. Turn off the toggle next to each app or service whose data you don’t want to share with Gemini.

Alternatively, you can reach that section of the settings directly.

Disabling Gemini’s access to individual services via the web interface

How to configure additional privacy settings for Gemini?

Deleting saved Gemini data:

  1. While in the Gemini app, go to your profile and select Gemini Apps Activity. In a browser, open Activity, click Delete, and select a time range.
    • Last hour/day clears your recent activity.
    • All time clears all your activity.
    • Custom range lets you select a range of data to clear.
  2. Confirm deletion.
Deleting saved Gemini data

Setting up auto-delete for Gemini data:

  1. While in the Gemini app, go to your profile, and select Gemini Apps Activity. In a browser, open Activity.
  2. Choose how long saved data will be kept before it’s deleted: 3, 18, or 36 months.
Setting up auto-delete for Gemini data

How to completely remove Gemini from your smartphone?

If you plan not to use Gemini on your phone altogether, you can simply uninstall the app:

  1. Go to Settings and select Apps.
  2. Find Gemini, and tap Uninstall if that option is available.
  3. If you don’t see Uninstall, tap Disable. Gemini is a system app on some phones and thus not easy to remove. For more details on how to deal with this, see Delete the undeletable: how to disable and remove Android bloatware.

If you’re determined not to have any Google services on your phone, consider installing GrapheneOS; however, be forewarned that this is a solution for geeks with a Pixel phone only.

How to check that you’ve successfully disabled Gemini?

When you’re done with the settings, it’s a good idea to verify that your changes have been applied successfully:

  1. Go to Gemini Apps Activity.
  2. Check that there are no records of your activity.
  3. In the Gemini app, check the state of the toggles in the Apps section.
  4. Repeat these checks after each Google update you install.

To protect your Android device, use tried-and-true security solutions like Kaspersky for Android. This will give you peace of mind knowing you don’t have to worry about malware, your privacy, passwords, or personal and payment data.

Here are a few other posts about the subtleties of privacy in Google services and beyond.

Kaspersky official blog – Read More

How to Maintain Fast and Fatigue-Free Alert Triage with Threat Intelligence 

Alert triage is a critical SOC and MSSP workflow: evaluating, prioritizing, and categorizing security alerts to determine which threats require immediate attention, and which can be safely dismissed or handled through automated processes.

Efficient alert triage, supported by robust threat intelligence, ensures that organizations stay ahead of adversaries while maintaining analyst productivity and morale. Below, we look at how this works using ANY.RUN’s Threat Intelligence Lookup as an example.

Why Triage is the Key to Efficiency 

For SOCs, triage ensures that internal teams focus on high-priority incidents that could compromise critical systems or data. MSSPs, managing multiple clients, rely on triage to allocate resources efficiently across diverse environments, ensuring timely responses tailored to each client’s needs.  

The triage process acts as the gateway between detection and action — the critical juncture where security alerts either trigger appropriate defensive measures or fade into background noise. 

Challenges and Problems of Alert Triage 

Alert triage is fraught with challenges that compromise its effectiveness in many organizations. 

  • Alert Overload: Modern SOCs generate thousands to millions of alerts daily from tools like SIEMs, EDRs, and network monitoring systems. Analysts can only investigate a fraction of these, leading to potential oversight of critical threats. 
  • False Positives: Many alerts are benign or irrelevant, consuming valuable time and resources.  
  • Lack of Context: Alerts often require analysts to manually gather data from disparate sources, slowing down investigations and increasing the risk of errors. 
  • Resource Constraints: Limited staffing and budget constraints stretch SOC teams thin, making it difficult to handle high alert volumes efficiently; the same goes for MSSPs managing multiple clients. 
  • Evolving Threats: The complexity and variety of modern cyberattacks demand constant adaptation, challenging analysts to stay ahead with limited tools and time. 

These obstacles create inefficiencies, delay responses, and increase organizational risk.

Speed as a Critical Key Performance Indicator

Speed in alert triage, measured by metrics like Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR), is a critical KPI for SOCs and MSSPs. Rapid triage minimizes the window of opportunity for attackers, reducing potential damage from breaches, data loss, or system downtime. For businesses, fast triage aligns with key objectives: 

  • Minimizing Financial Impact 
  • Protecting Customer Trust 
  • Operational Continuity  
  • Regulatory Compliance 

Organizations with efficient triage processes can handle larger volumes of security data without proportionally increasing staff, improving operational efficiency and ROI on security investments.
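
As a quick illustration of these KPIs, here is a minimal sketch that computes MTTD and MTTR from hypothetical incident timestamps:

from datetime import datetime
from statistics import mean

# Hypothetical incident records: (occurred, detected, resolved)
incidents = [
    (datetime(2025, 7, 1, 9, 0), datetime(2025, 7, 1, 9, 20), datetime(2025, 7, 1, 11, 0)),
    (datetime(2025, 7, 2, 14, 0), datetime(2025, 7, 2, 14, 5), datetime(2025, 7, 2, 15, 30)),
]

def mean_minutes(deltas):
    # Average a series of timedeltas, expressed in minutes
    return mean(d.total_seconds() for d in deltas) / 60

mttd = mean_minutes(det - occ for occ, det, _ in incidents)
mttr = mean_minutes(res - det for _, det, res in incidents)
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")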

Analyst Fatigue: The Hidden Threat to Security Effectiveness 

Analyst fatigue occurs when security professionals become mentally and emotionally exhausted from processing endless streams of alerts, many of which prove to be false positives or low-priority events. 

Cognitive overload increases when analysts must process more information than their mental capacity allows, leading to lower accuracy in threat assessment. Emotional exhaustion develops from the constant pressure of potentially missing critical threats, creating a state of chronic stress that affects both performance and well-being. 

The business impact is profound and multifaceted. Fatigued analysts are more likely to miss genuine threats, increasing the exposure to successful attacks. They may also escalate false positives to avoid responsibility. High fatigue levels contribute to analyst turnover, creating recruitment and training costs while leaving organizations vulnerable during transition periods. 

A negative feedback loop emerges where stressed analysts make more mistakes, leading to increased scrutiny and pressure, which further exacerbates fatigue. This cycle can devastate team morale.  

Balancing Speed and Accuracy: The Dual Challenge of Analyst Overload 

The “need for speed” in alert triage is inseparable from the problem of analyst overload and fatigue. SOCs and MSSPs must analyze threats, incidents, and artifacts quickly to contain risks, but this analysis must be accurate and comprehensive to avoid missing critical threats or wasting resources on false positives.  

Solutions that streamline triage without sacrificing accuracy are essential for overcoming this paradox. You do not choose between speed and accuracy but develop systems and processes that enable both.

ANY.RUN’s Threat Intelligence Lookup: A Comprehensive Solution

ANY.RUN’s Threat Intelligence Lookup addresses both the speed and fatigue challenges by providing rapid, comprehensive threat context for indicators like files, URLs, domains, and IP addresses, and enabling teams to make informed decisions quickly.  
 
Besides basic IOCs, this data contains attack and behavioral indicators including: 

  • file modifications, 
  • processes, 
  • network activity, 
  • TTPs mapped to the MITRE ATT&CK Matrix, 
  • malware configurations, 
  • Suricata IDS signatures. 

The data is derived from investigations of real-world cyberattacks on over 15,000 companies using ANY.RUN’s services.   

When analysts encounter suspicious artifacts during triage, they can quickly query the service to obtain detailed information about the threat. This eliminates the time-consuming process of manually researching threats across multiple sources. 

TI Lookup Use Cases: Faster and Smarter Alert Triage

Instead of spending valuable time manually investigating suspicious artifacts, analysts can focus on higher-level analysis and decision-making. Here are a few examples.  

1. Artifact Quick Check 

A suspicious IP spotted in network connections can be checked against TI Lookup’s vast indicator database in a matter of seconds.   

destinationIP:"195.177.94.58" 

IP search results with a malicious verdict 

The IP address is exposed as malicious and part of the Quasar RAT infrastructure. It has been detected in recent malware samples, so it is an indicator of an active threat.   

Get 50 search requests to test TI Lookup in your SOC, speed up triage, and gain threat context for fast response: Request trial


2. Process Investigation 

Suppose an analyst notices that a legitimate utility like certutil.exe is used for retrieving content from an external URL. All they have to do is copy a snippet of the command line contents and paste it into the TI Lookup search bar with the commandLine search parameter:  

commandLine:"certutil.exe -urlcache -split -f http" 

Lookup by a fragment of a command line command 

Switching to the Analyses tab of the search results, the analyst observes a selection of malware samples that performed this command during their execution chain. Now they know that this behavior is typical of the Glupteba trojan acting as a loader. Each sample analysis can be researched in depth and used for collecting IOCs.  

3. Registry Change Understanding 

Is it okay for an app to change the Windows registry key \CurrentVersion\Run, which controls default autoruns at system startup, by adding a command that initiates a script execution chain via mshta.exe using built-in VBScript? Query TI Lookup using the RegistryKey and RegistryValue search parameters:  

registryKey:"SOFTWARE\Microsoft\Windows\CurrentVersion\Run" AND registryValue:"mshtavbscript" 

Malware samples that change Windows registry in a certain way 

As the sandbox analyses found show, such registry modification is often associated with malware evasion and persistence techniques, and is typical of XWorm RAT.  
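
Composing such multi-parameter queries programmatically is straightforward. The helper below is our own illustration of the field:"value" syntax used in these examples; it is not part of any ANY.RUN SDK:

def ti_query(**fields):
    # Join field:"value" pairs with AND, per the TI Lookup syntax shown above
    return " AND ".join(f'{name}:"{value}"' for name, value in fields.items())

print(ti_query(registryKey=r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run",
               registryValue="mshta"))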

4. Mutex Detection 

When new malware emerges, the available intelligence on it can be scarce. Nitrogen ransomware became notorious for targeting the valuable and vulnerable financial sector back in mid-2024. For months, a single research report was the source of public data on this strain. It provided analysts with two IOCs and two IOBs, one of the IOCs being a mutex.  

Before encrypting files, Nitrogen creates a unique mutex (nvxkjcv7yxctvgsdfjhv6esdvsx) to ensure only one instance of the ransomware runs at a time. The mutex can be used for Nitrogen detection, and searching for it via Threat Intelligence Lookup delivers Nitrogen samples detonated in the Interactive Sandbox.  

syncObjectName:"nvxkjcv7yxctvgsdfjhv6esdvsx" 

SyncObject parameters in TI Lookup help to work with mutexes  

Each sample can be explored to enrich the understanding of the threat and gather additional indicators not featured in public research. 

Nitrogen sample analysis: ransom note and one of the main processes 
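
For illustration, here is a Windows-only sketch of the single-instance mutex check described above, written with ctypes against the real CreateMutexW API; the early-exit logic is our assumption of how such a guard typically behaves:

import ctypes

ERROR_ALREADY_EXISTS = 183
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateMutexW.restype = ctypes.c_void_p

# The mutex name is the actual Nitrogen IOC quoted above
handle = kernel32.CreateMutexW(None, False, "nvxkjcv7yxctvgsdfjhv6esdvsx")
if ctypes.get_last_error() == ERROR_ALREADY_EXISTS:
    print("another instance is already running")  # the ransomware would exit here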

5. Payload Recognition 

File hashes, the unique digital fingerprints of a file, are popular indicators of compromise. TI Lookup supports the md5, sha256, and sha1 search parameters, but also allows you to use a file name as a query. 

filePath:"Electronic_Receipt_ATT0001" 

File search results: not always malicious but not to be trusted 

These lookup results show that a certain file name pattern can appear in both malicious and benign samples: phishing kit campaigns often use file names typical of popular document formats. 

We can observe several samples of phishing attacks using files with this name pattern in the Interactive Sandbox:  

A phishing sample analysis 

File name search can help you understand the general mechanics of phishing kit attacks and see a broader picture of emerging threats.

Fast, Fatigue-Free Alert Triage with Threat Intelligence 

You don’t have to choose between speed and accuracy, nor accept analyst fatigue as an unavoidable cost of doing business. Instead, embrace solutions that enable both rapid response and meticulous analysis. 

ANY.RUN’s Threat Intelligence Lookup fuels this strategy by providing immediate, context-rich insights into suspicious artifacts and transforming reactive, manual investigations into proactive, informed decision-making. This translates into tangible business values: 

  • Enhanced Operational Efficiency: Teams can process a higher volume of alerts with existing staff, optimizing the return on investment in security tools and personnel. 
  • Reduced Organizational Risk: Faster and more accurate identification of genuine threats minimizes the window of opportunity for attackers, thereby reducing the likelihood of successful breaches, data loss, and system downtime.   
  • Improved Analyst Productivity and Morale: Automating the initial stages of threat intelligence gathering frees analysts from repetitive, cognitively taxing tasks.  
  • Preserved Customer Trust and Brand Reputation: Swift and effective handling of security incidents demonstrates a commitment to protecting sensitive data and maintaining operational integrity. 

Investing in solutions like ANY.RUN’s Threat Intelligence Lookup is not just about technology; it’s about building a sustainable and resilient security posture that protects an organization’s financial health, its most valuable assets, and its people. 

About ANY.RUN  

Over 500,000 cybersecurity professionals and 15,000+ companies in finance, manufacturing, healthcare, and other sectors rely on ANY.RUN. Our services streamline malware and phishing investigations for organizations worldwide.  

  • Speed up triage and response: Detonate suspicious files using ANY.RUN’s Interactive Sandbox to observe malicious behavior in real time and collect insights for faster and more confident security decisions.  
  • Improve threat detection: ANY.RUN’s Threat Intelligence Lookup and TI Feeds provide actionable insights into cyber attacks, improving detection and deepening understanding of evolving threats. 

Start 14-day trial of ANY.RUN’s solutions in your SOC today 

The post How to Maintain Fast and Fatigue-Free Alert Triage with Threat Intelligence  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – Read More

Microsoft Patch Tuesday for July 2025 — Snort rules and prominent vulnerabilities

Microsoft has released its monthly security update for July 2025, which includes 132 vulnerabilities affecting a range of products, including 14 that Microsoft marked as “critical.”  

In this month’s release, Microsoft observed none of the included vulnerabilities being actively exploited in the wild. Out of 14 “critical” entries, 11 are remote code execution (RCE) vulnerabilities in Microsoft Windows services and applications including KDC Proxy service, Microsoft Office and SharePoint server. 

CVE-2025-49735 is an RCE vulnerability in Windows KDC Proxy Service (KPSSVC) given a CVSS 3.1 score of 8.1. To successfully exploit this vulnerability, an unauthenticated attacker could use a specially crafted application to leverage a cryptographic protocol vulnerability in KPSSVC to perform RCE against the target. Microsoft has noted that this vulnerability only affects Windows servers that are configured as a Kerberos Key Distribution Center (KDC) Proxy Protocol server; domain controllers are not affected. Microsoft assessed that the attack complexity is “high,” and exploitation is “more likely.”  

CVE-2025-49704 is an RCE vulnerability in Microsoft SharePoint server given a CVSS 3.1 score of 7.7. Microsoft noted that this vulnerability in Microsoft Office SharePoint is due to improper control of generation of code (“code injection”) which would allow an authenticated attacker to execute code over a network. To exploit this vulnerability, an authenticated attacker in a network-based attack, with a minimum of Site Member permission, could execute arbitrary code remotely on the SharePoint server. Microsoft assessed that the attack complexity is “low,” and exploitation is “more likely.”  

CVE-2025-49695, CVE-2025-49696, CVE-2025-49697, CVE-2025-49698, CVE-2025-49702 and CVE-2025-49703 are RCE vulnerabilities in Microsoft Office and Microsoft Word. The vulnerabilities CVE-2025-49695 and CVE-2025-49698 are “use after free” (UAF) vulnerabilities that occur when Microsoft Office tries to access memory that has already been freed. CVE-2025-49696 is an out-of-bounds read in Microsoft Office. Microsoft assessed that for CVE-2025-49695 and CVE-2025-49696, the attack complexity is “low,” and exploitation is “more likely.” For CVE-2025-49697, CVE-2025-49698, CVE-2025-49702 and CVE-2025-49703, the attack complexity is “low,” and exploitation is “less likely.”   

CVE-2025-48822 is an RCE vulnerability in Windows Hyper-V Discrete Device Assignment (DDA) given a CVSS 3.1 score of 8.6. This vulnerability is an out-of-bounds read in Hyper-V that could allow an unauthorized attacker to execute code locally. Microsoft assessed that the attack complexity is “low,” and exploitation is “less likely.” 

CVE-2025-47981 is an RCE vulnerability in SPNEGO Extended Negotiation (NEGOEX) Security Mechanism given a CVSS 3.1 score of 9.8. This vulnerability is a heap-based buffer overflow that could allow an unauthorized attacker to execute code over a network. According to Microsoft, this vulnerability affects Windows client machines running Windows 10, version 1607 and above, due to the following GPO being enabled by default on these operating systems: “Network security: Allow PKU2U authentication requests to this computer to use online identities.” Microsoft has assessed that the attack complexity is “low,” and exploitation is “more likely.”  
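
As a hedged illustration, the sketch below reads the registry value commonly associated with that GPO. The path and value name are our assumption, so verify them against your environment before relying on this check:

import winreg

try:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Control\Lsa\pku2u")
    value, _ = winreg.QueryValueEx(key, "AllowOnlineID")
    print("PKU2U online identities:", "enabled" if value else "disabled")
except FileNotFoundError:
    print("Policy value not set; the OS default applies")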

CVE-2025-49717 is an RCE vulnerability in Microsoft SQL Server, given a CVSS 3.1 score of 8.5. This vulnerability is a heap-based buffer overflow that could allow an unauthorized attacker to execute code over a network. However, Microsoft has assessed exploitation as “unlikely.” 

The last critical vulnerability listed (CVE-2025-47980) is an information disclosure vulnerability in Windows Imaging Component that, if exploited, could allow an attacker to read small portions of heap memory. Microsoft has assessed that the attack complexity is “low,” and exploitation is “less likely.”   

Talos would also like to highlight the following “important” vulnerabilities as Microsoft has determined that their exploitation is “more likely:”  

  • CVE-2025-49701: Microsoft SharePoint Remote Code Execution Vulnerability 
  • CVE-2025-49724: Windows Connected Devices Platform Service Remote Code Execution Vulnerability 

A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page.   

In response to these vulnerability disclosures, Talos is releasing a new Snort ruleset that detects attempts to exploit some of them. Please note that additional rules may be released at a future date, and current rules are subject to change pending additional information. Cisco Security Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Ruleset customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.   

Snort 2 rules included in this release that protect against the exploitation of many of these vulnerabilities are: 64435, 64436, 65092, 65096 – 65107, 65110 – 65113.  

The following Snort 3 rules are also available: 301114, 301268 – 301272. 

Cisco Talos Blog – Read More

Technical Analysis of Ducex: Packer of Triada Android Malware 

Many have probably heard of the modular malware for mobile devices called Triada. Even nine years after its first mention in 2016, it remains one of the most advanced Android trojans out there. Recently, our team at ANY.RUN came across an interesting sample of this malicious software. The sample in question was embedded in a fake Telegram app.  

Since the capabilities of this threat have been extensively studied and described by other researchers, we focused on the packer used by the malware that we named Ducex.  

Here’s a detailed breakdown of our analysis and key findings. 

Key Takeaways 

  • Triada’s Android Packer: Ducex is an advanced Chinese Android packer found in Triada samples, whose primary goal is to complicate analysis and confuse the detection of its payload. 
  • Encrypted Functions: The packer employs serious obfuscation through function encryption using a modified RC4 algorithm with added shuffling. 
  • XORed Strings: Beyond functions, all strings used by Ducex are also encrypted using a simple sequential XOR algorithm with a changing 16-byte key. 
  • Debugging Challenges: Ducex creates major roadblocks for debugging. It performs APK signature verification, failing if the app is re-signed. It also employs self-debugging using fork and ptrace to block external tracing. 
  • Analysis Tool Detection: The packer actively detects the presence of popular analysis tools such as Frida, Xposed, and Substrate. If any of these tools are found in memory, the process terminates its execution. 
  • Payload Storage & Encryption: The Triada payload is uniquely stored within Ducex’s own classes.dex file in a large, additional section after the main application code, avoiding detection as separate files. 

Initial Analysis  

Our investigation began with the analysis of a Triada sample inside ANY.RUN’s Interactive Sandbox.  

Analysis of the fake Telegram app in ANY.RUN’s Sandbox 

The service quickly identified the malware family, and we were able to confirm this by recognizing the characteristic domains it communicated with.  

The domains Triada communicates with detected during sandbox analysis 

From there, we moved on to inspect the packer. 

Detect malware in a live, interactive environment: analyze suspicious files and URLs in ANY.RUN’s Sandbox. Sign up with business email

Why We Decided to Analyze Ducex 

Not malicious itself, Ducex is still quite interesting. Its developers tried their best to complicate analysis as much as possible and confuse detection of the payload it carries using various techniques that will be discussed below. 

We have not obtained the source code of this tool, so we relied exclusively on its reverse engineering. For the same reason, we gave it a name ourselves. Studying the classes in the code, we noticed that its authors used the word “Duce” a lot.  

“Duce” names of code entities 

The library they utilize is called “libducex”. Based on this, we called the packer “Ducex”. 

Ducex: The Architecture 

Before proceeding to detailed analysis of the tool, let’s look at the general operation scheme of Ducex: 

General Ducex scheme 

We have outlined all the stages of the tool’s execution; we will demonstrate and discuss them below.  

Analyzing App’s Java Code 

When opening the malicious APK in an Android decompiler (e.g., JADX), the AndroidManifest shows the following data under the application tag: 

Application tag contents in AndroidManifest 

The com.m.a.DuceApplication class immediately catches our attention. It is specified in android:name, which means an instance of this class is created immediately after the application is launched. Going to its code, we see two methods: attachBaseContext() and onCreate(). 
 

Methods within com.m.a.DuceApplication class 

We are interested in these methods specifically, as they will be called when creating an instance of the class, in the specified order. Inside them, we see calls to methods of the CoreUtils class:  

Methods of CoreUtils class 

That’s where it gets interesting. As can be seen in the screenshot above, most methods of this class are native, and they are implemented in the “libducex.so” library. 

Additionally, if we decrypt the presented strings, we get, respectively, “org.telegram.messenger.ApplicationLoaderImpl” and “androidx.core.app.CoreComponentFactory“.  

As can be seen in DuceApplication.onCreate() and DuceApplication.attachBaseContext(), org.telegram.messenger.ApplicationLoaderImpl is what gets called to launch the fake Telegram, via the CoreUtils.runOn() and CoreUtils.runOnC() methods. Meanwhile, androidx.core.app.CoreComponentFactory is used in DuceAppComponentFactory, which overrides the creation of various application components (Application, Activity, etc.) to inject custom logic during their initialization, using the same methods from CoreUtils. 

A CoreUtils method is used to override the creation of Application 

Since the most interesting things happen in the libducex.so library, let’s proceed to its analysis. 

Libducex.so Library Analysis 

Encrypted Functions 

As it turned out, you can’t just start analyzing the library. For example, the program entry point looks like this: 

Entry point with obfuscation 

Most instructions are displayed incorrectly. It looks like either very serious obfuscation or a completely non-working function. At this point, we considered what would be called even before the program entry point, and moved on to JNI_OnLoad.  

JNI_OnLoad is called after the library is loaded into memory and the control is passed to the JVM.  

JNI_OnLoad with obfuscation 

As we can see in the screenshot, the situation is exactly the same. The code looks non-working, which makes you think that it might not even be obfuscated but encrypted. So, if even JNI_OnLoad is encrypted, then we need to turn to functions that execute earlier than both it and Entry Point, which means before control is transferred to the JVM.  

This function could be .init_proc, which executes immediately after loading the library into memory. And indeed, it was there that we found correct code capable of being decompiled: 

.init_proc function 

Getting a little ahead of ourselves, we’ll say that the functions were indeed encrypted, and their decryption occurs in .init_proc, in the nested create_key__decrypt_funcs function. Now back to the analysis.  

The first thing that interests us is the configuration that is passed as the second argument to the function called in the figure above: 

Configuration used to decrypt functions 

It consists of the following fields, presented in the same order as in the screenshot above: 

  1. Magic value ‘mxe’;  
  2. Decryption start address (in our case, Entry Point);  
  3. Number of bytes to decrypt;  
  4. Function that will be called after decryption completes;  
  5. 16-byte buffer: the key that will be used during decryption. 

The decryption itself, occurring in create_key__decrypt_funcs, is carried out according to the configuration data. At first it seemed to us that the code had been encrypted using classic RC4, but this is not quite so. The developers slightly modified the algorithm by adding additional shuffling, so standard RC4 implementations didn’t work; we had to implement it ourselves.  

Our decryption script takes a key as input, as well as the part of the file containing encrypted functions. The necessary parameters (the key, the start address of the encrypted part, and its size) are taken from the configuration described above. 

def rc4_init(key):
    # Standard RC4 key-scheduling with a 16-byte key; two extra cells
    # (s[256], s[257]) store the stream counters between calls
    s = list(range(256))
    s += [0, 0]
    j = 0
    for i in range(256):
        key_byte = key[i & 0xf]
        j = (j + s[i] + key_byte) & 0xff
        s[i], s[j] = s[j], s[i]
    return s


def rc4_process(s, encoded_data):
    i = s[256]
    j = s[257]
    output = bytearray(encoded_data)
    for n in range(len(encoded_data)):
        i = (i + 1) & 0xff
        a = s[i]
        j = (j + a) & 0xff
        b = s[j]
        s[i], s[j] = b, a
        output[n] ^= s[(a + b) & 0xff]

        # Ducex's modification: two extra shuffle rounds per byte, which is
        # why standard RC4 implementations fail on this sample
        for _ in range(2):
            i = (i + 1) & 0xff
            a = s[i]
            j = (j + a) & 0xff
            b = s[j]
            s[i], s[j] = b, a

    return bytearray(output)


def decrypt(key, func_buf):
    s = rc4_init(key)
    decoded_funcs = rc4_process(s, func_buf)
    return decoded_funcs

Applying the script to our library, we got the correct code. Entry Point, for example, now looks as follows: 

Decrypted Entry Point code 

Now all instructions are correct, and there are no problems with code decompilation. 

Encrypted Strings 

The encrypted functions, as it turned out, are not the only thing that developers decided to hide from the researcher’s eyes. All strings used by the packer are also encrypted. The algorithm is quite simple: it’s sequential XOR with a given key consisting of 16 bytes. The algorithm is unchanged throughout the entire sample, only the key changes. Example script with a given key: 

def xor_data(data, key):
    result = bytearray()
    for i in range(len(data)):
        result.append(data[i] ^ key[i % len(key)])
    return result

key = b"GeGrE0tX`:0^6qLS"
encoded_buf = bytearray([0x1d, 0x0c, 0x37, 0x52, 0x2c, 0x5e, 0x12, 0x34, 0x01, 0x4e, 0x55, 0x5e])
decrypted = xor_data(encoded_buf, key)
print(decrypted)

In the output after running this simple script, you can see: bytearray(b'Zip inflate\x00').

What’s interesting here is that all the functions used for decryption are called immediately upon initialization, not when required in the code. The reference to the array with pointers to these functions is located, accordingly, in .init_array; there are a huge number of them there: 

.init_array section with functions for decryption 

Control Flow Obfuscation 

In case the researcher breaks through the encryption, the developers have further obfuscated the code. For example, this is how the structure of the function in which native method names are decrypted looks: 
 

Structure of the function responsible for native methods names decryption 

And this is not even half of the function’s code, which could have been about 20 lines long. It’s all because of loop and condition constructs like these, which serve no purpose except to complicate the code: 

Loops whose sole purpose is to complicate the code 

At this point you might think that it would be enough to run the sample under a debugger and go through all these conditional jumps, see the decrypted functions and strings, but if only it were that simple. 

APK Signature Verification 

The first thing we attempted was to decode the APK using the Apktool utility and insert the line android:debuggable=”true” under the application tag in AndroidManifest, then rebuild and sign the APK again.  

After doing this, it’s possible in the smartphone settings to set the application to wait for a debugger to connect. It also makes it possible to use the following adb command: adb shell am set-debug-app -w --persistent <package>. On the next launch, a “Waiting For Debugger” window should appear, and disappear after the debugger is connected. 
 
But in our case this didn’t work. After rebuilding the application, it began to “crash” immediately after loading libducex. The app checks the APK signature and terminates if it doesn’t match the expected one. Below you can see decrypted strings used when obtaining APK signatures: 

Decrypted strings used when obtaining APK signatures 

Self-Debugging 

Thus, debugging the application by changing the APK didn’t work. Fortunately, there is a way to debug Android applications without setting the debuggable flag and, accordingly, without the need to rebuild and re-sign the file. 

This method is more laborious, as it involves building your own Android image and then using it in the emulator. Its description can be found here.   

Thus, we built the image, used the debug command given above, and tried to run the application under the debugger. However, the packer held another surprise: after launching the application, the “Waiting For Debugger” window appears only for a fraction of a second, after which normal launch occurs, without waiting for the debugger. This may indicate that the application “debugs itself” so that no one else can debug it. 

We found the corresponding functions: a process sets handlers using the rt_sigaction system call, for example, for the SIGCHLD event. This handler checks what happened to the child process, after which it can either terminate or continue its execution. 
 

rt_sigaction call 
  • In x8 – rt_sigaction call code; 
  • In x0 – SIGCHLD code; 
  • In x1 – handler structure address; 
  • In x2 – flags, in our case 0. 

And then another interesting thing occurs: the parent process uses the fork system call to create a child process, which, in turn, will work to monitor the state of the parent process. It uses the ptrace system call with the PTRACE_ATTACH parameter to attach to the parent process, after which it waits for events from it and, depending on what happened, resumes the process or terminates it and its own execution. 

ptrace call  
  • In x8 – ptrace call code; 
  • In x0 – PTRACE_ATTACH code; 
  • In x1 – value from global variable – parent process pid; 
  • In x2 – unused addr argument; 
  • In x3 – unused data argument. 

So, parent and child processes monitor each other’s state and react in case of certain events. Additionally, considering that a process can have only one tracer, connecting an external debugger becomes impossible in this case. 

Detection of Frida, Xposed, and Others 

At this point, two options remain: either static code analysis, at most using tools capable of emulating instructions (e.g., emulators on the Unicorn engine), or installing Frida hooks to change application behavior without actual APK modification and still be able to dynamically analyze the unpacking process. 

But Frida wouldn’t work either, because the packer has built-in checks for the presence of frida_server, as well as Xposed, Substrate, etc. 

Decrypted strings in memory 

If any of these strings are found in memory, the process terminates execution. This can be patched, but it’ll take time to find the implementations in the code. And we shouldn’t forget about the signature verification used in the library. So we continue with static code analysis.  

Where Does the Packer Get Code to Run?

Ducex doesn’t create a separate encrypted file with a payload; instead, it stores this code inside its own classes.dex, in a special additional section that follows the main application code, thus avoiding detection of additional files: 

The large section after the map section contains the payload 

The screenshot shows that after the map section, where everything should have ended, there is an additional section, and a very large one in size. That’s where the payload is located. 

The payload, i.e., Triada, is stored as additional dex modules. It’s noteworthy that despite all its complexity, this tool doesn’t fully compress and encrypt the dex files that it later runs. Instead, only 2048 == 0x800 bytes from the beginning of each module are encrypted, while the rest remains untouched. Thus, at least the dex module header will always be completely encrypted, as it has a fixed size of 0x70 bytes; subsequent sections do not have a fixed size. There are 5 modules in the file.  

There is also a configuration shared by all modules:  

Another configuration common to all modules 

Here there is a certain magic value starting with “mx”, but now with the additional bytes \x01\x02. The most useful thing here is the 16-byte key. It is highlighted in red in the screenshot above. 
 
Additionally, there are configurations for each of the modules storing information for decrypting them: 

Configuration with info for decrypting a module 

The first 4 bytes of this structure are very important: here we have the value 0x100 == 256, which means that the first 0x800 bytes of this dex are encrypted. If there were, for example, the number 258, this would mean that part of the dex is compressed using zlib. Then come the highlighted fields with module sizes and another 16-byte decryption key, immediately after which the encrypted module block begins. 

File structure can be represented like this:  

Classes.dex file structure 

This scheme doesn’t depict all fields in the configurations, but those that were required for our analysis. 

How Is Decryption Performed?  

Decryption here is quite complex. Two algorithms are used for it: a modified RC4 and SM4. 
 
The first one is well known, but not the second. SM4 is a Chinese block encryption standard. It is not as popular as, for example, AES, and we managed to identify it by the substitution table: 

The substitution table used by SM4 

It’s important to note that even the table was encrypted, as well as all the strings used by the tool.  

As we know, the unpacker’s dex file contains one main configuration and configurations for each of the 5 modules. All of them require a key. Tracking the execution flow, we can draw the following conclusions: 

  1. native_init method: first, the module is decrypted using the key specified in the module’s configuration. This is a preparatory action that can change depending on the first four bytes of the configuration. In our case it’s 256, which means decryption is needed, as indicated above; 
  2. native_dl method: here, the same decryption function is called, only now with a different key: the one specified in the configuration shared by all modules. 
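
Putting the pieces together, here is a rough sketch of the first decryption stage under our assumptions: the payload area starts where the dex header’s file_size field (offset 0x20) says the legitimate dex ends, and each module’s first 0x800 bytes are decrypted with the modified-RC4 decrypt() from the script above. The SM4 stage and the exact configuration layout are omitted:

import struct

ENCRYPTED_PREFIX = 0x800  # only the first 2048 bytes of each module are encrypted

def carve_payload_area(dex_path):
    data = open(dex_path, "rb").read()
    # file_size at offset 0x20 of the dex header covers the legitimate dex;
    # we assume everything past it is Ducex's appended section
    (file_size,) = struct.unpack_from("<I", data, 0x20)
    return data[file_size:]

def decrypt_module_head(module_bytes, module_key):
    # First stage (native_init): modified RC4 over the encrypted prefix,
    # reusing decrypt() from the script above
    head = decrypt(module_key, module_bytes[:ENCRYPTED_PREFIX])
    return head + module_bytes[ENCRYPTED_PREFIX:]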

What Happens Next?  

Let’s turn back to our Java code and remember what happens in the attachBaseContext method: 

attachBaseContext method 

The native init() and dl() methods are called, and that’s where the main decryption occurs. Immediately after this, the CoreUtils.runOn() method is called with the class name org.telegram.messenger.ApplicationLoaderImpl (the decrypted string CoreUtils.ran) as an argument. No further decryption is observed here, only the method launch. Now Triada begins its work. 

Summary 

Ducex is indeed a very complex and interesting Chinese tool, worthy of the malware that it carries as payload. Its developers tried to foresee as much as possible in order to confuse Triada detection and complicate analysis. 

IOCs 

Name: 131eaf37b939f2be9f3e250bc2ae8ba44546801b5ca6268f3a2514e6a9cb8b5c.apk
MD5: 25faee9bbf214d975415ae4bd2bcd7b9
SHA1: 06f8ca016b2bf006702501263818b7606851f106
SHA256: 131eaf37b939f2be9f3e250bc2ae8ba44546801b5ca6268f3a2514e6a9cb8b5c

The post Technical Analysis of Ducex: Packer of Triada Android Malware  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – Read More

Shrinking your digital footprint: a checklist by Kaspersky | Kaspersky official blog

In today’s world, having an online presence is practically unavoidable. More and more of our daily lives happen online, and unless you’re a sailor out at sea or a forest ranger, living completely offline is a rare luxury. It’s estimated that each of us generates roughly two to three gigabytes of data every hour through our smartphones, IoT devices, and online services. So, it’s no wonder that, for example, around 70% of Americans are concerned about the government collecting their data, and a staggering ~80% worry about corporations doing the same. Today, we explore where and how our everyday actions leave digital trails, and what we can do about it.

Your morning routine: how your smartphone and browser track you

You wake up, check the weather, maybe scroll through some reels, like a few posts, and check your commute to see the possible traffic jams. When it comes to social media privacy settings, it’s pretty straightforward: you tweak them so your parents and colleagues don’t get a heart attack from your edgy humor. Our Privacy Checker website can help with that. But it gets trickier with geolocation data, which seemingly everyone wants to collect. We’ve already dived deep into how smartphones build detailed profiles on you, and explained what geolocation data brokers are and what happens when their data leaks.

Just imagine: about half of popular Android apps ask for your geolocation even though they don’t need it. And by default, Chrome and Safari allow cross-domain cookie tracking. This lets advertising networks build detailed user profiles for personalized ads. Pretty much all of your smartphone’s telemetry is used to create a thorough consumer portrait — no need for customer interviews or focus groups. The best marketer is in your pocket, but it’s not working for you. What should you do?

Normal measures

  • Head to Settings → Privacy → Permission Manager. From there, disable background access to the device’s location for messaging apps, weather widgets, and any other apps that needn’t be tracking your movements in the background.
  • Go to Settings → Privacy & Security → Tracking and turn off Allow Apps to Request to Track. Also, in newer iOS versions, under Settings → Privacy & Security, you’ll find a Safety Check section. This is a great place to review and adjust app and user access to your data, and even reset all access types in an emergency.
  • You can minimize tracking by following the instructions in our post What Google Ad Topics is, and how to disable it.
  • Enable Prevent cross-site tracking in Safari’s privacy and security settings on both your mobile devices and computers. Then, in the advanced settings, turn on Use advanced tracking and fingerprinting protection for all browsing.

Paranoid measures

  • Consider getting a Google Pixel and flashing it with GrapheneOS modified firmware that has Google Play Services disabled. Alternatively, research if AOSP firmware is available for your current Android phone. AOSP gives you a bare-bones Android experience where you choose exactly which services to install.
  • Enable Lockdown Mode (found under Settings → Privacy & Security). While it significantly limits functionality, it drastically reduces your chances of being tracked or having your iPhone compromised. We’ve covered this mode in detail in our article Protection through restriction: Apple’s new Lockdown Mode.
  • Set up a local DNS filter: for example, Pi-hole can block more than 280,000 trackers. Alternatively, you can install browser extensions like Privacy Badger for Firefox, Opera, Edge, and Chrome. Many modern routers also allow you to configure DNS filters that can block most ad network traffic on websites. For more on this, check out our post Why you should set up secure DNS — and how.

Hitting the road: the dangers of connected cars

You’re ready for your commute, hop into your car, hit the ignition… The system automatically plays your favorite playlist and has your loved ones on speed dial. Convenient, right? Absolutely, but there’s a caveat. Modern vehicles can transmit a staggering 25 GB of (your!) data per hour!

This creates two categories of problems. First, connected cars are often easier to hack because automotive manufacturers generally have a less-than-stellar approach to cybersecurity. While compromising a car’s onboard systems doesn’t always lead to theft, many vulnerabilities allow attackers to track you, or even remotely control your vehicle. For instance, in November 2024, a vulnerability was discovered in the Mazda Connect onboard system that allowed attackers to execute arbitrary code with root privileges. Before that, significant vulnerabilities were found in vehicles from Kia, Tesla, Jeep, and dozens of other carmakers.

Second, car makers themselves often enthusiastically monitor the owners of the vehicles they sell, and resell that collected data to data brokers and insurance companies.

What to do?

Normal measures

  • Dive into your car’s smart features menu and disable any that you don’t actively use or need.
  • Install an immobilizer that breaks the data bus connection. Some vehicles come with one built-in, but if yours doesn’t, consider a third-party immobilizer.
  • Regularly update your ECU firmware through official service centers. This helps patch known vulnerabilities, though it’s worth noting that new, undiscovered vulnerabilities could emerge with updates.

Paranoid measures

  • If you’re serious about minimizing data collection, consider buying a used car with minimal data-gathering and transmission capabilities. A reliable sign that you’re on the right track: the car has no cellular module (GSM/3G/4G) of its own.
  • Embrace public transport or cycling!

Lunch time: the hidden dangers of delivery apps

That much-anticipated lunch break is the perfect time to unwind… and leave a few more digital footprints. Whether you’re ordering coffee through an app or checking in to your favorite bakery on social media, you’re constantly adding to your online profile. This includes your location, payment details, and even your order history from delivery apps.

Food delivery apps, in particular, are incredibly data hungry. On average, they collect 21 categories of personal data, and a staggering 95% of this information is directly linked to your identity. Much of this data doesn’t stay with the delivery service; it gets sent elsewhere. Uber Eats, for instance, shares 12 out of 21 collected data points with partner companies, including your phone number, address, and search and order histories.

What’s more, food delivery services can experience data breaches. When that happens, your personal information — everything from your name, phone number and address to your shopping list and order costs — can end up exposed.

So, it’s clear: we need to do something about this too.

Normal measures

  • Check your app’s location settings. Instead of granting always-on access, switch it to “only while using the app”. If you’re extra cautious, you can turn off location services entirely and manually enter your address.
  • Unless the app’s core features genuinely require it, don’t let delivery services access your contacts, gallery or messages.

Paranoid measures

  • Set up a burner email address and use a different name for all your food orders. Even more radically, consider a second smartphone exclusively for delivery apps and other potentially risky applications.
  • Avoid providing your exact apartment number. Meet the courier at the entrance to the building instead. This can prevent your precise living location from being linked to your spending habits in case of a data breach.
  • Opting for cash payments ensures your purchase details aren’t stored in a payment system profile.
  • For a drastically reduced digital footprint, skip electronic lunch ordering altogether. Grab some cash, leave your phone at the office, and head to a local eatery. No phone means no GPS tracking, and cash transactions leave no digital trace whatsoever. While this won’t make you completely invisible (security cameras are still a thing!), it significantly shrinks your digital footprint.

Home sweet home: what your smart devices know about you

There’s nothing quite like relaxing at home after a long day. You ask your voice assistant to turn on the lights or recommend a movie. Smart speakers, TVs, robot vacuums, and other gadgets certainly make life easier. However, they also create a host of vulnerabilities for your home network, and often have questionable privacy practices. For instance, in 2023, Amazon faced a $25 million fine for retaining children’s voice recordings and other privacy violations related to Alexa.

And it’s not just corporations misusing voice assistant capabilities. Surveillance cameras, smart plugs, and even smart kettles are frequently hacked — often being roped into botnets for DDoS attacks. There have even been unsettling cases where malicious actors gained access to home cameras, using them for surveillance or pranks like speaking through a compromised baby monitor.

Normal measures

  • Dive into your smart home management app (Google Home, Apple Home, the Alexa app, and so on) and look for sections titled Privacy or similar. Turn off options that send your voice recordings for analysis. For Alexa, this is typically Use of Voice Recordings. For Google Assistant, opt out of the quality improvement program. Enable automatic deletion of your voice history. You can also manually clear your query history. With Alexa, just say, “Alexa, delete everything I said today”. For Google Assistant, manage and delete recordings through your Google account. This significantly reduces the amount of data stored.
  • Every smart speaker has a microphone mute button. If you don’t need the assistant, especially during private conversations, hit that mute button.
  • Laptops and some smart cameras come with built-in privacy shutters or covers. Use them! It’s a simple way to prevent unwanted peeping.
  • Many smart TVs allow you to disable the collection of viewing statistics (often called ACR). It’s a good idea to turn this off to stop your TV from sending reports about every channel you flip through.
  • Modern routers often let you set up a secondary or guest Wi-Fi network. Connect all your IoT devices to that network. This prevents the gadgets from “seeing” your main computers and phones on your home network. Even if one of your smart devices gets hacked, the attacker won’t be able to access your personal data. Plus, it makes it easier to cut off internet access to IoT devices when they’re not in use.
  • Use a strong, unique password for every device. When you first set up a smart device, always change the default login and password. A reliable password manager like Kaspersky Password Manager can help you generate and store secure passwords.

Paranoid measures

  • The most drastic option is to completely abandon voice assistants and cloud-based smart home services. Flip those light switches manually, and use mechanical timers for your appliances. The fewer microphones and cameras in your home, the more peace of mind you’ll have. If you absolutely must have an assistant, consider offline alternatives. There are open-source projects like Mycroft AI that can be configured to process commands locally — without sending data to the cloud.
  • If you’re concerned about covert listening, consider purchasing a bug detector (if it’s allowed in your country). These devices help locate hidden cameras and microphones when, for example, you suspect that a smart light bulb is actually a spy cam. You can also check out the four ways to find spy cameras that we described earlier.
  • During confidential meetings, either unplug suspicious gadgets or remove them from the room entirely.
  • Look for IoT devices that can function autonomously. Examples include cameras with local storage that don’t stream to the cloud, or smart home systems built on a local server like openHAB where all your data stays right in your home.

Takeaways

In today’s digital world, your data is a valuable commodity. While it’s impossible to completely erase your digital footprint, that doesn’t mean you should give up doing what you can. By staying aware and implementing smart security measures, you can control a significant portion of your data exposure. The extra protection services found in Kaspersky Premium can further enhance your privacy and payment protection. And our Privacy Checker website offers a wealth of comprehensive guides: these cover privacy settings for smartphones, computers, social networks, apps, and even entire operating systems. Whether you’re looking for simple adjustments or more thorough security measures — we’ve got you covered.

Achieving absolute anonymity requires an extreme, almost paranoid level of effort, and most people don’t need it anyway. However, adopting even the “normal” measures from our recommendations will significantly limit the ability of both cybercriminals and corporations to track you.


Kaspersky official blog – ​Read More

How to protect your online store from fraud attacks

According to Juniper Research, global e-commerce turnover surpassed US$7 trillion in 2024 and is projected to grow 1.5-fold over the next five years. But cybercriminal interest in this field is growing even faster: last year, losses from fraud exceeded US$44 billion, and they’re expected to reach US$107 billion within five years.

Any online platform — regardless of size or industry — can become a target, whether it’s a content marketplace, a hardware store, a travel agency, or a water park website. If you accept payments, run a loyalty program, or allow customers to create accounts, fraudsters will definitely come knocking. So which attack schemes are most common, what kind of damage can they cause, and how can you stop them?

Account theft

Thanks to infostealers and numerous database leaks, attackers have access to billions of email-password combinations used on various sites. They can try these combinations on any other site with user accounts, betting that people often reuse the same password across services. This attack method is known as “credential stuffing”, and if it succeeds, attackers can place orders using the victim’s linked bank card or spend their loyalty points. Criminals can also use compromised accounts to make fraudulent payments with other stolen credit cards.
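
For site operators curious about the defensive side, here is a minimal Python sketch of one heuristic for spotting credential stuffing: a single source cycling through many different usernames with a high failure rate. The thresholds, names, and in-memory store are assumptions for illustration, not a description of any real product.

```python
# A toy sketch of one server-side heuristic for credential stuffing: a single
# source trying many *different* accounts with mostly failed logins. Thresholds,
# names, and the in-memory store are illustrative assumptions only.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 300       # look at the last five minutes of activity
MAX_DISTINCT_USERS = 20    # a human rarely tries 20 different accounts
MAX_FAILURE_RATE = 0.9

attempts: dict[str, deque] = defaultdict(deque)  # source IP -> (time, user, success)

def record_login(ip: str, username: str, success: bool,
                 now: Optional[float] = None) -> bool:
    """Record one attempt; return True if the source now looks like stuffing."""
    now = time.time() if now is None else now
    q = attempts[ip]
    q.append((now, username, success))
    while q and now - q[0][0] > WINDOW_SECONDS:  # expire events outside the window
        q.popleft()
    distinct_users = {user for _, user, _ in q}
    failures = sum(1 for _, _, ok in q if not ok)
    return (len(distinct_users) >= MAX_DISTINCT_USERS
            and failures / len(q) >= MAX_FAILURE_RATE)

if __name__ == "__main__":
    # Simulate a bot replaying a leaked email-password combo list from one IP
    for i in range(25):
        flagged = record_login("203.0.113.7", f"user{i}@example.com",
                               success=False, now=1000.0 + i)
    print("flagged as credential stuffing:", flagged)  # True
```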

Testing stolen cards

Just as with login credentials, attackers may have a database of credit-card data stolen using malware. They need to test which cards are still valid and can process online payments — and for this, any e-commerce site will do. These “test” purchases are usually small. Working cards are then resold to other criminals, who go on to drain the funds in various ways.

From the store’s side, this looks like a customer adding a bunch of random inexpensive items to their cart and repeatedly trying to check out, each time with a different card. Even small stores can end up with hundreds of abandoned carts. Eventually, the payment gateway may block the store for exceeding the allowed number of failed payment attempts.
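
A similar heuristic catches card testing. The sketch below flags a session that cycles through several distinct cards while racking up declines; the threshold and data shapes are our own assumptions for illustration.

```python
# A toy sketch of a card-testing heuristic: flag a checkout session that cycles
# through several distinct cards with repeated declines. The threshold and data
# shapes are illustrative assumptions, not production advice.
from dataclasses import dataclass, field

MAX_DISTINCT_CARDS = 3  # a genuine shopper rarely tries more than a couple of cards

@dataclass
class CheckoutSession:
    session_id: str
    card_fingerprints: set[str] = field(default_factory=set)  # hashes, never raw PANs
    declines: int = 0

    def record_attempt(self, card_fingerprint: str, approved: bool) -> bool:
        """Record one payment attempt; return True if this looks like card testing."""
        self.card_fingerprints.add(card_fingerprint)
        if not approved:
            self.declines += 1
        return (len(self.card_fingerprints) > MAX_DISTINCT_CARDS
                and self.declines >= MAX_DISTINCT_CARDS)

if __name__ == "__main__":
    session = CheckoutSession("sess-42")
    for i in range(5):  # a bot rotating stolen cards against the same cart
        suspicious = session.record_attempt(f"fp-{i}", approved=False)
    print("suspicious:", suspicious)  # True -> pause checkout, alert the fraud team
```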

Buyer fraud

Sometimes real customers may complete an order, only to later tell their bank they never made the purchase — and demand a refund. This could be a case of deliberate fraud, or simply one family member using another’s card without permission — for instance, a teenager using a parent’s card. Although such incidents are usually small-scale, they can still cause serious damage — especially if the store becomes known in “lifehacker” communities as a site that easily refunds money.

Fraudulent purchases

Depending on your store’s niche, location, and other factors, criminals may try to use stolen credit cards to “cash out” by purchasing goods or services. This can result in a wave of orders followed by a flood of disputes and cancellations. In some extreme cases, the volume alone becomes a threat — one store received 118 000 fraudulent orders, with criminals placing a fake order every three seconds.

Gift card attacks

If your store accepts gift cards, bots may attempt to brute-force thousands of card numbers and verification codes to find valid ones. Once found, they’re either used to make purchases or resold on the secondary market.
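
Some quick arithmetic shows why code entropy and rate limiting are the decisive factors here. The Python sketch below uses invented figures for the number of live cards and the bots’ guessing rate.

```python
# Back-of-the-envelope arithmetic showing why gift-card code entropy and rate
# limiting matter. All figures (live-card counts, guessing rates) are invented
# assumptions for illustration.

def expected_seconds(alphabet: int, length: int, valid_codes: int,
                     guesses_per_sec: float) -> float:
    """Expected time for random guessing to hit one valid code."""
    search_space = alphabet ** length
    return (search_space / valid_codes) / guesses_per_sec

# 16 alphanumeric characters, 1 million live cards, a botnet at 1 000 guesses/s:
secs = expected_seconds(36, 16, 1_000_000, 1_000.0)
print(f"~{secs / 86_400 / 365:.1e} years on average")  # astronomically infeasible

# Short 10-digit codes with no rate limiting (100 000 guesses/s):
secs = expected_seconds(10, 10, 1_000_000, 100_000.0)
print(f"~{secs:.1f} seconds on average")  # trivially fast without countermeasures
```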

Loyalty points theft

If your store allows purchases using accumulated loyalty points without requiring additional verification via SMS or other methods, attackers can either immediately drain any account they manage to access, or wait for the victim to accumulate more points. The latter often happens with stores that sell high-value products and have a loyal customer base.

Scalping exclusive products

If you sell, say, tickets to popular concerts or limited-edition sneakers, be prepared for resellers. Scalper bots can snap up all exclusive stock within minutes, triggering justified outrage from loyal customers. There’s an active black market for bots designed for popular e-commerce platforms, such as Shopifybot.

Mass account registration

To successfully run the schemes described above, attackers often create hundreds or thousands of accounts in your store, increasing operational costs — for instance, by triggering welcome SMS messages and follow-up email campaigns.

Direct and indirect business losses

Even if neither you nor your customers lose money or goods, any of the above schemes can lead to a wide range of problems and expenses:

  • Costs from fraudulent transactions and repeated failed payments. Depending on the situation and the terms of your agreement with the payment gateway, you might have to cover transaction and chargeback fees, fines, and other costs. You might also exceed your transaction limits and temporarily lose access to the payment gateway — effectively paralyzing normal operations.
  • Advertising costs and distorted analytics. Bots often arrive via referral links, paid search ads, and other forms of online advertising. This means your real advertising budget may be wasted attracting fake users. Even if the bots don’t consume your budget directly, their activity can mess up ad platform algorithms, resulting in lower-quality traffic to your site.
  • Misused marketing campaigns and promotions. Existing customers register new accounts to claim first-purchase welcome bonuses, while fraudsters hunt for loopholes that let them harvest bonuses en masse. As a result, the marketing budget allocated to attracting customers and building loyalty is wasted.
  • Poor planning. Numerous fake orders can be hard to filter out of your analytics — especially if you rely on the default analytics tools built into your e-commerce platform. As a result, planning for demand and stock becomes much more difficult.
  • Wasted time. Dealing with hundreds of abandoned carts, thousands of bogus accounts, and countless failed payment attempts consumes your employees’ time and energy, leading to operational delays and losses.
  • Customer dissatisfaction. Depending on the attack type, customers may suffer direct losses (money stolen, loyalty points drained, fraudulent activity on their account) or indirect inconveniences (product shortages, failed transactions). Whatever the issue, your support and marketing teams will have to handle it — offering discounts, compensation and so on. But many customers will simply walk away and never come back.

It’s no surprise that, according to some estimates, for every hundred dollars in fraudulent orders, businesses lose over double that in total costs.

How to protect your online business

The days of blocking bots by filtering IP addresses or adding a CAPTCHA at checkout are over. The AI boom has empowered not only automation in marketing and customer support, but also a new generation of dangerous fraud bots that easily bypass traditional protections.

That’s why businesses of all sizes need next-generation security technologies that monitor every user session from the moment they land on the site until checkout. This kind of continuous protection helps detect any anomalies — whether it’s a compromised legitimate account, abuse of the payment gateway API, mass fake account creation, or attempts to circumvent security measures.
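
To make the idea of continuous session scoring concrete, here is a toy Python sketch that folds several weak signals into one risk score with threshold-based actions. The signals, weights, and cut-offs are invented for illustration; they don’t describe how Kaspersky Fraud Prevention or any specific product works internally.

```python
# A toy illustration of continuous session scoring: fold several weak signals
# into one risk score and act on thresholds. Signals, weights, and cut-offs are
# invented; this is NOT how Kaspersky Fraud Prevention works internally.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    new_device: bool          # first time this device fingerprint has been seen
    impossible_travel: bool   # geolocation jumped implausibly far since last login
    headless_browser: bool    # automation hints in the browser environment
    api_only_checkout: bool   # payment API called without normal page navigation
    accounts_per_device: int  # distinct accounts used from this device today

def risk_score(s: SessionSignals) -> float:
    """Weighted sum of signals; the weights are illustrative assumptions."""
    score = 0.0
    score += 0.20 if s.new_device else 0.0
    score += 0.35 if s.impossible_travel else 0.0
    score += 0.25 if s.headless_browser else 0.0
    score += 0.30 if s.api_only_checkout else 0.0
    score += min(max(s.accounts_per_device - 1, 0), 5) * 0.10
    return score

if __name__ == "__main__":
    s = SessionSignals(new_device=True, impossible_travel=False,
                       headless_browser=True, api_only_checkout=True,
                       accounts_per_device=4)
    score = risk_score(s)
    action = ("block and review" if score >= 0.7
              else "step-up authentication" if score >= 0.4 else "allow")
    print(f"score={score:.2f} -> {action}")
```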

A leading solution in this space is Kaspersky Fraud Prevention. By continuously analyzing the user’s device, behavior, environment, and metadata in real time, it builds a profile of a legitimate user, detects anomalies early on, and protects against account compromise and fraud. Kaspersky Fraud Prevention can be tailored to the specific needs of your store using flexible rules that leverage both your own data and global analytics. The solution requires no installation on the user’s device, and integrates into existing websites and mobile apps with minimal effort.

Many site owners report that advanced anti-fraud analytics actually improve the customer experience — since legitimate users encounter fewer CAPTCHAs, SMS verifications, and other friction points. And ultimately, your business faces fewer losses — and can focus more on developing your product range and service.

Kaspersky official blog – ​Read More

How to get into cybersecurity | Unlocked 403 cybersecurity podcast (S2E3)

Cracking the code of a successful cybersecurity career starts here. Hear from ESET’s Robert Lipovsky as he reveals how to break into and thrive in this fast-paced field.

WeLiveSecurity – ​Read More

Task scams: Why you should never pay to get paid

Some schemes might sound unbelievable, but they’re easier to fall for than you think. Here’s how to avoid getting played by gamified job scams.

WeLiveSecurity – ​Read More