Ground zero: 5 things to do after discovering a cyberattack
When every minute counts, preparation and precision can mean the difference between disruption and disaster
WeLiveSecurity – Read More
Great news for all Linux users: our product line for home users now includes Kaspersky for Linux. Our cybersecurity solution with the highest number of global accolades now delivers maximum protection for home users across all their devices running Windows, Linux, macOS, Android, and iOS — all with just one Kaspersky for Linux subscription.
If you thought Linux was immune to cyberthreats, it’s time to rethink that view. The number of malicious programs targeting this OS has increased 20-fold over the past five years! These threats include miners, ransomware, and even malware embedded into the source code of popular applications. For instance, last year’s attack involving a backdoor in the XZ archiving utility, which is built into many popular Linux distributions, could have become the most widespread attack on the Linux ecosystem in its entire history.
Beyond viruses, Linux users face other threats that are common across all platforms: phishing and malicious websites, as well as theft of passwords and banking and personal data.
As interest in Linux-powered devices grows year after year, we want to ensure our users have 100% protection across every operating system. To achieve this, we’ve adapted our business security solution, which has been used worldwide for years, to meet the needs of home users.
The key features of Kaspersky for Linux include:
AI-powered antivirus scans and blocks infected files, folders, and applications upon detecting viruses, ransomware Trojans, password stealers, and other malware, preventing infection of your PC, other devices, and your entire network.
Anti-phishing warns you about phishing links in emails and on websites to protect your login credentials and banking data from theft.
Online payment protection verifies the security of bank websites and online stores before you execute any financial transactions.
Anti-cryptojacking prevents unauthorized crypto mining on your device to ensure cybercriminals can’t drain its performance.
Scanning of removable media, such as USB drives and external hard drives, upon connection to your computer, a tried-and-true defense against the spread of viruses.
Kaspersky for Linux supports major 64-bit Linux distributions, including Ubuntu, ALT Linux, Uncom, and RED OS.
To install the software, your PC must meet the following minimum specifications: at least a Core 2 Duo 1.86GHz CPU, 2GB of RAM, at least 1GB of swap space, and 4GB of free disk space. You can find the full system requirements here.
First, sign in to your My Kaspersky account. If you don’t have one, it’ll be created automatically when you purchase a subscription or install the free trial version.
Next, download the installation files compatible with your flavor of Linux: Kaspersky for Linux is distributed in DEB and RPM package formats.
Before you run the installer, double-check all requirements regarding your computer’s configuration, OS settings, and any installed software.
Follow the detailed step-by-step guide to install and set up the application. If you have any questions during setup or while using the application, you can consult the extensive Kaspersky for Linux help documentation.
Currently, the set of features available to users of Kaspersky for Linux doesn’t depend on your subscription — be it Kaspersky Standard, Kaspersky Plus, or Kaspersky Premium. This allows you to choose the most cost-effective option: for example, if you only need to protect a single PC running Linux, Kaspersky Standard is sufficient.
However, if you have a multi-device home ecosystem with computers, laptops, smartphones, and tablets running various operating systems, consider Kaspersky Premium. With this plan, you can protect up to 10 devices for all your family members. In addition to the top-tier security for Windows, Linux, macOS, Android, and iOS, you get a password manager, a fast and unlimited VPN, and a Kaspersky Safe Kids app for child protection and parental control (the last three are for Windows, macOS, Android, and iOS only).
You can explore everything Kaspersky for Linux can do with a free 30-day trial.
NB: Kaspersky for Linux isn’t GDPR-ready just yet.
Kaspersky official blog – Read More
October brought another strong round of updates to ANY.RUN, from a new ThreatQ integration that connects our real-time Threat Intelligence Feeds directly into one of the industry’s leading TIPs, to hundreds of new signatures and rules that sharpen network and behavioral detection.
With 125 new behavior signatures, 17 YARA rules, and 3,264 Suricata rules, analysts can now spot emerging threats faster and with greater precision. Together with the ThreatQ connector, these improvements make it easier for SOCs and MSSPs to enrich alerts, automate response, and gain deeper visibility into live attack activity.
October brought another major milestone to ANY.RUN’s growing ecosystem: a new integration that links ANY.RUN’s Threat Intelligence Feeds directly with ThreatQ, one of the industry’s leading Threat Intelligence Platforms (TIPs).
This integration helps SOC teams and MSSPs gain real-time visibility into active global threats, cut investigation time, and strengthen detection accuracy across phishing, malware, and network attack surfaces.
Now, analysts using ThreatQ can automatically ingest fresh, high-confidence IOCs gathered from live sandbox investigations of malware samples detonated by 15,000+ organizations and 500,000+ analysts worldwide.
How this update helps security teams:

The connector works through the STIX/TAXII protocol, ensuring full compatibility with existing ThreatQ environments. Security teams can configure feeds to update hourly, daily, or on a custom schedule, with no custom development or infrastructure changes required.

For detailed information, see ANY.RUN’s TAXII connection documentation.
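The feeds delivered over TAXII arrive as STIX 2.1 objects. As a minimal sketch of what consuming them can look like, the snippet below parses an illustrative STIX bundle (the object IDs and domain are made up, not real feed data) and pulls out the indicator patterns:

```python
import json

# Illustrative STIX 2.1 bundle; real objects would come from the TAXII collection.
bundle = json.loads("""
{
  "type": "bundle",
  "id": "bundle--0001",
  "objects": [
    {"type": "indicator",
     "id": "indicator--0001",
     "pattern": "[domain-name:value = 'malicious.example']",
     "pattern_type": "stix",
     "valid_from": "2025-10-01T00:00:00Z"},
    {"type": "malware",
     "id": "malware--0001",
     "name": "ExampleLoader"}
  ]
}
""")

def extract_indicators(bundle: dict) -> list[str]:
    """Collect the detection patterns from the indicator objects in a bundle."""
    return [obj["pattern"] for obj in bundle.get("objects", [])
            if obj.get("type") == "indicator"]

print(extract_indicators(bundle))
# → ["[domain-name:value = 'malicious.example']"]
```

The extracted patterns could then be forwarded to whatever enrichment or blocking workflow the team runs downstream.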
In October, our team continued to strengthen detection capabilities so SOCs can stay ahead of new and evolving threats:
These updates enable analysts to gain faster, more confident verdicts in the sandbox and enrich SIEM, SOAR, and IDS workflows with fresh, actionable IOCs.
This month’s updates focus on helping analysts catch stealthy activity earlier in the attack chain. The new behavior signatures detect payload downloads, privilege escalation attempts, and persistence mechanisms used by modern ransomware, stealers, and loaders.
We also expanded coverage of mutex detections and legitimate administrative tools often abused by attackers. Together, these improvements provide clearer visibility into real-world execution flow and strengthen automated classification in the sandbox.
Highlighted families and techniques include:
In October, we added 17 new YARA rules focused on detecting emerging malware families, credential-dumping utilities, and reconnaissance tools increasingly used in modern attack chains.
These additions strengthen both automated detection and manual hunting, helping analysts identify threats that blend malicious code with legitimate administrative software.
Several new rules were built directly from live samples analyzed in the sandbox, capturing real payloads, shellcode fragments, and memory artifacts tied to loaders, stealers, and botnets. This ensures faster and more reliable classification when scanning new samples or correlating incidents across environments.
Highlighted YARA rules include:
This month, the detection team delivered 3,264 new Suricata rules to improve coverage of phishing activity, APT operations, and evasive web-based malware behavior.
These updates expand network visibility for SOCs and MSSPs, helping analysts detect malicious traffic even when it hides behind trusted services or multi-stage redirects.
Highlighted additions include:
ANY.RUN supports more than 15,000 organizations worldwide across industries such as banking, manufacturing, telecom, healthcare, and technology, helping them build faster, smarter, and more resilient cybersecurity operations.
Our cloud-based interactive sandbox enables teams to safely analyze threats targeting Windows, Linux, and Android systems in real time. Analysts can observe every system and network action, interact with running samples, and extract IOCs in under 40 seconds, all without complex infrastructure setup.
Combined with Threat Intelligence Lookup and Threat Intelligence Feeds, ANY.RUN helps SOCs accelerate investigations, reduce noise, and improve detection accuracy. Teams can easily integrate these capabilities into SIEM and SOAR systems to automate enrichment and streamline response.
Ready to see it in action?
Start your 14-day trial of ANY.RUN
The post Release Notes: ANY.RUN & ThreatQ Integration, 3,000+ New Rules, and Expanded Detection Coverage appeared first on ANY.RUN’s Cybersecurity Blog.
ANY.RUN’s Cybersecurity Blog – Read More

Welcome to this week’s edition of the Threat Source newsletter.
This one is pretty much an updated, Halloween-themed version of my newsletter from July, including data up through Q3.
October 14th has passed, so free support for Windows 10 has come to an end, leaving you with no more fixes unless you’re willing to pony up. While users in many countries must now pay to get Windows 10 security updates (the “trick”), private users in the European Economic Area get free security updates (the “treat”) until Oct. 14, 2026. This special reward, won after consumer rights groups pushed Microsoft to do better under EU law, means no $30 fee, no reward points, and no cloud backup needed… just a Microsoft account.
There’s another trick: The treat is for consumers, not companies, and there are some technical prerequisites (described here).
While Cybersecurity Awareness Month is coming to an end, you still have a chance to reach out to friends and family and encourage them to update their software (one of the Core4 messages this year). Get them to enable Extended Security Updates (ESU), upgrade to Windows 11, or migrate to any other OS that will receive future patches.
Patching is critical. In Q3, we did not run short on vulnerabilities.

With roughly 35,000 CVEs by the end of September, we are still tracking a pace of about 130 CVEs per day. If the almost-linear trend continues, we will land at roughly 47,000 for 2025. And for legal purposes, I am not challenging anyone to break the 50,000 barrier!
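As a quick sanity check of that back-of-the-envelope projection (assuming a Sept. 30 cutoff, i.e. day 273 of the year):

```python
# Back-of-the-envelope check of the CVE pace cited above.
cves_so_far = 35_000
days_elapsed = 273          # Jan 1 through Sept 30

per_day = cves_so_far / days_elapsed
projection_2025 = per_day * 365

print(round(per_day))          # ~128 CVEs/day, close to the ~130 cited
print(round(projection_2025))  # ~46,800, in line with the ~47,000 estimate
```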
This is not just about theoretical vulnerabilities. Known Exploited Vulnerabilities (KEVs) are also on the rise. In comparison, the number of KEVs stayed nearly the same between 2023 and 2024, with 187 and 186, respectively.

With 183 at the end of Q3, I think it is safe to say we are going to surpass that number this year. (Spoiler: at the time of writing, there were already 210.) KEVs that affect network-related gear are up three percentage points to 28% of the total, which is not a massive increase but certainly a relevant share. Overall, vendor diversity also continues to expand, from 61 vendors in July to 79 so far this year.

While the oldest CVE added to the catalog was from 2017 last time, the third quarter introduced a few new negative records from 2007, 2013, 2014, and 2016.
While this isn’t a part of our Q3 data, CVE-2025-59287 caught my attention late Friday afternoon. I didn’t expect the WSUS service to be publicly exposed to the internet, but it found its way into the KEV catalog, too.
In a pumpkin shell: Keep stalking those bugs and patching your spells, because vulnerabilities won’t patch themselves. Happy Halloween!
We’re introducing the Tool Talk series, where Talos shares open-source tools alongside practical insights, tips, and enhancements to help cybersecurity professionals and researchers work smarter and more effectively.
Our first post introduces dynamic binary instrumentation (DBI) and provides a step-by-step guide to building your own DBI tool using the open-source DynamoRIO framework on Windows 11. DBI lets you analyze and modify running programs — crucial for malware analysis, security audits, reverse engineering, and performance profiling — even when you don’t have the original source code. The post covers DynamoRIO’s strengths, compares it to other frameworks, and offers practical examples, including sample code from our GitHub repository.
If you’re interested in malware analysis, debugging, or getting a deeper look inside how binaries behave at runtime, this blog shows you how to do all that without needing source code access. DBI tools like DynamoRIO are essential for modern security research, especially for bypassing common malware defenses and anti-analysis tricks.
Ready to get hands-on? Follow the blog’s step-by-step instructions to build your own DBI client, test it out, and explore the example code provided. Whether you’re looking to automate malware analysis, profile software, or just tinker with low-level instrumentation, you’ll find everything you need to kickstart your own DBI projects.
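DynamoRIO clients themselves are written in C against its API, but the core idea of observing a program as it runs can be sketched with Python's standard library: `sys.settrace` plays the role of the instrumentation hook. This is an analogy for illustration only, not DynamoRIO's API:

```python
import sys

executed_lines = []

def tracer(frame, event, arg):
    # Record every line the traced code executes, similar in spirit to how a
    # DBI tool observes a binary's basic blocks at runtime.
    if event == "line":
        executed_lines.append(frame.f_lineno)
    return tracer

def target(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)   # install instrumentation
result = target(3)     # run the "program" under observation
sys.settrace(None)     # remove instrumentation

print(result)               # 3  (0 + 1 + 2)
print(len(executed_lines))  # one entry per executed line
```

Real DBI frameworks do this at the machine-code level with far lower overhead, but the observe-without-modifying-the-source principle is the same.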
Microsoft issues emergency patch for critical Windows Server bug
This CVE is a remote code execution (RCE) flaw in WSUS, which is part of Windows Server and allows administrators to schedule, manage, and deploy patches, hotfixes, service packs, and other updates. (DarkReading)
Shutdown sparks 85% increase in U.S. government cyberattacks
Cyberattacks against federal employees have nearly doubled since the US government shut down on Oct. 1. Experts emphasize that the most serious cyber consequences of the shutdown won’t come in the form of immediate breaches. (DarkReading)
Over 250 Magento stores hit overnight as hackers exploit new Adobe Commerce flaw
E-commerce security company Sansec has warned that threat actors have begun to exploit a recently disclosed security vulnerability in Adobe Commerce and Magento Open Source platforms. (The Hacker News)
Hacking Team successor linked to malware campaign, new “Dante” commercial spyware
Kaspersky found that victims were infected through personalized phishing links exploiting a zero-day Chrome vulnerability, with the campaign targeting a broad range of Russian organizations for espionage. (CyberScoop)
SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
MD5: 2915b3f8b703eb744fc54c81f4a9c67f
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
Example Filename: e74d9994a37b2b4c693a76a580c3e8fe_1_Exe.exe
Detection Name: Win.Worm.Coinminer::1201
SHA256: d933ec4aaf7cfe2f459d64ea4af346e69177e150df1cd23aad1904f5fd41f44a
MD5: 1f7e01a3355b52cbc92c908a61abf643
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=d933ec4aaf7cfe2f459d64ea4af346e69177e150df1cd23aad1904f5fd41f44a
Example Filename: cleanup.bat
Detection Name: W32.D933EC4AAF-90.SBX.TG
SHA256: c0ad494457dcd9e964378760fb6aca86a23622045bca851d8f3ab49ec33978fe
MD5: bf9672ec85283fdf002d83662f0b08b7
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=c0ad494457dcd9e964378760fb6aca86a23622045bca851d8f3ab49ec33978fe
Example Filename: f_003b84.html
Detection Name: W32.C0AD494457-95.SBX.TG
SHA256: 41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610
MD5: 85bbddc502f7b10871621fd460243fbc
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610
Example Filename: 85bbddc502f7b10871621fd460243fbc.exe
Detection Name: W32.41F14D86BC-100.SBX.TG
SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974
MD5: aac3165ece2959f39ff98334618d10d9
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974
Example Filename: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974.exe
Detection Name: W32.Injector:Gen.21ie.1201
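To look up a local file against the Talos file reputation links above, you first need its SHA256. A small stdlib sketch (the streaming helper is a generic pattern, not Talos tooling):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large samples don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The same digest computed on raw bytes, for a quick sanity check:
print(hashlib.sha256(b"hello").hexdigest())
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

The resulting hex digest is what goes into the `talos_file_reputation?s=...` query string shown in the entries above.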
Cisco Talos Blog – Read More
Each cyberattack leaves behavioral evidence. A malware sandbox provides the secure environment analysts need to study that activity and uncover hidden tactics.
Teams using sandbox analysis report measurable gains: up to 3× higher SOC efficiency, 90% faster detection of unknown threats, and a 60% drop in false positives.
Behavior-based visibility gives SOCs the upper hand against stealthy attacks. Let’s see how sandbox security works, and why it has become essential for modern threat detection.
A malware sandbox is a controlled, isolated environment designed to safely run and observe suspicious files, links, or applications. It allows analysts to see exactly how a threat behaves without risking real systems or networks.
Instead of relying on signatures or predefined rules, a sandbox focuses on dynamic malware analysis, monitoring how code acts in motion. This approach helps detect new, unknown, or obfuscated malware that traditional antivirus tools often miss.
Watch the full video on how ANY.RUN’s malware sandbox works
Inside the sandbox environment, analysts can observe file system changes, registry modifications, network requests, and command execution in real time. Every action is recorded, creating a detailed behavioral profile that reveals the malware’s purpose, persistence methods, and communication patterns.
In short, a malware analysis sandbox turns hidden threats into visible data, giving cybersecurity teams the clarity they need to understand, detect, and stop complex attacks before they spread.
A malware sandbox operates by executing suspicious files, links, or processes in a virtual and fully isolated environment that imitates a real operating system. This lets analysts safely observe every action the sample performs, without exposing actual devices or networks to risk.
Modern sandboxes can be built on virtual machines, containers, or emulation frameworks. Each architecture recreates realistic conditions, including file systems, system registries, network connections, and even user interactions, so malware behaves as it would in the wild.
Here’s how sandbox analysis typically unfolds:
1. A suspicious file or link is submitted and detonated inside an isolated virtual machine.
2. The sandbox monitors every action in real time: file system changes, registry modifications, network requests, and command execution.
3. All recorded activity is compiled into a behavioral profile, and indicators of compromise (IOCs) are extracted.
4. A report summarizes the verdict, observed techniques, and indicators for the analyst.
This approach, known as dynamic malware analysis, focuses on behavior instead of static code. It allows analysts to detect zero-day threats, hidden payloads, and polymorphic variants that traditional antivirus tools often miss.
Advanced malware detection sandboxes also counter evasion tactics by simulating real user activity, extending runtime to catch delayed triggers, and randomizing system identifiers to appear like genuine machines.
By sandboxing malware, security teams gain deep behavioral visibility, understanding not just what the file is, but what it tries to do.
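To make "behavioral profile" concrete, here is a toy sketch of the idea: a per-process log of recorded actions plus one simple persistence heuristic. The event names, paths, and rule are invented for illustration, not a real sandbox's schema:

```python
from collections import defaultdict

# Toy behavioral events a sandbox might record during detonation (illustrative).
events = [
    {"process": "invoice.exe", "action": "file_write",
     "target": "C:\\Users\\victim\\AppData\\run.vbs"},
    {"process": "invoice.exe", "action": "registry_set",
     "target": "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run"},
    {"process": "invoice.exe", "action": "net_connect",
     "target": "203.0.113.7:443"},
]

def build_profile(events):
    """Group recorded actions per process: the raw material of a behavior report."""
    profile = defaultdict(list)
    for e in events:
        profile[e["process"]].append((e["action"], e["target"]))
    return dict(profile)

def looks_persistent(profile):
    """Flag any process that both drops a file and sets an autorun registry key."""
    for actions in profile.values():
        kinds = {action for action, _ in actions}
        if {"file_write", "registry_set"} <= kinds:
            return True
    return False

profile = build_profile(events)
print(looks_persistent(profile))  # True
```

Real sandboxes record thousands of such events per run and match them against large signature sets, but the structure of the output is the same: actions, grouped and judged.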
To see how this process works in practice, let’s look at a real-world example. Inside the ANY.RUN sandbox, a phishing sample pretending to be a Google Careers page was analyzed.
The sandbox reveals the entire attack chain in just 60 seconds, from the Salesforce redirect and Cloudflare CAPTCHA to the fake login page that stole credentials and sent them to its command server.

All stages were captured in real time: every request, redirection, and data theft attempt. The sandbox also generated a full picture of the attack: key indicators like file hashes and domains, the techniques the malware used, its network activity, and a clear process timeline. Everything an analyst would need to investigate or build detection rules was right there in one report.
A malware sandbox gives analysts a clear view of what really happens when a threat runs. Instead of guessing based on static scans or file signatures, teams can watch the malware in action, safely and in real time.
Here’s why that matters:
In short, a malware sandbox helps teams move from guessing to knowing, turning hidden behavior into clear, actionable insights that speed up detection and response.
Not all sandboxes work the same way. Depending on how they’re deployed and what they’re used for, organizations can choose between several types, each offering a different balance of control, scalability, and performance.
1. On-Premise Sandboxes
These sandboxes run inside an organization’s own infrastructure. They’re ideal for teams that handle sensitive data and need full control over their analysis environment. On-premise setups can be customized to mimic internal systems closely, from OS configurations to network settings, but they often require more maintenance and hardware resources.
2. Cloud Sandboxes
A cloud sandbox runs remotely, making it easier to scale and share results across distributed teams. It’s especially useful for SOCs that need to analyze large volumes of samples daily or for companies that want access without complex local setup. Cloud solutions also stay up to date automatically, ensuring faster adaptation to new threats.
3. Open-Source Sandboxes
These types of sandboxes allow researchers and security teams to build their own sandbox environments from scratch. They’re highly customizable and great for experimentation or research, though they usually require more technical know-how to maintain.
Each of these types of malware sandboxes serves a different need, from enterprise-grade automation to hands-on analysis. Choosing the right one depends on how much control, customization, and scale your security operations require.
| Feature | On-Premise | Cloud | Open-Source |
|---|---|---|---|
| Easy setup & deployment | No | Yes | Manual |
| Automatic updates | No | Yes | No |
| Scalable for multiple analyses | Limited | Yes | Limited |
| Customizable environment | Yes | Partial | Yes |
| Real-time collaboration | No | Yes | No |
| No maintenance required | No | Yes | No |
| Integration with other tools (SIEM, SOAR, etc.) | Possible | Yes | Manual |
| Cost efficiency | Medium | High | High (free) |
| Data privacy & local control | Yes | Depends on provider | Yes |
| Ideal for large SOCs & MSSPs | Sometimes | Yes | No |
A malware sandbox is a daily necessity across different areas of cybersecurity. From SOC teams to threat intelligence analysts, everyone benefits from being able to see how malware behaves in a safe environment.

Here’s how different professionals rely on sandbox analysis:
Antivirus software protects against threats that are already known. It scans files, compares them to a database of malware signatures, and blocks anything that matches. This method works well for common, well-documented attacks, but it struggles with new or changing ones.
Modern malware often hides its code, changes its structure, or uses encryption to stay invisible to signature-based tools. That’s where a malware sandbox makes all the difference.
Instead of checking what a file looks like, a sandbox watches what it does. It runs the file in a safe, isolated environment and tracks every move: the processes it starts, the files it creates, and the connections it tries to make. This approach, called behavior-based detection, exposes even the newest or most complex threats.
Simply put, antivirus tools stop what’s already known. A malware sandbox uncovers what’s new and unknown.
Used together, they give teams both quick protection and deeper visibility, a strong mix for modern cyber defense.
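The contrast can be shown in miniature. The sample bytes, action names, and the two-action threshold below are all invented for illustration:

```python
import hashlib

# Signature-based check: only matches samples already in the database.
known_bad_hashes = {hashlib.sha256(b"old known malware").hexdigest()}

def signature_detect(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

# Behavior-based check: flags what the sample *does*, regardless of its bytes.
def behavior_detect(observed_actions: set[str]) -> bool:
    suspicious = {"disable_defender", "encrypt_user_files", "delete_shadow_copies"}
    return len(observed_actions & suspicious) >= 2

new_variant = b"repacked malware the database has never seen"
actions = {"encrypt_user_files", "delete_shadow_copies", "read_config"}

print(signature_detect(new_variant))  # False: unknown hash slips past signatures
print(behavior_detect(actions))       # True: its behavior still gives it away
```

Repacking changes the hash and defeats the first check for free; changing observable behavior is much harder, which is the sandbox's advantage.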
As malware sandboxes become more advanced, attackers are learning to adapt. Some modern malware doesn’t simply run its code right away; it first checks where it’s running. If it senses it’s inside a sandbox, it may stay quiet, hoping to slip through undetected.
These are some of the most common tricks attackers use:

To keep up, modern malware analysis sandboxes have grown much smarter. They simulate human actions like typing and clicking, randomize virtual hardware details, and even extend analysis time to catch delayed behavior. Some advanced platforms also run the same sample across multiple environments to expose hidden logic or secondary payloads.
So yes, attackers keep trying to fool sandboxes. But as sandbox technology evolves, those tricks are becoming less effective. Each new generation of sandbox security makes it harder for malware to hide, ensuring analysts still see the full picture before damage is done.
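The delayed-trigger tactic and the extended-runtime counter described above can be simulated in a few lines; the 300-second dormancy window is an invented figure:

```python
def delayed_payload(seconds_elapsed: int) -> bool:
    """Toy evasive sample: stays dormant until 300 simulated seconds have passed."""
    DORMANCY = 300
    return seconds_elapsed >= DORMANCY

def sandbox_observes_payload(runtime_seconds: int) -> bool:
    """Does a sandbox run of the given length ever see the payload fire?"""
    return any(delayed_payload(t) for t in range(runtime_seconds + 1))

print(sandbox_observes_payload(120))  # False: a short run misses the trigger
print(sandbox_observes_payload(600))  # True: extended runtime catches it
```

Real sandboxes combine this with clock acceleration and sleep-skipping so they don't actually have to wait out the dormancy period.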
With so many sandbox solutions available, choosing the right one can be tricky. Some focus on quick verdicts, others on deep behavioral insights. The best choice depends on your goals: whether you’re running a SOC, enriching threat intelligence, or conducting malware research.
When evaluating options, start with what matters most: visibility. A good sandbox doesn’t just tell you that a file is malicious, it shows why. It should capture every action the sample performs: file system changes, registry edits, process trees, and network traffic. These behavioral details are what make sandbox analysis so powerful.
Realism is equally important. The closer the sandbox mimics a real system, the more accurate the results. Platforms that support multiple operating systems and simulate user activity (like mouse clicks or typing) are better at exposing evasive malware that would otherwise stay hidden.
Speed, scalability, and integration also matter. Cloud-based sandboxes process hundreds of samples in parallel, deliver reports within minutes, and connect easily to SIEM, SOAR, or threat intelligence systems. Structured exports in formats like JSON or STIX/TAXII make automation effortless.

Finally, consider privacy. If you work with sensitive or client data, make sure your sandbox offers private or isolated analysis modes.

When choosing your sandbox, think beyond detection. Look for visibility, speed, flexibility, and control: the qualities that help you understand how malware behaves and stop it before it spreads.
If you take those same criteria and apply them to ANY.RUN, you’ll see how closely the platform aligns with what modern security teams need.
| Factor | How ANY.RUN Delivers It |
|---|---|
| Behavioral visibility | Displays every system and network action in real time, with visualized process trees and detailed logs. |
| Realistic environment | Simulates genuine user behavior, forcing evasive malware to reveal its payloads. |
| Multiple OS environments | Supports Windows, Linux, and Android analysis, with new profiles added regularly. |
| Interactivity | Analysts can click, type, and interact with running samples — exposing behavior that static tools miss. |
| Speed and scalability | Cloud infrastructure processes multiple samples in parallel, generating full reports in minutes. |
| Automation and integrations | Connects with SIEM, SOAR, and TI tools via API or webhook for seamless workflow automation. |
| Threat intelligence enrichment | Extracts IOCs, maps MITRE ATT&CK techniques, and links to related CVEs automatically. |
| Clear, exportable reports | Offers human-readable summaries and structured outputs (JSON, STIX/TAXII). |
| Privacy options | Private analysis mode ensures sensitive data stays isolated and secure. |
| Ease of use | Intuitive interface and quick setup make analysis accessible to any skill level. |
| Anti-evasion features | Randomized environments, user simulation, and adjustable runtime defeat stealthy malware tactics. |
| Managed lookups & history | Analysts can search past public or private sessions and track recurring threats. |
ANY.RUN combines what most teams need from a sandbox: visibility, control, and speed, all in a secure, interactive cloud environment. It helps analysts move faster, collaborate better, and uncover behaviors that traditional tools simply can’t see.
Teams Using ANY.RUN’s Interactive Sandbox Report Measurable Results:
For businesses, that means lower risk exposure, more productive analysts, and faster containment of incidents, all without expanding headcount or infrastructure.
Yes. Antivirus tools catch known threats using signatures, while a sandbox helps detect unknown or evolving malware by observing real behavior. Together, they form a stronger defense.
It’s extremely rare with modern platforms. Reputable sandboxes, especially cloud-based ones, use strict isolation layers to ensure any malicious code stays fully contained.
Most analyses complete in a few minutes. Cloud sandboxes are faster because they can run multiple sessions at once and generate reports almost instantly. For instance, 90% of sandbox analyses carried out in the ANY.RUN sandbox last around 60 seconds.
Static analysis examines code without executing it. Dynamic analysis, what a sandbox performs, actually runs the file to observe its real behavior and system impact.
Look for detailed behavioral reports, IOCs extraction, and options for interactivity or automation. If it helps you understand why a file is malicious, not just that it is, it’s doing its job well.
Yes, when privacy features are enabled. Some solutions, like the ANY.RUN sandbox, let users run fully private sessions where samples and results stay completely isolated.
Dynamic environments are especially useful for ransomware, downloaders, stealers, and phishing payloads: malware that changes behavior based on context or timing.
Many modern sandboxes like ANY.RUN’s Interactive Sandbox support API connections, STIX/TAXII feeds, and SIEM/SOAR integrations. This allows automatic data sharing and faster incident response.
ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, makes this kind of investigation fast and accessible. The service processes millions of analysis sessions and is trusted by 15,000+ organizations and over 500,000 professionals worldwide.
Teams using ANY.RUN report measurable results: up to 3× higher SOC efficiency, 90% faster detection of unknown threats, and a 60% drop in false positives thanks to real-time interaction and behavior-based analysis.
Explore ANY.RUN’s capabilities with a 14-day trial →
Discover how ANY.RUN can help your team detect faster, analyze deeper, and respond smarter.
The post What is a Malware Sandbox? Everything SOC Analysts and CISOs Need to Know appeared first on ANY.RUN’s Cybersecurity Blog.
ANY.RUN’s Cybersecurity Blog – Read More
Managed Extended Detection and Response (MXDR) solutions have long been a staple for large corporations. They provide 24/7 monitoring, continuous threat handling, and rapid incident response — all without the need to deploy and maintain in-house infrastructure. Crucially, they also make cybersecurity costs predictable. It sounds like an ideal option for small and medium-sized businesses (SMBs) as well. In practice, however, this isn’t always the case. For an SMB, a standard MXDR solution may end up complicating matters for the internal IT security team instead of simplifying them, overloading the team members with a barrage of confusing alerts and an abundance of tools.
This post discusses the differences between an MXDR service suitable for a large enterprise, and one that would fit perfectly into the security framework of a growing SMB. We’ll also outline the qualities that we believe the ideal MXDR solution for SMBs should possess.
Large companies typically already have a dedicated cybersecurity team with relatively mature processes and qualified experts on board who are capable of smoothly integrating and competently managing the service. Therefore, large businesses often use MXDR solutions as part of a hybrid SOC model: an external provider’s team handles some tasks, but a significant portion of the work remains with the in-house team.
Most SMBs lack the necessary arsenal of solutions and, most importantly, a dedicated in-house cybersecurity team — at least one with a sufficient understanding of attacker tactics, techniques, and procedures (TTPs), along with the skills to counteract them. They often don’t have enough time or expertise to integrate multiple telemetry sources, set up correlation rules, or analyze a flood of alerts. More often than not, security in SMBs falls to IT team members who simply don’t even have the bandwidth for continuous communication with external analysts.
The result of trying to integrate an enterprise-level solution in SMB infrastructure is often an overload rather than a simplification of processes: a deluge of incident alerts with no one to analyze them, and complex interfaces and processes that the team simply gets lost in. Under these conditions, it’s extremely difficult to develop in-house expertise: the team is simply too busy just trying to maintain an adequate level of company security. This is precisely why SMBs need a different MXDR format: one that is clearer, built on partnership, and focused on developing the internal team rather than replacing it.
When the internal team needs to not only ensure security, but also develop its own expertise, the MXDR service should provide support from experienced and qualified experts rather than simply replace the cybersecurity function. This should be a partnership where the provider doesn’t just take on some of the responsibilities and help neutralize threats, but also:
In other words, the ideal MXDR service for an SMB works with the team — not instead of it. Below, we look at the specific qualities this solution should have.
SMBs can vary not only in their needs, but also in their degree of cybersecurity maturity. Therefore, an MXDR service shouldn’t be limited to basic automation or one-size-fits-all scenarios. The solution provider must be able to adapt to the specifics of each client.
This means that detection and alert triage rules must be configured based on the characteristics of the infrastructure, the software and security tools in use, and the behavior of various user groups. This makes it possible to distinguish a real threat from normal activity and, as a result, reduce the number of false positives.
This level of customization helps reduce the number of clarifying requests that MXDR experts have to address to the client’s team — for example, whether a certain user running PowerShell is standard or anomalous behavior. It speeds up threat detection and incident response, and reduces the workload on the client’s internal cybersecurity team, allowing them to focus on strategic tasks.
For the team responsible for cybersecurity at an SMB, it’s critical not to get drowned in hundreds of notifications. It needs to quickly understand what is truly a threat, what actions were already taken, and what steps need to be taken next. Therefore, a high-quality MXDR service team must analyze not only obviously malicious events, but also suspicious activity from legitimate software. From there, out of thousands of alerts, only those related to adversarial activity should be selected. The client should be presented not with a multitude of hypotheses, but a clear, ready-made picture of what happened, consolidated into a single incident and accompanied by context. This includes the identified root cause, related events, and affected assets.
To make it easier for the business to navigate, the provider should offer an overview of all protected company assets and their current status so the client can open a dashboard at any time and see what’s under control and what needs attention. If the internal team still has questions, it should always be able to reach out directly to the service’s experts to work together — for example, go over the details of an incident.
Another element of transparency is reporting. There should be an option to customize the reports to meet the client’s needs and requests; for instance, by providing a convenient bi-weekly overview with key takeaways and, if required, a detailed description of incidents. Flexibility in communication methods is also vital; for example, the client should be able to choose the most convenient channel — whether a messaging app, email, or something else — to ensure the internal team can be reached in a timely manner when an incident requires a decision. This helps company management keep a close eye on things, while technical experts can monitor events at a reasonable pace and dive deeper when needed.
Thanks to this approach, MXDR alleviates one of the biggest challenges for SMBs: the need to independently parse and prioritize hundreds of notifications.
In case the in-house team prefers to handle hypothesis testing and root cause analysis internally, it’s essential for the MXDR solution to enable proactive threat hunting and artifact analysis using the available XDR tools. Therefore, the MXDR provider needs to grant the client access to knowledge bases on current attacker techniques and tactics (threat intelligence), information on new campaigns, and relevant analytics. However, if needed — such as when the client’s team realizes its expertise is insufficient despite having the TTP data — it still needs to have the option to escalate the alert to the MXDR team for analysis.
A large portion of incidents begins with employee error. Therefore, a good MXDR provider should help the client foster a healthy cybersecurity culture within the organization. This is largely done by raising the awareness of rank-and-file employees about the modern tricks used by attackers.
The most effective approach doesn’t entail abstract lectures, but training based on real-life incidents that have actually occurred within the company. For example, if an attack began with employees in a certain team opening a phishing email, that team should undergo training that focuses on that exact scenario. Ideally, its progress should be tested with a simulated phishing campaign. Such proactive measures help mitigate risks associated with the human factor, thereby reducing potential financial losses — a critical concern for growing organizations.
For instance, our Kaspersky Next MXDR Optimum allows you to assign employee training directly from the alert card in just a few clicks. Furthermore, to enhance the skills and knowledge of “frontline defenders”, our solution offers response training programs tailored for IT and cybersecurity teams. These programs allow specialists to engage deeply with advanced tools in environments that replicate real-world scenarios, enabling them to solve incidents quickly and effectively. For example, they can learn how to safely check password hashes, search for discrepancies between recommended and actual domain policies, and assess the security of Active Directory parameters.
For SMBs, a good MXDR solution is far from a “black box” service. It’s an ecosystem of partnership that combines:
It is with this philosophy in mind that we created our Kaspersky Next MXDR Optimum: as a service that works in concert with XDR tools and supports the SMB growth strategy. You can learn more about this solution on the Kaspersky Next Optimum page.
Kaspersky official blog – Read More

Binary instrumentation involves inserting code into compiled executables to monitor, analyze, or modify their behavior — either at runtime (dynamic) or before execution (static) — without altering the original source code. Tools like DynamoRIO, Intel PIN, Valgrind, Frida, and QBDI are commonly used in the field. Static binary instrumentation (SBI) injects code before a binary runs, typically by modifying the file on disk, whereas dynamic binary instrumentation (DBI) operates in memory while the program runs. These techniques are widely used for profiling, debugging, tracing, security analysis, and reverse engineering.
DynamoRIO (DR) is a mature, well-maintained, and frequently updated open-source DBI framework. HP and the Massachusetts Institute of Technology (MIT) developed the first version in collaboration around 2000. Derek Bruening, DynamoRIO’s lead developer, described the main concept in his 2004 PhD dissertation, and further information about the history of DR can be found here.
The main reasons why Talos uses DR for Windows- and Linux-based malware analysis are its low performance impact at execution time, its excellent transparency (the target application does not recognize that it is instrumented), and its open-source license. Table 1 provides a brief comparison of some of the common instrumentation frameworks used in the industry. Please treat this as a general reference only: the comparison may be biased by our use cases and may not remain accurate over time, given the ongoing development of the different frameworks. There is also no single best toolkit overall; it depends on the use case.
It should also be noted that how well a given instrumentation framework performs always depends on the user code. Even the best framework cannot fix bad user code.
| Feature | DynamoRIO | Intel PIN | Frida |
| --- | --- | --- | --- |
| Type | DBI | DBI | Dynamic runtime instrumentation via API hooking |
| Instrumentation granularity | Basic blocks and instructions | Instruction-level (very fine-grained) | Function-level and instruction-level (via memory hooks) |
| Language (API) | C/C++ | C/C++ | JavaScript, Python, C |
| Target platforms | Windows, Linux (limited macOS, Android forks) | Windows, Linux (x86/x64 only) | Windows, Linux, macOS, Android, iOS |
| Architecture support | x86, x64, ARM (partial), AArch64 (forks) | x86, x64 | x86, x64, ARM, ARM64 |
| License | Open source (BSD-like) | Proprietary (free for non-commercial) | Open-source core and commercial license (Frida Pro) |
| Performance overhead | Medium (2–10× depending on tool complexity) | High (10–20× or more with deep instrumentation) | High (especially with many hooks or on mobile) |
| Transparency (anti-debug evasion) | Medium (code caching may leak) | Medium to low (can be fingerprinted) | Low (easily detectable by injected libraries or syscalls) |
| Best use cases | Runtime analysis, instrumentation, sandboxing | Deep instruction analysis, academic research | API hooking, mobile analysis, debugging, live patching |
| Shellcode detection feasibility | Excellent (module-level execution monitoring) | Good, but more effort needed | Limited (good for allocation and hooking, not raw exec detection) |
| Community and documentation | Active community, used in research and industry | Older, still maintained by Intel | Very active, large community, modern docs |
Table 1. DBI framework comparison.
Ultimately, the possibilities are limited only by your creativity and your knowledge of malware technology. Here are some examples:
Malware samples can be executed on real hardware and still be monitored and analyzed. Alternatively, VM detection functions can be patched at runtime to make sure the malware does not recognize it is running in a VM.
Malware uses many simple but common anti-X techniques (such as anti-debugging, anti-emulation, anti-tamper, anti-disassembly, and self-modification) that either do not recognize DBI or have no impact on DBI analysis. Many of the runtime code-manipulation techniques the frameworks use are either transparent or hidden from the malware — or the malware simply does not try to find them. The latter probably applies to the majority of malware today.
For example, code traces and memory dumps based on certain conditions can give the analyst a better idea of what the malware is actually doing.
It is relatively simple to build a shellcode execution detection tool with DBI to find a second stage in a packer by looking for functions that allocate RWX memory, copy data into it, and later jump into those memory regions. You can also look for cryptographic constants that might be hidden in mixed Boolean-arithmetic functions and rebuilt at runtime, to find unpacking routines or string-obfuscation functions. Another example is counting how often functions are called, to spot interesting functions that might be used to decode strings.
With DBI you can monitor the execution and get runtime values of registers or memory locations. You can also trigger on certain values of registers or other conditions to do dynamic memory dumps at runtime.
You can also build an unpacker or config extractor for frequently seen malware families.
Before starting to write your own client, it is important to understand some basics about how DR works under the hood. DR is a process virtual machine (PVM). In the context of DR, this refers to a virtual execution environment that allows DR to dynamically instrument, monitor, and modify the behavior of a running application at the level of individual processes. DR operates entirely in user space (ignoring some experimental features). When DR starts a target application, it injects itself into the process and hooks or intercepts system calls. It takes control before the target application begins execution and starts copying the first basic block(s) of the target application into a code cache — a memory region DR has full control over — and then redirects the execution flow to this code cache. This enables DR to monitor and modify the original instructions of the target application. It relocates addresses and modifies the target application’s code so that it behaves semantically exactly as it would if executed natively (Figure 2).


For performance optimization, the basic-block code cache is extended with a trace cache (Figure 3). DR monitors the execution of basic blocks in the basic-block cache and builds groups of blocks that are frequently executed in the same order. At a certain threshold, it combines these basic blocks and copies them into the trace cache, including the instrumentation code and some inline checks. (In reality, it is not a simple counter; the decision depends on multiple factors.) This technique speeds up execution because fewer context switches are necessary — here, a context switch means a transition between the target application’s code and DR’s core and dispatching routines.

All these operations should be transparent to the target application: it should appear to the program as if it were running natively on hardware, unaware of and unaffected by any instrumentation. DR takes care of the following side effects:
More details can be found here.
The development environment used:
The DR release package includes several tools for memory debugging, profiling, instrumentation, legacy CPU simulation, cache simulation and more. See the DR homepage for more details. In this blog post, we will not use these tools, but instead build our own.
When writing your own instrumentation client, you usually run it via “drrun.exe”, the DR loader application. The syntax below is for a 64-bit client and 64-bit target application; for 32-bit, use the drrun.exe from the “bin32” directory rather than “bin64”.
<DYNAMORIO_INSTALL_DIR>\bin64\drrun.exe -c "<DIR_TO_YOUR_CLIENT>\client.dll" [client arg1, client arg2, …] -- "<TARGET_APP_DIR>\target_app.exe"
The DR client (“client.dll”) is the instrumentation code you are writing to do something with the target application (“target_app.exe”). The “target_app.exe” is the malware which we want to instrument.
While you can also write standalone tools which are not loaded via “drrun.exe”, it is quite complex and out of scope of this tutorial. There are several options to configure the instrumentation process (details here). For this blog we will choose the simplest way via “drrun.exe” and default configurations.
To test the tool chain we will first write a DR “hello world” client (simple_client1). You can find the code on our GitHub repository. All examples in the repository are organized as seen in Figure 4.

They all include the client source code (e.g., client.c), a corresponding CMake file (CMakeLists.txt), build32/64.bat scripts containing the CMake build commands, and MSYS_build32/64.sh scripts for starting the build process on MSYS2. All of the source code is heavily commented and prints debugging messages about the important steps during the instrumentation phases. The code is written to make it easy to understand how things work, not for performance or security. For example, it lacks some exception and input checks, which you might want to add if your client runs in a production environment.
IMPORTANT: Make sure you change the CMakeLists.txt file content depending on your installation. In other words, mainly verify/change the directories. Also verify the other scripts to ensure the directories and filenames match your environment.
The build process is usually very easy: install Visual Studio 2019 including CMake, verify/edit the directories inside the scripts, and run the MSYS_build32/64.sh script in an MSYS2 shell. The script mainly does the following:
Note: Most scripts build “Release” versions. You can change the CMAKE_BUILD_TYPE parameter of the build commands in the build32/64.bat to “RelWithDebInfo” for a release with debug information or to “Debug” for a full debug release, but in most cases this is not necessary for normal troubleshooting.
Again, the client library needs to have the same bit width as the target application: if the target application is 32-bit, you need a 32-bit DR client; if it is 64-bit, the client needs to be 64-bit too. If there is a mismatch, you will get the error message shown below (Figure 5). This tutorial focuses on 64-bit apps and clients, but the GitHub repository also contains 32-bit versions of most demos.

Run the client via drrun.exe against a test program. For example:
"C:\tools\DynamoRIO-Windows-11.3.0\bin64\drrun.exe" -c ".\build\Release\simple_client.dll" -- ../testsamples/threads/x64/Release/threads.exe
If you can see the “Hello from DynamoRIO client” message after instrumentation of the test application, your development environment works and you can proceed to the next example, which will be a bit more useful. If something goes wrong, you can also run the target application with drrun.exe alone (e.g., drrun.exe -- <target application>). This will tell you whether the issue is in your client or you have run into a DR bug. Another useful drrun.exe switch is “-debug”; among other things, it finds memory leaks in your client. We are aware that one of the demo clients in the repository has a memory leak — that is on purpose. See the source code for details.

Let’s start to write a simple client that prints out all DLLs loaded by a process at run time. Look at the simple_client2 example from GitHub for the implementation details. Running the client looks like this:

Every client starts execution with the “dr_client_main” function — the entry point of your client. Here we do some initialization, and this is where the callback registration functions live. See Figure 8 below for an example.
The callback functions control the custom instrumentation process; in other words, DR uses callback functions to execute the code you want to run during instrumentation. The function names give you a good idea of what they do. For example, in Figure 8, “drmgr_register_module_load_event” registers a callback function, “event_module_load_trace_instr”, which is then called every time a module (e.g., a DLL) is loaded by the target application at runtime. In our simple example, “event_module_load_trace_instr” just prints the name of the loaded module.
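Putting those pieces together, a minimal client matching this description might look roughly like the sketch below. It compiles only against the DynamoRIO SDK headers, so treat it as an illustrative outline of the structure rather than the exact simple_client2 code (error handling and the exit event are omitted here):

```c
#include "dr_api.h"
#include "drmgr.h"

/* Called by DR every time the target application loads a module. */
static void
event_module_load_trace_instr(void *drcontext, const module_data_t *info,
                              bool loaded)
{
    dr_printf("Module loaded: %s\n", dr_module_preferred_name(info));
}

/* Entry point of every DR client. */
DR_EXPORT void
dr_client_main(client_id_t id, int argc, const char *argv[])
{
    drmgr_init(); /* initialize the drmgr extension before using it */
    dr_printf("Hello from DynamoRIO client\n");
    drmgr_register_module_load_event(event_module_load_trace_instr);
}
```

The client is built as a DLL and handed to drrun.exe via the -c switch, as shown earlier.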
Most of the functions you need are part of DR extensions. These extensions are part of the DR project and they are already included in the DR distribution package (the ZIP file you have downloaded). You do NOT need to install or configure them. They offer higher-level abstractions for commonly needed features, so you don’t have to implement them from scratch. These functions usually start with the name of the extensions (e.g., “drmgr_register_module_load_event” for drmgr functions). Here are some extensions with brief descriptions:
In this tutorial we will look at “drmgr” and “drwrap”. Let’s have a closer look at these extensions.
Drmgr is a helper library designed to make writing DR clients easier, cleaner, and safer.
Drmgr offers the following functions:
The picture below shows some examples for event registration callback functions included in drmgr. They are called when the corresponding event occurs. The names are more or less self-explanatory.

Drmgr wraps several of the low-level functions from dr_events.h. For example, “drmgr_register_bb_instrumentation_event()” wraps “dr_register_bb_event()”. In general, it is simpler and/or safer to use the drmgr functions.
The GitHub simple_client3 example is a simple code tracer. For the code trace, we register a callback function that is called for every instruction; to do this, use the insertion_func parameter of drmgr_register_bb_instrumentation_event(). The registered callback function and its subfunction then disassemble the instructions and print them out. The implementation details can be found in the simple_client3 project, which also patches a function at runtime. The patching process is described in the next chapter.
Another function you probably always want to register in your client is the exit event callback function.

It is registered via the dr_register_exit_event() function and is used to clean up things when the target application process has exited. For example, it is used to call extension exit functions or free allocated memory (Figure 9).
Drwrap offers helper functions that simplify runtime patching of the target application’s functions. For example, it can wrap a function before (pre) and/or after (post) its execution: a pre-wrap function can manipulate arguments handed to the function, while a post-wrap function can manipulate its return value. A good use case is patching a VM-detection routine or an anti-analysis function in malware that detects the instrumentation process. That said, many of the typical anti-analysis tricks malware uses do not work under DR — or DR actively puts countermeasures in place — so for most of them you do not need to patch anything. We will discuss the details later in this blog.
A full example of instrumentation with drwrap can be found in simple_client3. At a high level, it is as simple as this: you first register a drwrap pre- or post-function (e.g., when the module that includes the function you want to patch is loaded). The registered function (wrap_post_function()) then manipulates the function at runtime; in this example, it sets the return value to zero (Figure 10).
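As a rough sketch of that flow (again compilable only with the DR SDK; the exported function name "DetectVM" is a hypothetical placeholder, not taken from the example code):

```c
#include "dr_api.h"
#include "drwrap.h"

/* Post-wrap callback: runs right after the wrapped function returns
 * and forces its return value to zero -- e.g., to defeat a VM check. */
static void
wrap_post_function(void *wrapcxt, void *user_data)
{
    drwrap_set_retval(wrapcxt, (void *)0);
}

/* Module-load event: once the module containing the target function is
 * loaded, look up its address and install the post-wrap. */
static void
event_module_load(void *drcontext, const module_data_t *info, bool loaded)
{
    app_pc target = (app_pc)dr_get_proc_address(info->handle, "DetectVM");
    if (target != NULL)
        drwrap_wrap(target, NULL /* no pre-wrap */, wrap_post_function);
}
```

A pre-wrap callback registered in the same way could instead rewrite the function’s arguments before it runs.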

This should be enough for a quick intro and overview on how to write DR clients. You can find many more examples and details in the GitHub repository.
If you wrote a useful client and want to share it with the community, we are happy to add it to the “3rdparty” directory. We are not responsible for the code in this directory, so use it at your own risk.
Vibe coding DR clients works reasonably well for simple functions. In Talos’ experience, though, vibe coding full, complex clients usually doesn’t work: in most cases the client just crashes, and you spend more time fixing the bugs than you save. In the best case, it runs with suboptimal side effects. However, AI can be an excellent tool for brainstorming which DR function to use for a certain task, or how to get started on a particular use case.
We have built a simple anti-X pseudo-malware without any malicious functionality to test DR’s transparency and robustness against common anti-analysis techniques frequently seen in malware. You can find it in the Anti-X GitHub repository. It includes different anti-debugging techniques, self-modifying code, large loops, exceptions, and simple code-verification techniques to detect hooks or breakpoints. The self-modifying code also changes the control flow graph at runtime to verify that DR can handle this. We provide the source code so you can verify that it does no harm to your machine.
This test case decodes a shellcode at runtime and executes it.

Result: No problem for DR. The code executes as expected — no difference from native execution without instrumentation.
In this test case we downloaded an info page about our external IP address from “https[://]ifconfig[.]me” to test whether or not we could intercept the TLS traffic.


Result: This works well on most machines we tested. Only on very recent Intel mobile CPUs did the download fail with ERROR 12175 (ERROR_WINHTTP_SECURE_FAILURE) when the anti-X application was instrumented. This error usually occurs when verification of certain security parameters of the TLS connection fails (e.g., the certificate is not yet valid or has expired). Whether this is a new Windows anti-tamper feature or a DR bug is currently unknown; we are investigating and will update the blog once we find the root cause. If you have an idea about the root cause, or if this occurs on your machine as well, we would be happy to hear from you.
In this test case, we verify the CRC32 of the bytes of a certain function in memory. If any byte of the function changes (e.g., a debugger sets a breakpoint [0xCC]), the CRC32 differs from the value calculated over the native function.

Result: The CRC32 value does not change if the target application is instrumented.
The first test (IsDebuggerPresent) should be self-explanatory, and the second uses the thread context object to check whether one of the debug registers (DR0–3) is set, which would indicate a hardware breakpoint.

Result: None of the tests detect that the application is instrumented.
In this test case, we trigger an exception that is handled by the application, to investigate whether it has any side effects on the target application.


Result: No problem. The application executes the same way as without instrumentation.
This test checks if the execution time between two code sections is too long. This detects debuggers or anything that significantly delays the code execution.

Result: As long as you do not do foolish things — such as inserting too much code at runtime or instrumenting a large loop inside the target application — DR is usually fast enough to avoid detection. Of course, this depends on the kind of instrumentation you do and on how aggressively the timer is set. DR usually adds a delay of 2–10× the original execution time.
Test case: Large loops

Result: Similar to the above, you don’t want to instrument something inside this loop, but anything outside is no problem.
In our self-modifying code, we are doing multiple things:
First, let’s have a closer look at the self-modifying code which we are using. Here you can see the pre-patch code. It converts the original “jmp end_label” instruction to a “mov rax,3” instruction. Once again, it is not only the instruction that is being changed; the control flow graph (CFG) is also being modified.

The next one is the post-patch, which is similar to the pre-patch and overwrites “mov rax,1234” with “mov rax,1” — but this modification is done after the code has executed. This means that the next time the function is called, the values in rax and rbx are no longer equal, the later comparison fails, and the jump is not taken.

Last but not least, in the middle of the function we use the interleave trick. The “test rax,rax” sets ZF=0 (RAX = 3), which means the “jz” is not taken and the “jnz” is taken. Some static disassemblers get confused by this and disassemble the bytes starting with “048h, 0C7h, …” as “mov rax, 0FFFFFFFF90900CEB”, because they do not realize that these bytes are never executed.

Only after manually converting it does the disassembler show the jmp (= EB 0C) — in other words, the byte we jnz’ed to. Again, we want to see whether DR’s code-cache generator or dispatcher gets confused by this.

Result: The self-modifications work as expected; DR executes everything just as a real CPU would. They were no problem for DR.
The screenshot below is from the simple_client3 DR client run against the anti-X target application mentioned above, doing a code trace over the self-modifying function. We deleted all client output except the instruction-related messages.
The left side shows the instruction trace from the first call of the self-modifying function; the right side shows the second call.

DR offers a wide range of built-in capabilities to counter common malware anti-analysis techniques. However, some methods can still detect or circumvent DR; these were intentionally excluded from this blog to avoid aiding malware authors. In this post, we introduced how to use DR for malware analysis and demonstrated how it can help bypass typical anti-analysis measures with minimal effort. We hope this helps readers get started with DR and encourages you to contribute to our GitHub project. Have fun exploring DBI and DynamoRIO!
Cisco Talos Blog – Read More
Mass user migrations between social media services have become a more frequent phenomenon in recent years. Most of the time, this happens not because users are drawn to a cool new social media site, but because the ones that have been around for a while suddenly become a much worse place to be. Users are driven away by changes in ownership, post-sorting algorithms, and aggressive data processing policies, such as using content for AI training.
If you’re thinking about migrating, be sure to consider how social media, video hosting services like YouTube and Twitch, and community-based sites like Reddit and Quora are handling user information in 2025. The experts at Incogni, in their Social Media Privacy Ranking 2025, conducted a detailed analysis of the current state of affairs.
Fifteen leading platforms were compared across multiple criteria: from data collection and resale to the readability of their privacy policies and the number of fines they’ve been hit with for privacy violations. In short, Pinterest and Quora stood out for their strong concern for users’ privacy, while TikTok and Facebook ranked at the bottom. But let’s be honest, we rarely choose which social media to post photos or discuss stamp collecting on based on how many fines it has been handed. Besides, this is hardly an apples-to-apples comparison, as we don’t typically expect to have fully private conversations on social media, unlike on chat apps. That’s why we’ve dedicated a separate post to the privacy of popular messaging apps. Today, we decided to review a summary of the Incogni study that focuses exclusively on social media, video hosting services, and community sites. We’ll only consider practical, everyday criteria. And for simplicity, we’ll refer to all these services as “social media” from here on out.
In the overall ranking that accounts for all criteria, the leaders outperform the laggards by more than a two-fold margin in points, with fewer points indicating higher privacy.
It’s worth noting that up to 10 points could be lost due to fines for violating various jurisdictions’ personal data and storage location regulations, such as GDPR and CCPA. The study accounted for fines not only in Europe and the U.S., but also in other major countries, spanning from Brazil to Turkey. Data breaches across each social media service’s entire history were also factored in.
Facebook amassed a hefty 9.6 penalty points, a key factor behind its bottom ranking. The second-to-last spot went to X, with six penalty points; no one else exceeded 4.4.
If we only consider criteria like data collection on the website and in the app, the use of information for AI training, the number of privacy settings, and the visibility of personal data to other users, the top and bottom of the rankings change significantly:
Interestingly, the ranking kept intact three distinct groups — leaders, laggards, and a middle pack — though some reshuffling occurred within these tiers. The bottom placements for TikTok and Facebook come as no surprise to anyone following cybersecurity news, but LinkedIn’s relatively high ranking was unexpected. However, it’s better not to limit ourselves to ranking numbers, and to look at the platforms’ specifics in key categories.
In recent years, one of the most contentious issues has been the use of user content to train neural networks. Many people don’t want to just hand over their texts and photos to Big Tech companies, so the ability to opt out of this training is important to them.
Of the social media reviewed, only Twitch makes no claim at all about training AI on user content. All the others plan to either train their own in-house AI or provide training services to partners. Facebook and YouTube plan to do both. You can opt out of this in the settings on Pinterest, X, Quora, and LinkedIn. On YouTube, the opt-out is partial: it’s only available for video creators and only applies to the training of third-party AI not owned by Google.
All platforms aggregate user data for various purposes, from product improvement to showing ads. Some even explicitly state that they may sell this data. Information is collected through websites and mobile apps, and includes not only what users write in their posts or profiles, but also IDs, geolocation data, data about activity in apps and on websites (both the company’s own and external pages), and much more.
After reviewing the data processing policies, the researchers concluded that Twitch, LinkedIn, TikTok, YouTube, Facebook, and Instagram all collect and process sensitive personal information for advertising. Only Pinterest “sells” information (as defined by the CCPA). However, far more social media platforms “share” information with partners: LinkedIn, Pinterest, Quora, Twitch, X, and YouTube all do this. Pinterest, Reddit, and Quora also share data on users’ in-app search queries with third parties.
The social media rankings in the data collection category differ from the overall placement: Quora, Reddit, and X are the least data-hungry. They’re followed by TikTok, LinkedIn, Twitch, Facebook, and Instagram. The laggards in this category are YouTube and Pinterest. At the same time, the mobile apps “greediest” for various user data are Facebook and Instagram, which collect 37 out of 38 possible types of data on user devices. They’re followed by LinkedIn with 31 data types, and YouTube and Pinterest with 27 each.
The researchers compared the number of privacy settings across social media and checked whether the most secure option was selected by default. Here, Pinterest is the absolute leader, offering a high level of privacy in its settings by default and collecting little data during account creation. Close behind are Quora, Reddit, and Twitch, which show a similar profile.
Surprisingly, Facebook, YouTube, and LinkedIn rank mid-list, each providing a substantial array of privacy settings. Instagram, X, and TikTok have the fewest privacy options and the worst default settings.
Almost all platforms let you configure your account to show a minimum of data to others. Public exposure can be minimized most effectively on Pinterest, Facebook, and TikTok, while LinkedIn and X are the worst in this regard.
No social media platform reviewed achieved an ideal rating. Privacy leaders such as Twitch and Quora focus on specific content types and aren’t general-purpose social media services, while the most popular social networks happily collect and utilize user data. LinkedIn has managed to strike a balance between privacy settings and data collection. However, its image as a professional social network and the inability to partially hide personal data restrict its broader application.
We recommend double-checking the privacy settings for all the social media you use. Our free Privacy Checker service can help you with that.
What other privacy concerns might arise on social media? Read about them in our other posts:
Kaspersky official blog – Read More
Phishing campaigns and ransomware families evolved rapidly this October, from fake Google Careers pages and ClickUp redirect chains to Figma-hosted credential theft and LockBit’s move into ESXi and Linux systems. ANY.RUN analysts also uncovered TyKit, a reusable phishing kit hiding JavaScript inside SVG files to steal Microsoft 365 credentials across multiple sectors.
Each of these threats shows how attackers are increasingly abusing legitimate cloud platforms, layering CAPTCHA checks and redirects to bypass detection. All cases were analyzed inside ANY.RUN’s Interactive Sandbox, revealing execution flows and behavioral indicators that static tools miss and giving SOC teams insights they can turn into actionable detection logic.
Let’s break down how these attacks unfolded, who they targeted, and what security teams can learn to strengthen their defenses before the next wave hits.
ANY.RUN analysts uncovered a phishing campaign posing as Google Careers, where attackers combined a Salesforce redirect, Cloudflare Turnstile CAPTCHA, and a fake job application page to steal corporate credentials. The campaign primarily targets employees in technology, consulting, and enterprise service sectors, exploiting the trust people place in well-known brands and cloud services.
Unlike typical phishing kits, this campaign weaves together multiple legitimate platforms to make the flow appear authentic, slipping through filters and reputation-based security tools. Once credentials are entered on the fake Google Careers portal, they’re exfiltrated to the command-and-control (C2) server, such as satoshicommands[.]com, enabling further compromise of work accounts, client data, and internal collaboration tools.
For organizations, this attack creates a chain reaction: compromised mailboxes, lateral movement across SaaS ecosystems, and potential exposure of customer or partner data, all while evading detection by traditional tools that trust the Salesforce and Cloudflare domains in the redirect path.
See full execution chain exposed in 60 seconds

Adversaries in this campaign misuse legitimate platforms to host phishing flows that evade automated detection. The combination of trusted domains and multi-step redirection makes these attacks particularly hard to catch without behavioral visibility.
Below are ready-to-use Threat Intelligence Lookup queries to expand visibility, uncover infrastructure overlaps, and convert findings into detection rules, not just IOCs:
Google-like application domains: domainName:"apply.g*.com" OR domainName:"hire.g*.com"
Vercel deployment patterns: domainName:"puma-*.vercel.app" OR domainName:"hiring*.vercel.app"
YouTube TLD impersonation: domainName:"hire.yt"
C2 domain: domainName:"satoshicommands.com"
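For offline hunting in DNS or proxy logs, the wildcard patterns above can be approximated as regular expressions. The following Python sketch is illustrative only; the regex translations and sample domains are assumptions, not observed campaign artifacts (apart from satoshicommands[.]com):

```python
import re

# Hypothetical translations of the lookup-style wildcards above into
# anchored regexes for offline log hunting. Patterns are illustrative.
PATTERNS = [
    r"^apply\.g.*\.com$",                # Google-like "apply" domains
    r"^hire\.g.*\.com$",                 # Google-like "hire" domains
    r"^(puma-|hiring).*\.vercel\.app$",  # Vercel deployment look-alikes
    r"^hire\.yt$",                       # YouTube TLD impersonation
    r"^satoshicommands\.com$",           # known C2 from this campaign
]
COMPILED = [re.compile(p, re.IGNORECASE) for p in PATTERNS]

def suspicious_domains(domains):
    """Return domains from a DNS/proxy log that match any campaign pattern."""
    return [d for d in domains if any(rx.search(d) for rx in COMPILED)]

if __name__ == "__main__":
    sample = ["apply.g00gle.com", "mail.example.com", "hiring-portal.vercel.app"]
    print(suspicious_domains(sample))
```

Running matched domains back through TI Lookup then confirms whether they belong to the same infrastructure cluster.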

Gathered IOCs:
ANY.RUN analysts identified a growing wave of phishing attacks abusing Figma, where public design prototypes are used to host and deliver Microsoft-themed credential theft campaigns. This trend highlights a serious blind spot in corporate defenses: the exploitation of trusted cloud platforms that security systems often whitelist by default.
Attackers are turning to Figma because it offers everything they need for a convincing delivery: it’s a widely trusted domain, allows anyone to publish and share prototypes publicly without authentication, and renders interactive content directly in the browser. That makes it perfect for embedding phishing elements, buttons, links, and visuals that look completely legitimate, while bypassing traditional email filters and URL reputation checks.
Across multiple samples analyzed last month, 49% of these attacks were linked to Storm-1747, followed by Mamba (25%), Gabagool (2%), and several smaller operators. Each uses Figma as the initial hosting vector, sending victims “document” invitations that appear genuine and trigger the phishing flow upon interaction.
Check a real case: Figma abuse leading to fake Microsoft login page

Inside ANY.RUN’s Interactive Sandbox, analysts can safely detonate these links, visualize the full redirection flow, and expose the hidden credential capture mechanism, something static filters miss entirely. This interactive approach gives SOC teams real behavioral context for tuning detections and reduces investigation time when facing similar cloud-hosted phishing chains.
To uncover additional campaigns abusing Figma and connected infrastructure, use the following TI Lookup query:
domainName:"figma.com" AND threatName:"phishing"

This search surfaces recent submissions that share behavioral traits, letting SOC teams expand visibility and transform isolated IOCs into behavioral detection rules.
Gathered IOCs:
Researchers spotted a major update from the LockBit group on its sixth anniversary: LockBit 5.0. Unlike earlier releases, this version targets not only Windows but also Linux and VMware ESXi, meaning attackers are now going after core infrastructure. A single successful intrusion can take down many virtual machines at once and knock whole systems offline.
LockBit 5.0 introduces stronger obfuscation, flexible configuration files, and enhanced anti-analysis techniques, making it significantly harder to detect and dissect. The campaign primarily targets enterprise networks, managed service providers, and government systems across Europe, North America, and Asia, where virtualized environments form the backbone of daily operations.
A single LockBit 5.0 intrusion can shut down dozens of servers simultaneously, halting production systems, paralyzing data centers, and causing prolonged outages with severe financial and reputational consequences.

View real-world analysis of VMware ESXi variant
The most critical of the three builds. A dedicated encryptor for hypervisors capable of disabling multiple virtual machines at once. Its CLI closely mirrors the Windows version but adds datastore and VM config targeting, enabling it to halt operations across entire host environments in seconds.
View real-world analysis of Windows variant

The mainline variant runs with DLL reflection, supports both GUI and console modes, encrypts local and network drives, and performs cleanup actions like deleting shadow copies, stopping critical services, and clearing event logs. It drops a ransom note linking to LockBit’s live negotiation portal.
View real-world analysis of Linux variant
A lightweight console-based encryptor that replicates Windows behavior with added mount point filters, disk wiping, anti-analysis routines, and region-based execution restrictions to evade detection and avoid unwanted publicity in certain locales.
Inside ANY.RUN’s Interactive Sandbox, analysts can trace how the new encryptors behave across each operating system, from memory injection and service termination to encryption logic and ransom note delivery, helping SOC teams identify new TTPs early and enrich detection logic with behavioral indicators, not just static IOCs.
Use the following Threat Intelligence Lookup queries to identify LockBit 5.0 activity and enrich your SOC’s detection coverage with live sandbox data:
ESXi LockBit 5.0: commandLine:"vmware -v"
Linux LockBit 5.0: filePath:"^/home/user/.local/share/evolution/tasks/ReadMeForDecrypt.txt$"
Windows LockBit 5.0: filePath:"^C:\ReadMeForDecrypt.txt$"
These queries help analysts pivot from OS-specific artifacts to global attack patterns, connecting infrastructure and payload updates across submissions.
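As a hedged companion to the filePath queries above, a simple host sweep can look for the ransom-note filename on endpoints. This minimal Python sketch is not part of the LockBit analysis; the note name comes from the queries, and everything else is illustrative:

```python
import os

# Illustrative endpoint sweep: look for the LockBit 5.0 ransom-note
# filename observed in the sandbox analyses above.
RANSOM_NOTE = "ReadMeForDecrypt.txt"

def find_ransom_notes(root):
    """Walk a directory tree and return paths of files matching the note name."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name == RANSOM_NOTE:
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    # Hypothetical scan root; in practice you would sweep user and datastore paths.
    print(find_ransom_notes("/home"))
```

A hit is a strong post-encryption indicator; the goal of earlier behavioral detection is never to reach this point.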
ANY.RUN analysts found attackers abusing ClickUp to host redirect pages and hide phishing flows. In many cases ClickUp is the visible domain the victim clicks, then the chain moves through other trusted services (like Microsoft’s microdomains and Azure Blob Storage) before landing on a credential-harvesting page.

Attackers use ClickUp because public docs and prototypes are quick to create, look legitimate in inboxes, and come from a domain most organizations don’t block. Besides ClickUp, they also exploit Microsoft microdomain endpoints and Azure Blob Storage to host the final phishing page, making the whole flow look like normal collaboration traffic.
Check a real-world case that exposes the full attack chain in ~1 minute

Because every domain in the chain belongs to a legitimate provider, these campaigns are hard to detect. Filters and whitelists that trust SaaS vendors often let the traffic pass, and users are less likely to be suspicious when the URL looks familiar.
Inside ANY.RUN’s Interactive Sandbox, analysts can observe how each redirect unfolds across real Microsoft and ClickUp domains, see the credential-harvesting page render inside Azure Blob Storage, and extract live indicators for immediate defense updates. This visibility helps SOC teams shorten investigation time and enrich detection logic with behavioral context, not just URLs.
Use the following TI Lookup queries to uncover related infrastructure and track recurring phishing activity across trusted cloud providers:
Azure Blob Storage: domainName:"*.blob.core.windows.net$" AND threatName:"phishing"
Microsoft Forms: domainName:"forms.office.com$" AND threatName:"phishing"
ClickUp: domainName:"clickup.com$" AND threatName:"phishing"
Gathered IOCs:
Detailed breakdown of TyKit attack
ANY.RUN analysts identified TyKit, a reusable phishing kit that hides JavaScript inside SVG files to push victims through a multi-stage flow and steal Microsoft 365 logins.
First seen in May 2025 with activity peaking in September–October 2025, it hits organizations across the US, Canada, LATAM, EMEA, SE Asia, and the Middle East, with notable impact on finance, government, telecom, IT, real estate, construction, professional services, education, and more.
TyKit blends redirects, basic anti-debugging, and staged C2 checks to outlast simple filters. A successful phish can lead to account takeover, data theft from mailboxes and cloud drives, lateral movement, and MFA bypass via AitM logic.
View analysis session with TyKit

How the attack unfolds:

To collect all IOCs and perform a detailed case analysis, see the following TI Lookup query:
SVG/C2 pattern: domainName:"^segy.*"
Combined query: sha256:"a7184bef39523bef32683ef7af440a5b2235e83e7fb83c6b7ee5f08286731892" OR domainName:"^loginmicr*.cc$" OR domainName:"^segy*"
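Since the kit’s defining trick is JavaScript hidden inside SVG files, a lightweight mail-gateway triage check can flag SVG attachments that embed script. This Python sketch uses regex heuristics and is purely illustrative; a production pipeline would parse the XML properly and detonate samples in a sandbox:

```python
import re

# Illustrative triage check (not the kit's actual code): flag SVG content
# that embeds <script> elements or JavaScript event handlers, the delivery
# trick this phishing kit relies on.
SCRIPT_TAG = re.compile(r"<\s*script\b", re.IGNORECASE)
JS_HANDLER = re.compile(r"\bon\w+\s*=|javascript:", re.IGNORECASE)

def svg_is_suspicious(svg_text):
    """Return True if an SVG document carries embedded script content."""
    return bool(SCRIPT_TAG.search(svg_text) or JS_HANDLER.search(svg_text))
```

Such a check is cheap enough to run on every inbound SVG attachment before deeper analysis.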

Using ANY.RUN’s Interactive Sandbox during incident response accelerates this process: analysts can safely replay the infection chain, confirm what data was exfiltrated, and extract accurate IOCs within minutes. This shortens MTTR and helps strengthen detections for the next wave of similar campaigns.
Gathered IOCs:
SHA256 (SVGs):
Domains & patterns:
URLs & requests:
From phishing kits and stealers to ransomware and zero-day exploits, today’s attacks evolve faster than static defenses can keep up. Investigating them manually can take hours, while attackers move in minutes. ANY.RUN helps SOC teams close that gap with real-time, interactive analysis.
Here’s how teams stay ahead:
For SOCs, MSSPs, and threat researchers, ANY.RUN delivers the speed, depth, and live visibility needed to turn reactive defense into proactive threat hunting and stay ahead of every new campaign.
Explore ANY.RUN’s capabilities during a 14-day trial→
ANY.RUN supports more than 15,000 organizations worldwide, including leaders in finance, healthcare, telecom, retail, and tech, helping them strengthen security operations and respond to threats with greater confidence.
Designed for speed and visibility, the solution blends interactive malware analysis with live threat intelligence, giving SOC teams instant insight into attack behavior and the context needed to act faster.
By integrating ANY.RUN’s Threat Intelligence suite into your existing workflows, you can accelerate investigations, minimize breach impact, and build lasting resilience against evolving threats.
The post Major Cyber Attacks in October 2025: Phishing via Google Careers & ClickUp, Figma Abuse, LockBit 5.0, and TyKit appeared first on ANY.RUN’s Cybersecurity Blog.
ANY.RUN’s Cybersecurity Blog – Read More

As many seasoned industry professionals remember, 2008 – 2010 was a tough time for the tech industry as well as the larger U.S. economy. During the Great Recession, unemployment rose as high as 10%, and IT and cybersecurity budgets were certainly not spared. During the 2020 COVID-19 crisis, the need for tech workers and larger IT budgets to support remote work was so strong that it outweighed the global economic slowdown. As a result, many new IT professionals never experienced what a real recession feels like.
The FBI noted a 22.3% increase in cybercrime complaint submissions from 2008 to 2009, which some attributed in part to unemployed, financially desperate tech workers turning their skillsets to crime. At that time, threat actors mostly targeted individuals in the form of scams, fraud, and other crimes. In today’s environment, a similar economic downturn could easily lead to a surge in the number and talent of ransomware operators.
Why? Unlike in the Great Recession, most corporate networks are now remote- or hybrid-enabled by default. While nothing about a network’s attack surface would inherently change due to an economic downturn, any increase in the number and skill level of attackers, decrease in the number and skill of defenders, or decrease in the quality of security measures could have devastating consequences for the IT environment owner.
As was painfully highlighted in recent years by Salt Typhoon incursions into telecommunications networks, working with legacy hardware and software is a risk many businesses take. As belts tighten during an economic downturn, cybersecurity budgets will decrease, and many businesses will inevitably need to postpone technology upgrades beyond end of life. While this introduces risk, there are a few solid strategies to mitigate that risk.
While these terms were both solid contenders for the No. 1 Sales Buzzword of 2023, they reflect a valuable underlying principle: Assume the adversary is going to gain a foothold and architect accordingly.
If a business must continue to use 40% legacy firewalls and only has budget to replace 60%, those legacy firewalls should be positioned in the interior of the network versus on the perimeter and logically separated so an adversary cannot “island-hop” from one to the next using the same vulnerability. If a legacy server must be positioned in a public-facing location, it should be placed in a tightly-controlled DMZ where compromise of that server would not lead to further network intrusion.
No breach is desirable, but you can minimize the potential for lateral movement.
Many vulnerable applications and systems are targeted via plugins or extra features that an organization isn’t even using. The classic example is a webserver with an abandoned WordPress plugin that later is discovered to be vulnerable. Another example is the SSH login method on a VMware ESXi hypervisor — an organization may accidentally leave this enabled, allowing an adversary to log in as root.
For vulnerable systems and software, it is critical to review what is strictly necessary for it to operate and disable all other functionalities. This is an important part of attack surface reduction.
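One concrete way to audit what is strictly necessary on a Linux host is to enumerate listening TCP ports and compare them against an approved list. A minimal sketch, assuming /proc/net/tcp-style input; the allowlist here is hypothetical and should reflect your own service inventory:

```python
# Illustrative allowlist: ports this host is expected to serve.
ALLOWED_PORTS = {22, 443}

def listening_ports(proc_tcp_text):
    """Parse /proc/net/tcp-style text and return the set of LISTEN ports."""
    ports = set()
    for line in proc_tcp_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) > 3 and fields[3] == "0A":  # state 0A = TCP LISTEN
            # Local address is hex "IP:PORT"; decode the port.
            ports.add(int(fields[1].split(":")[1], 16))
    return ports

def unexpected_ports(proc_tcp_text):
    """Return listening ports that are not on the allowlist."""
    return listening_ports(proc_tcp_text) - ALLOWED_PORTS

if __name__ == "__main__":
    try:
        with open("/proc/net/tcp") as f:
            print(sorted(unexpected_ports(f.read())))
    except OSError:
        pass  # not a Linux host with procfs
```

Anything the script flags is a candidate for being disabled, firewalled, or formally added to the inventory.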
While closed-source commercial security tools usually offer the easiest setup and best overall experience, transitioning a budget-constrained organization to a blend of commercial and open-source software may be the right answer for maximum efficacy. Here are some rules of thumb for selection.
Open-source software excels when the product does not depend on frequent updates or detailed technical support. Initial setup may be involved and challenging, but financial savings can be significant. A good current example is the Zeek network security monitor, which is not a standalone security product but significantly enhances network-based detection capabilities. An open-source SIEM solution that may be suitable for smaller businesses is Security Onion.
For solutions that depend on frequent updates, particularly time-sensitive signature/definition updates, commercial security solutions are the only answer. This primarily includes endpoint detection and response (EDR)/antivirus (AV), firewall, and DNS security solutions. Recognizing that this is a mandatory expenditure will help solidify planning for other areas of cost savings.
For organizations that don’t have the budget for new security systems, making the most of what you already have can go a long way toward ensuring that a basic level of security and hardening is applied. For further information beyond what is reflected below, consider reading this paper on practical security measures for small and/or budget-constrained organizations.
Review configuration and policy settings for your existing security investments like AV or EDR solutions. Optimizing them is an easy way to increase security for free. Revisit any policies that were not recently reviewed. Simple configuration changes like turning on heuristic scanning in the AV software can help to catch threats that haven’t been seen before or use more advanced methods of compromise. During the AV/EDR review, checking the exclusions list is always a good idea. As an extreme example that Talos IR has unfortunately seen during incident response, having the whole C: drive excluded prevents any detections at all. Exclusions should be targeted and precise.
Another powerful, albeit time-consuming, security measure is to optimize Windows domain policies and configurations to help protect the organization. Windows Security baselines, published by Microsoft, are a great starting point. Policy settings like enforcing strong passwords, limiting admin access, and disabling unnecessary features can help tighten security without spending extra money. The CIS also recently published an extensive guide on Active Directory and GPM configuration best practices. For cloud environments, CISA’s SCuBA program offers excellent configuration security guidance.
Locking down PowerShell so only trusted users can run it, or setting it to a restricted mode, makes it much harder for attackers to use it against you. The newest versions of PowerShell provide excellent controls, allowing your team to restrict access, limit which scripts can be executed, and configure other granular restrictions, which will help ensure that even if a malicious PowerShell script lands somewhere in the environment, the hardened configuration of PowerShell will limit its functionality.
Various tricks to prevent executables from running by default can be surprisingly effective. For example, changing the default program for opening .js files to Notepad stops these scripts from running. These small changes may seem simple, but together they can create strong layers of defense. For organizations with limited resources, these tweaks can make a big difference in reducing risk without breaking the bank. The following is a very simple PowerShell script which will ensure that malware on unsuspecting user systems is treated as a text file. Of course, these suggestions should be tested and modified to ensure that they do not impair valid enterprise functions.
# List of dangerous file extensions to associate with Notepad
$extensions = @(".js", ".jse", ".vbs", ".vbe", ".wsf", ".wsh", ".ps1", ".cmd", ".bat", ".hta", ".scr")
foreach ($ext in $extensions) {
    try {
        # Point the per-user file association for this extension at Notepad
        $assoc = New-Object -ComObject WScript.Shell
        $assoc.RegWrite("HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\$ext\UserChoice\Progid", "Applications\notepad.exe", "REG_SZ")
        Write-Host "Set $ext to open with Notepad"
    }
    catch {
        Write-Warning "Failed to set ${ext}: $_"
    }
}
Figure 1: Sample script to neuter executables.
Assuming you have the storage space, optimizing logging and alerting is a cheap way to improve network security when a breach is likely. A good understanding of which systems are legacy and therefore vulnerable is an excellent starting point — prioritizing visibility on those systems is key.
Thoughtful placement of canary tokens, decoy/honey accounts, and other creative countermeasures on vulnerable systems are other mechanisms to quickly detect and shut down an adversary in the network. This is especially important when you start with the assumption that you will be breached at some point due to vulnerable systems or software.
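A honey account, for instance, can be minted with a unique token so that any sighting of it in authentication logs is a near-zero-false-positive alert. The naming scheme and log format in this Python sketch are assumptions for illustration only:

```python
import re
import secrets

# Hypothetical decoy-account scheme: a plausible service-account prefix
# plus a random token. Any authentication attempt mentioning a minted
# token is, by construction, an intruder probing harvested credentials.
TOKEN_RE = re.compile(r"svc-backup-([0-9a-f]{8})")

def mint_honey_account():
    """Create a plausible-looking service account name with a unique token."""
    return f"svc-backup-{secrets.token_hex(4)}"

def honey_hits(log_lines, known_tokens):
    """Return log lines that reference any minted decoy account."""
    hits = []
    for line in log_lines:
        m = TOKEN_RE.search(line)
        if m and m.group(1) in known_tokens:
            hits.append(line)
    return hits
```

The decoy names would be seeded into places an intruder looks (password stores, scripts, AD descriptions) but never used legitimately, so any hit warrants an immediate response.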
The majority of organizations have firewalls and network boundary devices deployed across their infrastructure. Tuning these devices to filter high ports, allow common ports like 80/443 outbound, and restrict access to unnecessary services disrupts many command-and-control malware channels, since malware often uses high ports for communication to evade detection.
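The same port-filtering logic can be applied retroactively to flow logs to spot traffic that tighter egress rules would have blocked. A minimal sketch; the port allowlist is illustrative and should mirror your actual firewall policy:

```python
# Illustrative outbound allowlist: DNS, HTTP, NTP, HTTPS.
COMMON_OUTBOUND = {53, 80, 123, 443}

def flag_high_port_flows(flows):
    """Given (dest_ip, dest_port) records from flow logs, return flows
    whose destination port falls outside the outbound allowlist —
    candidates for C2 channels hiding on high ports."""
    return [(ip, port) for ip, port in flows if port not in COMMON_OUTBOUND]
```

Reviewing the flagged flows before enforcing the rule also reveals legitimate services that need explicit exceptions.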
An ISC2 survey showed that 24% of cybersecurity departments faced layoffs in 2024, a trend that seems to be continuing into 2025. This was not due to a surplus of cybersecurity staffing: 67% of respondents agreed that they no longer had the staff to meet their goals. In an economic downturn, this situation would only worsen, so it is important to consider how to use the remaining personnel budget as effectively as possible.
Recent developments have virtually guaranteed a future shortage of skilled mid-career cybersecurity professionals. First, the glut of cybersecurity talent on the market due to recent layoffs has led to many mid-career professionals taking entry-level jobs. Second, the advent of generative AI has led many organizations to reduce their hiring of entry-level professionals. These two factors have created an extremely hostile environment for recent graduates from cybersecurity educational programs. The authors of this post have personally observed several promising students graduate with cybersecurity degrees and ultimately pivot to unrelated fields due to the lack of opportunity. Unless gen AI advancements truly replace cybersecurity professionals, the current entry-level pipeline collapse may well lead to a shortage of skilled mid-career professionals in the next 5 – 10 years as the replacement rate drops below the rate of retirement and general attrition.
With that in mind, forward-thinking organizations should take care to attract above-average, early-to-mid career talent and make every effort to train and retain them. It is currently a strong employers’ market, and forward investment now may result in relatively cheap, seasoned employees in the future when the pendulum swings back.
In a budget-constrained environment, having a strong relationship with on-demand cybersecurity consultants can be a form of leverage, providing tremendous benefit at a relatively cheap cost. If an organization is large enough to experience a significant cybersecurity incident every week, it would make sense to fully staff an in-house incident response team. However, for most organizations that only experience a few incidents per year, it makes good financial sense to employ a team of cybersecurity generalists and have an incident response provider on retainer for extreme circumstances.
Using Cisco Talos as an example, not only is an annual retainer with Cisco Talos Incident Response cheaper than employing a single full-time incident responder, but the retaining organization also gets the benefit of a highly-experienced incident response team that deals with major incidents around the globe on a weekly basis.
Hard decisions are inevitable when the security budget decreases. However, exploring new options to add efficiency can not only protect the organization in the short term, but also provide long-term efficiency gains when budgetary restrictions eventually ease.
Cisco Talos Blog – Read More