Several popular npm packages used in a number of web projects have been compromised and trojanized by unknown attackers. Through a phishing attack on maintainers, the attackers gained access to at least one maintainer's account and injected the packages with malicious code designed to steal cryptocurrency. As a result, every web application that used a trojanized version of these packages was effectively turned into a cryptodrainer. And there could be quite a few of them: the compromised packages collectively account for more than two billion downloads per week (according to Aikido Security).
What are the dangers of the trojanized packages used in this attack?
Obfuscated JavaScript was added to all affected packages. If a compromised package is used in a web application, the malicious code activates on the devices used to access that application. Operating at the browser level, the malware intercepts network traffic and API requests, and tampers with data associated with Ethereum, Bitcoin, Solana, Litecoin, Bitcoin Cash, and Tron cryptocurrency wallets: it spoofs wallet addresses and redirects transactions to the attackers’ wallets.
About three hours after the attack began, the npm administration started to remove the infected packages, but it’s not known exactly how many times they were downloaded during this time.
How the attackers managed to gain access to the repositories
The attackers used a rather banal technique — they created a phishing email in which maintainers were urged to update their two-factor authentication credentials at the first opportunity. Otherwise, they were threatened with account lockout starting September 10, 2025. The emails were sent from a mailbox on the domain npmjs[.]help, similar to the legitimate npmjs.com. The same domain also hosted a phishing site that mimicked the official npm registry page. Credentials entered on this site immediately fell into the hands of the attackers.
The attack was successful against at least one maintainer, compromising the npm packages color, debug, ansi-regex, chalk, and several others. However, the phishing attack appears to have been more extensive, because other maintainers and developers received similar phishing emails, so the full list of trojanized packages may be longer.
Which packages were compromised?
At the time of writing this post, the following packages are known to be compromised:
ansi-regex
ansi-styles
backslash
chalk
chalk-template
color-convert
color-name
color-string
debug
error-ex
has-ansi
is-arrayish
simple-swizzle
slice-ansi
strip-ansi
supports-color
supports-hyperlinks
wrap-ansi
However, as we have already written above, the list may grow. You can keep an eye on the GitHub advisory page for updates.
How to stay safe
Kaspersky Lab products, both for home and for corporate users, successfully detect and stop the malware used in this attack.
Developers are advised to audit the dependencies in their projects, and if one of the compromised packages is used there, pin a known-safe version using the overrides field in package.json. You can find more detailed instructions here.
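If that field is unfamiliar, here's a minimal sketch of what such a pin might look like; the version numbers below are illustrative placeholders, so check the GitHub advisory for the actual known-safe releases (the overrides field requires npm 8.3 or later; Yarn and pnpm have equivalent resolutions/overrides mechanisms):

```json
{
  "overrides": {
    "chalk": "5.3.0",
    "debug": "4.3.6",
    "ansi-styles": "6.2.1"
  }
}
```

After adding the overrides, reinstalling dependencies forces every transitive occurrence of the listed packages to the pinned versions.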
Maintainers and developers with access to open source software repositories are advised to be doubly careful when receiving emails urging them to log into their accounts. Better yet — also use security solutions with an anti-phishing engine.
Over the past two and a half years (January 2023 through June 2025), Cisco Talos Incident Response (Talos IR) has responded to numerous engagements that we classified as pre-ransomware incidents.
Talos looked back to analyze what key security measures were credited with deterring ransomware deployment in each pre-ransomware engagement, finding that the top two factors were swift engagement with the incident response team and rapid actioning of alerts from security solutions (predominantly within two hours of the alert).
We also classified almost two dozen observed pre-ransomware indicators in these engagements, as the top observed tactics provide insight into what malicious activity frequently precedes a more severe attack. Finally, we analyzed Talos IR’s most frequent recommendations to customers to ascertain common security gaps.
Aggregation of this data and the follow-on analysis is intended to provide actionable guidance that can assist organizations in improving their defenses against ransomware activity.
What characterizes an incident as “pre-ransomware?”
Talos IR associates specific adversary actions with pre-ransomware activity. When threat actors attempt to gain enterprise-level domain administrator access, they often conduct a series of account pivots and escalations, deploy command-and-control (C2) or other remote access solutions, harvest credentials, and/or deploy automation to modify the operating system at scale. Though the specific tools or elements in the attack chain vary by adversary, Talos IR has seen these same classic steps in practice for years. These actions, along with observed indicators of compromise (IOCs) or tactics, techniques and procedures (TTPs) that we associate with known ransomware threats without the end result of enterprise-wide encryption, lead us to categorize an incident as “pre-ransomware.”
It is worth noting that some of the above attack techniques are also often used by initial access brokers (IABs), who seek to gain and sell access to compromised systems, so it is possible that some of the incidents in this case study were perpetrated by IABs rather than ransomware operators. While it is often challenging to determine a threat actor’s end goal, we have high confidence that the tactics involved in all of these incidents are consistently seen preceding ransomware deployment. If the adversary was instead an IAB, we have frequently seen such IAB campaigns result in a ransomware attack after access has been sold, which keeps the activity relevant to this analysis.
Key security actions and measures that deter ransomware deployment
Talos analyzed incident response engagements spanning the past two and a half years that we categorized as pre-ransomware attacks, identifying actions and security measures that we assessed were key in halting adversaries’ attack chains before encryption. An overview of our findings can be found in Figure 1, followed by a more thorough breakdown of each category to explore exactly how certain actions impeded ransomware execution.
Figure 1. Pie chart of factors hindering ransomware deployment.
Swift engagement of Talos IR
Engaging Talos IR within one to two days of first observed adversary activity (though we advise engagement as quickly as possible) was credited with preventing a more serious ransomware attack in approximately a third of engagements, providing benefits such as:
Extensive knowledge of the threat landscape: In multiple engagements, Talos IR was able to correlate TTPs and IOCs on customers’ networks with other ransomware and pre-ransomware engagements we had responded to, identifying when the infection was part of a larger, widespread campaign. This insight helped Talos IR anticipate and intercept adversaries’ next steps as well as provide customers additional IOCs to block that were seen in other engagements.
Actionable recommendations for isolation and remediation: In some engagements, the customers quickly acted on Talos’ pre-ransomware security guide, which Talos IR assessed prevented more catastrophic events.
Enhanced monitoring: The Cisco Extended Detection and Response (Cisco XDR) team can provide extra vigilance in their monitoring after containment of the pre-ransomware threat to ensure full eradication.
We observed numerous incidents where Talos IR was not engaged by the customer immediately, which enabled the adversary to continue working through their attack chain and conduct data theft and/or ransomware deployment. This often results in consequences such as backup files being corrupted or encrypted, endpoint detection and response (EDR) and other security tools being disabled, disruption to day-to-day operations and more.
Vigilant monitoring of security solutions and logs allows network administrators to act quickly when a threat is first detected, isolate the malicious activity, and cut off threat actors’ ability to escalate their attack. In our case study, action from the security team within two hours of an alert from the organization’s EDR or managed detection and response (MDR) solution correlated with successful isolation of the threat in almost a third of engagements. Some of the observed alerts that prompted swift response in pre-ransomware engagements included, amongst others, the following (a rough detection sketch follows the list):
Attempted connections to blocked domains
Brute force activity
PowerShell download cradle
Deviations from expected baseline activity as determined by the organization
Newly created domain administrator accounts
Successful connections to unknown external public IP addresses
Reconnaissance activity, including shell access and user discovery commands such as whoami
Modification of multi-factor authentication (MFA) tooling to provide bypass tokens
Modification of an account to be exempt from MFA requirements
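As a rough illustration of how some of these signals can be surfaced, here's a minimal Python sketch that scans process-creation events exported as JSON lines (for example, from Sysmon Event ID 1 forwarded by a log pipeline) for common reconnaissance commands. The field names are assumptions about the export format; treat it purely as a starting point, not a substitute for proper EDR or SIEM rules:

```python
import json
import sys

# Commands frequently seen during hands-on-keyboard reconnaissance
SUSPICIOUS = ("whoami", "nltest", "net view", "net group", "quser", "netscan")

def scan(path):
    """Print process-creation events whose command line contains a recon command."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            cmd = event.get("CommandLine", "").lower()
            if any(marker in cmd for marker in SUSPICIOUS):
                print(f"[recon?] host={event.get('Computer', '?')} "
                      f"user={event.get('User', '?')} cmd={cmd}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "process_events.jsonl")
```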
Notification from USG and/or other partners about ransomware staging
In almost 15 percent of engagements, targeted organizations were able to get ahead of the threat to their environment due to notification from U.S. government (USG) partners and representatives of their managed service provider (MSP) about possible ransomware staging in their environment. In particular, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has launched an initiative to provide early warnings about potential ransomware attacks, aiming to help organizations detect threats and evict actors before significant damage occurs. CISA’s intelligence predominantly derives from their partnerships with the cybersecurity research community, infrastructure providers and cyber threat intelligence companies.
Security solutions configured to block and quarantine malicious activity
In over 10 percent of Talos IR engagements, customers’ security solutions actively blocked and/or quarantined malicious executables, effectively stopping adversaries’ attack chains in their tracks.
Talos often observes organizations deploying endpoint protection technology in a passive manner, meaning the product is producing alerts to the user but not taking other actions. This configuration puts organizations at unnecessary risk, and Talos IR has responded to multiple engagements where passive deployment enabled threat actors to execute malware, including ransomware. A more aggressive configuration impeded ransomware deployment in this case study, underscoring its importance.
Robust security restrictions prevented access to key resources
Based on our analysis, organizations’ robust security restrictions were key in impeding ransomware actors’ attack chains in nine percent of engagements. For example, in one engagement, the threat actors compromised a service account at the targeted organization, but appropriate privilege restrictions on the account prevented their attempts to access key systems like domain controllers.
Also of note, organizations who implemented thorough logging and/or had a SIEM in place to aggregate event data were able to provide Talos with forensic visibility to determine the exact chain of events and where additional security measures could be implemented. When an organization lacks these records, it can be challenging to identify the precise security weaknesses that enabled threat activity.
Most observed pre-ransomware indicators
Upon categorizing TTPs observed in this case study per the MITRE ATT&CK framework, Talos found that the following in Figure 2 were most frequently seen across engagements.
Figure 2. Prevalence of pre-ransomware TTPs.
We dove deeper into some of the top attack techniques and found the following:
Remote Services: Talos IR frequently saw remote services such as RDP, PsExec and PowerShell leveraged by adversaries.
Remote Access Software: Frequently seen remote access software included AnyDesk, Atera, Microsoft Quick Assist and Splashtop.
OS Credential Dumping: Top observed credential dumping techniques/locations included the domain controller registry, the SAM registry hive, AD Explorer, LSASS and NTDS.DIT. Mimikatz was also frequently used.
Network Service Discovery: Top observed tools and commands used for network service discovery included netscan, nltest and netview.
The top observed TTPs serve as a reminder to security teams of what malicious activity often precedes a more severe attack. For example, prioritizing tighter control over remote services and remote access software, and/or securing the aforementioned credential stores, could help limit the majority of adversaries seen in these pre-ransomware engagements.
Observed security gaps and prevalent Talos IR recommendations
Talos IR crafts security recommendations for customers in each incident upon analyzing the environment and the adversary’s attack chain to help address any existing security weaknesses. Our most frequent recommendations include:
Bring all operating systems and software patching up to date.
Store backups offline.
Configure security solutions to permit only proven benign applications to launch and prevent the installation of unexpected software.
Require MFA on all critical services, including remote access and identity access management (IAM) services, and monitor for MFA misuse.
Deploy Sysmon for enhanced endpoint visibility and logging.
Implement meaningful firewall rules for both inbound and outbound traffic to prevent adversaries from using unwanted protocols for C2 or data exfiltration.
Implement robust network segmentation to minimize lateral movement and reduce the attack surface, ensuring valuable assets such as domain controllers do not connect directly to the internet aside from critical functions.
Establish or intensify end-user cybersecurity training on social engineering tactics, including coverage of recently popularized attacks such as MFA fatigue and adversary-in-the-middle (AiTM) token phishing.
As the attack surface expands and the threat landscape grows more complex, it’s time to consider whether your data protection strategy is fit for purpose
The internet is now a second home for most kids and teens. Many get their first device in elementary or middle school, while modern education basically runs on technology. Cybercriminals know this, and they can trick kids into revealing personal details, send harmful links, lure them into unsafe chats, or even drain their parents’ bank accounts.
That’s why cybersecurity needs to become a part of everyday life at home. Our guide to reducing your kids’ digital footprint will give you a firm grasp of the risks and help you create a safe online environment — without resorting to blanket bans or breeding resentment.
What to watch out for
First, let’s identify the digital “hot spots” where your attention as a parent matters most:
Group chats for schools or universities on unsecured messaging apps
Voice chats in video games
Oversharing on social platforms
Searching on the web and across global social networks
Using AI tools and generating content safely
General safe-use practices for devices and public networks
The best way to protect your kids isn’t through strict controls — it’s through honest conversation. Sure, you can block websites, introduce a phone curfew, and hover over your child every time they use Gemini. But this risks losing their trust: you could end up looking like a villain standing in the way of their freedom. Heavy-handed restrictions always invite attempts to get around them. It’s far better to build understanding and explain why the rules exist in the first place.
Here are some practical steps to help your child stay out of trouble and keep their digital footprint under control.
Watch what you post
For Gen Z and Gen Alpha, sharing life online is second nature. But oversharing — being too open online — often opens the door to hacking and even offline risks.
Remind your child never to share their last name, date of birth, school name, or city when signing up for services. Explain the risk: attackers could use that data to find them and build false trust — for example, greeting them by name and posing as a classmate’s relative.
Turn off geolocation in posts and stories by default. If a post needs a location, only publish it after your child has left that place.
Also be careful with places your child visits regularly, and avoid sharing travel plans. The “gold standard” is to teach your child to remove geotags from photos they upload. We covered why this matters — and how to do it — in our post Metadata: Uncovering what’s hidden inside.
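For the technically curious, here's a minimal Python sketch (using the Pillow library) that re-saves a photo's pixels into a fresh file, dropping EXIF metadata, including GPS coordinates, along the way; the file names are just examples:

```python
from PIL import Image  # pip install Pillow

def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image without EXIF data (GPS tags included)."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))  # copy pixels only, not metadata
    clean.save(dst)

strip_metadata("vacation.jpg", "vacation_clean.jpg")
```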
Another taboo is sharing personal info — and in some cases even school uniforms. If the school has a distinctive look, photos or videos of clothing (whether sports or regular) can still give away too much.
Reinforce the first rule of the internet: what goes online, stays online. Everything they post can have consequences — from damaged reputations to data in scammers’ hands. If your child simply wants to share their experiences, suggest starting a blog. We cover how to do this safely here: How to help your kid become a blogger without ever worrying about their safety.
Too-good-to-be-true offers, surprise prizes, and other “incredible deals” should always raise suspicion — and be shown to you before following the link. We’ve covered phishing schemes in detail, for example, in our post How scammers attack young gamers; use the examples there to show your child what can happen if links aren’t checked.
Be careful with who you play with online
Caught up in a multiplayer game with voice chat, teens may let their tongues run wild. The gaming world has become a prime space for grooming — when adults build trust with teens for harmful purposes. So set a clear boundary with your child: voice chat should stick to gameplay only. If someone tries to steer things into personal topics, it’s safer to end the conversation — and if they persist, block them.
Avoid public Wi-Fi
Explain that using public Wi-Fi networks is inherently unsafe: attackers can easily intercept logins, passwords, messages, and other sensitive data. Whenever possible, it’s best to stick to mobile data. If connecting to unsecured Wi-Fi is the only way to stay online, protect the connection with a trusted VPN service. That way your child’s data won’t leak.
Watch what you download
Android smartphones are tempting targets for scammers of all stripes. Although malicious apps exist for iPhones too, it’s still easier to sneak onto Android. Teach your child that malicious files can take many forms. They may arrive through messengers or email disguised as photos or documents — even forwarded “homework assignments” — and can also hide behind links in their favorite Discord channels. By default, all attachments should be treated with caution and scanned automatically with a reliable antivirus.
Use AI wisely — and think for yourself
Unsupervised chatbot use isn’t just an ethical or psychological issue — it’s a security risk. Recently, Google indexed tens of thousands of ChatGPT conversations, making them accessible internet-wide.
Explain to your child not to treat AI as a best friend for pouring out their soul. AI tools often collect large amounts of personal data — everything your child types, asks, or uploads in the chat. Make it clear they also shouldn’t share real names, school information, photos, or private details with AI.
And emphasize that chatbots are tools and helpers — not “wizards” that can think for them. Explain that AI can’t think, so any “facts” offered must be double-checked.
Help with content filters and parental controls
Start by enabling parental controls on all devices your child uses: smartphones, tablets, computers — even smart TVs. Most operating systems offer built-in features to block explicit websites, restrict certain apps, and filter search results.
On streaming platforms, enable “Restricted” or “Kids” mode to prevent access to adult content. For more fine-tuned control, your best option is Kaspersky Safe Kids, which filters content in real time, allows you to set screen-time limits, and monitors installed apps. It detects and blocks unwanted content that standard filters might miss — especially in browsers — and even shows your child’s physical location and phone battery level.
Watch and discuss together
The most effective filter isn’t a program — it’s you. Make time to watch shows, surf the web, and play games together with your child. This will help you understand what’s going on in their life and create a space to discuss values, feelings, and real-life situations.
To further minimize your child’s digital footprint and reduce the risks of cyberattacks and cyberbullying, use:
Welcome to this week’s edition of the Threat Source newsletter.
This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.
– T.S. Eliot
So this is how Summer Camp 2025 ends, not with a bang but a whimper. We’ve put the summer behind us and are moving on to the next phase of the year, where we all put our noses down and grind from here to the holiday season. Happy Grind Season 2025.
As you know, threat research never takes a day off, but I’m going to step in and remind you all to look at your calendars. Decide, here and now, to take some time before that holiday season so that you can take care of your mental health, because mental health is health.
This is doubly important if you lead a team of people. Take a minute and make sure that they are going to do the same. Ensure your entire team is taking care of themselves. In the end, you will all be better for it.
“As artificial intelligence (AI) systems attain greater autonomy and complex environmental interactions, they begin to exhibit behavioral anomalies that, by analogy, resemble psychopathologies observed in humans.”
The behavior of an evolving AI, and the psychosis it could present, is a touch-point to the long-standing problem of the rogue internal employee. This creates an interesting dynamic for defensive strategies within the evolving insider-threat landscape.
I think understanding this presented framework can go a long way in identifying the types of behaviors that lead to malicious activity — not unlike understanding employee behavior. Stay ahead of the curve and prepare for not only a hallucinated package from an internal AI tool but perhaps a revelation that leads to new and interesting malicious behaviors.
The one big thing
In the latest episode of The Talos Threat Perspective, we explore three vulnerabilities that Talos researchers uncovered (and helped to fix) this year which highlight how attackers are pushing past the boundaries defenders rely on. One lived in the security chip within Dell laptops’ firmware, another in Microsoft Office for macOS permissions and the third in small office/home routers.
Why do I care?
These aren’t just isolated issues. The Dell vulnerability showed that even a clean Windows reinstall isn’t always enough to kick out an attacker. The Office for macOS issue demonstrated how adversaries can “borrow” sensitive permissions like microphone access from trusted apps. And compromised routers allowed attackers to blend in with legitimate ISP traffic, making malicious connections hard to spot. Each case shows just how creative attackers have become.
TransUnion says hackers stole 4.4 million customers’ personal information
TransUnion is one of the largest credit reporting agencies in the United States, and stores the financial data of more than 260 million Americans. The company confirmed that the stolen PII includes customers’ names, dates of birth, and Social Security numbers. (TechCrunch)
Google warns that mass data theft hitting Salesloft AI agent has grown bigger
Google is advising users of the Salesloft Drift AI chat agent to consider all security tokens connected to the platform compromised following the discovery that unknown attackers used some of the credentials to access email from Google Workspace accounts. (Ars Technica)
High-severity vulnerability in Passwordstate credential manager
Passwordstate is urging companies to promptly install an update fixing a high-severity vulnerability that hackers can exploit to gain administrative access to their vaults. (Ars Technica)
JSON config file leaks Azure Active Directory credentials
A publicly accessible configuration file for ASP.NET Core applications has been leaking credentials for Azure Active Directory (AD), potentially allowing cyberattackers to authenticate directly via Microsoft’s OAuth 2.0 endpoints and infiltrate Azure cloud environments. (Dark Reading)
WhatsApp zero-day exploited in attacks targeting Apple users
Tracked as CVE-2025-55177 (CVSS score of 5.4), the issue could have been exploited by an attacker to trigger the processing of content from arbitrary URLs on victims’ devices, WhatsApp’s advisory reads. (SecurityWeek)
Can’t get enough Talos?
Cisco: 10 years protecting Black Hat
Cisco works with other official providers to bring the hardware, software and engineers to build and secure the Black Hat USA network: Arista, Corelight, Lumen, and Palo Alto Networks.
Tales from the Black Hat NOC
How do you build and defend a network where attacks are not just expected, but a part of the curriculum? Hazel sits down with Jessica Oppenheimer to learn more.
Static Tundra exposed
A Russian state-sponsored group, Static Tundra, is exploiting an old Cisco IOS vulnerability to compromise unpatched network devices worldwide.
The flaws and vulnerabilities of cellular networks are regularly exploited to attack subscribers. Malicious actors use devices with catchy names like IMSI Catcher (Stingray) or SMS blaster to track people’s movements and send them spam and malware. These attacks were easiest to carry out on 2G networks, becoming more difficult on 3G and 4G networks through the introduction of security features. But even 4G networks had implementation flaws that made it possible to track subscriber movements and cause other information leaks. Can we breathe a sigh of relief when we upgrade to 5G? Unfortunately not…
An upgrade in reverse
Many practical attacks, such as the aforementioned SMS blaster, rely on a downgrade: forcing the victim’s smartphone to switch to an older communication standard. Legacy standards allow attackers more leeway — from discovering the subscriber’s unique identifier (IMSI), to sending fake text messages under the guise of real companies. A downgrade typically uses a device that jams the signal of the legitimate carrier’s base station, and broadcasts its own. However, this method can be detected by the carrier, and it will become less effective in the future as smartphones increasingly incorporate built-in protection against these attacks, which prevents the switch to 2G and sometimes even 3G networks.
Researchers at the Singapore University of Technology and Design have demonstrated an attack dubbed SNI5GECT, which works on the latest 5G networks without requiring easy-to-detect actions like jamming legitimate base station signals. An attacker within a 20-meter radius of the victim can crash and reboot the target device’s modem, or force-switch it to a 4G network, where the subscriber is easier to identify and track. So how does this attack work?
Before a device and a 5G base station connect to each other, they exchange some information — and the initial stages of this process aren’t encrypted. To establish a secure, encrypted connection, the base station and the smartphone exchange handshakes, but the session parameters are coordinated in plain, unencrypted form. The attacker’s device monitors this process and picks the precise moment to inject its own information block before the legitimate base station does. As a result, the victim’s modem processes malicious data. Depending on the modem and the contents of the data packet, this either causes the modem to switch to a 4G network and refuse to reconnect to that 5G base station, or to crash and reboot. The latter is only good for temporarily disconnecting the victim, while the former brings all known 4G-based surveillance attacks into play.
The attack was demonstrated on the OnePlus Nord CE 2, Samsung Galaxy S22, Google Pixel 7, and Huawei P40 Pro smartphones. These devices use completely different cellular modems (MediaTek, Qualcomm, Samsung, Huawei, respectively), but the problem lies in the characteristics of the standard itself — not in the particular smartphones. The differences are subtle: some modems can be rebooted while others can’t; on some modems, inserting a malicious packet has a 50% success rate, while on others it’s 90%.
The practicality of SNI5GECT
In its current form, the attack is unlikely to become widespread since it has two major limitations. First, the distance between the attacker and the victim can’t be over 20 meters under ideal conditions — even less in a real urban environment. Second, if the smartphone and the 5G base station have already established a connection, the attack cannot proceed. The attacker has to wait for a moment when the victim’s movement or changes in the radio environment require the smartphone to re-register with the base station. This happens regularly, but not every minute, so the attacker has to literally shadow the victim.
Still, such conditions may exist in certain situations, like when targeting people attending a specific meeting, or in an airport business lounge, or similar scenarios. The attacker would also need to combine SNI5GECT with legacy 4G/3G/2G attacks to achieve any practical results, which means making some radio noise.
SNI5GECT plays a significant role as a stepping stone toward more complex and dangerous future attacks. As 5G becomes more popular and older generations of connectivity are phased out, researchers will increasingly work with the new radio protocol, and apply their findings to the next stages of the mobile arms race.
Currently, there is no direct defense against this attack. Disabling 5G for protection is pointless, as the smartphone just switches to a 4G network, which is exactly what hypothetical attackers want. Therefore, we have three pieces of advice:
Regularly patch and update your smartphone’s OS — this usually also updates the modem firmware to fix bugs and vulnerabilities.
Turn on airplane mode before confidential meetings; to be super-safe — leave your device at home.
Consider disabling legacy communication standards (2G/3G) on your smartphone — we discussed the pros and cons of this solution in our post on SMS blasters.
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2025-09-04 12:06:482025-09-04 12:06:48How the SNI5GECT attack on 5G connectivity works, and how it threatens subscribers | Kaspersky official blog
August was a busy month at ANY.RUN. We expanded our list of connectors with Microsoft Sentinel and OpenCTI, added Linux Debian (ARM) support to the SDK, and strengthened detection across hundreds of new malware families and techniques. With fresh signatures, rules, and product updates, your SOC can now investigate faster, detect more threats in real time, and keep defenses sharp against the latest campaigns.
Let’s dive into the details now.
Product Updates
New Connectors: Bringing Threat Intelligence into Your Existing Stack
We continue to expand ANY.RUN connectors so teams can work with familiar tools while boosting threat visibility. Our goal is simple: reduce setup friction and deliver fresh, high-fidelity IOCs directly into your workflows; no extra tools, no complex scripts, no wasted analyst time.
Microsoft Sentinel
ANY.RUN now delivers Threat Intelligence (TI) Feeds directly to Microsoft Sentinel via the built-in STIX/TAXII connector. That means:
Effortless setup: Connect TI Feeds with your custom API key
Enhanced automation: Sentinel’s playbooks automatically correlate IOCs with your logs, trigger alerts, and even block IPs.
Cost efficiency: Maximize your existing Sentinel setup, cut false positives, and reduce breach risks with high-fidelity indicators.
Rich context: Every IOC links back to a sandbox session with full TTPs for faster investigations and informed responses.
Faster detection: Fresh IOCs stream into Sentinel in real time, accelerating threat identification before impact.
Seamless interoperability: TI Feeds work natively within your Sentinel environment, so no workflows need to change.
Indicators with key parameters accessible for browsing inside MS Sentinel
Investigations become faster and responses more precise with IOCs enriched by full sandbox context. Unlike static or delayed threat feeds, ANY.RUN’s TI Feeds are powered by real-time detonations of fresh malware samples observed across attacks on 15,000+ organizations worldwide. The data is updated continuously and pre-processed by analysts to ensure high fidelity and near-zero false positives, so your SOC can act on threats that truly matter.
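Sentinel's built-in connector handles the polling for you, but for a sense of what happens under the hood, here's a minimal Python sketch using the taxii2-client library, assuming the feed is exposed as a TAXII 2.1 collection as the built-in Sentinel connector expects. The collection URL and credentials below are placeholders; the real values come with a TI Feeds subscription:

```python
from taxii2client.v21 import Collection  # pip install taxii2-client

# Placeholder endpoint and credentials; substitute the values provided with your plan
COLLECTION_URL = "https://example-taxii-server/api/v21/collections/ioc-feed/"
collection = Collection(COLLECTION_URL, user="feed-user", password="api-key")

# Pull the latest STIX 2.1 objects (indicators such as IPs, domains, and hashes)
envelope = collection.get_objects()
for obj in envelope.get("objects", []):
    if obj.get("type") == "indicator":
        print(obj.get("pattern"), "|", obj.get("valid_from"))
```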
Want to integrate TI Feeds from ANY.RUN?
Reach out to us and we’ll help you set it up
OpenCTI
For SOC teams using Filigran’s OpenCTI, ANY.RUN now provides dedicated connectors that bring interactive analysis and fresh threat intelligence directly into your workflows. Instead of juggling multiple tools, analysts can analyze files, enrich observables, and track emerging threats inside the OpenCTI interface they already use.
ANY.RUN connectors inside OpenCTI
Interactive Sandbox: Automate analysis of suspicious files and URLs to quickly understand their threat level, TTPs, and collect IOCs.
Detailed documentation on how to set up the OpenCTI connector
SDK Update: Linux Debian (ARM) Support
We’ve expanded our software development kit (SDK) to include Linux Debian 12.2 (ARM, 64-bit) in the Linux connector. This addition ensures that analysts can now automate malware analysis for ARM-based threats alongside Windows, Linux x86, and Android, all from the same SDK.
With this update, your team can:
Submit ARM samples for automated analysis and retrieve detailed reports.
Collect IOCs, IOBs, and IOAs from Debian (ARM) environments in real time.
Integrate ARM analysis seamlessly into SIEM, SOAR, or XDR workflows without extra tools.
ARM-based malware is rapidly expanding across IoT, embedded systems, and cloud servers. By adding Debian ARM support, the SDK gives SOCs earlier visibility into these threats and helps reduce costs by keeping all environments under one automated process.
In August, our team continued to expand detection capabilities to help SOCs stay ahead of evolving threats:
104 new signatures were added to strengthen detection across malware families and techniques.
14 new YARA rules went live in production, boosting accuracy and enabling deeper hunting capabilities.
2,124 new Suricata rules were deployed, ensuring better coverage for network-based attacks.
These updates mean analysts get faster, more confident verdicts in the sandbox and can enrich SIEM, SOAR, and IDS workflows with fresh, actionable IOCs.
New Behavior Signatures
In August, we introduced a new set of behavior signatures to help SOC teams detect obfuscation, persistence, and stealthy delivery techniques earlier in the attack chain. These detections are triggered by real actions, not static indicators, giving analysts deeper visibility and faster context during investigations.
This month’s coverage includes new families and techniques across stealers, lockers, loaders, and RATs:
In August, we released 14 new YARA rules into production to help analysts detect threats faster, improve hunting accuracy, and cover a wider range of malware families and evasion tactics. Key additions include:
Updated extractor – improved parsing for modern samples
Updated Lumma rule – enhanced detection for new campaign variants (sample)
About ANY.RUN
ANY.RUN supports over 15,000 organizations across banking, manufacturing, telecom, healthcare, retail, and tech, helping them build faster, smarter, and more resilient cybersecurity operations.
Our cloud-based Interactive Sandbox enables teams to safely analyze threats targeting Windows, Linux, and Android systems in under 40 seconds; no complex infrastructure required. Paired with TI Lookup, YARA Search, and Threat Feeds, ANY.RUN empowers security teams to accelerate investigations, reduce risk, and boost SOC efficiency.
A recent MIT report, The GenAI Divide: State of AI in Business 2025, brought on a significant cooling of tech stocks. While the report offers interesting observations on the economics and organization of AI implementation in business, it also contains valuable insights for cybersecurity teams. The authors weren’t concerned with security issues: the words “security”, “cybersecurity”, or “safety” don’t even appear in the report. However, its findings can and should be considered when planning new corporate AI security policies.
The key observation is that while only 40% of surveyed organizations have purchased an LLM subscription, 90% of employees regularly use personal AI-powered tools for work tasks. And this “shadow AI economy” — the term used in the report — is said to be more effective than the official one. A mere 5% of corporations see economic benefit from their AI implementations, whereas employees are successfully boosting their personal productivity.
The top-down approach to AI implementation is often unsuccessful. Therefore, the authors recommend “learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives”. So how does this advice align with cybersecurity rules?
A complete ban on shadow AI
A policy favored by many CISOs is to test and implement — or better yet, build one’s own — AI tools and then simply ban all others. This approach can be economically inefficient, potentially causing the company to fall behind its competitors. It’s also difficult to enforce, as ensuring compliance can be both challenging and expensive. Nevertheless, for some highly regulated industries or for business units that handle extremely sensitive data, a prohibitive policy might be the only option. The following methods can be used to implement it:
Block access to all popular AI tools at the network level using a network filtering tool.
Configure a DLP system to monitor and block data from being transferred to AI applications and services; this includes preventing the copying and pasting of large text blocks via the clipboard.
Use an application allowlist policy on corporate devices to prevent employees from running third-party applications that could be used for direct AI access or to bypass other security measures.
Prohibit the use of personal devices for work-related tasks.
Use additional tools, such as video analytics, to detect and limit employees’ ability to take pictures of their computer screens with personal smartphones.
Establish a company-wide policy that prohibits the use of any AI tools except those on a management-approved list and deployed by corporate security teams. This policy should be formally documented, and employees should receive appropriate training.
Unrestricted use of AI
If the company considers the risks of using AI tools to be insignificant, or has departments that don’t handle personal or other sensitive data, the use of AI by these teams can be all but unrestricted. By setting a short list of hygiene measures and restrictions, the company can observe LLM usage habits, identify popular services, and use this data to plan future actions and refine their security measures. Even with this democratic approach, it’s still necessary to:
Conduct regular surveys to find out how often AI is being used and for what tasks. Based on telemetry and survey data, measure the effect and risks of its use to adjust your policies.
Balanced restrictions on AI use
When it comes to company-wide AI usage, neither extreme — a total ban or total freedom — is likely to fit. More versatile would be a policy that allows for different levels of AI access based on the type of data being used. Full implementation of such a policy requires:
A specialized AI proxy that both cleans queries on the fly by removing specific types of sensitive data (such as names or customer IDs) and applies role-based access control to block inappropriate use cases (a rough sketch of the query-cleaning idea follows this list).
An IT self-service portal for employees to declare their use of AI tools — from basic models and services to specialized applications and browser extensions.
A solution (NGFW, CASB, DLP, or other) for detailed monitoring and control of AI usage at the level of specific requests for each service.
Only for companies that build software: modified CI/CD pipelines and SAST/DAST tools to automatically identify AI-generated code, and flag it for additional verification steps.
As with the unrestricted scenario, regular employee training, surveys, and robust security for both work and personal devices.
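To make the first point more concrete, below is a minimal Python sketch of the query-cleaning idea: regex-based redaction of a few obvious identifier patterns before a prompt is forwarded to an external model. A production AI proxy would need far more robust detection (named-entity recognition, organization-specific dictionaries, role-based rules), and the customer-ID format here is purely hypothetical, so treat this as an illustration only:

```python
import re

# Rough patterns for illustration only; real deployments need NER and
# organization-specific dictionaries (customer IDs, project code names, etc.)
PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":   re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "iban":    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "cust_id": re.compile(r"\bCUST-\d{6,}\b"),  # hypothetical internal ID format
}

def scrub(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before forwarding."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}-redacted>", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the complaint from jane.doe@example.com, account CUST-0012345."
    print(scrub(raw))
    # -> "Summarize the complaint from <email-redacted>, account <cust_id-redacted>."
```

In practice, the scrubbed prompt would then be passed to the external model by the proxy, with the original-to-placeholder mapping kept server-side so responses can be re-personalized if needed.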
Armed with the listed requirements, a policy needs to be developed that covers different departments and various types of information. It might look something like this:
| Data type | Public-facing AI (from personal devices and accounts) | External AI service (via a corporate AI proxy) | On-premise or trusted cloud AI tools |
|---|---|---|---|
| Public data (such as ad copy) | Permitted (declared via the company portal) | Permitted (logged) | Permitted (logged) |
| General internal data (such as email content) | Discouraged but not blocked; requires declaration | Permitted (logged) | Permitted (logged) |
| Confidential data (such as application source code, legal or HR communications) | Blocked by DLP/CASB/NGFW | Permitted for specific, manager-approved scenarios (personal data must be removed; code requires both automated and manual checks) | Permitted (logged, with personal data removed as needed) |
| High-impact regulated data (financial, medical, and so on) | Prohibited | Prohibited | Permitted with CISO approval, subject to regulatory storage requirements |
| Highly critical and classified data | Prohibited | Prohibited | Prohibited (exceptions possible only with board of directors approval) |
To enforce the policy, a multi-layered organizational approach is necessary in addition to technical tools. First and foremost, employees need to be trained on the risks associated with AI — from data leaks and hallucinations to prompt injections. This training should be mandatory for everyone in the organization.
After the initial training, it’s essential to develop more detailed policies and provide advanced training for department heads. This will empower them to make informed decisions about whether to approve or deny requests to use specific data with public AI tools.
Initial policies, criteria, and measures are just the beginning; they need to be regularly updated. This involves analyzing data, refining real-world AI use cases, and monitoring popular tools. A self-service portal is needed as a stress-free environment where employees can explain what AI tools they’re using and for what purposes. This valuable feedback enriches your analytics, helps build a business case for AI adoption, and provides a role-based model for applying the right security policies.
Finally, a multi-tiered system for responding to violations is a must. Possible steps:
An automated warning, and a mandatory micro-training course on the given violation.
A private meeting between the employee and their department head and an information security officer.
A temporary ban on AI-powered tools.
Strict disciplinary action through HR.
A comprehensive approach to AI security
The policies discussed here cover a relatively narrow range of risks associated with the use of SaaS solutions for generative AI. To create a full-fledged policy that addresses the whole spectrum of relevant risks, see our guidelines for securely implementing AI systems, developed by Kaspersky in collaboration with other trusted experts.
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2025-09-03 16:06:382025-09-03 16:06:38How businesses should respond to employees using personal AI apps
Open any website, and the first thing you’ll likely see is a pop-up notification about the use of cookies. You’re usually given the option to accept all cookies, accept only necessary ones, or flatly reject them. Regardless of your choice, you probably won’t notice a difference, and the notification disappears from the screen anyway.
Today, we dive a little deeper into the cookie jar: what cookies are for, what types exist, how attackers can intercept them, what the risks are, and how to stay safe.
What are cookies?
When you visit a website, it sends a cookie to your browser. This is a small text file that contains data about you, your system, and the actions you’ve taken on the site. Your browser stores this data on your device and sends it back to the server every time you return to that site. This simplifies your interaction with the site: you don’t have to log in on every single page; sites remember your display settings; online stores keep items in your cart; streaming services know at which episode you stopped watching — the benefits are limitless.
Cookies can store your login, password, security tokens, phone number, residential address, bank details, and session ID. Let’s take a closer look at the session identifier.
A session ID is a unique code assigned to each user when they sign in to a website. If a third party manages to intercept this code, the web server will see them as a legitimate user. Here’s a simple analogy: imagine you can enter your office by means of an electronic pass with a unique code. If your pass is stolen, the thief — whether they look like you or not — can open any door you have access to without any trouble. Meanwhile, the security system will believe that it’s you entering. Sounds like a scene from a crime TV show, doesn’t it? The same thing happens online: if a hacker steals a cookie with your session ID, they can sign in to a website you were already signed in to, under your name, without needing to enter a username and password; sometimes they can even bypass two-factor authentication. This is exactly how, in 2023, hackers hijacked Linus Tech Tips and two other Linus Media Group YouTube channels (with tens of millions of subscribers between them) belonging to the famous tech blogger Linus Sebastian. We’ve already covered that case in detail.
What types of cookies are there?
Now let’s sort through the different cookie varieties. All cookies can be classified according to a number of characteristics.
By storage time
Temporary, or session cookies. These are only used while you’re on the website. They’re deleted as soon as you leave. They’re required for things like keeping you signed in as you navigate from page to page, or remembering your selected language and region.
Persistent cookies. These remain on your device after you leave the site. They spare you the need to accept or decline cookie policies every time you visit. They typically last for about a year.
It’s possible for session cookies to become persistent. For example, if you check a box like “Remember me”, “Save settings”, or some such on a website, the data will be saved in a persistent cookie.
By source
First-party cookies. These are generated by the website itself. They allow the website to function properly and visitors to get a proper experience. They may also be used for analytics and marketing purposes.
Third-party cookies. These are collected by external services. They’re used to display ads and collect advertising statistics, among other things. This category also includes cookies from analytics services like Google Analytics and social media platforms. These cookies store your sign-in credentials, allowing you to like a page or share content on social media with a single click.
By importance
Required, or essential cookies. These support core website features, such as selling products on an e-commerce platform. In this case, each user has a personal account, and essential cookies store their login, password, and session ID.
Optional cookies. These are used to track user behavior and help tailor ads more precisely. Most optional cookies belong to external parties and don’t affect your ability to use all of the site’s features.
By storage technology
Regular cookies. These are stored in text files in the browser’s standard storage. When you clear your browser data, they’re deleted, and after that, the websites that sent them will no longer recognize you.
There are two special subtypes: supercookies and evercookies, which store data in a non-standard way. Supercookies are embedded in website headers and stored in non-standard locations, which allows them to avoid being deleted by the browser’s cleanup function. Evercookies can be restored using JavaScript even after being deleted. This means they can be used for persistent and difficult-to-control user tracking.
The same cookie can fall into multiple categories: for example, most optional cookies are third-party, while required cookies include temporary ones responsible for the security of a specific browsing session. For more details on how and when all these types of cookies are used, read the full report on Securelist.
How session IDs are stolen through session hijacking
Cookies that contain a session ID are the most tempting targets for hackers. Theft of a session ID is also known as session hijacking. Let’s examine some of the most interesting and widespread methods.
Session sniffing
Session hijacking is possible by monitoring or “sniffing” the internet traffic between the user and the website. This type of attack happens on websites that use the less secure HTTP protocol instead of HTTPS. With HTTP, cookie files are transmitted in plain text within the headers of HTTP requests, meaning they’re not encrypted. A malicious actor can easily intercept the traffic between you and the website you’re on, and extract cookies.
These attacks often occur on public Wi-Fi networks, especially those not protected by the WPA2 or WPA3 protocols. For this reason, we recommend exercising extreme caution with public hotspots. It’s much safer to use mobile data. If you’re traveling abroad, it’s a good idea to get a local data plan through the Kaspersky eSIM Store.
Cross-site scripting (XSS)
Cross-site scripting consistently ranks among the top web-security vulnerabilities, and with good reason. This type of attack allows malicious actors to gain access to a site’s data — including the cookie files that contain the coveted session IDs.
Here’s how it works: the attacker finds a vulnerability in the website’s source code and injects a malicious script; that done, all that remains is for you to visit the infected page and you can kiss your cookies goodbye. The script gains full access to your cookies and sends them to the attacker.
Cross-site request forgery (CSRF/XSRF)
Unlike other types of attacks, cross-site request forgery exploits the trust relationship between a website and your browser. An attacker tricks an authenticated user’s browser into performing an unintended action without their knowledge, such as changing a password or deleting data like uploaded videos.
For this type of attack, the threat actor creates a web page or email containing a malicious link, HTML code, or a script with a request to the vulnerable website. Simply opening the page or email, or clicking the link, is enough for the browser to automatically send the malicious request to the target site. All of your cookies for that site will be attached to the request. Believing that it was you who requested, say, the password change or channel deletion, the site will carry out the attackers’ request on your behalf.
That’s why we recommend not opening links received from strangers, and installing a reliable security solution that can alert you to malicious links or scripts.
Predictable session IDs
Sometimes, attackers don’t need to use complex schemes — they can simply guess the session ID. On some websites, session IDs are generated by predictable algorithms, and might contain information like your IP address plus an easily reproducible sequence of characters.
To pull off this kind of attack, hackers need to collect enough sample IDs, analyze them, and then figure out the generating algorithm to predict session IDs on their own.
A large part of the responsibility for cookie security lies with website developers. We provide tips for them in our full report on Securelist.
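To illustrate the developer side of this, here's a minimal Python sketch (using Flask as an example framework) that generates an unpredictable session ID and sets the cookie with the Secure, HttpOnly, and SameSite attributes, mitigating sniffing, script access, and cross-site request forgery respectively. Server-side session storage is omitted for brevity:

```python
import secrets
from flask import Flask, make_response  # pip install flask

app = Flask(__name__)

@app.route("/login")
def login():
    # Cryptographically strong, unpredictable session identifier
    session_id = secrets.token_urlsafe(32)
    resp = make_response("Signed in")
    resp.set_cookie(
        "session_id",
        session_id,
        secure=True,        # only sent over HTTPS, defeating plain-HTTP sniffing
        httponly=True,      # not readable by JavaScript, limiting XSS cookie theft
        samesite="Strict",  # not attached to cross-site requests, blunting CSRF
        max_age=3600,       # short-lived session cookie
    )
    return resp
```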
But there are some things we can all do to stay safe online.
Only enter personal data on websites that use the HTTPS protocol. If you see “HTTP” in the address bar, don’t accept cookies or submit any sensitive information like logins, passwords, or credit card details.
Pay attention to browser alerts. If you see a warning about an invalid or suspicious security certificate when you visit a site, close the page immediately.
Update your browsers regularly or enable automatic updates. This helps protect you from known vulnerabilities.
Regularly clear browser cookies and cache. This prevents old, potentially leaked cookie files and session IDs from being exploited. Most browsers have a setting to automatically delete this data when you close them.
Don’t follow suspicious links. This is especially true of links received from strangers in a messaging app or by email. If you have a hard time telling the difference between a legitimate link and a phishing one, install Kaspersky Premium, which can alert you before you visit a malicious site.
Enable two-factor authentication (2FA) wherever possible. Kaspersky Password Manager is a convenient way to store 2FA tokens and generate one-time codes. It syncs them across all your devices, which makes it much harder for an attacker to access your account after a session has ended — even if they steal your session ID.
Refuse to accept all cookies on all websites. Accepting every cookie from every site isn’t the best strategy. Many websites now offer a choice between accepting all and accepting only essential cookies. Whenever possible, choose the “required/essential cookies only” option, as these are the ones the site needs to function properly.
Connect to public Wi-Fi networks only as a last resort. They are often poorly secured, which attackers take advantage of. If you have to connect, avoid signing in to social media or messaging accounts, using online banking, or accessing any other services that require authentication.
Want to know even more about cookies? Read these articles:
Running a SOC means living in a world of alerts. Every day, thousands of signals pour in: some urgent, many irrelevant. Analysts need to separate noise from real threats, investigate quickly, and keep the organization safe without letting cases pile up.
The challenge isn’t just detecting threats; it’s doing so fast enough to reduce escalations, avoid burnout, and keep operations efficient.
That’s where an all-in-one detection workflow changes everything. ANY.RUN brings together the tools analysts rely on most (live threat feeds, interactive sandboxing, and instant lookups) into a single, streamlined process. The result: faster answers, fewer escalations, and more confidence in every decision.
Why Fragmented Workflows Slow SOCs Down
It’s not the flood of alerts alone that puts SOCs under pressure but the fractured way they’re handled. One tool for threat feeds, another for detonation, a third for enrichment. Every time an analyst switches context, minutes are lost. Multiply that across hundreds of alerts, and the delays add up fast.
The bigger problem is what those delays cause: escalations that didn’t need to happen, senior staff tied up with routine checks, and threats that linger longer than they should. Instead of building momentum, investigations stall.
This is the hidden cost of disconnected tools. They don’t only slow analysts down but also create more work for everyone and open the door to mistakes.
From Chaos to Clarity: The Power of Unified Workflow
When detection runs as one continuous workflow, every step strengthens the next. Instead of losing time hopping between tools, analysts work with a steady flow:
Noise gets filtered early: Live feeds rule out known threats, reducing case load by up to 20% and cutting unnecessary escalations by 30%.
Investigations move faster: The sandbox reveals hidden behavior in real time, lowering MTTR by as much as 21 minutes per case.
Decisions are backed by context: Lookups provide history from millions of past analyses contributed by 15,000+ organizations, giving analysts 24× more IOCs to work with and ensuring every case is backed by evidence.
The result is measurable:
+62.7% more threats detected overall
94% of surveyed users report faster triage
63% year-over-year user growth, driven by analyst efficiency
30% fewer alerts require escalation to senior analysts
The outcome of this unified workflow is speed, clarity and confidence. Analysts know what to act on, what to ignore, and when a case can be closed without doubt.
Threat Feed: Cut Through the Noise
The first challenge in any SOC is deciding which alerts deserve attention. With live IOC streams collected from thousands of users worldwide, ANY.RUN’s TI Feeds work as your early filter. Analysts see instantly whether an IP, domain, or hash has already been confirmed as malicious, and can rule out duplicates on the spot. That means less time wasted on “non-issues” and more focus on the real threats that matter.
ANY.RUN’s TI Feed providing actionable IOCs to SOC teams
Every IOC in the feed is actionable and connected to sandbox analyses, giving analysts not just a red flag but the full context behind it. This means faster triage, more confident decisions, and the ability to trace threats back to their behavior in real-world samples.
The numbers speak for themselves: with Threat Feed and Lookup combined, analysts gain access to 24× more IOCs than from typical isolated sources. And because the feed captures real-world attacks, from targeted phishing campaigns to large-scale malware hitting banks and enterprises, your SOC works with threat data that reflects the real distribution of risks.
ANY.RUN’s Threat Intelligence Feeds, available in a variety of formats with easy integration options
ANY.RUN’s Threat Intelligence Feeds come in multiple formats with simple integration options, making it easy to plug into your existing SIEM, TIP, or SOAR setup.
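As a rough illustration of what that integration can look like, here is a short TypeScript sketch that polls a feed over HTTPS and forwards each indicator to a SIEM collector. The endpoint URLs, authentication header, and JSON shape are assumptions made for the example, not ANY.RUN’s documented API; the vendor’s documentation defines the real formats and parameters.

```typescript
// Illustrative only: URLs, auth header, and response schema are assumed.
interface FeedIndicator {
  type: "ip" | "domain" | "url" | "hash";
  value: string;
  firstSeen: string;
}

async function pullFeed(apiKey: string): Promise<FeedIndicator[]> {
  const res = await fetch("https://feeds.example.com/v1/indicators?format=json", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Feed request failed: ${res.status}`);
  return (await res.json()) as FeedIndicator[];
}

// Push each indicator to a SIEM's HTTP collector (also an assumed endpoint).
async function forwardToSiem(indicators: FeedIndicator[]): Promise<void> {
  for (const ioc of indicators) {
    await fetch("https://siem.example.com/collector/ioc", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(ioc),
    });
  }
}

pullFeed(process.env.FEED_API_KEY ?? "")
  .then(forwardToSiem)
  .catch(console.error);
```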
When an alert passes the filter, it needs proof. This is where ANY.RUN’s interactive sandbox becomes the proving ground, turning suspicious files, scripts, and URLs into full investigations in real time. Instead of waiting for static reports or snapshots, analysts can detonate samples and watch the behavior unfold step by step, just like a real user would.
This approach uncovers what traditional sandboxes often miss:
Hidden payloads that require clicks or triggers to activate.
Staged downloads that reveal themselves only over time.
Evasive tactics designed to bypass automated detection.
But visibility doesn’t depend solely on manual clicks. With automated interactivity, ANY.RUN simulates user actions to expose threats faster, reducing the need for analysts to intervene at every step. Junior analysts gain confidence because the system highlights behaviors for them, while senior staff can focus on advanced investigations instead of routine triage.
The user-friendly interface and AI assistance add another layer of efficiency. Complex behaviors are explained clearly, reports are well-structured, and the entire attack chain is mapped from start to finish.
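For teams that want to trigger detonations straight from their triage scripts, the flow might look something like the sketch below: submit a suspicious URL, then poll for the verdict. The endpoint paths, request fields, and response format are hypothetical placeholders, not ANY.RUN’s documented API.

```typescript
// Hypothetical endpoints and fields, shown only to illustrate the workflow.
const SANDBOX_API = "https://sandbox.example.com/v1";

async function detonateUrl(apiKey: string, url: string): Promise<string> {
  const res = await fetch(`${SANDBOX_API}/analyses`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    // Assumed option: let the service simulate user clicks automatically.
    body: JSON.stringify({ target: url, interactivity: "automated" }),
  });
  if (!res.ok) throw new Error(`Submission failed: ${res.status}`);
  const { taskId } = (await res.json()) as { taskId: string };
  return taskId;
}

async function waitForVerdict(apiKey: string, taskId: string): Promise<string> {
  for (;;) {
    const res = await fetch(`${SANDBOX_API}/analyses/${taskId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const report = (await res.json()) as { status: string; verdict?: string };
    if (report.status === "done") return report.verdict ?? "unknown";
    await new Promise((resolve) => setTimeout(resolve, 10_000)); // poll every 10 s
  }
}
```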
For example, in the case of Lumma Stealer, ANY.RUN captured the full infection chain, from initial dropper to persistence mechanisms, all preserved in a detailed report ready for escalation, rule writing, or sharing.
Lumma Stealer’s full attack chain detected inside ANY.RUN sandbox in 30 seconds
The outcome is a process where analysts of all skill levels can act faster, escalate less, and make decisions with confidence, while SOC leaders gain time back from their most experienced staff.
Threat Lookup: Context at Your Fingertips
Even with full sandbox results, one question always remains: Has this threat been seen before? Knowing whether an IOC belongs to a fresh campaign or something already circulating across industries changes how analysts respond.
Sandbox analyses of recent Tycoon attacks for faster decision making
ANY.RUN’s Threat Lookup delivers that answer in seconds. With access to millions of past analyses contributed by more than 15,000 organizations worldwide, analysts can instantly check whether an IP, domain, or hash has been observed elsewhere. This turns isolated alerts into patterns, helping teams connect the dots and react with confidence.
Early warning from others’ incidents: What hits one enterprise today could reach yours tomorrow. Lookup lets you learn from global telemetry before the threat arrives at your doorstep.
Deeper reporting without heavy lifting: Instead of manually searching across multiple feeds and databases, analysts enrich findings with one query.
Reduced unnecessary escalations: Confirmation from millions of past cases means analysts can validate faster and close tickets sooner.
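In a script, that enrichment step can be as small as one query per indicator. The sketch below assumes a hypothetical lookup endpoint and response fields; it is meant to show the shape of the step, not ANY.RUN’s actual API.

```typescript
// Hypothetical lookup endpoint and schema, for illustration only.
interface LookupResult {
  indicator: string;
  timesSeen: number;          // how many past analyses contain this IOC (assumed field)
  relatedFamilies: string[];  // malware families linked to it (assumed field)
}

async function lookupIndicator(apiKey: string, ioc: string): Promise<LookupResult> {
  const res = await fetch(
    `https://lookup.example.com/v1/search?query=${encodeURIComponent(ioc)}`,
    { headers: { Authorization: `Bearer ${apiKey}` } },
  );
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  return (await res.json()) as LookupResult;
}

// Example: close or escalate an alert based on history.
lookupIndicator(process.env.TI_API_KEY ?? "", "203.0.113.7")
  .then((r) =>
    console.log(
      r.timesSeen > 0
        ? `Known indicator, seen in ${r.timesSeen} analyses (${r.relatedFamilies.join(", ")})`
        : "No history found: treat as a fresh lead and escalate",
    ),
  )
  .catch(console.error);
```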
The result is a smoother close to every investigation: sandbox analysis provides the behavior, Threat Lookup adds the history, and reports go out with stronger evidence. Analysts save time, senior experts get fewer escalations, and the SOC becomes more resilient with every case resolved.
The real power of ANY.RUN is in how the solutions work together, seamlessly feeding into one another to create a single, continuous process.
Instead of bouncing between disconnected tools, analysts move through one streamlined workflow: alerts are filtered at the start, suspicious activity is detonated, the entire attack chain is exposed in real time, and findings are instantly validated against global threat history.
The outcome is faster resolutions, fewer unnecessary escalations, and reports enriched with both behavioral detail and historical context: the kind of evidence leaders and clients can trust.
Sign up today to see how ANY.RUN’s all-in-one suite can turn your SOC into a faster, more confident detection machine.