Although direct messages sent through a chat app are often perceived as a private conversation, it’s actually not that simple. Not only can your chats and data be used for advertising and AI training, but they can be shared with law enforcement and intelligence agencies. Furthermore, perfect strangers — or scammers pretending to be, say, your boss — might reach out to you directly. And attackers can use social engineering techniques to gain access to your account and read all of your chats in real time.
Which services minimize the chance of these unwelcome events? That’s the question experts at the company Incogni set out to answer. They decided to compare popular social networks and messaging apps, and ranked them by privacy level from highest to lowest. The result is the Social Media Privacy Ranking 2025. This was an extensive study covering 15 social networks and chat apps, and comparing them across 18 criteria. Today, we focus on the scores for messaging apps and direct communication platforms — selecting only the most practical evaluation criteria. So, which of the common messaging apps are the most privacy-oriented?
Overall privacy rankings
We’ll start with Incogni’s final conclusions. After summing up all their scores across the criteria, they produced the following privacy rankings (lower is better):
Discord: 10.23
Telegram: 13.08
Snapchat: 13.39
Facebook Messenger: 22.22
WhatsApp: 23.17
But don’t rush to migrate all your chats from WhatsApp to Discord just yet — comparing only the criteria that matter most reveals a different picture. Incogni’s comprehensive study included some very peculiar points, such as the number of fines for data retention violations across all countries, the number of past hacks and data breaches, the readability of the privacy policy, the time it takes to have your data deleted after an account closure request, and so on.
However, there are also highly practical criteria: the types of data collected by the mobile app, the privacy level by default, the amount of user data visible to non-contacts, the use of user data for AI training, and the option to opt out of this. For those concerned about excessive government interference in private correspondence, the score for the response rates to government requests for user information will also be of interest. If we add up the scores from only these practical categories, the rankings shift significantly:
Telegram: 4.23
Snapchat: 7.72
Discord: 8.14
WhatsApp: 11.93
Facebook Messenger: 13.37
Incogni penalized WhatsApp 3.4 points for the fact that chats may be used for AI training, and users can’t opt out. However, there’s one important caveat: as of today, this only applies to user chats with Meta’s AI assistant, while other chats are still protected by end-to-end encryption, and can’t be used for training. Therefore, in our view, a more accurate score for WhatsApp would be 8.53; this doesn’t change its position, but significantly narrows the gap between it and the leading trio.
Let’s move past the numbers now, and review the practically significant findings of the analysis.
Private by default
An app focused on user interests sets all security and privacy settings to safe and private upon installation. The user can then lower the level of privacy where they choose to. Telegram and Snapchat exhibit this commendable behavior. Discord’s default settings are less private, while Facebook Messenger and WhatsApp are down at the bottom of the rankings. A similar situation is found with the number of privacy settings — Telegram and Snapchat offer the most.
We’ve published detailed guides on setting up privacy in Telegram, WhatsApp, and Discord, and you can find privacy configuration tips for many other popular apps, devices, and operating systems on our free Privacy Checker portal.
Secure against strangers
Minimizing the amount of information strangers can see is crucial for both privacy and physical safety. It limits the possibilities for scams, spam, stalking, and child abuse. The most secure accounts are provided equally by Telegram and WhatsApp — tying for first place. Facebook Messenger and Snapchat share second, while Discord ranks last in this regard.
Cooperating with authorities
Telegram doesn’t disclose the percentage of government requests for personal data that it grants — though it’s known to be greater than zero. As for the other platforms, Snapchat most frequently approves such requests (82%), Meta’s services approve them in 78% of cases (the breakdown by service is unknown), and Discord is not far behind at 77.4%.
Collecting data for advertising and other purposes
Every platform collects a certain amount of information about its users, their socio-demographic profile, and preferences. The study distinguishes between general data collection and mobile-app data collection. The former was based on privacy policies; the latter used the data published for the apps in the App Store and Google Play.
Based on general data collection, the leaders with the least amount of collected data are Telegram and WhatsApp. Discord took second place, and Snapchat and Facebook Messenger both ranked last. Regarding mobile-app data collection, the picture is slightly different: Telegram leads by a significant margin, followed by WhatsApp in second place, then Discord, Snapchat, and finally, Facebook Messenger.
Which messaging app is best?
Among the services reviewed, Telegram collects the least data and provides the widest range of privacy settings. While Discord leads the overall rankings thanks to limited data collection and a clean record on privacy fines, it falls short in privacy settings, and doesn’t default to secure options. WhatsApp offers extensive protection against strangers, and collects a relatively modest amount of user data.
Note that the ranking focuses on mainstream apps; more niche messaging apps that place a strong emphasis on privacy were simply not included. Truly confidential/sensitive conversations should ideally be conducted on one of these dedicated private messaging apps.
Additionally, Incogni didn’t focus on encryption. Among the reviewed apps, only WhatsApp offers full end-to-end encryption for all chats by default. This is a crucial consideration, for the hugely popular Telegram doesn’t guarantee message privacy: chats aren’t end-to-end encrypted by default.
Finally, don’t forget that the indicated level of security applies only to the official mobile clients of these messaging services. The desktop versions of popular messaging apps are far more vulnerable due to their architecture. As for using mods or third-party clients, it’s best to avoid them entirely — malicious versions are routinely distributed both through channels and group chats within the messaging services themselves, and through official app stores such as Google Play.
Messaging apps today arguably hold more private information about each of us than any other kind of service. To avoid becoming a victim of a data leak, read our other posts.
We’ve relied on passwords for years to protect our online accounts, but they’ve also become one of the easiest ways attackers get in. Many people reuse or simplify passwords, or even write them down because it’s hard to remember so many. That makes it easier for attackers to take advantage of stolen or reused credentials, and even worse, one stolen password can sometimes unlock several accounts.
Did you know? According to Forbes, 244 million passwords were leaked on a single crime forum, and half of the world’s internet users have been exposed to reuse attacks.
That’s why passwordless authentication is becoming so important. It lets you prove who you are without typing a password, using things like your fingerprint, face, or a security key on your device. This makes sign-ins easier for you and harder for attackers to fake, helping protect against phishing and stolen or weak passwords.
Clearing up the biggest myths about passwordless
Even with all these benefits, a few common myths still make people hesitate about going passwordless. Let’s clear them up.
It’s easy to assume that “passwordless” means skipping an important layer of protection.
In reality, passwordless is multi-factor. It verifies who you are using both your device and something only you can provide, like your fingerprint or PIN.
When you log in, your device unlocks a unique digital key that never leaves it. Your fingerprint, face, or PIN is only checked locally, never sent online. This makes it nearly impossible for attackers to steal or fake your login: the same strength as MFA, just without the password hassle.
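To make the mechanics concrete, here is a minimal Python sketch (using the cryptography library) of the challenge-response idea behind passkeys: the private key stays on the device, the server stores only the public key, and a fresh random challenge is signed at each login. It illustrates the concept only; it is not the actual FIDO2/WebAuthn protocol, and the names are ours.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the device creates a key pair; only the PUBLIC key goes to the server.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it locally, after the user unlocks the key with a
# fingerprint, face, or PIN (that check never leaves the device)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature with the stored public key.
# No password and no biometric ever travel over the network.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("login verified")
```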
A PIN might look like a password, but it doesn’t work the same way. Instead of being sent over the internet or stored on a company server, your PIN only unlocks your device locally. That means there’s nothing for attackers to steal or guess remotely.
Even a short PIN can be strong because your device limits how many times someone can try it. An attacker would have to physically possess your device to even attempt it. If you want extra protection, you can use a biometric like a fingerprint or face scan instead.
Biometrics sometimes get a bad reputation because people remember early flaws or scary headlines like phones that could be fooled by photos or fake fingerprints. Those issues came from outdated, low-cost sensors that were easier to trick.
Modern systems like Face ID and Windows Hello use 3D mapping, infrared light and “liveness” detection to make spoofing extremely difficult. In passwordless authentication, your fingerprint or face simply unlocks a private key stored on your device. That key never leaves your phone or computer and can’t be reused on other sites. Because biometrics are checked locally, not online, they block the remote attacks that plague passwords.
Some worry that using biometrics means handing over personal data that could be stolen. That concern usually comes from news about biometric surveillance, where information is stored in large central databases.
Passwordless authentication works differently. Your biometric stays on your device and is only used to unlock a local security key — it’s never uploaded, shared, or compared against a massive database.
The difference matters. Surveillance biometrics identify you remotely by matching your data against millions of records. Authentication biometrics, like Face ID or Windows Hello, simply confirm that you are the one holding your own device. That local check is what keeps your biometric private and safe.
A truly phishing-resistant passwordless system has a few built-in protections against modern phishing techniques.
Each login uses a unique digital key that stays on your device and never gets sent to the website. Even if someone builds a fake login page, there’s nothing to steal or reuse. That’s because passwordless systems check that you’re on the real website, not a look-alike page. Your browser does that check automatically before letting your device complete the login.
And only trusted software on your device can trigger your authenticator to approve a login. Hidden apps or push-phishing attempts can’t reach it.
Together, these protections make phishing far harder and, in most cases, stop it completely.
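As a loose illustration of that origin binding, the toy sketch below (hypothetical names, not a real API) shows why a look-alike domain gets nothing: the credential created at sign-up is keyed to the exact site identifier, so a phishing page has no matching key to invoke.

```python
# Hypothetical credential store on the device, keyed by the site's identifier.
credentials = {"accounts.example.com": "private-key-handle-123"}

def approve_login(origin_seen_by_browser: str) -> str:
    # The browser reports the real origin; the authenticator will only use a
    # credential registered for that exact identifier.
    key = credentials.get(origin_seen_by_browser)
    return f"signing challenge with {key}" if key else "no credential; login refused"

print(approve_login("accounts.example.com"))  # legitimate site: login proceeds
print(approve_login("accounts.examp1e.com"))  # look-alike page: nothing to steal or reuse
```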
The bottom line: Easier, safer sign-ins for everyone
Passwordless isn’t just a new way to log in. It’s a safer, simpler way to protect what matters most. Whether at home or at work, taking small steps toward passwordless helps reduce risk and makes security easier for everyone.
Here at ANY.RUN, we know how crucial threat intelligence is for ensuring strong cybersecurity, especially in organizations.
This year, our efforts in promoting this data-driven approach to solving the needs of businesses were praised at CyberSecurity Breakthrough Awards. ANY.RUN was recognized as the Threat Intelligence Company of the Year 2025.
New Milestone on the Way to Safer Future
This wasn’t an easy win: the CyberSecurity Breakthrough Awards is a prestigious international program judged by an independent panel of industry experts. Our sincere thanks go to them for recognizing our role in driving innovative enterprise-grade solutions forward.
But above all, we’d like to thank our global community of clients, contributors, and partners. It’s a shared win for all of us.
ANY.RUN’s TI Lookup providing IOCs related to Agent Tesla threats submitted in Germany
Threat Intelligence Changes Everything
Earlier this year, ANY.RUN’s solutions gained global acclaim and won multiple awards, such as the Globee Awards and the Cybersecurity Excellence Awards. But this victory stands out, as it recognizes our influence as a company and reflects our focus on the integrity of a unified workflow.
TI Feeds accumulate threat data and enrich your system with it for expanded threat coverage
ANY.RUN’s Threat Intelligence Feeds and Threat Intelligence Lookup are redefining how SOCs operate in today’s threat landscape. Instead of relying on outdated indicators from post-incident reports, ANY.RUN leverages insights from a global community of over 500,000 analysts and 15,000 organizations actively analyzing the latest threats in our Interactive Sandbox.
Gain a 3x boost in performance rates: acclaimed TI solutions for your SOC.
Continuously updated threat intelligence helps SOC teams gain automation and data needed to stay ahead of evolving attacks. Security leaders can make faster, more confident decisions, strengthen proactive defense strategies, and maximize the ROI of their security stack.
Access 24x more IOCs per incident for wider visibility: Live data on global attacks ensures comprehensive threat coverage of new malware and phishing.
Enrich your system with 99% unique IOCs to reduce workload: In-depth intel cuts Tier 1/Tier 2 investigations and promotes confident decisions.
Accelerate MTTR by 21 min per case for faster action: Threat behavior context for IOCs/IOAs/IOBs provides insights for streamlined incident mitigation.
About ANY.RUN
Over 500,000 cybersecurity professionals and 15,000+ organizations across finance, manufacturing, healthcare, and other industries trust ANY.RUN and accelerate their malware investigations worldwide.
Faster triage and response with ANY.RUN’s Interactive Sandbox: Safely detonate suspicious files, observe malicious behavior in real time, and gain insights for faster, confident security decisions.
A recent publication by researchers at the University of California, Irvine, demonstrates a fascinating fact: optical sensors in computer mice have become so sensitive that, in addition to tracking surface movements, they can pick up even minute vibrations — for instance, those generated by a nearby conversation. The theoretical attack, dubbed “Mic-E-Mouse”, could potentially allow adversaries to listen in on discussions in “secure” rooms, provided the attacker can somehow intercept the data transmitted by the mouse. As is often the case with academic papers of this kind, the proposed method comes with quite a few limitations.
Specifics of the Mic-E-Mouse attack
Let’s be clear from the start — not just any old mouse will work for this attack. It specifically requires models with the most sensitive optical sensors. Such a sensor is essentially an extremely simplified video camera that films the surface of the desk at a resolution of 16×16 or 32×32 pixels. The mouse’s internal circuitry compares consecutive frames to determine how far and in which direction the mouse has moved. How often these snapshots are taken determines the mouse’s final resolution, expressed in dots per inch (DPI). The higher the DPI, the less the user has to move the mouse to position the cursor on the screen. There’s also a second metric: the polling rate — the frequency at which the mouse data is transmitted to the computer. A sensitive sensor in a mouse that transmits data infrequently is of no use. For the Mic-E-Mouse attack to even be feasible, the mouse needs both a high resolution (10,000 DPI or more) and a high polling rate (4,000 Hz or more).
Why do these particular specifications matter? Human speech, which the researchers intended to eavesdrop on, is audible in a frequency range of approximately 100 to 6,000 Hz. Speech causes sound waves, which create vibrations on the surfaces of nearby objects. Capturing these vibrations requires an extremely precise sensor, and the data coming from it must be transmitted to the PC in the most complete form possible — with the data update frequency being most critical. According to the Nyquist–Shannon sampling theorem, an analog signal within a specific frequency range can be digitized if the sampling rate is at least twice the highest frequency of the signal. Consequently, a mouse transmitting data at 4,000 Hz can theoretically capture an audio frequency range up to a maximum of 2,000 Hz. But what kind of recording can a mouse capture anyway? Let’s take a look.
Results of the study on the sensitivity of a computer mouse’s optical sensor for capturing audio information. Source
In graph (a), the blue color shows the frequency response typical of human speech — this is the source data. Green represents what was captured using the computer mouse. The yellow represents the noise level. The green corresponds very poorly to the original audio information and is almost completely drowned in noise. The same is shown in a spectral view in graph (d). It looks as though it’s impossible to recover anything at all from this information. However, let’s look at graphs (b) and (c). The former shows the original test signals: tones at 200 and 400 Hz, as well as a variable-frequency signal from 20 to 16,000 Hz. The latter shows the same signals, but captured by the computer mouse’s sensor. It’s clear that some information is preserved, although frequencies above 1,700 Hz can’t be intercepted.
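To see what that sampling ceiling means in practice, here is a small illustrative Python sketch (ours, not from the paper): tones below half the polling rate survive, while anything above it folds back onto a lower frequency.

```python
import numpy as np

fs = 4000                      # assumed mouse polling rate, samples per second
t = np.arange(0, 1.0, 1 / fs)  # one second of "sensor" samples

for f_tone in (400, 1700, 3500):  # test tones in Hz
    samples = np.sin(2 * np.pi * f_tone * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(samples.size, d=1 / fs)
    print(f"{f_tone} Hz tone is seen at ~{freqs[spectrum.argmax()]:.0f} Hz")

# 400 and 1700 Hz come through; 3500 Hz aliases down to 500 Hz
# because it exceeds the Nyquist limit of fs / 2 = 2000 Hz.
```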
Two different filtering methods were applied to this extremely noisy data. First, the well-known Wiener filtering method, and second, filtering using a machine-learning system trained on clean voice data. Here’s the result.
Spectral analysis of the audio signal at different stages of filtering. Source
Shown here from left to right are: the source signal, the raw data from the mouse sensor (with maximum noise), and the two filtering stages. The result is something very closely resembling the source material.
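For readers who want to poke at the classical stage themselves, SciPy ships a ready-made Wiener filter. The sketch below runs it on synthetic data standing in for the noisy sensor stream; it only mirrors the first filtering step, not the paper’s machine-learning stage.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
fs = 4000                                     # assumed polling rate
t = np.arange(0, 1.0, 1 / fs)
clean = 0.3 * np.sin(2 * np.pi * 150 * t)     # faint "speech-band" component
noisy = clean + rng.normal(0.0, 0.5, t.size)  # buried in sensor noise

# Classical Wiener filtering: keeps locally structured (signal-like) regions
# and suppresses samples that look like pure noise.
denoised = wiener(noisy, mysize=5)

def snr_db(reference, estimate):
    return 10 * np.log10(np.sum(reference**2) / np.sum((estimate - reference) ** 2))

print(f"SNR before filtering: {snr_db(clean, noisy):.1f} dB")
print(f"SNR after filtering:  {snr_db(clean, denoised):.1f} dB")
```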
So what kind of attack could be built based on such a recording? The researchers propose the following scenario: two people are holding a conversation in a secure room with a PC in it. The sound of their speech causes air vibrations, which are transmitted to the tabletop, and from the tabletop to the mouse connected to the PC. Malware installed on the PC intercepts the data from the mouse, and sends it to the attackers’ server. There, the signal is processed and filtered to fully reconstruct the speech. Sounds rather horrifying, doesn’t it? Fortunately, this scenario has many issues.
Severe limitations
The key advantage of this method is the unusual attack vector. Obtaining data from the mouse requires no special privileges, meaning security solutions may not even detect the eavesdropping. However, not many applications access detailed data from a mouse, which means the attack would require either writing custom software, or hacking/modifying specialized software that is capable of using such data.
Furthermore, there are currently not many mouse models with the required specifications (resolution of 10,000 DPI or higher, and polling rate of 4,000 Hz or more). The researchers found about a dozen potential candidates and tested the attack on two models. These weren’t the most expensive devices — for instance, the Razer Viper 8KHz costs around $50 — but they are gaming mice, which are unlikely to be found connected to a typical workstation. Thus, the Mic-E-Mouse attack is more a threat for the future than the present: the researchers assume that, over time, high-resolution sensors will become standard even in the most common office models.
The accuracy of the method is low as well. At best, the researchers managed to recognize only 50 to 60 percent of the source material. Finally, we need to consider that for the sake of the experiment, the researchers attempted to simplify their task as much as possible. Instead of capturing a real conversation, they were playing back human speech through computer speakers. A cardboard box with an opening was placed on top of the speakers. This opening was covered with a membrane with the mouse on top of it. This means the sound source was not only artificial, but also located mere inches from the optical sensor! The authors of the paper tried covering the hole with a thin sheet of paper or cardboard, and the recognition accuracy immediately plummeted to unacceptable levels of 10–30%. Reliable transmission of vibrations through a thick tabletop isn’t even a consideration.
Cautious optimism and security model
Credit where it’s due: the researchers found yet another attack vector that exploits unexpected hardware properties — something no one had previously thought of. For a first attempt, the result is remarkable, and the potential for further research is undoubtedly there. After all, the U.S. researchers only used machine learning for signal filtering. The reconstructed audio data was then listened to by human observers. What if neural networks were also used for speech recognition?
Of course, such studies have an extremely narrow practical application. For organizations whose security model must account for even such paranoid scenarios, the authors of the study propose a series of protective measures. For one, you can simply ban connecting mice with high-resolution sensors — both through organizational policies and, technically, by blocklisting specific models. You can also provide employees with mousepads that dampen vibrations. The more relevant conclusion, however, concerns protection against malware: attackers can sometimes utilize completely atypical software features to cause harm — in this case, for espionage. So it’s worth identifying and analyzing even such complex cases; otherwise, it may later be impossible to even determine how a data leak occurred.
Welcome to this week’s edition of the Threat Source newsletter.
“The truth about the world, he said, is that anything is possible… For existence has its own order and that no man’s mind can compass, that mind itself being but a fact among others.” ― Cormac McCarthy, “Blood Meridian”
Earlier this week, I spent a few days off to take a reading retreat with my wife. Diving into several books from various genres and sitting on several quiet acres in the Texas hill country was a wonderful way to refuel. While reading a completely different book, I was reminded of this quoted section from Cormac and it gave me pause.
We, as security practitioners, often move forward with the knowledge and expertise we’ve acquired along the various paths that led us to this point. It’s easy to fall into the trap of assuming that, because we’ve shared similar experiences, we all possess the same skillsets — each of us following our own string in the maze. In the end, the gaps between what we assume about each other and the reality can be tremendous. How do we ensure these aren’t the very gaps adversaries exploit, and that our perceived strengths don’t become our weaknesses?
It comes down to communication and community. Talking openly about our skillsets can feel unnecessary among seasoned practitioners—we often assume that everyone has followed a similar path through the maze of their careers. But in practice, conversations quickly reveal this isn’t the case. One of the best ways to build the soft skills that help your career grow is by meeting with your team and discussing the technical skills that got you here and how you apply them now, which skills still benefit your daily routine, and which have fallen by the wayside as they’ve become obsolete. If you can create a meeting focused on this topic — rotating among your direct team and involving the team or leader above — you’ll start to pinpoint the skillsets that have the greatest impact on your day-to-day work.
This process will help you identify specific technical skills needed for hiring or training new team members. It also gives junior analysts a clear view of what senior analysts rely on in their expanded roles, helping to guide their own education. Furthermore, when everyone understands the technical and soft skills that leadership uses, it can help remove the distance between technical and people leaders — and can leave a “string” for those early in their careers to follow as they navigate their own path through the maze.
Know your environment. This is always my first answer whenever I’m asked what to do next or how to proceed. Knowing with extreme clarity both the skillsets on your team and the foundation they’re built upon will allow your team to thrive and grow. It also makes it easier to identify opportunities for cross-training and to provide targeted mentorship where it’s needed most.
“War was always here. Before man was, war waited for him. The ultimate trade awaiting its ultimate practitioner.” ― Cormac McCarthy, “Blood Meridian”
The one big thing
According to the Cisco Talos IR Trends Q3 2025 report, over 60% of our incident response cases involved attackers exploiting public-facing applications, mainly through the ToolShell attack chain against unpatched Microsoft SharePoint servers. This is a huge jump from under 10% last quarter. About 20% of cases were ransomware-related (down from 50%), but new ransomware variants and tactics, like using legitimate tools for persistence, were seen. Attackers also increased their use of compromised internal accounts for phishing, and public administration became the top targeted sector for the first time since we began these reports in 2021.
Why do I care?
Attackers are going after anyone with exposed or outdated systems, including local governments and public services. With attackers exploiting new vulnerabilities almost immediately (especially in widely used software like SharePoint), even small delays in patching or weak internal defenses can put your organization and its data at serious risk.
So now what?
Prioritize rapid patching of public-facing applications, especially after new vulnerabilities are disclosed, and implement strong network segmentation to limit attackers’ lateral movement. Additionally, enhancing multi-factor authentication, improving centralized logging, and educating your users can help detect and block attacks earlier.
Top security headlines of the week
Vulnerability in Dolby Decoder can allow zero-click attacks. Tracked as CVE-2025-54957 (CVSS score of 7.0), the security defect can be triggered using malicious audio messages, leading to remote code execution. On Android, the vulnerability can be exploited remotely without user interaction. (SecurityWeek)
131 Chrome extensions caught hijacking WhatsApp Web for massive spam campaign. Researchers have uncovered a coordinated campaign that leveraged 131 rebranded clones of a WhatsApp Web automation extension for Google Chrome to spam Brazilian users at scale. (The Hacker News)
Prosper data breach impacts 17.6 million accounts. Hackers stole personal and financial details belonging to 17.6 million users of the Prosper lending platform, including Social Security numbers and government IDs. (SecurityWeek)
“PassiveNeuron” cyber spies target orgs with custom malware. A threat campaign is targeting high-profile organizations in the government, industrial, and financial sectors across Asia, Africa, and Latin America, with two custom malware implants designed for cyber espionage. (Dark Reading)
Beers with Talos: Two Marshalls, one podcast. Talos’ Vice President Christopher Marshall (the “real Marshall,” much to Joe’s displeasure) joins Hazel, Bill, and Joe for a very real conversation about leading people when the world won’t stop moving.
Fresh, actionable IOCs from the latest malware attacks are now available to all security teams using the ThreatQ TIP. ANY.RUN’s Threat Intelligence Feeds integrate seamlessly with the platform, enabling SOCs and MSSPs to boost detection rates, expand threat coverage, and streamline response.
Here’s how you can benefit from it.
Real-Time Visibility of the Current Threat Landscape
As a leading Threat Intelligence Platform (TIP) widely adopted in enterprise SOCs, ThreatQ serves as a powerful indicator management solution, facilitating response.
TI Feeds are extracted from live sandbox analyses of the latest threats
ANY.RUN’s Threat Intelligence Feeds connector for ThreatQ provides a real-time stream of fresh, filtered, low-noise network IOCs from sandbox investigations of the latest attacks by 15,000+ companies and 500,000+ analysts.
With STIX/TAXII support, TI Feeds can be made a part of SOCs’ security infrastructure without additional custom development or costs, helping them maximize the value of their existing ThreatQ setup. As a result, security teams can achieve:
Early Detection: IOCs added to TI Feeds as soon as they emerge from live sandbox analyses, enabling proactive identification of new threats in your SOC.
Expanded Threat Coverage: 99% unique indicators from global attacks (e.g., phishing, malware) provide visibility into threats traditional feeds miss.
Informed Response: Each IOC comes enriched with a link to a sandbox report, showing the full attack being detonated on a live system, providing SOCs with actionable context for fast mitigation.
Reduced Workload: Feeds are filtered to confirmed malicious indicators, cutting the Tier 1 analysis time spent on false positives.
Expand threat coverage. Slash MTTR. Identify incidents early. Try TI Feeds in your SOC and see instant results.
Threat Intelligence Feeds improve and simplify the core operations of security teams, delivering measurable results. Here’s a possible case for using our fresh threat intelligence to spot and contain attacks:
Easy Setup for Fast IOC Delivery: Connect ANY.RUN’s TI Feeds to ThreatQ via STIX/TAXII in minutes. Choose hourly, daily, or custom schedules to get real-time IOCs from global incidents, keeping your SOC ahead of new threats.
Power SOC Analysis with Actionable Data: ANY.RUN’s TI Feeds flow into ThreatQ, providing fresh IOCs to analyze alerts, investigate incidents, or enrich SIEM/EDR systems. This speeds up threat detection and strengthens your defense strategy.
Streamline Response and Prevention: Use ANY.RUN’s IOCs in ThreatQ to automate threat blocking, isolate risks, or enhance playbooks and visualizations. SOC analysts and threat hunters can respond faster and prevent attacks, saving time and reducing breach risks.
How to Implement
The connector operates through the STIX/TAXII protocol, allowing clients to configure feed schedules within ThreatQ’s flexible options: hourly, every 6 hours, daily, bi-daily, bi-weekly, or monthly updates.
ThreatQ leverages ANY.RUN data for real-time or scheduled analysis as a malicious indicator source for alert and incident investigation. With additional connectors, organizations can optionally forward intelligence to their SIEM/EDR systems.
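For teams that script around their TIP, the sketch below shows roughly what pulling the feed over TAXII 2.1 looks like with the open-source taxii2-client library. The collection URL and credentials are placeholders; use the values supplied with your ANY.RUN TI Feeds subscription, and treat the forwarding helper as hypothetical.

```python
from taxii2client.v21 import Collection

# Placeholders: substitute the TAXII collection URL and credentials
# provided with your TI Feeds subscription.
collection = Collection(
    "https://taxii.example.com/api/v21/collections/network-iocs/",
    user="YOUR_USER",
    password="YOUR_TOKEN",
)

envelope = collection.get_objects()              # TAXII 2.1 envelope: {"objects": [...]}
for obj in envelope.get("objects", []):
    if obj.get("type") == "indicator":
        # STIX indicators carry a pattern such as [ipv4-addr:value = '203.0.113.7']
        print(obj.get("pattern"), obj.get("valid_from"))
        # forward_to_siem(obj)                   # hypothetical helper for your SIEM/EDR
```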
Workflow Capabilities
Depending on configuration settings, the system supports:
Manual or automated response actions (isolation, blocking, escalation)
Investigation enrichment and new rule/playbook configuration
Advanced visualizations for analysts and threat hunters
Quick Setup Guide with 5 Easy Steps:
1. Open ThreatQ and click My Integrations in the Integrations tab.
2. Click Add New Integration.
3. Configure the TAXII connection: go to the Add New TAXII Feed tab and fill out the configuration form.
4. After adding the TAXII feed, click the settings button on the created connector card, set the switch to Enabled, and you’re all set up.
5. After finalizing the configuration, use the retrieved indicators to:
Export them to SIEM/SOAR to automate detection and blocking of threats
Prioritize high-risk threats to stay focused on the most critical incidents
Combine them with data from other sources to gain full visibility into attacks
Enrich and accelerate threat hunting and investigations with actionable intelligence
Launch playbooks for automated response to threats.
About ANY.RUN
ANY.RUN is trusted by more than 500,000 cybersecurity professionals and 15,000+ organizations across finance, healthcare, manufacturing, and other critical industries. Our platform helps security teams investigate threats faster and with more clarity.
Speed up incident response with our Interactive Sandbox: analyze suspicious files in real time, observe behavior as it unfolds, and make faster, more informed decisions.
Strengthen detection with Threat Intelligence Lookup and TI Feeds: give your team the context they need to stay ahead of today’s most advanced threats.
Threat actors predominately exploited public-facing applications for initial access this quarter, with this tactic appearing in over 60 percent of Cisco Talos Incident Response (Talos IR) engagements – a notable increase from less than 10 percent last quarter. This spike is largely attributable to a wave of engagements involving ToolShell, an attack chain that targets on-premises Microsoft SharePoint servers through exploitation of vulnerabilities that were publicly disclosed in July. We also saw an increase in post-exploitation phishing campaigns launched from compromised valid accounts this quarter, a trend we noted last quarter, with threat actors using this technique to expand their attack both within the compromised organizations as well as to external partner entities.
Ransomware incidents made up only approximately 20 percent of engagements this quarter, a decrease from 50 percent last quarter, despite ransomware remaining one of the most persistent threats to organizations. Talos IR responded to Warlock, Babuk, and Kraken ransomware variants for the first time, while also responding to previously seen families Qilin and LockBit. We observed an attack we attributed with moderate confidence to the threat actor that Microsoft tracks as China-based group Storm-2603 based on overlapping tactics, techniques, and procedures (TTPs). As part of their attack chain, the actors leveraged open-source digital forensics and incident response (DFIR) platform Velociraptor for persistence, a tool that has not been previously seen in ransomware attacks or associated with Storm-2603. We also responded to more Qilin ransomware engagements than last quarter, supporting our assessment from last quarter that the threat group is likely accelerating the cadence of their attacks.
ToolShell attacks underscore importance of robust segmentation and rapid patching
As mentioned above, threat actors exploited public-facing applications for initial access in over 60 percent of engagements this quarter. Almost 40 percent of all engagements involved ToolShell activity, contributing significantly to this tactic’s rise in popularity.
Starting in mid-July 2025, threat actors began actively exploiting two path traversal vulnerabilities affecting on-premises SharePoint servers: CVE-2025-53770 and CVE-2025-53771. These two vulnerabilities are related to CVE-2025-49704 and CVE-2025-49706, which had been previously featured in Microsoft Patch Tuesday updates in early July. One of the key features of the older vulnerabilities was that the adversary needed to be authenticated to obtain a valid signature by extracting the ValidationKey from memory or configuration. With the discovery of the newer vulnerabilities, attackers managed to eliminate the need to be authenticated to obtain a valid signature, resulting in unauthenticated remote code execution.
This quarter’s ToolShell activity highlights the importance of network segmentation, as attackers demonstrated how they can exploit poorly segmented environments once a single server is compromised to move laterally within a targeted network. For example, in one engagement, the victim organization was impacted by ToolShell exploitation against a SharePoint server, then experienced a ransomware attack a few weeks later. In the latter attack, Talos IR analysis indicated the actors transferred credential stealing malware from the affected public-facing SharePoint server to a SharePoint database server on the victim’s internal network, demonstrating how they leveraged the trusted relationship between the two servers to expand their foothold.
The wave of ToolShell attacks also shows how quickly threat actors mobilize when significant zero-day vulnerabilities are disclosed and/or proof-of-concepts appear. Active exploitation of the ToolShell vulnerabilities was first observed in the wild on July 18, a day before Microsoft issued its emergency advisory. Almost all Talos IR engagements responding to ToolShell activity kicked off within the following 10 days. Automated scanning enables attackers to rapidly discover and exploit vulnerable hosts while defenders race to test and deploy patches across diverse environments. Patching as soon as possible is key in narrowing that window of exposure, in addition to building safeguards through robust segmentation as mentioned above.
Post-exploitation phishing attacks from compromised accounts persist
Consistent with findings from last quarter, threat actors continued to launch phishing campaigns after their initial compromise by leveraging compromised internal email accounts to expand their attack both within the compromised organization as well as externally to partner entities. This tactic appeared in a third of all engagements this quarter, an increase from last quarter’s 25 percent. Last quarter, we predominately saw this tactic used when phishing was also used for initial access. This quarter, however, we also saw it appear in engagements where other methods, such as valid accounts, were used for initial access.
The follow-on phishing campaigns were primarily oriented towards credential harvesting. For example, in one engagement, the adversary used a compromised Microsoft Office 365 account to send almost 3,000 emails to internal and external partners. To evade detection, the adversary modified the email management rules to hide the sent phishing emails and any replies. Almost 30 employees of the targeted organization received the adversary’s phishing email and at least three clicked on the malicious credential harvesting link that was included; it is unknown how many users at external organizations were impacted. In another engagement, the adversary used a compromised email account to send internal phishing emails containing a link that directed to a credential harvesting page. The malicious site mimicked an Office 365 login page that was configured to redirect to the targeted organization’s legitimate login page upon the user entering their credentials, enhancing the attack’s legitimacy.
Looking forward, as defenses against phishing attacks improve, adversaries are seeking ways to enhance these emails’ legitimacy, likely leading to the increased use of compromised accounts post-exploitation that we have observed recently. Defenders should seek to improve identification and protection capabilities against internal phishing campaigns, with actions such as providing stronger authentication methods for users’ email accounts, enhancing analysis of users’ email patterns and notifying on anomalies, and improving user awareness training.
Ransomware trends
Ransomware incidents made up approximately 20 percent of engagements this quarter, a decrease from 50 percent last quarter, though we assess this dip is likely not indicative of any larger downward trend in the ransomware threat environment. Talos IR responded to Warlock, Babuk and Kraken ransomware variants for the first time, while also responding to previously seen families Qilin and LockBit.
Open-source DFIR tool Velociraptor adopted into ransomware toolkit
We responded to a ransomware engagement this quarter that we assessed with moderate confidence was attributable to the Storm-2603 threat group based on overlapping TTPs, such as the deployment of both LockBit and Warlock ransomware. Storm-2603 is a suspected China-based threat actor that was first seen in July 2025 when they engaged in ToolShell activity. While LockBit is widely deployed by various ransomware actors, Warlock was first advertised in June 2025 and has since been heavily used by Storm-2603. Notably, we also observed evidence of Babuk ransomware files on the customer’s network in this engagement, which has not been previously deployed by Storm-2603 according to public reporting, though it failed to encrypt and only renamed files. The incident severely impacted the customer’s IT environment, including connected Operational Support Systems (OSS), a critical component of telecommunication infrastructure that allows for remote management and monitoring of day-to-day operations.
We discovered the actors installed an older version of open-source DFIR platform Velociraptor on five servers to maintain persistence and launched the tool several times even after the host was isolated. Velociraptor is a legitimate tool that we have not previously observed being abused in ransomware attacks. It is a free product designed to help with investigations, data collection, and remediation during and after security incidents and it provides real-time or near real-time visibility into the activities occurring on monitored endpoints. The version of Velociraptor observed in this incident was outdated (version 0.73.4.0) and exposed to a privilege escalation vulnerability (CVE-2025-6264), which may have been leveraged for persistence as this vulnerability can lead to arbitrary command execution and endpoint takeover. The addition of this tool in the ransomware playbook is in line with findings from Talos’ 2024 Year in Review, which highlights the increasing variety of commercial and open-source products leveraged by threat actors.
Qilin ransomware operators likely accelerate pace of attacks
We saw an increased number of Qilin ransomware engagements kick off this quarter compared to last quarter, when we encountered it for the first time. We predicted last quarter the group was accelerating their operational tempo, based on an increase in disclosures on their data leak site since February 2025. We observed Qilin operators use TTPs consistent with last quarter, including valid compromised credentials for initial access, a binary encryptor customized to the victim, and file transfer tool CyberDuck for exfiltration. In one Qilin engagement this quarter we were able to determine the adversaries’ dwell time as well, finding that the ransomware was executed two days after the attack first began. The steady increase in Qilin activity indicates it will very likely continue to be a top ransomware threat through at least the remainder of 2025, pending any disruption or intervention.
Targeting
For the first time since we began documenting analysis of Talos IR engagements in 2021, public administration was the most targeted industry vertical this quarter. Though it hasn’t been the top targeted vertical before, it is often amongst the most seen, making this observation not entirely unexpected. Organizations within the public administration sector are attractive targets as they are often underfunded and still using legacy equipment. Additionally, the organizations targeted this quarter were largely local governments, which also typically oversee and support public school districts and county-run hospitals or clinics. As such, these entities often have access to sensitive data as well as a low downtime tolerance, making them attractive to financially motivated and espionage-focused threat groups, both of which we observed during these engagements.
Initial access
As mentioned, the most observed means of gaining initial access this quarter was exploitation of public-facing applications, largely due to ToolShell activity. Other observed means of achieving initial access included phishing, valid accounts, and drive-by compromise.
Recommendations for addressing top security weaknesses
Implement detections to identify MFA abuse and strong MFA policies for impossible travel scenarios
Almost a third of engagements this quarter involved multifactor authentication (MFA) abuse, including MFA bombing and MFA bypass — a slight decrease from approximately 40 percent last quarter. MFA bombing, also known as an MFA fatigue attack, involves an attacker repeatedly sending MFA requests to a user’s device, aiming to overwhelm them into inadvertently approving an unauthorized login attempt. MFA bypass encompasses a range of techniques leveraged by attackers to circumvent or disable MFA mechanisms and gain unauthorized access. Talos IR recommends defenders implement detections to identify when MFA has been bypassed, such as deploying products that use behavior analytics to identify new device logins and policies to generate alerts when detected.
Talos IR also encountered numerous engagements this quarter that involved impossible travel scenarios, and recommended organizations implement strong MFA policies when these are detected. An example of an impossible travel scenario would be if a user logs into their account from New York, then the adversary logs into the same account three minutes later from Tokyo.
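As a rough sketch of the logic behind such a detection (thresholds and field names are ours, purely illustrative), one can estimate the travel speed implied by two consecutive logins and flag anything physically implausible:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    time: datetime
    lat: float
    lon: float

def distance_km(a: Login, b: Login) -> float:
    # Haversine great-circle distance between the two login locations.
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    # Flag the pair if the implied speed exceeds roughly airliner speed.
    hours = (curr.time - prev.time).total_seconds() / 3600
    return hours > 0 and distance_km(prev, curr) / hours > max_kmh

# A New York login, then the same account from Tokyo three minutes later.
ny = Login("alice", datetime(2025, 10, 1, 12, 0), 40.71, -74.01)
tokyo = Login("alice", datetime(2025, 10, 1, 12, 3), 35.68, 139.69)
print(impossible_travel(ny, tokyo))  # True: trigger MFA step-up and investigation
```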
Configure centralized logging capabilities across the environment
Insufficient logging hindered investigation and response in approximately a third of engagements, a slight increase from 25 percent last quarter, due to issues such as log retention limitations, logs that were encrypted or deleted during attacks, and lack of logs due to disablement by the adversary. Understanding the full context and chain of events performed by an adversary on a targeted host is vital not only for remediation but also for enhancing defenses and addressing any system vulnerabilities for the future. Talos IR recommends that organizations implement a Security Information and Event Management (SIEM) solution for centralized logging. In the event an adversary deletes or modifies logs on the host, the SIEM will contain the original logs to support forensic investigation.
Conduct robust patch management
Finally, vulnerable/unpatched infrastructure was exploited in approximately 15 percent of engagements this quarter. Targeted infrastructure included unpatched development servers and unpatched SharePoint servers that remained vulnerable weeks after the ToolShell patches were released — we did not include SharePoint servers that were vulnerable before the release of the patches in this category. Exploitation of vulnerable infrastructure enabled adversaries’ lateral movement, emphasizing the importance of patch management.
Top-observed MITRE ATT&CK techniques
The table below represents the MITRE ATT&CK techniques observed in this quarter’s Talos IR engagements. Given that some techniques can fall under multiple tactics, we grouped them under the most relevant tactic in which they were leveraged. Please note this is not an exhaustive list.
Key findings from the MITRE ATT&CK framework include:
Related to the internal phishing campaigns observed this quarter, we saw adversaries leveraging email hiding rules in numerous engagements, hiding certain inbound and outbound emails in the compromised user’s mailbox to evade detection. We also saw user execution of malicious links that directed to credential harvesting pages in these campaigns.
We observed web shells deployed for persistence in the ToolShell engagements this quarter. The most observed web shell, “spinstall0.aspx”, was used to extract sensitive cryptographic keys from compromised servers.
Tactic
Technique
Example
Reconnaissance (TA0043)
T1595.002 Active Scanning: Vulnerability Scanning
It is likely the majority of vulnerable SharePoint servers targeted in the ToolShell engagements this quarter were identified via adversaries’ active scanning methods.
Initial Access (TA0001)
T1190 Exploit Public-Facing Application
Adversaries may exploit a vulnerability to gain access to a target system.
T1598.003 Phishing for Information: Spearphishing Link
Adversaries may send spear phishing messages with a malicious link to elicit sensitive information that can be used during targeting.
T1078 Valid Accounts
Adversaries may use compromised credentials to access valid accounts during their attack.
T1189 Drive-by Compromise
Adversaries may gain access to a system through a user visiting a website over the normal course of browsing.
Execution (TA0002)
T1204.001 User Execution: Malicious Link
An adversary may rely upon a user clicking a malicious link in order to gain execution. Users may be subjected to social engineering to get them to click on a link that will lead to code execution.
T1059.001 Command and Scripting Interpreter: PowerShell
Adversaries may abuse PowerShell to execute commands or scripts throughout their attack.
T1078 Valid Accounts
Adversaries may obtain and abuse credentials of existing accounts to access systems within the network and execute their payload.
T1021.004 Remote Services: SSH
Adversaries may use valid accounts to log into remote machines using Secure Shell (SSH). The adversary may then perform actions as the logged-on user.
Persistence (TA0003)
T1505.003 Server Software Component: Web Shell
Adversaries may backdoor web servers with web shells to establish persistent access to systems.
T1136 Create Account
Adversaries may create an account to maintain access to victim systems.
T1053 Scheduled Task/Job
Adversaries may abuse task scheduling functionality to facilitate initial or recurring execution of malicious code.
T1021.001 Remote Services: Remote Desktop Protocol
Adversaries may use valid accounts to log into a computer via RDP, then perform actions as the logged-on user.
T1078 Valid Accounts
The adversary may compromise a valid account to move through the network to additional systems.
T1547.001 Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder
Adversaries may achieve persistence by adding a program to a startup folder or referencing it with a Registry run key. Adding an entry to the “run keys” in the Registry or startup folder will cause the program referenced to be executed when a user logs in.
Defense Evasion (TA0005)
T1564.008 Hide Artifacts: Email Hiding Rules
Adversaries may use email rules to hide inbound or outbound emails in a compromised user’s mailbox.
T1562 Impair Defenses
Adversaries may maliciously modify components of a victim environment in order to hinder or disable defensive mechanisms.
T1070 Indicator Removal
Adversaries may delete or modify artifacts generated within systems to remove evidence of their presence or hinder defenses.
Credential Access (TA0006)
T1111 Multi-Factor Authentication Interception
Adversaries may target MFA mechanisms, (i.e., smart cards, token generators, etc.) to gain access to credentials that can be used to access systems, services, and network resources.
T1621 Multi-Factor Authentication Request Generation
Adversaries may attempt to bypass MFA mechanisms and gain access to accounts by generating MFA requests sent to users.
T1110.003 Brute Force: Password spraying
Adversaries may use a single or small list of commonly used passwords against many different accounts to attempt to acquire valid account credentials.
Discovery (TA0007)
T1078 Valid Accounts
An adversary may use compromised credentials for reconnaissance against principal accounts.
T1083 File and Directory Discovery
Adversaries may enumerate files and directories or may search in specific locations of a host or network share for certain information within a file system.
T1087 Account Discovery
Adversaries may attempt to get a listing of valid accounts, usernames, or email addresses on a system or within a compromised environment.
T1135 Network Share Discovery
Adversaries may look for folders and drives shared on remote systems as a means of identifying sources of information to gather.
T1021.001 Remote Services: Remote Desktop Protocol
Adversaries may use Valid Accounts to log into a computer using the Remote Desktop Protocol (RDP). The adversary may then perform actions as the logged-on user.
T1033 System Owner/User Discovery
Adversaries may attempt to identify the primary user, currently logged in user, set of users that commonly uses a system, or whether a user is actively using the system.
Command and Control (TA0011)
T1219 Remote Access Tools
An adversary may use legitimate remote access tools to establish an interactive command and control channel within a network.
T1071.001 Application Layer Protocol: Web Protocols
Adversaries may communicate using application layer protocols associated with web traffic to avoid detection/network filtering by blending in with existing traffic.
Exfiltration (TA0010)
T1059.001 Command and Scripting Interpreter: PowerShell
Adversaries may abuse PowerShell commands and scripts.
Impact (TA0040)
T1486 Data Encrypted for Impact
Adversaries may use ransomware to encrypt data on a target system.
Software/Tool
S0029 PsExec
Free Microsoft tool that can remotely execute programs on a target system.
S0591 ConnectWise
A legitimate remote administration tool that has been used since at least 2016 by threat actors.
S0638 Babuk
Babuk is a Ransomware-as-a-service (RaaS) malware that has been used since at least 2021. The operators of Babuk employ a “Big Game Hunting” approach to targeting major enterprises and operate a leak site to post stolen data as part of their extortion scheme.
S1199 LockBit
LockBit is an affiliate-based RaaS that has been in use since at least June 2021. LockBit has versions capable of infecting Windows and VMware ESXi virtual machines, and has been observed targeting multiple industry verticals globally.
A SOC is where every second counts. Amidst a flood of alerts, false positives, and ever-short time, analysts face the daily challenge of identifying what truly matters — before attackers gain ground.
That’s where alert triage comes in: the essential first step in detecting, prioritizing, and responding to threats efficiently. Done right, it defines the overall effectiveness of a SOC or MSSP and determines how well an organization can defend itself.
Spoiler Alert About Alerts
Here’s your spoiler for today: good triage starts with great threat intelligence.
ANY.RUN’s Threat Intelligence Lookup doesn’t just enrich alerts — it rewrites the rules of triage by turning scattered IOCs into instant context. But we’ll get there. Let’s start from the analyst’s desk, where the real noise begins.
ANY.RUN’s Threat Intelligence Lookup: check an IOC and instantly find out everything worth knowing
Why Triage Is the Heartbeat of the SOC
Behind every successful SOC, there’s a smooth triage flow that keeps chaos under control. It’s not just about filtering alerts. It’s about shaping the SOC’s rhythm and resilience.
When analysts perform triage effectively:
They build the first and strongest defense layer against real attacks.
They ensure human attention is spent where it matters most.
They create a foundation for accurate detection and response metrics like MTTD and MTTR.
They make security predictable and measurable, not reactive and random.
The Daily Puzzle: Making Sense of a Thousand Pings
The challenge is not a lack of data — it’s too much of it. The toughest barriers to effective triage include:
Alert overload — When every ping demands attention, focus becomes the first casualty.
False positives — Automation can cry wolf more often than it should.
Threat complexity — Today’s attackers employ sophisticated techniques designed to evade detection.
Context gaps — An IP is just an IP until you know its story.
Time compression — Analysts often have seconds, not minutes, to make judgment calls.
Data silos — TI feeds, SIEMs, and sandboxes don’t always talk to each other.
The result? Valuable threats risk getting buried under a pile of meaningless noise.
Speed, Precision, and the Numbers That Matter
In triage, speed without accuracy is chaos, and accuracy without speed is a luxury. That’s why SOCs track their efficiency through key metrics. KPIs aren’t just for bosses — they’re your triage compass. Track these to benchmark progress and spot bottlenecks:
KPI | Description | Target Benchmark | Why It Matters for Triage
Mean Time to Detect (MTTD) | Average time from threat emergence to alert generation. | n/a | Measures triage speed in spotting signals amid noise.
Alert Closure Rate | Alerts triaged per analyst per shift. | 50–100 | Gauges productivity without burnout.
Escalation Rate | % of alerts bumped to higher tiers. | n/a | Reflects triage accuracy; fewer escalations mean an empowered Tier 1.
Wrong Verdict Rate | Misclassified alerts (internal audit). | n/a | Tracks skill gaps; aim for continuous improvement via training.
As a rule of thumb, low rates mean better prioritization; high ones signal alert fatigue.
High-performing SOCs balance speed and certainty by using intelligence enrichment to cut decision time without cutting quality. Those KPIs are not just numbers; they’re the story of how well your triage works.
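To make the arithmetic behind these metrics concrete, here is a minimal Python sketch; the alert-record fields and sample values are hypothetical and not tied to any particular SIEM schema:

from datetime import datetime
from statistics import mean

# Hypothetical alert records: when the threat emerged, was detected, and was resolved.
alerts = [
    {"emerged": datetime(2025, 10, 1, 9, 0), "detected": datetime(2025, 10, 1, 9, 12), "resolved": datetime(2025, 10, 1, 10, 5), "escalated": False},
    {"emerged": datetime(2025, 10, 1, 11, 30), "detected": datetime(2025, 10, 1, 11, 41), "resolved": datetime(2025, 10, 1, 13, 0), "escalated": True},
]

mttd = mean((a["detected"] - a["emerged"]).total_seconds() / 60 for a in alerts)   # minutes
mttr = mean((a["resolved"] - a["detected"]).total_seconds() / 60 for a in alerts)  # minutes
escalation_rate = 100 * sum(a["escalated"] for a in alerts) / len(alerts)          # percent
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min, escalation rate: {escalation_rate:.0f}%")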
From Metrics to Meaning: Why Triage Drives Business Outcomes
Triage might look like a technical process, but its impact is strategic. Understanding how your triage work supports broader business objectives helps you make better decisions and communicate your value effectively.
For SOCs and MSSPs, efficient triage is a business differentiator:
Fewer false positives mean less analyst burnout and higher client capacity.
Faster incident validation means better SLA performance and client trust.
Smarter prioritization reduces wasted time and investigation costs.
Structured triage data improves long-term visibility and readiness.
In short, triage is where operational efficiency meets customer confidence — and where the SOC’s reputation is quietly built every day.
Turning Alerts into Insight: How ANY.RUN TI Lookup Changes the Game
ANY.RUN’s Threat Intelligence Lookup is a comprehensive threat intelligence service that provides instant access to detailed information about files, URLs, domains, and IP addresses. It enables analysts to explore IOCs, IOBs, and IOAs using over 40 search parameters, basic search operators, and wildcards. The data is derived from millions of live malware sandbox analyses run by a community of 15K corporate SOC teams.
Triage faster to stop attacks early: get instant IOC context via TI Lookup
When you encounter suspicious artifacts, you can query the service to obtain behavioral analysis, threat classification, and historical context — all within seconds.
Here’s what it brings to the triage table:
Instant IOC Enrichment
Drop in any hash, IP, or domain and see how it ties to known malware families, timelines, and campaigns — in seconds. Let’s take, for example, a suspicious domain spotted in the traffic:
Domain check: get a verdict, the context, and additional IOCs
In an instant, you know that the domain is linked to several notorious trojans and has been spotted in recent incidents, so it is clearly malicious and actively used.
Real-Time Malware Activity Stats
The “Malware Threats Statistics” feature spotlights live, active infrastructures, showing which malware families are truly circulating today.
Malware Statistics accessible in Threat Intelligence Lookup
This tab can also be a source of recent IOCs for monitoring and detection.
Behavioral Pivoting
With one click, analysts can move from static enrichment to dynamic ANY.RUN sandbox reports, verifying behavior firsthand.
Sandbox analyses of malware samples using the looked-up domain
Risk-Based Prioritization
TI Lookup reveals which alerts link to active C2s or payloads, helping teams focus on what’s actually dangerous.
For example, certain malware families are known to use specific DGA (domain generation algorithm) implementations, and a TI Lookup query can target these associations.
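As an illustration, such a query could combine a family name with a domain pattern. The family placeholder and the pattern below are made up for this sketch, and the threatName field is an assumption based on TI Lookup’s documented search parameters rather than something taken from a real investigation:

threatName:"<malware family>" AND domainName:"^[a-z0-9]{20,}\."

A hit on a query like this tells you that the domain in your alert fits infrastructure actively used by that family, which is a strong reason to bump the alert’s priority.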
Faster Triage: Two-second access to millions of past analyses confirms if an IOC belongs to a threat, cutting triage time.
Smarter Response: Indicator enrichment with behavioral context and TTPs guides precise containment strategies.
Fewer Escalations: Tier 1 analysts can make decisions independently, reducing escalations to Tier 2.
Shared Knowledge, Unified Context
Lookup data can feed SIEMs or case systems, keeping the entire SOC aligned on the same intelligence. For seamless, native integration with SIEM solutions, try ANY.RUN’s Threat Intelligence Feeds.
Building Your Expert Triage Practice
Beyond tools and technology, developing expert triage skills requires deliberate practice and continuous improvement. Here are strategies to enhance your capabilities:
Develop Pattern Recognition
Over time, you’ll begin recognizing patterns in threats and false positives. Certain types of alerts consistently prove benign, while others frequently indicate genuine threats. Document these patterns and share them with your team to build collective knowledge. Keep TI Lookup at hand to check alerts you’re unsure about and to calibrate your threat radar.
Create Decision Trees
For common alert types, develop decision trees that guide your triage process. It’ll reduce cognitive load, freeing mental resources for complex cases.
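Here is a minimal sketch of what such a decision tree can look like when written down as code; the alert fields, process names, and thresholds are illustrative assumptions, not a prescribed schema:

def triage(alert):
    """Toy decision tree for one common alert type: a suspicious outbound connection."""
    # 1. Destination already known to be malicious: escalate immediately.
    if alert.get("destination_reputation") == "malicious":
        return "escalate"
    # 2. Allowlisted business destination on an expected port: close as benign.
    if alert.get("destination_in_allowlist") and alert.get("port") in (80, 443):
        return "close_benign"
    # 3. Unusual process making the connection: enrich the IOC before a verdict.
    if alert.get("process_name") not in ("chrome.exe", "outlook.exe"):
        return "enrich_ioc"
    # 4. Everything else: keep monitoring and revisit if it recurs.
    return "monitor"

print(triage({"destination_reputation": "unknown", "process_name": "regsvr32.exe"}))  # -> enrich_ioc

Writing the tree down also makes it reviewable: the team can challenge each branch and update it as new patterns emerge.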
Maintain a Knowledge Base
Document your triage decisions, especially for ambiguous or challenging cases. Include the reasoning behind your decisions and the outcomes.
Continuous Learning
The threat landscape evolves constantly, requiring ongoing education. Dedicate time to reading threat intelligence reports, studying new attack techniques, and learning from post-incident reviews. This investment in knowledge pays dividends in improved triage accuracy.
Take Care of Yourself
Analyst fatigue is real and impacts your performance. Take regular breaks, maintain work-life balance, and don’t hesitate to ask for support when workload becomes overwhelming. Your long-term effectiveness depends on sustainability, not short-term heroics.
Turn every IOC into actionable insight for fast containment
Conclusion: Mastering the Art and Science of Triage
Alert triage combines technical skills, analytical thinking, and sound judgment. As an analyst, you’re not just processing alerts. You’re making critical decisions that protect your organization from sophisticated threats while managing resource constraints and time pressure.
The challenges you face are significant: overwhelming alert volumes, persistent false positives, complex threats, and the ever-present risk of fatigue. However, by understanding these challenges and leveraging solutions like ANY.RUN’s Threat Intelligence Lookup, you can transform your triage practice from reactive firefighting to proactive threat hunting.
The future of security operations depends on analysts who can work both fast and smart. With the right approach, tools, and mindset, you can meet the challenges of modern threat detection while building a rewarding and sustainable career in cybersecurity.
About ANY.RUN
ANY.RUN helps more than 500,000 cybersecurity professionals worldwide. Our Interactive Sandbox simplifies malware analysis of threats that target Windows, Linux, and Android systems.
Combined with Threat Intelligence Lookup and Feeds, businesses can expand threat coverage, speed up triage, and reduce security risks.
We’ve previously written about why neural networks are not the best choice for private conversations. Popular chatbots like ChatGPT, DeepSeek, and Gemini collect user data for training by default, so developers can see all our secrets: every chat you have with the chatbot is stored on company servers. This is precisely why it’s essential to understand what data each neural network collects, and how to set them up for maximum privacy.
In our previous post, we covered configuring ChatGPT’s privacy and security in abundant detail. Today, we examine the privacy settings in China’s answer to ChatGPT — DeepSeek. Curiously, unlike in ChatGPT, there aren’t that many at all.
What data DeepSeek collects
All data from your interactions with the chatbot, images and videos included
Details you provide in your account
IP address and approximate location
Information about your device: type, model, and operating system
The browser you’re using
Information about errors
What’s troubling is that the company doesn’t specify how long it keeps personal data, operating instead on the principle of “retain it as long as needed”. The privacy policy states that the data retention period varies depending on why the data is collected, yet no time limit is mentioned. Is this not another reason to avoid sharing sensitive information with these neural networks? After all, dataset leaks containing users’ personal data have become an everyday occurrence in the world of AI.
If you want to keep your IP address private while you work with DeepSeek, use a trusted VPN, such as the one included in Kaspersky Security Cloud. Be wary of free VPN apps: threat actors frequently use them to create botnets (networks of compromised devices). Your smartphone or computer, and by extension you yourself, could thus become an unwitting accomplice in actual crimes.
Who gets your data
DeepSeek is a company under Chinese jurisdiction, so not only the developers but also Chinese law enforcement — as required by local laws — may have access to your chats. Researchers have also discovered that some of the data ends up on the servers of China Mobile — the country’s largest mobile carrier.
However, DeepSeek is hardly an outlier here: ChatGPT, Gemini, and other popular chatbots just as easily and casually share user data upon a request from law enforcement.
Disabling DeepSeek’s training on your data
The first thing to do — a now-standard step when setting up any chatbots — is to disable training on your data. Why could this pose a threat to your privacy? Sometimes, large language models (LLMs) can accidentally disclose real data from the training set to other users. This happens because neural networks don’t distinguish between confidential and non-confidential information. Whether it’s a name, an address, a password, a piece of code, or a photo of kittens — it makes little difference to the AI. Although DeepSeek’s developers claim to have taught the chatbot not to disclose personal data to other users, there’s no guarantee this will never happen. Furthermore, the risk of dataset leaks is always there.
The web-based version and the mobile app for DeepSeek have different settings, and the available options vary slightly. First of all, note that the web version only offers three interface languages: English, Chinese, and System. The System option is supposed to use the language set as the default in your browser or operating system. Unfortunately, this doesn’t always work reliably with all languages. Therefore, if you need the ability to switch DeepSeek’s interface to a different language, we recommend using the mobile app, which has no issues displaying the selected user interface language. It’s important to note that your choice of UI language doesn’t affect the language you use to communicate with DeepSeek. You can chat with the bot in any language it supports. The chatbot itself proudly claims to support more than 100 languages — from common to rare.
DeepSeek web version settings
To access the data management settings, open the left sidebar, click the three dots next to your name at the bottom, select Settings, and then navigate to the Data tab in the window that appears. We suggest you disable the option labeled Improve the model for everyone to reduce the likelihood that your chats with DeepSeek will end up in its training datasets. If you want the model to stop learning from the data you shared with it before turning off this option, you’ll need to email privacy@deepseek.com, and specify the exact data or chats.
Disabling DeepSeek training on your data in the web-based version
DeepSeek mobile app settings
In the DeepSeek mobile app, you also open the left sidebar, click the three dots next to your name at the bottom, and reveal the Settings menu. In the menu, open the Data controls section and turn off Improve the model for everyone.
Disabling DeepSeek training on your data in the app
Managing DeepSeek chats
All your chats with DeepSeek — both in the web version and in the mobile app — are collected in the left sidebar. You can rename any chat by giving it a descriptive title, share it with anyone by creating a public link, or delete a specific chat entirely.
Sharing DeepSeek chats
The ability to share a chat might seem extremely convenient, but remember that it poses risks to your privacy. Let’s say you used DeepSeek to plan a perfect vacation, and now you want to share the itinerary with your travel companions. You could certainly create a public link in DeepSeek and send it to your friends. However, anyone who gets hold of that link can read your plan and learn, among other things, that you’ll be away from home on specific dates. Are you sure this is what you want?
If you’re using the chatbot for confidential projects (which is not advisable in the first place, as it’s better to use a locally running version of DeepSeek for this kind of data, but more on this later), sharing the chat, even with a colleague, is definitely not a good idea. In the case of ChatGPT, similar shared chats were at one point indexed by search engines — allowing anyone to find and read them.
If you absolutely must send the content of a chat to someone else, it’s easier to copy it by clicking the designated button below the message in the chat window, and then to use a conventional method like email or a messaging app to send it, rather than share it with the entire world.
If, despite our warnings, you still wish to share your conversation via a public link, this is currently only possible in the web version of DeepSeek. To create a link to a chat, click the three dots next to the chat name in the left sidebar, select Share, and then, on the main chat board, check the boxes next to the messages you want to share, or check the Select all box at the bottom. After this, click Create public link.
Sharing DeepSeek chats in the web version
You can view all the chats you have shared and, if necessary, delete their public links in the web version, by going to Settings → Data → Shared links → Manage.
Managing shared DeepSeek chats in the web version
Deleting old DeepSeek chats
Why should you delete old DeepSeek chats? The fewer chats you have saved, the lower the risk that your confidential data could become accessible to unauthorized parties if your account is compromised, or if there’s a bug in the LLM itself. Unlike ChatGPT, DeepSeek doesn’t remember or use data from your past chats in new ones, so deleting them won’t impact your future use of the neural network.
However, you can resume a specific chat with DeepSeek at any time by selecting it in the sidebar. Therefore, before deleting a chat, consider whether you might need it again later.
To delete a specific chat: in the web version, click the three dots next to the chat in the left sidebar; in the mobile app, press and hold the chat name. In the window that appears, select Delete.
To delete your entire conversation history: in the web version, go to Settings → Data → Delete all chats → Delete all; in the application, go to Settings → Data controls → Delete all chats. Bear in mind that this only removes the chats from your account without deleting your data from DeepSeek’s servers.
If you want to save the results of your chats with DeepSeek, in the web version, first go to Settings → Data → Export data → Export. Wait for the archive to be prepared, and then download it. All data is exported in the JSON format. This feature is not available in the mobile app.
Managing your DeepSeek account
When you first access DeepSeek, you have two options: either sign up with your email and create a password, or log in with a Google account. From a security and privacy standpoint, the first option is better — especially if you create a strong, unique password for your account: you can use a tool like Kaspersky Password Manager to generate and safely store one.
You can subsequently log in with the same account in other browsers and on different devices. Your chat history will be accessible from any device linked to your account. So, if someone learns or steals your DeepSeek credentials, they’ll be able to review all your chats. Sadly, DeepSeek doesn’t yet support two-factor authentication or passkeys.
If you’ve even the slightest suspicion that your DeepSeek account credentials have been compromised, we recommend taking the following steps. Start by logging out of your account on all devices. In the web version, navigate to Settings → Profile → Log out of all devices → Log out. In the app, the path is Settings → Data controls → Log out of all devices. Next, you need to change your password, but DeepSeek doesn’t offer a direct path to do so once you’re logged in. To reset your password, go to the DeepSeek web version or mobile app, select the password login option, and click Forgot password?. DeepSeek will request your email address, send a verification code to that email, and allow you to reset the old password and create a new one.
Deploying DeepSeek locally
Privacy settings for the DeepSeek web version and mobile app are extremely limited and leave much to be desired. Fortunately, DeepSeek is an open-source language model. This means anyone can deploy the neural network locally on their computer. In this scenario, the AI won’t train on your data, and your information won’t end up on the company’s servers or with third parties. However, there’s a significant downside: when running the AI locally, you’ll be limited to the pre-trained model, and won’t be able to ask the chatbot to find up-to-date information online.
The simplest way to deploy DeepSeek locally is by using the LM Studio application. It allows you to work with models offline, and doesn’t collect any information from your chats with the AI. Download the application, click the search icon, and look for the model you need. The application will likely offer many different versions of the same model.
Searching LM Studio for DeepSeek models
These versions differ in the number of parameters, denoted by the letter B (for billions). The more parameters a model has, the more computation it performs and the better its answers tend to be; consequently, it also needs more resources to run smoothly. For comparison, a modern laptop with 16–32GB of RAM is sufficient for lighter models (7B–13B), but for the largest version, with 70 billion parameters, you’d practically need a data center of your own.
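As a rough rule of thumb, the memory needed for the weights alone is the parameter count multiplied by the bytes per parameter: a 7B model quantized to 4 bits needs about 7 billion × 0.5 bytes ≈ 3.5GB, while the same model at 16-bit precision needs roughly 14GB, with context buffers and runtime overhead coming on top of that. Actual figures vary with the quantization format offered for a given build.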
LM Studio will alert you if the model is too heavy for your device.
LM Studio warning you that the model may be too large for your device
It’s important to understand that local AI use is not a panacea in terms of privacy and security. It doesn’t hurt to periodically check that LM Studio (or another similar application) is not connecting to external servers. For example, you can use the netstat command for that. If you’re not familiar with netstat, simply ask the chatbot to tell you about today’s news. If the chatbot is running locally as designed, the response definitely won’t include any current events.
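As one possible way to automate that check, here is a minimal Python sketch using the third-party psutil package; the process-name match for LM Studio is an assumption and may need adjusting to however the process is named on your system, and listing other processes’ connections may require administrator or root rights:

import psutil

# Walk established network connections and report any owned by the local AI app.
for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue
    if "lm" in name.lower() and "studio" in name.lower():
        print(f"{name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")

An empty output while you chat with the model is exactly what you want to see.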
How to use DeepSeek both privately and securely | Kaspersky official blog (2025-10-21)
Not long ago we reported a spike in phishing attacks that use an SVG file as the delivery vector. One striking detail was how the SVG embeds JavaScript that rebuilds the payload with XOR and then executes it directly via eval() to redirect victims to a phishing page.
A quick look at the indicators we found showed that nearly all related cases used the same exfiltration addresses. Even more telling: the client-side logic and obfuscation techniques were unchanged across samples, and the communication with the C2 servers was implemented in several steps, with validation of the victim’s current authorization state at each stage.
All this suggests the threat has a certain level of maturity; it’s not just an unusual delivery method, but something that behaves like a phishing kit.
To test that hypothesis, measure the scale of the problem, and be able to tell this threat apart from others, we performed a technical analysis of the samples and labeled the family Tykit (Typical phishing kit). Here’s what we found.
Key Takeaways
The first samples appeared in ANY.RUN’s Interactive Sandbox in May 2025, with peak activity observed in September–October 2025.
It mimics Microsoft 365 login pages, targeting corporate account credentials of numerous organizations.
The threat utilizes various evasion tactics like hiding code in SVGs or layering redirects.
The client-side code executes in several stages and uses basic anti-detection techniques.
The most affected industries include construction, professional services, IT, finance, government, telecom, real estate, education, and others, across the US, Canada, LATAM, EMEA, SE Asia, and the Middle East.
Discovery & Pivoting: How ANY.RUN Detected the Threat
Beginning with the analysis session in the ANY.RUN Sandbox, we quickly found the artifacts needed to expand the context:
The same SVG image was used for redirection (SHA256: a7184bef39523bef32683ef7af440a5b2235e83e7fb83c6b7ee5f08286731892).
Fig. 1 Redirecting SVG image
The fake Microsoft 365 login page was hosted on the domain loginmicr0sft0nlineeckaf[.]52632651246148569845521065[.]cc; the URL contained the parameter /?s=, which could be useful for further searching.
A POST request was sent to the server segy2[.]cc, targeting the URL /api/validate and containing data in the request body.
Detect threats faster with ANY.RUN’s Interactive Sandbox: see the full attack chain in seconds for immediate response
The result was encouraging: 189 related analysis sessions, most of them with a Malicious verdict. The earliest analysis containing the searched indicators dates back to May 7, 2025:
Bingo! The same activity was observed several months earlier: phishing campaigns featuring URLs with the /?s= parameter and requests sent to the server segy[.]cc, whose domain name is almost identical to the original one.
A search using domainName:”^segy.” revealed a few more related domains:
Fig. 4: Additional segy domains
With several hundred submissions recorded between May and October 2025, all sharing nearly identical patterns, this could hardly be a coincidence. Template-based infrastructure, identical attack scenarios, and a set of URLs resembling C2 API endpoints: could this be a phishing kit?
It was necessary to analyze the JavaScript code from the phishing pages to see whether there were any recurring elements across samples, how sophisticated the code was, how many execution stages it included, and whether it implemented any mechanisms to prevent analysis.
Catch attacks early with instant IOC enrichment in TI Lookup: power your proactive defense with data from 15K SOCs
Let’s look at another analysis session that reproduces the credentials-entry stage, a critical phase because most phishing kits reveal themselves fully at the point of exfiltration:
Step 1: SVG as the delivery vector
The attack vector remains an SVG image that redirects the browser. The image uses the same design, but this time includes a working check-stub that prompts the user to “Enter the last 4 digits of your phone number” (in reality any value is accepted).
Fig. 6: SVG file with the “check”
Step 2: Trampoline and CAPTCHA stage
After the check is submitted, the page redirects to a trampoline script, which then forwards the browser to the main phishing page.
The value of the s= parameter is the victim’s email encoded in Base64.
Fig. 7: Trampoline code that forwards to the main phishing page
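Decoding a captured s= value is a one-liner in Python; the address below is a made-up example, not an indicator from this campaign:

import base64
print(base64.b64decode("dmljdGltQGV4YW1wbGUuY29t").decode())  # -> victim@example.com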
Next, a page with a CAPTCHA loads; the site uses the Cloudflare Turnstile widget as anti-bot protection.
Fig. 8: Anti-bot protection on the phishing page using Cloudflare Turnstile
It’s worth noting that the client-side code includes basic anti-debugging measures, for example, it blocks key combinations that open DevTools and disables the context menu.
Fig. 9: Basic anti-debug protections in the page source
Step 3: Credential capture and C2 logic
After the CAPTCHA is passed, the page reloads and renders a fake Microsoft 365 sign-in page.
At the same time, a background POST request is sent to the C2 server at ‘/api/validate’. The request body contains JSON with the following fields:
“key”: a session key, or possibly a “license” key for the phishing kit.
“redirect”: the URL to which the victim should be redirected.
“email”: the victim’s email address, decoded; present if the s= parameter was populated earlier.
The logic for sending the request, validating the response, and retrieving the next stage of the payload is implemented in an obfuscated portion of the page; after deobfuscation, it looks like this:
Fig. 10: Logic for sending and validating the victim’s email
The C2 server responds with a JSON object that contains:
“status”: the C2 verdict — “success” or “error”.
“message”: the next stage, provided as HTML.
“data”: {“email”}: the victim’s email address.
The next stage presents the password-entry form. The returned HTML also embeds obfuscated JavaScript that implements the logic for exfiltrating the stolen credentials to the C2 endpoint ‘/api/login’ and for deciding the page’s next actions (for example: show a prompt “Incorrect password”, redirect the user to a legitimate site to hide the fraud, etc.).
A couple of notable snippets illustrate this behavior:
Fig. 11: Exfiltration of the victim’s login and password
The JSON sent in the POST /api/login request contains the following fields:
“key”: The key (see above for possible meaning).
“redierct”: The redirect URL (note the misspelling in the field name).
“token”: An authorization JWT. Notably, the sample token eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJiZjk5M2NkZS1mOTdiLTQyYTctODcxYy1lOTk1MDgzMmM5NjgiLCJleHAiOjE2OTkxNzc0NjF9.p9-OI0LCYcOjaU1I3TMZTjNSos50txbV3_Mi1jk1u8c decodes to an expired token; the exp claim is 1699177461, which corresponds to Sunday, November 5, 2023, 09:44:21 GMT (a quick decoding sketch follows this list).
“server”: The C2 server domain name.
“email”: The victim’s email address.
“password”: The victim’s password.
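The expired-token claim above is easy to verify: the token is the sample observed in the campaign, and decoding its payload segment is plain Base64URL with no signature check, so the Python standard library is enough:

import base64, json
from datetime import datetime, timezone

token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiJiZjk5M2NkZS1mOTdiLTQyYTctODcxYy1lOTk1MDgzMmM5NjgiLCJleHAiOjE2OTkxNzc0NjF9."
         "p9-OI0LCYcOjaU1I3TMZTjNSos50txbV3_Mi1jk1u8c")
payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)  # restore Base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload))
print(claims["exp"], datetime.fromtimestamp(claims["exp"], tz=timezone.utc))
# -> 1699177461 2023-11-05 09:44:21+00:00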
These fields are then used in the server’s response logic to control what the victim sees next and whether additional actions (debugging hooks, logging, further redirects) are triggered.
The response to the POST /api/login request is a JSON object with the following fields:
“status”: “success” | “info” | “error”
“d”: “<HTML payload to be shown to the user>”
“message”: “Text such as ‘Incorrect password’ when the user enters the wrong password”
“data”: { “email”: “<victim email>” }
Behavior depends on the value of status:
“success”: Render the HTML payload found in “d” to the user.
“info”: Send a (likely debugging) POST request to /x.php on the C2 server. The logic for this flow is shown in the figure below.
“error”: Display an error message (for example, “Incorrect password”).
Fig. 12: Decision logic after the /api/login request
At this point the execution chain of the phishing page ends. In sum, the page implements a fairly involved execution mechanism: the payload is obfuscated, there are basic (but nonetheless effective) anti-debugging measures, and the exfiltration logic runs through several staged steps.
Detection Rules for Identifying Tykit Activity
After analyzing the structure of the Tykit phishing payload and the requests sent during the attack, we developed a set of rules that allow detecting the threat at different stages of its implementation.
SVG files
Let’s start with the SVG images themselves. While embedding JavaScript in SVGs can enable legitimate functionality (for example, interactive survey forms, animations, or dynamic UI mockups), it’s frequently abused by threat actors to hide malicious payloads.
One common way to distinguish benign code from malicious is the presence of obfuscation: techniques that hinder triage and signature-based analysis by security tools and SOC analysts.
To improve detection rates for this vector (even without attributing samples to a specific actor), monitor for:
General signs of code obfuscation, e.g. frequent calls to atob(), parseInt(), charCodeAt(), fromCodePoint(), and generated variable names like var _0xABCDEF01 = …, often produced by tools such as Obfuscator.io.
Use of the unsafe eval() call, which executes arbitrary code.
Script logic that redirects or alters the current document; calls to window.location.* or manipulation of href attributes.
Below is a code snippet taken from an SVG used to load Tykit’s phishing page:
Fig. 13: Malicious redirect code from an SVG that loads the Tykit phishing page
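For teams that want to operationalize these checks, here is a minimal Python sketch that flags SVG files matching several of the indicators above; the pattern list and the two-hit threshold are illustrative assumptions, so tune them to your environment:

import re
import sys

# Heuristics mirroring the obfuscation and redirect indicators listed above.
SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",                          # direct execution of decoded code
    r"\batob\s*\(",                          # Base64 decoding of an embedded payload
    r"\bcharCodeAt\s*\(|\bfromCodePoint\s*\(",
    r"\bparseInt\s*\(",
    r"var\s+_0x[0-9a-fA-F]{4,}",             # Obfuscator.io-style variable names
    r"window\.location",                     # browser redirection
]

def suspicious_indicators(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        content = f.read()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, content)]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        hits = suspicious_indicators(path)
        if len(hits) >= 2:  # illustrative threshold
            print(f"{path}: suspicious ({len(hits)} indicator patterns matched)")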
Domains
In nearly all cases linked to Tykit, the operators used templated domain names. For exfiltration servers we observed domains matching the ^segy?.* pattern, for example:
segy[.]zip
segy[.]xyz
segy[.]cc
segy[.]shop
segy2[.]cc
For the main servers hosting the phishing pages, aside from abuse of cloud and object-storage services, the operators frequently registered domains that appear to be generated by a DGA (domain-generation algorithm). These domains match a pattern like: ^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$
To collect all IOCs and perform a detailed case analysis, run a TI Lookup query built around these domain patterns.
Finally, the main distinction between Tykit and many other phishing campaigns is the set of HTTP requests sent to the C2 that determine the next actions and handle exfiltration of victim data.
After analyzing the JavaScript used across samples, we identified the following requests:
GET /?s=<b64-encoded victim email>
A series of initial requests used to pass Cloudflare Turnstile and load the phishing page; the s parameter may be empty.
POST /api/validate
The first C2 request, used to validate the supplied email. The request body contains JSON with fields (see earlier):
“key”
“redirect”
“email”
The server responds with JSON containing:
“status”
“message” (next stage, as HTML)
“data”: {“email”}
POST /api/validate (variant)
A second variant of the validation request whose JSON body includes:
“key”
“redirect”
“token”
“server”
“email”
The response has the same structure as above.
POST /api/login
The data-exfiltration request. The JSON body contains:
“key”
“redierct” (sic — note the misspelling)
“token”
“server”
“email”
“password”
The response JSON instructs how to change the state of the phishing page and includes:
“status”
“d” (HTML payload to render)
“message”
“data”: {“email”}
POST /x.php
Likely a debugging/logging endpoint triggered when the previous /api/login response contains “status”: “info”. The JSON body includes:
“id”
“key”
“config”
The format of the server’s response to this request was not determined during the investigation.
Who’s Being Targeted
We collected several signals about the industries and countries targeted by Tykit.
Most affected countries and regions:
United States
Canada
South-East Asia
LATAM / South America
EU countries
Middle East
Targeted industries:
Construction
Professional services
IT
Agriculture
Commerce / Retail
Real estate
Education
Design
Finance
Government & military
Telecom
There are no unusual TTPs to call out; this is another wave of spearphishing aimed at stealing Microsoft 365 credentials, featuring a multi-stage execution chain and the capability for AitM interception.
Taken together, the wide geographic and industry spread and the TTPs matching standard phishing-kit behavior suggest the threat has been active for quite some time. It appears to be a typical PhaaS-style framework (hence the name TYpical phishKIT, or Tykit). Time will tell how it evolves.
How Tykit Affects Organizations
Tykit is a credential-theft campaign that targets Microsoft 365 accounts via a multi-stage phishing flow. Successful compromises can lead to:
Data exfiltration from mailboxes, drives, and connected SaaS apps.
Lateral movement inside environments where cloud identities map to internal resources.
AitM interception of MFA or session tokens, increasing the chance of bypassing second-factor protections.
Operational and reputational damage (incident response costs, regulatory exposure, loss of client trust).
Sectors at higher risk reflect the campaign’s targeting: construction, professional services, IT, finance, government, telecom, real estate, education, and others across the US, Canada, LATAM, EMEA, SE Asia, and the Middle East.
How to Prevent Tykit Attacks
Tykit doesn’t reinvent phishing, but it shows how small technical tweaks, like hiding code in SVGs or layering redirects, can make attacks harder to catch. Still, with better visibility and the right tools, teams can stop it before credentials are stolen.
Strengthen email and file security
SVG files may look safe but can hide JavaScript that executes in the browser. Ensure your security gateway actually inspects SVG content, not just extensions. Use sandbox detonation and Content Disarm & Reconstruction (CDR) to uncover hidden payloads. The ANY.RUN Sandbox is particularly effective for detonating such files and exposing their redirects, scripts, and network calls in seconds.
Use phishing-resistant MFA
Tykit highlights how traditional MFA can be bypassed. Switch to phishing-resistant methods like FIDO2 or certificate-based MFA, disable legacy protocols, and enforce Conditional Access in Microsoft 365. Reviewing OAuth app consents and token lifetimes regularly helps minimize exposure.
Monitor for key indicators
Watch for outbound requests to domains such as segy* or loginmicr(o|0)s.*.cc, and POST requests to /api/validate, /api/login, or /x.php. ANY.RUN’s Threat Intelligence Lookup can quickly connect these IOCs to other related phishing activity, giving analysts context in minutes.
Automate detection and threat hunting
Configure your SIEM or XDR to alert on suspicious Base64 query parameters (like /?s=) or requests following Tykit’s structure. Integrating ANY.RUN’s Threat Intelligence Feeds ensures that new indicators (fresh domains, hashes, and URL patterns) are automatically available for detection.
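As a starting point, here is a minimal Python sketch of such a rule, suitable for enriching proxy or web-gateway logs; the function signature, field handling, and exact patterns are illustrative assumptions distilled from the indicators described in this article:

import base64
import binascii
import re

C2_PATH_RE = re.compile(r"^/(api/(validate|login)|x\.php)$")
DOMAIN_RE = re.compile(r"(^|\.)segy\d*\.|^loginmicr(o|0)s.*\.[a-z]*\d+\.cc$")

def is_tykit_like(host, path, query):
    """Flag HTTP requests that match Tykit-style phishing-page or C2 patterns."""
    if DOMAIN_RE.search(host) or C2_PATH_RE.match(path):
        return True
    # A /?s= parameter carrying a Base64-encoded email address is another signal.
    if query.startswith("s="):
        value = query[2:]
        try:
            decoded = base64.b64decode(value + "=" * (-len(value) % 4)).decode("utf-8", "ignore")
        except (binascii.Error, ValueError):
            return False
        return "@" in decoded
    return False

print(is_tykit_like("segy2.cc", "/api/validate", ""))                        # True
print(is_tykit_like("example.com", "/login", "s=dGVzdEBleGFtcGxlLmNvbQ=="))  # True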
Educate and respond fast
Regular awareness training helps users recognize that even “image” files can trigger phishing chains. If an incident occurs, isolate affected accounts, revoke sessions, and reset credentials.
Using ANY.RUN’s Interactive Sandbox during incident response can accelerate this process: analysts can safely replay the infection chain, confirm what data was exfiltrated, and extract accurate IOCs within minutes. This shortens MTTR and helps strengthen detections for the next wave of similar campaigns.
Conclusion: Lessons from a “Typical” Phishing Kit
We reviewed another sobering example of how phishing remains front and center in the cyber-threat landscape, and how regularly new tools appear to carry out these attacks, each one differing from its predecessors in some way.
We labeled this example Tykit, examined its technical details, and derived several detection and hunting rules that, taken together, will help detect new samples and monitor the campaign’s evolution.
Tykit doesn’t include a full arsenal of evasion and anti-detection techniques, but, like its more mature counterparts, it implements AitM-style data interception and methods to bypass multi-factor protections. It also relies on a quasi-distributed network architecture: servers are assigned dynamic domain names and roles are separated between “delivery” and “exfiltration.”
Empowering Faster Analysis with ANY.RUN
Investigating campaigns like Tykit can be time-consuming, from detecting a single suspicious SVG to uncovering the entire phishing infrastructure behind it. ANY.RUN helps analysts turn hours of manual work into minutes of interactive analysis.
Here’s how:
See the full attack chain in under 60 seconds. Detonate SVGs, phishing pages, or any other file type in real time and instantly observe redirects, scripts, and payload execution.
Reduce investigation time. With live network mapping, script deobfuscation, and dynamic IOCs, analysts can skip static triage and focus directly on what matters.
Cut MTTR by more than 20 minutes per case. Quick visibility into C2 communications, credential-capture logic, and data exfiltration flows allows teams to respond faster and with higher confidence.
Boost proactive defense. Using ANY.RUN Threat Intelligence Lookup, SOC teams can pivot from a single domain or hash to hundreds of related submissions, revealing shared infrastructure and campaign patterns to enrich detection rules for catching future attacks.
Strengthen detections with fresh intelligence. Automatically enrich your security tools with new indicators via TI Feeds sourced from live sandbox analyses and community contributions.
For SOC teams, MSSPs, and threat researchers, ANY.RUN provides the speed, depth, and context needed to stay ahead of campaigns like Tykit, and the next one that follows.
Over 500,000 cybersecurity professionals and 15,000+ companies in finance, manufacturing, healthcare, and other sectors rely on ANY.RUN to streamline malware investigations worldwide.
Speed up triage and response by detonating suspicious files in ANY.RUN’s Interactive Sandbox, observing malicious behavior in real time, and gathering insights for faster, more confident security decisions. Paired with Threat Intelligence Lookup and Threat Intelligence Feeds, it provides actionable data on cyberattacks to improve detection and deepen your understanding of evolving threats.
Tykit Analysis: New Phishing Kit Stealing Hundreds of Microsoft Accounts in Finance (2025-10-21)