ANY.RUN & ThreatQ: Boost Detection Rate, Turbocharge Response Speed 

Fresh, actionable IOCs from the latest malware attacks are now available to all security teams using the ThreatQ TIP. ANY.RUN’s Threat Intelligence Feeds integrate seamlessly with the platform, enabling SOCs and MSSPs to boost detection rates, expand threat coverage, and streamline response.  

Here’s how you can benefit from it. 

Real-Time Visibility of the Current Threat Landscape 

As a leading Threat Intelligence Platform (TIP) widely adopted in enterprise SOCs, ThreatQ serves as a powerful indicator management solution, facilitating response. 

TI Feeds are extracted from live sandbox analyses of the latest threats 

ANY.RUN’s Threat Intelligence Feeds connector for ThreatQ provides a real-time stream of fresh, filtered, low-noise network IOCs sourced from sandbox investigations of the latest attacks, conducted by 15,000+ companies and 500,000+ analysts.  

Follow our guide to implement TI Feeds in your SOC → 

With STIX/TAXII support, TI Feeds can be made a part of SOCs’ security infrastructure without additional custom development or costs, helping them maximize the value of their existing ThreatQ setup. As a result, security teams can achieve: 

  • Early Detection: IOCs are added to TI Feeds as soon as they emerge from live sandbox analyses, enabling proactive identification of new threats in your SOC. 
  • Expanded Threat Coverage: 99% unique indicators from global attacks (e.g., phishing, malware) provide visibility into threats traditional feeds miss. 
  • Informed Response: Each IOC comes enriched with a link to a sandbox report showing the full attack detonated on a live system, giving SOCs actionable context for fast mitigation. 
  • Reduced Workload: Indicators are pre-filtered for malicious activity, cutting the Tier 1 analysis time spent on false positives. 
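Because the feed is delivered as STIX 2.1 objects over TAXII, downstream tooling can consume it with very little code. Below is a minimal, illustrative sketch of pulling network IOCs out of STIX indicator objects; the sample objects are invented for the example, and real feed content will carry additional fields.

```python
import re

# A simple STIX 2.1 pattern looks like: [domain-name:value = 'bad.example']
# This regex handles the basic single-comparison patterns used for network IOCs.
PATTERN_RE = re.compile(
    r"\[(domain-name|ipv4-addr|url):value\s*=\s*'([^']+)'\]"
)

def extract_network_iocs(stix_objects):
    """Return (ioc_type, value) pairs from STIX indicator objects."""
    iocs = []
    for obj in stix_objects:
        if obj.get("type") != "indicator":
            continue  # skip malware, relationship, and other object types
        match = PATTERN_RE.search(obj.get("pattern", ""))
        if match:
            iocs.append((match.group(1), match.group(2)))
    return iocs

# Hypothetical sample objects, shaped like a TAXII poll response
sample = [
    {"type": "indicator", "pattern": "[domain-name:value = 'bad.example']"},
    {"type": "indicator", "pattern": "[ipv4-addr:value = '203.0.113.7']"},
    {"type": "malware", "name": "some-family"},
]

print(extract_network_iocs(sample))
# [('domain-name', 'bad.example'), ('ipv4-addr', '203.0.113.7')]
```

In practice a TAXII client library handles authentication and paging; the parsing step above stays the same regardless of how the objects are fetched.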

Expand threat coverage. Slash MTTR. Identify incidents early. Try TI Feeds in your SOC and see instant results. 



Request access now 


How SOC Teams Can Use TI Feeds: Case Example 

TI Feeds help SOCs boost key security metrics  

Threat Intelligence Feeds improve and simplify the core operations of security teams, delivering measurable results. Here’s a possible case for using our fresh threat intelligence to spot and contain attacks: 

  • Easy Setup for Fast IOC Delivery 
    Connect ANY.RUN’s TI Feeds to ThreatQ via STIX/TAXII in minutes. Choose hourly, daily, or custom schedules to get real-time IOCs from global incidents, keeping your SOC ahead of new threats
  • Power SOC Analysis with Actionable Data 
    ANY.RUN’s TI Feeds flow into ThreatQ, providing fresh IOCs to analyze alerts, investigate incidents, or enrich SIEM/EDR systems. This speeds up threat detection and strengthens your defense strategy
  • Streamline Response and Prevention 
    Use ANY.RUN’s IOCs in ThreatQ to automate threat blocking, isolate risks, or enhance playbooks and visualizations. SOC analysts and threat hunters can respond faster and prevent attacks, saving time and reducing breach risks

How to Implement  

The connector operates through the STIX/TAXII protocol, allowing clients to configure feed schedules within ThreatQ’s flexible options: hourly, every 6 hours, daily, bi-daily, bi-weekly, or monthly updates. 

ThreatQ leverages ANY.RUN data for real-time or scheduled analysis as a malicious indicator source for alert and incident investigation. With additional connectors, organizations can optionally forward intelligence to their SIEM/EDR systems. 

Workflow Capabilities 

Depending on configuration settings, the system supports: 

  • Manual or automated response actions (isolation, blocking, escalation) 
  • Investigation enrichment and new rule/playbook configuration 
  • Advanced visualizations for analysts and threat hunters 

Quick Setup Guide with 5 Easy Steps: 

1. Open ThreatQ and click My Integrations in the Integrations tab. 

2. Click Add New Integration. 

3. Configure the TAXII connection: go to the Add New TAXII Feed tab and fill out the configuration form.

For detailed information, see ANY.RUN’s TAXII connection documentation.  

4. After adding the TAXII feed, click the settings button on the created connector card, set the switch to Enabled, and you’re all set. 

5. After finalizing the configuration, use the retrieved indicators to: 

  • Export them to SIEM/SOAR to automate detection and blocking of threats 
  • Prioritize high-risk threats to stay focused on the most critical incidents 
  • Combine them with data from other sources to gain full visibility into attacks 
  • Enrich and accelerate threat hunting and investigations with actionable intelligence 
  • Launch playbooks for automated response to threats.  
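As a concrete reading of the SIEM/SOAR export step above, here is a hedged sketch of turning retrieved indicators into a deduplicated CSV blocklist that most SIEM tools can ingest as a lookup table. The column names and sample rows are illustrative, not a prescribed schema.

```python
import csv
import io

def to_blocklist_csv(indicators):
    """indicators: iterable of (ioc_type, value, source_link) tuples.

    Returns CSV text with duplicates (e.g., across feed pulls) removed.
    """
    seen = set()
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["type", "value", "reference"])
    for ioc_type, value, ref in indicators:
        if value in seen:  # drop duplicate IOC values
            continue
        seen.add(value)
        writer.writerow([ioc_type, value, ref])
    return buf.getvalue()

# Hypothetical rows; the reference link would point to the sandbox report
feed = [
    ("domain", "bad.example", "https://app.any.run/..."),
    ("ip", "203.0.113.7", "https://app.any.run/..."),
    ("domain", "bad.example", "https://app.any.run/..."),  # duplicate, skipped
]
print(to_blocklist_csv(feed))
```

Keeping the sandbox-report reference alongside each value preserves the investigation context when an analyst later pivots from a SIEM hit.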

About ANY.RUN  

ANY.RUN is trusted by more than 500,000 cybersecurity professionals and 15,000+ organizations across finance, healthcare, manufacturing, and other critical industries. Our platform helps security teams investigate threats faster and with more clarity.   

Speed up incident response with our Interactive Sandbox: analyze suspicious files in real time, observe behavior as it unfolds, and make faster, more informed decisions.   

Strengthen detection with Threat Intelligence Lookup and TI Feeds: give your team the context they need to stay ahead of today’s most advanced threats.   

Want to see it in action? Start your 14-day trial of ANY.RUN today →  

The post ANY.RUN & ThreatQ: Boost Detection Rate, Turbocharge Response Speed  appeared first on ANY.RUN’s Cybersecurity Blog.


IR Trends Q3 2025: ToolShell attacks dominate, highlighting criticality of segmentation and rapid response


Threat actors predominantly exploited public-facing applications for initial access this quarter, with this tactic appearing in over 60 percent of Cisco Talos Incident Response (Talos IR) engagements – a notable increase from less than 10 percent last quarter. This spike is largely attributable to a wave of engagements involving ToolShell, an attack chain that targets on-premises Microsoft SharePoint servers through exploitation of vulnerabilities that were publicly disclosed in July. We also saw an increase in post-exploitation phishing campaigns launched from compromised valid accounts this quarter, a trend we noted last quarter, with threat actors using this technique to expand their attacks both within the compromised organizations and to external partner entities.  

Ransomware incidents made up only approximately 20 percent of engagements this quarter, a decrease from 50 percent last quarter, despite ransomware remaining one of the most persistent threats to organizations. Talos IR responded to Warlock, Babuk, and Kraken ransomware variants for the first time, while also responding to previously seen families Qilin and LockBit. We observed an attack we attributed with moderate confidence to the threat actor that Microsoft tracks as China-based group Storm-2603 based on overlapping tactics, techniques, and procedures (TTPs). As part of their attack chain, the actors leveraged open-source digital forensics and incident response (DFIR) platform Velociraptor for persistence, a tool that has not been previously seen in ransomware attacks or associated with Storm-2603. We also responded to more Qilin ransomware engagements than last quarter, supporting our assessment from last quarter that the threat group is likely accelerating the cadence of their attacks.

ToolShell attacks underscore importance of robust segmentation and rapid patching 

As mentioned above, threat actors exploited public-facing applications for initial access in over 60 percent of engagements this quarter. Almost 40 percent of all engagements involved ToolShell activity, contributing significantly to this tactic’s rise in prevalence.  

Starting in mid-July 2025, threat actors began actively exploiting two path traversal vulnerabilities affecting on-premises SharePoint servers: CVE-2025-53770 and CVE-2025-53771. These two vulnerabilities are related to CVE-2025-49704 and CVE-2025-49706, which had been previously featured in Microsoft Patch Tuesday updates in early July. One of the key features of the older vulnerabilities was that the adversary needed to be authenticated to obtain a valid signature by extracting the ValidationKey from memory or configuration. With the discovery of the newer vulnerabilities, attackers managed to eliminate the need to be authenticated to obtain a valid signature, resulting in unauthenticated remote code execution. 

This quarter’s ToolShell activity highlights the importance of network segmentation, as attackers demonstrated how they can exploit poorly segmented environments once a single server is compromised to move laterally within a targeted network. For example, in one engagement, the victim organization was impacted by ToolShell exploitation against a SharePoint server, then experienced a ransomware attack a few weeks later. In the latter attack, Talos IR analysis indicated the actors transferred credential stealing malware from the affected public-facing SharePoint server to a SharePoint database server on the victim’s internal network, demonstrating how they leveraged the trusted relationship between the two servers to expand their foothold. 

The wave of ToolShell attacks also shows how quickly threat actors mobilize when significant zero-day vulnerabilities are disclosed and/or proof-of-concepts appear. Active exploitation of the ToolShell vulnerabilities was first observed in the wild on July 18, a day before Microsoft issued its emergency advisory. Almost all Talos IR engagements responding to ToolShell activity kicked off within the following 10 days. Automated scanning enables attackers to rapidly discover and exploit vulnerable hosts while defenders race to test and deploy patches across diverse environments. Patching as soon as possible is key in narrowing that window of exposure, in addition to building safeguards through robust segmentation as mentioned above.

Post-exploitation phishing attacks from compromised accounts persist 

Consistent with findings from last quarter, threat actors continued to launch phishing campaigns after their initial compromise by leveraging compromised internal email accounts to expand their attack both within the compromised organization as well as externally to partner entities. This tactic appeared in a third of all engagements this quarter, an increase from last quarter’s 25 percent. Last quarter, we predominantly saw this tactic used when phishing was also used for initial access. This quarter, however, we also saw it appear in engagements where other methods, such as valid accounts, were used for initial access. 

The follow-on phishing campaigns were primarily oriented towards credential harvesting. For example, in one engagement, the adversary used a compromised Microsoft Office 365 account to send almost 3,000 emails to internal and external partners. To evade detection, the adversary modified the email management rules to hide the sent phishing emails and any replies. Almost 30 employees of the targeted organization received the adversary’s phishing email and at least three clicked on the malicious credential harvesting link that was included; it is unknown how many users at external organizations were impacted. In another engagement, the adversary used a compromised email account to send internal phishing emails containing a link that directed to a credential harvesting page. The malicious site mimicked an Office 365 login page that was configured to redirect to the targeted organization’s legitimate login page upon the user entering their credentials, enhancing the attack’s legitimacy.   

Looking forward, as defenses against phishing attacks improve, adversaries are seeking ways to enhance these emails’ legitimacy, likely leading to the increased use of compromised accounts post-exploitation that we have observed recently. Defenders should seek to improve identification and protection capabilities against internal phishing campaigns, with actions such as providing stronger authentication methods for users’ email accounts, enhancing analysis of users’ email patterns and notifying on anomalies, and improving user awareness training.

Ransomware trends 

Ransomware incidents made up approximately 20 percent of engagements this quarter, a decrease from 50 percent last quarter, though we assess this dip is likely not indicative of any larger downward trend in the ransomware threat environment. Talos IR responded to Warlock, Babuk and Kraken ransomware variants for the first time, while also responding to previously seen families Qilin and LockBit.

Open-source DFIR tool Velociraptor adopted into ransomware toolkit  

We responded to a ransomware engagement this quarter that we assessed with moderate confidence was attributable to the Storm-2603 threat group based on overlapping TTPs, such as the deployment of both LockBit and Warlock ransomware. Storm-2603 is a suspected China-based threat actor that was first seen in July 2025 when they engaged in ToolShell activity. While LockBit is widely deployed by various ransomware actors, Warlock was first advertised in June 2025 and has since been heavily used by Storm-2603. Notably, we also observed evidence of Babuk ransomware files on the customer’s network in this engagement; Babuk has not previously been deployed by Storm-2603 according to public reporting, and in this case it failed to encrypt and only renamed files. The incident severely impacted the customer’s IT environment, including connected Operational Support Systems (OSS), a critical component of telecommunication infrastructure that allows for remote management and monitoring of day-to-day operations. 

We discovered the actors installed an older version of open-source DFIR platform Velociraptor on five servers to maintain persistence and launched the tool several times even after the host was isolated. Velociraptor is a legitimate tool that we have not previously observed being abused in ransomware attacks. It is a free product designed to help with investigations, data collection, and remediation during and after security incidents and it provides real-time or near real-time visibility into the activities occurring on monitored endpoints. The version of Velociraptor observed in this incident was outdated (version 0.73.4.0) and exposed to a privilege escalation vulnerability (CVE-2025-6264), which may have been leveraged for persistence as this vulnerability can lead to arbitrary command execution and endpoint takeover. The addition of this tool in the ransomware playbook is in line with findings from Talos’ 2024 Year in Review, which highlights the increasing variety of commercial and open-source products leveraged by threat actors.

Qilin ransomware operators likely accelerate pace of attacks    

We saw an increased number of Qilin ransomware engagements kick off this quarter compared to last quarter, when we encountered it for the first time. We predicted last quarter the group was accelerating their operational tempo, based on an increase in disclosures on their data leak site since February 2025. We observed Qilin operators use TTPs consistent with last quarter, including valid compromised credentials for initial access, a binary encryptor customized to the victim, and file transfer tool CyberDuck for exfiltration. In one Qilin engagement this quarter we were able to determine the adversaries’ dwell time as well, finding that the ransomware was executed two days after the attack first began. The steady increase in Qilin activity indicates it will very likely continue to be a top ransomware threat through at least the remainder of 2025, pending any disruption or intervention.

Targeting 

For the first time since we began documenting analysis of Talos IR engagements in 2021, public administration was the most targeted industry vertical this quarter. Though it hasn’t been the top targeted vertical before, it is often amongst the most seen, making this observation not entirely unexpected. Organizations within the public administration sector are attractive targets as they are often underfunded and still using legacy equipment. Additionally, the organizations targeted this quarter were largely local governments, which also typically oversee and support public school districts and county-run hospitals or clinics. As such, these entities often have access to sensitive data as well as a low downtime tolerance, making them attractive to financially motivated and espionage-focused threat groups, both of which we observed during these engagements.


Initial access 

As mentioned, the most observed means of gaining initial access this quarter was exploitation of public-facing applications, largely due to ToolShell activity. Other observed means of achieving initial access included phishing, valid accounts, and drive-by compromise.


Recommendations for addressing top security weaknesses


Implement detections to identify MFA abuse and strong MFA policies for impossible travel scenarios 

Almost a third of engagements this quarter involved multifactor authentication (MFA) abuse, including MFA bombing and MFA bypass — a slight decrease from approximately 40 percent last quarter. MFA bombing, also known as an MFA fatigue attack, involves an attacker repeatedly sending MFA requests to a user’s device, aiming to overwhelm them into inadvertently approving an unauthorized login attempt. MFA bypass encompasses a range of techniques leveraged by attackers to circumvent or disable MFA mechanisms and gain unauthorized access. Talos IR recommends defenders implement detections to identify when MFA has been bypassed, such as deploying products that use behavior analytics to identify new device logins and policies to generate alerts when detected.  

Talos IR also encountered numerous engagements this quarter that involved impossible travel scenarios, and recommends organizations implement strong MFA policies triggered when these are detected. An example of an impossible travel scenario would be if a user logs into their account from New York, then the adversary logs into the same account three minutes later from Tokyo.   
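The impossible-travel check described above amounts to a simple speed calculation between consecutive logins. The sketch below is illustrative: the 1,000 km/h threshold and the coordinates are assumptions for the example, and production detections should also account for VPN egress points and mobile networks.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(login_a, login_b, max_kmh=1000):
    """login: (datetime, lat, lon). True if implied speed exceeds max_kmh."""
    t1, lat1, lon1 = login_a
    t2, lat2, lon2 = login_b
    hours = abs((t2 - t1).total_seconds()) / 3600
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# The scenario from the text: New York login, then Tokyo three minutes later
ny = (datetime(2025, 9, 1, 12, 0), 40.71, -74.01)
tokyo = (datetime(2025, 9, 1, 12, 3), 35.68, 139.69)
print(is_impossible_travel(ny, tokyo))  # True: roughly 10,800 km in 3 minutes
```

A flagged pair would then feed the MFA policy engine (step-up authentication, session revocation) rather than block outright, since geolocation data is noisy.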

Configure centralized logging capabilities across the environment  

Insufficient logging hindered investigation and response in approximately a third of engagements, a slight increase from 25 percent last quarter, due to issues such as log retention limitations, logs that were encrypted or deleted during attacks, and lack of logs due to disablement by the adversary. Understanding the full context and chain of events performed by an adversary on a targeted host is vital not only for remediation but also for enhancing defenses and addressing any system vulnerabilities for the future. Talos IR recommends that organizations implement a Security Information and Event Management (SIEM) solution for centralized logging. In the event an adversary deletes or modifies logs on the host, the SIEM will contain the original logs to support forensic investigation. 

Conduct robust patch management  

Finally, vulnerable/unpatched infrastructure was exploited in approximately 15 percent of engagements this quarter. Targeted infrastructure included unpatched development servers and unpatched SharePoint servers that remained vulnerable weeks after the ToolShell patches were released — we did not include SharePoint servers that were vulnerable before the release of the patches in this category. Exploitation of vulnerable infrastructure enabled adversaries’ lateral movement, emphasizing the importance of patch management.

Top-observed MITRE ATT&CK techniques

The table below represents the MITRE ATT&CK techniques observed in this quarter’s Talos IR engagements. Given that some techniques can fall under multiple tactics, we grouped them under the most relevant tactic in which they were leveraged. Please note this is not an exhaustive list.  

Key findings from the MITRE ATT&CK framework include:  

  • Related to the internal phishing campaigns observed this quarter, we saw adversaries leveraging email hiding rules in numerous engagements, hiding certain inbound and outbound emails in the compromised user’s mailbox to evade detection. We also saw user execution of malicious links that directed to credential harvesting pages in these campaigns.
  • We observed web shells deployed for persistence in the ToolShell engagements this quarter. The most observed web shell, “spinstall0.aspx”, was used to extract sensitive cryptographic keys from compromised servers.

| Tactic | Technique | Example |
|---|---|---|
| Reconnaissance (TA0043) | T1595.002 Active Scanning: Vulnerability Scanning | It is likely the majority of vulnerable SharePoint servers targeted in the ToolShell engagements this quarter were identified via adversaries’ active scanning methods. |
| Initial Access (TA0001) | T1190 Exploit Public-Facing Application | Adversaries may exploit a vulnerability to gain access to a target system. |
| | T1598.003 Phishing for Information: Spearphishing Link | Adversaries may send spearphishing messages with a malicious link to elicit sensitive information that can be used during targeting. |
| | T1078 Valid Accounts | Adversaries may use compromised credentials to access valid accounts during their attack. |
| | T1189 Drive-by Compromise | Adversaries may gain access to a system through a user visiting a website over the normal course of browsing. |
| Execution (TA0002) | T1204.001 User Execution: Malicious Link | An adversary may rely upon a user clicking a malicious link in order to gain execution. Users may be subjected to social engineering to get them to click on a link that will lead to code execution. |
| | T1059.001 Command and Scripting Interpreter: PowerShell | Adversaries may abuse PowerShell to execute commands or scripts throughout their attack. |
| | T1078 Valid Accounts | Adversaries may obtain and abuse credentials of existing accounts to access systems within the network and execute their payload. |
| | T1021.004 Remote Services: SSH | Adversaries may use valid accounts to log into remote machines using Secure Shell (SSH). The adversary may then perform actions as the logged-on user. |
| Persistence (TA0003) | T1505.003 Server Software Component: Web Shell | Adversaries may backdoor web servers with web shells to establish persistent access to systems. |
| | T1136 Create Account | Adversaries may create an account to maintain access to victim systems. |
| | T1053 Scheduled Task/Job | Adversaries may abuse task scheduling functionality to facilitate initial or recurring execution of malicious code. |
| | T1021.001 Remote Services: Remote Desktop Protocol | Adversaries may use valid accounts to log into a computer via RDP, then perform actions as the logged-on user. |
| | T1078 Valid Accounts | The adversary may compromise a valid account to move through the network to additional systems. |
| | T1547.001 Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder | Adversaries may achieve persistence by adding a program to a startup folder or referencing it with a Registry run key, causing the referenced program to be executed when a user logs in. |
| Defense Evasion (TA0005) | T1564.008 Hide Artifacts: Email Hiding Rules | Adversaries may use email rules to hide inbound or outbound emails in a compromised user’s mailbox. |
| | T1562 Impair Defenses | Adversaries may maliciously modify components of a victim environment in order to hinder or disable defensive mechanisms. |
| | T1070 Indicator Removal | Adversaries may delete or modify artifacts generated within systems to remove evidence of their presence or hinder defenses. |
| Credential Access (TA0006) | T1111 Multi-Factor Authentication Interception | Adversaries may target MFA mechanisms (e.g., smart cards, token generators) to gain access to credentials that can be used to access systems, services, and network resources. |
| | T1621 Multi-Factor Authentication Request Generation | Adversaries may attempt to bypass MFA mechanisms and gain access to accounts by generating MFA requests sent to users. |
| | T1110.003 Brute Force: Password Spraying | Adversaries may use a single or small list of commonly used passwords against many different accounts to attempt to acquire valid account credentials. |
| Discovery (TA0007) | T1078 Valid Accounts | An adversary may use compromised credentials for reconnaissance against principal accounts. |
| | T1083 File and Directory Discovery | Adversaries may enumerate files and directories or may search in specific locations of a host or network share for certain information within a file system. |
| | T1087 Account Discovery | Adversaries may attempt to get a listing of valid accounts, usernames, or email addresses on a system or within a compromised environment. |
| | T1135 Network Share Discovery | Adversaries may look for folders and drives shared on remote systems as a means of identifying sources of information to gather. |
| Lateral Movement (TA0008) | T1021.001 Remote Services: Remote Desktop Protocol | Adversaries may use valid accounts to log into a computer using the Remote Desktop Protocol (RDP). The adversary may then perform actions as the logged-on user. |
| | T1033 System Owner/User Discovery | Adversaries may attempt to identify the primary user, the currently logged-in user, the set of users that commonly use a system, or whether a user is actively using the system. |
| Command and Control (TA0011) | T1219 Remote Access Tools | An adversary may use legitimate remote access tools to establish an interactive command and control channel within a network. |
| | T1071.001 Application Layer Protocol: Web Protocols | Adversaries may communicate using application layer protocols associated with web traffic to avoid detection/network filtering by blending in with existing traffic. |
| Exfiltration (TA0010) | T1059.001 Command and Scripting Interpreter: PowerShell | Adversaries may abuse PowerShell commands and scripts. |
| Impact (TA0040) | T1486 Data Encrypted for Impact | Adversaries may use ransomware to encrypt data on a target system. |
| Software/Tool | S0029 PsExec | Free Microsoft tool that can remotely execute programs on a target system. |
| | S0591 ConnectWise | A legitimate remote administration tool that has been used by threat actors since at least 2016. |
| | S0638 Babuk | Babuk is a Ransomware-as-a-Service (RaaS) malware that has been used since at least 2021. The operators of Babuk employ a “Big Game Hunting” approach, targeting major enterprises, and operate a leak site to post stolen data as part of their extortion scheme. |
| | S1199 LockBit | LockBit is an affiliate-based RaaS that has been in use since at least June 2021. LockBit has versions capable of infecting Windows and VMware ESXi virtual machines, and has been observed targeting multiple industry verticals globally. |

Cisco Talos Blog – Read More

No Threats Left Behind: SOC Analyst’s Guide to Expert Triage 

A SOC is where every second counts. Amid a flood of alerts, false positives, and never enough time, analysts face the daily challenge of identifying what truly matters — before attackers gain ground. 

That’s where alert triage comes in: the essential first step in detecting, prioritizing, and responding to threats efficiently. Done right, it defines the overall effectiveness of a SOC or MSSP and determines how well an organization can defend itself.

Spoiler Alert About Alerts 

Here’s your spoiler for today: good triage starts with great threat intelligence. 

ANY.RUN’s Threat Intelligence Lookup doesn’t just enrich alerts — it rewrites the rules of triage by turning scattered IOCs into instant context. But we’ll get there. Let’s start from the analyst’s desk, where the real noise begins. 

ANY.RUN’s Threat Intelligence Lookup: check IOCs and instantly find out everything worth knowing 

Why Triage Is the Heartbeat of the SOC 

Behind every successful SOC, there’s a smooth triage flow that keeps chaos under control. It’s not just about filtering alerts. It’s about shaping the SOC’s rhythm and resilience. 

When analysts perform triage effectively: 

  • They build the first and strongest defense layer against real attacks. 
  • They ensure human attention is spent where it matters most. 
  • They create a foundation for accurate detection and response metrics like MTTD and MTTR. 
  • They make security predictable and measurable, not reactive and random. 

The Daily Puzzle: Making Sense of a Thousand Pings 

The challenge is not a lack of data — it’s too much of it. The toughest barriers to effective triage include: 

  • Alert overload — When every ping demands attention, focus becomes the first casualty. 
  • False positives — Automation can cry wolf more often than it should. 
  • Threat complexity — Today’s attackers employ sophisticated techniques designed to evade detection. 
  • Context gaps — An IP is just an IP until you know its story. 
  • Time compression — Analysts often have seconds, not minutes, to make judgment calls. 
  • Data silos — TI feeds, SIEMs, and sandboxes don’t always talk to each other. 

The result? Valuable threats risk getting buried under a pile of meaningless noise.

Speed, Precision, and the Numbers That Matter 

In triage, speed without accuracy is chaos, and accuracy without speed is a luxury. That’s why SOCs track their efficiency through key metrics. KPIs aren’t just for bosses — they’re your triage compass. Track these to benchmark progress and spot bottlenecks: 

| KPI | Description | Target Benchmark | Why It Matters for Triage |
|---|---|---|---|
| Mean Time to Detect (MTTD) | Average time from threat emergence to alert generation. | | Measures triage speed in spotting signals amid noise. |
| Mean Time to Respond (MTTR) | Time from alert to containment/remediation. | | Highlights routing efficiency: faster triage feeds faster responses. |
| False Positive Rate | Percentage of alerts dismissed as non-threats. | | Low rates mean better prioritization; high ones signal fatigue. |
| Alert Closure Rate | Alerts triaged per analyst per shift. | 50–100 | Gauges productivity without burnout. |
| Escalation Rate | Percentage of alerts bumped to higher tiers. | | Reflects triage accuracy: fewer escalations mean an empowered Tier 1. |
| Wrong Verdict Rate | Misclassified alerts (internal audit). | | Tracks skill gaps; aim for continuous improvement via training. |

 
High-performing SOCs balance speed and certainty by using intelligence enrichment to cut decision time without cutting quality. Those KPIs are not just numbers; they’re the story of how well your triage works. 
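As an illustration of how the first two KPIs are derived, the sketch below computes MTTD and MTTR from per-incident timestamps; the sample data is invented for the example.

```python
from datetime import datetime, timedelta

def mean_delta(pairs):
    """Average timedelta across (start, end) timestamp pairs."""
    total = sum(((end - start) for start, end in pairs), timedelta())
    return total / len(pairs)

# Hypothetical incidents: (threat emerged, alert generated, contained)
incidents = [
    (datetime(2025, 9, 1, 10, 0), datetime(2025, 9, 1, 10, 30), datetime(2025, 9, 1, 12, 0)),
    (datetime(2025, 9, 2, 9, 0),  datetime(2025, 9, 2, 9, 10),  datetime(2025, 9, 2, 10, 0)),
]

# MTTD: emergence -> alert; MTTR: alert -> containment
mttd = mean_delta([(emerged, alerted) for emerged, alerted, _ in incidents])
mttr = mean_delta([(alerted, contained) for _, alerted, contained in incidents])
print(mttd, mttr)  # 0:20:00 1:10:00
```

Tracking these per week or per analyst tier is what turns the benchmark table above into an actual trend line.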
 

From Metrics to Meaning: Why Triage Drives Business Outcomes 

Triage might look like a technical process, but its impact is strategic. Understanding how your triage work supports broader business objectives helps you make better decisions and communicate your value effectively. 
 
For SOCs and MSSPs, efficient triage is a business differentiator: 

  • Fewer false positives mean less analyst burnout and higher client capacity. 
  • Faster incident validation means better SLA performance and client trust. 
  • Smarter prioritization reduces wasted time and investigation costs. 
  • Structured triage data improves long-term visibility and readiness. 

In short, triage is where operational efficiency meets customer confidence — and where the SOC’s reputation is quietly built every day.

Turning Alerts into Insight: How ANY.RUN TI Lookup Changes the Game 

ANY.RUN’s Threat Intelligence Lookup is a comprehensive threat intelligence service that provides instant access to detailed information about files, URLs, domains, and IP addresses. It enables analysts to explore IOCs, IOBs, and IOAs using over 40 search parameters, basic search operators, and wildcards. The data is derived from millions of live malware sandbox analyses run by a community of 15K corporate SOC teams.  

Triage faster to stop attacks early 
Get instant IOC context via TI Lookup



Sign up to start 


When you encounter suspicious artifacts, you can query the service to obtain behavioral analysis, threat classification, and historical context — all within seconds. 
 
Here’s what it brings to the triage table:  

Instant IOC Enrichment 

Drop in any hash, IP, or domain and see how it ties to known malware families, timelines, and campaigns, all in seconds. For example, take a suspicious domain spotted in the traffic:  
 
domainName:"23.ip.gl.ply.gg" 

Domain check: get a verdict, the context, and additional IOCs 

In an instant, we learn that the domain is linked to several notorious trojans and has been spotted in recent incidents: it is certainly malicious and actively used.  

Real-Time Malware Activity Stats 

The “Malware Threats Statistics” feature spotlights live, active infrastructures, showing which malware families are truly circulating today. 

Malware Statistics accessible in Threat Intelligence Lookup 

This tab can also be a source of recent IOCs for monitoring and detection.  

Behavioral Pivoting 

With one click, analysts can move from static enrichment to dynamic ANY.RUN sandbox reports, verifying behavior firsthand. 

Sandbox analyses of malware samples using the looked-up domain  

Risk-Based Prioritization 

TI Lookup reveals which alerts link to active C2s or payloads, helping teams focus on what’s actually dangerous. 

For example, certain malware families are known to rely on specific DGA (domain generation algorithm) implementations. The following query targets these associations:  

(threatName:"redline" OR threatName:"lumma") AND domainName:"." AND destinationIpAsn:"cloudflare" 

Cloudflare domains used by known malware families 

Analyst Efficiency 

With TI Lookup, teams unlock the next level:  

  • Faster Triage: Two-second access to millions of past analyses confirms if an IOC belongs to a threat, cutting triage time. 
  • Smarter Response: Indicator enrichment with behavioral context and TTPs guides precise containment strategies. 
  • Fewer Escalations: Tier 1 analysts can make decisions independently, reducing escalations to Tier 2. 

Shared Knowledge, Unified Context 

Lookup data can feed SIEMs or case systems, keeping the entire SOC aligned on the same intelligence. For native, seamless integration with SIEM solutions, try ANY.RUN’s Threat Intelligence Feeds.

Building Your Expert Triage Practice 

Beyond tools and technology, developing expert triage skills requires deliberate practice and continuous improvement. Here are strategies to enhance your capabilities: 

Develop Pattern Recognition 

Over time, you’ll begin recognizing patterns in threats and false positives. Certain types of alerts consistently prove benign, while others frequently indicate genuine threats. Document these patterns and share them with your team to build collective knowledge. Keep TI Lookup at hand to check alerts you’re unsure about and calibrate your threat radar.  

Create Decision Trees 

For common alert types, develop decision trees that guide your triage process. This reduces cognitive load, freeing mental resources for complex cases. 
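As an illustration, a decision tree for a common alert type can be sketched as a small data structure. The questions and verdicts below are hypothetical, not a prescribed playbook:

```python
# A minimal triage decision tree for one alert type, sketched as nested dicts.
# Questions and verdicts are illustrative, not a prescribed playbook.
tree = {
    "question": "Is the sender domain on the allowlist?",
    "yes": "close_benign",
    "no": {
        "question": "Does the URL match known-bad IOC feeds?",
        "yes": "escalate_tier2",
        "no": {
            "question": "Did the user click the link?",
            "yes": "contain_and_investigate",
            "no": "close_with_note",
        },
    },
}

def triage(node, answers):
    """Walk the tree using yes/no answers keyed by question text."""
    while isinstance(node, dict):
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node

verdict = triage(tree, {
    "Is the sender domain on the allowlist?": False,
    "Does the URL match known-bad IOC feeds?": False,
    "Did the user click the link?": True,
})
print(verdict)  # contain_and_investigate
```

Writing the tree down, even this simply, forces the team to agree on the questions and their order, which is most of the value.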

Maintain a Knowledge Base 

Document your triage decisions, especially for ambiguous or challenging cases. Include the reasoning behind your decisions and the outcomes.  

Continuous Learning 

The threat landscape evolves constantly, requiring ongoing education. Dedicate time to reading threat intelligence reports, studying new attack techniques, and learning from post-incident reviews. This investment in knowledge pays dividends in improved triage accuracy. 

Take Care of Yourself 

Analyst fatigue is real and impacts your performance. Take regular breaks, maintain work-life balance, and don’t hesitate to ask for support when workload becomes overwhelming. Your long-term effectiveness depends on sustainability, not short-term heroics. 

 
Turn every IOC into actionable insight for fast containment



Try TI Lookup


Conclusion: Mastering the Art and Science of Triage 

Alert triage combines technical skills, analytical thinking, and sound judgment. As an analyst, you’re not just processing alerts. You’re making critical decisions that protect your organization from sophisticated threats while managing resource constraints and time pressure. 

The challenges you face are significant: overwhelming alert volumes, persistent false positives, complex threats, and the ever-present risk of fatigue. However, by understanding these challenges and leveraging solutions like ANY.RUN’s Threat Intelligence Lookup, you can transform your triage practice from reactive firefighting to proactive threat hunting. 

The future of security operations depends on analysts who can work both fast and smart. With the right approach, tools, and mindset, you can meet the challenges of modern threat detection while building a rewarding and sustainable career in cybersecurity. 

About ANY.RUN 

ANY.RUN helps more than 500,000 cybersecurity professionals worldwide. Our Interactive Sandbox simplifies malware analysis of threats that target Windows, Linux, and Android systems.  

Combined with Threat Intelligence Lookup and Feeds, businesses can expand threat coverage, speed up triage, and reduce security risks. 

Request trial of ANY.RUN’s services to test them in your organization → 

The post No Threats Left Behind: SOC Analyst’s Guide to Expert Triage  appeared first on ANY.RUN’s Cybersecurity Blog.


How to use DeepSeek both privately and securely | Kaspersky official blog

We’ve previously written about why neural networks are not the best choice for private conversations. Popular chatbots like ChatGPT, DeepSeek, and Gemini collect user data for training by default, so developers can see all our secrets: every chat you have with the chatbot is stored on company servers. This is precisely why it’s essential to understand what data each neural network collects, and how to set them up for maximum privacy.

In our previous post, we covered configuring ChatGPT’s privacy and security in abundant detail. Today, we examine the privacy settings in China’s answer to ChatGPT — DeepSeek. Curiously, unlike in ChatGPT, there aren’t that many at all.

What data DeepSeek collects

  • All data from your interactions with the chatbot, images and videos included
  • Details you provide in your account
  • IP address and approximate location
  • Information about your device: type, model, and operating system
  • The browser you’re using
  • Information about errors

What’s troubling is that the company doesn’t specify how long it keeps personal data, operating instead on the principle of “retain it as long as needed”. The privacy policy states that the data retention period varies depending on why the data is collected, yet no time limit is mentioned. Is this not another reason to avoid sharing sensitive information with these neural networks? After all, dataset leaks containing users’ personal data have become an everyday occurrence in the world of AI.

If you want to keep your IP address private while you work with DeepSeek, use Kaspersky Security Cloud. Be wary of free VPN apps: threat actors frequently use them to create botnets (networks of compromised devices). Your smartphone or computer, and by extension, you yourself, could thus become unwitting accomplices in actual crimes.

Who gets your data

DeepSeek is a company under Chinese jurisdiction, so not only the developers but also Chinese law enforcement — as required by local laws — may have access to your chats. Researchers have also discovered that some of the data ends up on the servers of China Mobile — the country’s largest mobile carrier.

However, DeepSeek is hardly an outlier here: ChatGPT, Gemini, and other popular chatbots just as easily and casually share user data upon a request from law enforcement.

Disabling DeepSeek’s training on your data

The first thing to do — a now-standard step when setting up any chatbots — is to disable training on your data. Why could this pose a threat to your privacy? Sometimes, large language models (LLMs) can accidentally disclose real data from the training set to other users. This happens because neural networks don’t distinguish between confidential and non-confidential information. Whether it’s a name, an address, a password, a piece of code, or a photo of kittens — it makes little difference to the AI. Although DeepSeek’s developers claim to have taught the chatbot not to disclose personal data to other users, there’s no guarantee this will never happen. Furthermore, the risk of dataset leaks is always there.

The web-based version and the mobile app for DeepSeek have different settings, and the available options vary slightly. First of all, note that the web version only offers three interface languages: English, Chinese, and System. The System option is supposed to use the language set as the default in your browser or operating system. Unfortunately, this doesn’t always work reliably with all languages. Therefore, if you need the ability to switch DeepSeek’s interface to a different language, we recommend using the mobile app, which has no issues displaying the selected user interface language. It’s important to note that your choice of UI language doesn’t affect the language you use to communicate with DeepSeek. You can chat with the bot in any language it supports. The chatbot itself proudly claims to support more than 100 languages — from common to rare.

DeepSeek web version settings

To access the data management settings, open the left sidebar, click the three dots next to your name at the bottom, select Settings, and then navigate to the Data tab in the window that appears. We suggest you disable the option labeled Improve the model for everyone to reduce the likelihood that your chats with DeepSeek will end up in its training datasets. If you want the model to stop learning from the data you shared with it before turning off this option, you’ll need to email privacy@deepseek.com, and specify the exact data or chats.

Disabling DeepSeek training on your data in the web-based version


DeepSeek mobile app settings

In the DeepSeek mobile app, you also open the left sidebar, click the three dots next to your name at the bottom, and reveal the Settings menu. In the menu, open the Data controls section and turn off Improve the model for everyone.

Disabling DeepSeek training on your data in the app


Managing DeepSeek chats

All your chats with DeepSeek — both in the web version and in the mobile app — are collected in the left sidebar. You can rename any chat by giving it a descriptive title, share it with anyone by creating a public link, or delete a specific chat entirely.

Sharing DeepSeek chats

The ability to share a chat might seem extremely convenient, but remember that it poses risks to your privacy. Let’s say you used DeepSeek to plan a perfect vacation, and now you want to share the itinerary with your travel companions. You could certainly create a public link in DeepSeek and send it to your friends. However, anyone who gets hold of that link can read your plan and learn, among other things, that you’ll be away from home on specific dates. Are you sure this is what you want?

If you’re using the chatbot for confidential projects (which is not advisable in the first place, as it’s better to use a locally running version of DeepSeek for this kind of data, but more on this later), sharing the chat, even with a colleague, is definitely not a good idea. In the case of ChatGPT, similar shared chats were at one point indexed by search engines — allowing anyone to find and read them.

If you absolutely must send the content of a chat to someone else, it’s easier to copy it by clicking the designated button below the message in the chat window, and then to use a conventional method like email or a messaging app to send it, rather than share it with the entire world.

If, despite our warnings, you still wish to share your conversation via a public link, this is currently only possible in the web version of DeepSeek. To create a link to a chat, click the three dots next to the chat name in the left sidebar, select Share, and then, on the main chat board, check the boxes next to the messages you want to share, or check the Select all box at the bottom. After this, click Create public link.

Sharing DeepSeek chats in the web version


You can view all the chats you have shared and, if necessary, delete their public links in the web version, by going to Settings → Data → Shared links → Manage.

Managing shared DeepSeek chats in the web version


Deleting old DeepSeek chats

Why should you delete old DeepSeek chats? The fewer chats you have saved, the lower the risk that your confidential data could become accessible to unauthorized parties if your account is compromised, or if there’s a bug in the LLM itself. Unlike ChatGPT, DeepSeek doesn’t remember or use data from your past chats in new ones, so deleting them won’t impact your future use of the neural network.

However, you can resume a specific chat with DeepSeek at any time by selecting it in the sidebar. Therefore, before deleting a chat, consider whether you might need it again later.

To delete a specific chat: in the web version, click the three dots next to the chat in the left sidebar; in the mobile app, press and hold the chat name. In the window that appears, select Delete.

To delete your entire conversation history: in the web version, go to Settings → Data → Delete all chats → Delete all; in the application, go to Settings → Data controls → Delete all chats. Bear in mind that this only removes the chats from your account without deleting your data from DeepSeek’s servers.

If you want to save the results of your chats with DeepSeek, in the web version, first go to Settings → Data → Export data → Export. Wait for the archive to be prepared, and then download it. All data is exported in the JSON format. This feature is not available in the mobile app.

Managing your DeepSeek account

When you first access DeepSeek, you have two options: either sign up with your email and create a password, or log in with a Google account. From a security and privacy standpoint, the first option is better — especially if you create a strong, unique password for your account: you can use a tool like Kaspersky Password Manager to generate and safely store one.

You can subsequently log in with the same account in other browsers and on different devices. Your chat history will be accessible from any device linked to your account. So, if someone learns or steals your DeepSeek credentials, they’ll be able to review all your chats. Sadly, DeepSeek doesn’t yet support two-factor authentication or passkeys.

If you’ve even the slightest suspicion that your DeepSeek account credentials have been compromised, we recommend taking the following steps. Start by logging out of your account on all devices. In the web version, navigate to Settings → Profile → Log out of all devices → Log out. In the app, the path is Settings → Data controls → Log out of all devices. Next, you need to change your password, but DeepSeek doesn’t offer a direct path to do so once you’re logged in. To reset your password, go to the DeepSeek web version or mobile app, select the password login option, and click Forgot password?. DeepSeek will request your email address, send a verification code to that email, and allow you to reset the old password and create a new one.

Deploying DeepSeek locally

Privacy settings for the DeepSeek web version and mobile app are extremely limited and leave much to be desired. Fortunately, DeepSeek is an open-source language model. This means anyone can deploy the neural network locally on their computer. In this scenario, the AI won’t train on your data, and your information won’t end up on the company’s servers or with third parties. However, there’s a significant downside: when running the AI locally, you’ll be limited to the pre-trained model, and won’t be able to ask the chatbot to find up-to-date information online.

The simplest way to deploy DeepSeek locally is by using the LM Studio application. It allows you to work with models offline, and doesn’t collect any information from your chats with the AI. Download the application, click the search icon, and look for the model you need. The application will likely offer many different versions of the same model.

Searching LM Studio for DeepSeek models


These versions differ in the number of parameters, denoted by the letter B (billions). The more parameters a model has, the more capable it is, and the more resources it requires to run smoothly. For comparison, a modern laptop with 16–32GB of RAM is sufficient for lighter models (7B–13B), but for the largest version, with 70 billion parameters, you’d need an entire data center. 
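As a rough illustration of why heavier models need more memory, you can estimate RAM as parameters times bytes per weight. The 20% overhead factor below is a crude assumption for activations and context cache; quantized models need considerably less:

```python
# Rough memory estimate for running a local model: parameters x bytes per weight,
# plus ~20% overhead for activations and context cache (a crude assumption).
def estimated_ram_gb(params_billions, bytes_per_param=2, overhead=1.2):
    # bytes_per_param=2 corresponds to 16-bit weights; 4-bit quantization would be 0.5.
    return params_billions * 1e9 * bytes_per_param * overhead / 1e9

for size in (7, 13, 70):
    print(f"{size}B at 16-bit: ~{estimated_ram_gb(size):.0f} GB")
```

By this estimate, 7B–13B models land in the 16–32GB range the article mentions, while 70B climbs to roughly 170GB.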

LM Studio will alert you if the model is too heavy for your device.

LM Studio warning you that the model may be too large for your device


It’s important to understand that local AI use is not a panacea in terms of privacy and security. It doesn’t hurt to periodically check that LM Studio (or another similar application) is not connecting to external servers. For example, you can use the netstat command for that. If you’re not familiar with netstat, simply ask the chatbot to tell you about today’s news. If the chatbot is running locally as designed, the response definitely won’t include any current events.
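The netstat check described above can be sketched as a simple parser that flags established, non-loopback connections. The sample output below is fabricated for illustration; in practice you would feed in the real output of `netstat -an`:

```python
# Sketch: flag established non-loopback connections in netstat-style output.
# The sample output below is fabricated for illustration.
sample = """\
Proto  Local Address      Foreign Address     State
TCP    127.0.0.1:52344    127.0.0.1:1234      ESTABLISHED
TCP    192.168.1.5:53122  104.18.32.7:443     ESTABLISHED
TCP    192.168.1.5:53123  0.0.0.0:0           LISTENING
"""

def external_connections(netstat_output):
    flagged = []
    for line in netstat_output.splitlines():
        parts = line.split()
        # Expect: proto, local address, foreign address, state.
        if len(parts) >= 4 and parts[3] == "ESTABLISHED":
            remote = parts[2]
            if not remote.startswith(("127.", "[::1]", "localhost")):
                flagged.append(remote)
    return flagged

print(external_connections(sample))  # ['104.18.32.7:443']
```

A locally running model should produce an empty list here (aside from connections made by unrelated processes, which is why filtering by the application's process is a sensible refinement).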

Furthermore, you mustn’t forget about protecting the devices themselves: malware on your computer can intercept your data. Use Kaspersky Premium: it allows you to examine and block hidden connections, and will alert you to the presence of malicious software.



Tykit Analysis: New Phishing Kit Stealing Hundreds of Microsoft Accounts in Finance 

Not long ago we reported a spike in phishing attacks that use an SVG file as the delivery vector. One striking detail was how the SVG embeds JavaScript that rebuilds the payload with XOR and then executes it directly via eval() to redirect victims to a phishing page. 

A quick look at the indicators we found showed that nearly all related cases used the same exfiltration addresses. Even more telling: the client-side logic and obfuscation techniques were unchanged across samples, and the communication with the C2 servers was implemented in several steps, with validation of the victim’s current authorization state at each stage. 

All this suggests the threat has a certain level of maturity; it’s not just an unusual delivery method, but something that behaves like a phishing kit. 

To test that hypothesis, measure the scale of the problem, and be able to tell this threat apart from others, we performed a technical analysis of the samples and labeled the family Tykit (Typical phishing kit). Here’s what we found. 

Key Takeaways 

  • It mimics Microsoft 365 login pages, targeting corporate account credentials of numerous organizations. 
  • The threat utilizes various evasion tactics like hiding code in SVGs or layering redirects. 
  • The client-side code executes in several stages and uses basic anti-detection techniques. 
  • The most affected industries include construction, professional services, IT, finance, government, telecom, real estate, education, and others across the US, Canada, LATAM, EMEA, SE Asia, and the Middle East. 

Discovery & Pivoting: How ANY.RUN Detected the Threat 

Beginning with the analysis session in the ANY.RUN Sandbox, we quickly found the artifacts needed to expand the context: 

View analysis session 

The same SVG image was used for redirection (SHA256: a7184bef39523bef32683ef7af440a5b2235e83e7fb83c6b7ee5f08286731892). 

Fig. 1 Redirecting SVG image 
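If you have a suspicious file on hand, you can compute its SHA-256 the same way and pivot on the hash in TI Lookup. A minimal sketch; the content here is a stand-in, not the actual SVG from the campaign:

```python
import hashlib

# Compute a SHA-256 digest to pivot on in TI Lookup.
# The bytes below are a stand-in, not the actual SVG from the campaign.
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(sha256_of(b"<svg>...</svg>"))
```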

The fake Microsoft 365 login page was hosted on the domain loginmicr0sft0nlineeckaf[.]52632651246148569845521065[.]cc; the URL contained the parameter /?s=, which could be useful for further searching. 

A POST request was sent to the server segy2[.]cc, targeting the URL /api/validate and containing data in the request body. 

Detect threats faster with ANY.RUN’s Interactive Sandbox 
See full attack chain in seconds for immediate response   



Get started with business email 


Fig. 2: Possible request to the C2 server 

Let’s try pivoting this, using a Threat Intelligence Lookup query: 

sha256:"a7184bef39523bef32683ef7af440a5b2235e83e7fb83c6b7ee5f08286731892" OR domainName:"^loginmicr*.cc$" OR domainName:"^segy*" 

The result was encouraging: 189 related analysis sessions, most of them with a Malicious verdict. The earliest analysis containing the searched indicators dates back to May 7, 2025. 

View the earliest session with Tykit 

Fig. 3: Search results using TI Query 

Bingo! The same activity was observed several months earlier: phishing campaigns featuring URLs with the parameter /?s=, and requests sent to the server segy[.]cc, whose domain name is almost identical to the original one. 

A search using domainName:"^segy." revealed a few more related domains: 

Fig. 4: Additional segy domains 

With several hundred submissions recorded between May and October 2025, all sharing nearly identical patterns, this could hardly be a coincidence. The template-based infrastructure, identical attack scenarios, and a set of URLs resembling C2 API endpoints: could this be a phishing kit? 

It was necessary to analyze the JavaScript code from the phishing pages to see whether there were any recurring elements across samples, how sophisticated the code was, how many execution stages it included, and whether it implemented any mechanisms to prevent analysis. 

Catch attacks early with instant IOC enrichment in TI Lookup 
Power your proactive defense with data from 15K SOCs 



Start investigation 


Technical Analysis: How the Attack Unfolds 

Fig. 5:  Execution chain of Tykit attack  

Let’s look at another analysis session that reproduces the credentials-entry stage, a critical phase because most phishing kits reveal themselves fully at the point of exfiltration: 

Step 1: SVG as the delivery vector 

The attack vector remains an SVG image that redirects the browser. The image uses the same design, but this time includes a working check-stub that prompts the user to “Enter the last 4 digits of your phone number” (in reality any value is accepted). 

Fig. 6: SVG file with the “check” 

Step 2: Trampoline and CAPTCHA stage 

After the check is submitted, the page redirects to a trampoline script, which then forwards the browser to the main phishing page. 

Example: hxxps://o3loginrnicrosoftlogcu02re[.]1uypagr[.]com/?s= 

The value of the s= parameter is the victim’s email encoded in Base64. 
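Decoding the s= parameter is straightforward; a quick sketch, where the email address is made up for illustration:

```python
import base64
from urllib.parse import urlsplit, parse_qs

# Decode the ?s= parameter observed in Tykit URLs.
# The email and host here are made-up examples, not from the campaign.
url = "https://phish.example/?s=" + base64.b64encode(b"victim@example.com").decode()

s_value = parse_qs(urlsplit(url).query)["s"][0]
email = base64.b64decode(s_value).decode()
print(email)  # victim@example.com
```

The same decode is useful in triage: it tells you which mailbox the lure was tailored to without visiting the page.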

Fig. 7: Trampoline code that forwards to the main phishing page 

Next, a page with a CAPTCHA loads; the site uses the Cloudflare Turnstile widget as anti-bot protection.  

Fig. 8:  Anti-bot protection on the phishing page using Cloudflare Turnstile 

It’s worth noting that the client-side code includes basic anti-debugging measures, for example, it blocks key combinations that open DevTools and disables the context menu. 

Fig. 9: Basic anti-debug protections in the page source 

Step 3: Credential capture and C2 logic 

After the CAPTCHA is passed, the page reloads and renders a fake Microsoft 365 sign-in page. 

At the same time, a background POST request is sent to the C2 server at ‘/api/validate’. The request body contains JSON with the following fields: 

  • “key”: a session key, or possibly a “license” key for the phishing kit. 
  • “redirect”: the URL to which the victim should be redirected. 
  • “email”: the victim’s email address, decoded; present if the s= parameter was populated earlier. 

The logic for sending the request, validating the response, and retrieving the next stage of the payload is implemented in an obfuscated portion of the page; after deobfuscation, it looks like this: 

Fig. 10: Logic for sending and validating the victim’s email 

The C2 server responds with a JSON object that contains: 

  • “status”: the C2 verdict — “success” or “error”. 
  • “message”: the next stage, provided as HTML. 
  • “data”: {“email”}: the victim’s email address. 

The next stage presents the password-entry form. The returned HTML also embeds obfuscated JavaScript that implements the logic for exfiltrating the stolen credentials to the C2 endpoint ‘/api/login’ and for deciding the page’s next actions (for example: show a prompt “Incorrect password”, redirect the user to a legitimate site to hide the fraud, etc.). 

A couple of notable snippets illustrate this behavior: 

Fig. 11: Exfiltration of the victim’s login and password 

The JSON sent in the POST /api/login request contains the following fields: 

  • “key”: The key (see above for possible meaning). 
  • “redierct”: The redirect URL (note the misspelling in the field name). 
  • “token”: An authorization JWT. Notably, the sample token 
    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJiZjk5M2NkZS1mOTdiLTQyYTctODcxYy1lOTk1MDgzMmM5NjgiLCJleHAiOjE2OTkxNzc0NjF9.p9-OI0LCYcOjaU1I3TMZTjNSos50txbV3_Mi1jk1u8c 
    decodes to an expired token; the exp claim is 1699177461, which corresponds to Sunday, November 5, 2023, 09:44:21 GMT
  • “server”: The C2 server domain name. 
  • “email”: The victim’s email address. 
  • “password”: The victim’s password. 
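The token’s payload can be inspected by Base64-decoding its middle segment, with no signature verification needed. A quick sketch using the sample token above:

```python
import base64
import json
from datetime import datetime, timezone

# Payload segment of the sample JWT (header.payload.signature); we only
# inspect the claims, no signature verification is performed.
payload_b64 = ("eyJzdWIiOiJiZjk5M2NkZS1mOTdiLTQyYTctODcxYy1lOTk1MDgzMmM5"
               "NjgiLCJleHAiOjE2OTkxNzc0NjF9")
padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore Base64 padding
claims = json.loads(base64.urlsafe_b64decode(padded))
expires = datetime.fromtimestamp(claims["exp"], tz=timezone.utc)
print(claims["exp"], expires.isoformat())
```

Running this confirms the exp claim of 1699177461, i.e. the token expired on November 5, 2023.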

These fields are then used in the server’s response to control what the victim sees next and whether additional actions (debugging hooks, logging, further redirects) are triggered. 

The response to the POST /api/login request is a JSON object with the following fields: 

  • “status”: “success” | “info” | “error” 
  • “d”: “<HTML payload to be shown to the user>” 
  • “message”: “Text such as ‘Incorrect password’ when the user enters the wrong password” 
  • “data”: { “email”: “<victim email>” } 

Behavior depends on the value of status: 

  • “success”: Render the HTML payload found in “d” to the user. 
  • “info”: Send a (likely debugging) POST request to /x.php on the C2 server. The logic for this flow is shown in the figure below. 
  • “error”: Display an error message (for example, “Incorrect password”). 

Fig. 12: Decision logic after the /api/login request 

At this point the execution chain of the phishing page ends. In sum, the page implements a fairly involved execution mechanism: the payload is obfuscated, there are basic (nonetheless effective) anti-debugging measures, and the exfiltration logic runs through several staged steps. 

Detection Rules for Identifying Tykit Activity 

After analyzing the structure of the Tykit phishing payload and the requests sent during the attack, we developed a set of rules that allow detecting the threat at different stages of its implementation. 

SVG files 

Let’s start with the SVG images themselves. While embedding JavaScript in SVGs can enable legitimate functionality (for example, interactive survey forms, animations, or dynamic UI mockups), it’s frequently abused by threat actors to hide malicious payloads. 

One common way to distinguish benign code from malicious is the presence of obfuscation: techniques that hinder triage and signature-based analysis by security tools and SOC analysts. 

To improve detection rates for this vector (even without attributing samples to a specific actor), monitor for: 

  • General signs of code obfuscation, e.g. frequent calls to atob(), parseInt(), charCodeAt(), fromCodePoint(), and generated variable names like var _0xABCDEF01 = … often produced by tools such as Obfuscator.io. 
  • Use of the unsafe eval() call, which executes arbitrary code. 
  • Script logic that redirects or alters the current document; calls to window.location.* or manipulation of href attributes. 
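These heuristics can be sketched as a simple pattern scan. The patterns and the sample SVG below are illustrative starting points, not a complete signature set:

```python
import re

# Heuristic scan for the obfuscation markers listed above; an illustrative
# starting point, not a complete signature set.
SUSPICIOUS = [
    r"\beval\s*\(",
    r"\batob\s*\(",
    r"\bfromCodePoint\s*\(",
    r"\bcharCodeAt\s*\(",
    r"_0x[0-9a-fA-F]{4,}",   # Obfuscator.io-style variable names
    r"window\.location",
    r"\.href\s*=",
]

def score_svg(svg_text):
    # Return the list of suspicious patterns found in the SVG's text.
    return [p for p in SUSPICIOUS if re.search(p, svg_text)]

sample = '<svg onload="var _0xab12cd=atob(\'aGk=\');eval(_0xab12cd)"/>'
print(score_svg(sample))
```

Several hits on one file is a strong signal to detonate it in a sandbox rather than trust static inspection.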

Below is a code snippet taken from an SVG used to load Tykit’s phishing page: 

Fig. 13: Malicious redirect code from an SVG that loads the Tykit phishing page 

Domains 

In nearly all cases linked to Tykit, the operators used templated domain names. For exfiltration servers we observed domains matching the ^segy?.* pattern, for example: 

  • segy[.]zip 
  • segy[.]xyz 
  • segy[.]cc 
  • segy[.]shop 
  • segy2[.]cc 

For the main servers hosting the phishing pages, aside from abuse of cloud and object-storage services, the operators frequently registered domains that appear to be generated by a DGA (domain-generation algorithm). These domains match a pattern like: ^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$ 
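Both domain patterns can be rendered as regular expressions for log matching. The escaping below restores what we assume the intended expressions to be:

```python
import re

# The segy* and loginmicr* patterns described above, rendered as regexes
# (the exact escaping is our assumption about the intended expressions).
EXFIL = re.compile(r"^segy\d?\.")
PHISH = re.compile(r"^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$")

domains = [
    "segy2.cc",
    "segy.zip",
    "loginmicr0sft0nlineeckaf.52632651246148569845521065.cc",
    "login.microsoftonline.com",   # legitimate, should not match
]

for d in domains:
    print(d, bool(EXFIL.match(d) or PHISH.match(d)))
```

Note that the legitimate Microsoft domain does not match: the lookalikes drop the dot after "login", which is exactly what the regex keys on.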
 

To collect all IOCs and perform a detailed case analysis, see the TI Lookup query: 

domainName:"^loginmicr?s*.*.cc$" 

C2 & Exfiltration Logic 

Finally, the main distinction between Tykit and many other phishing campaigns is the set of HTTP requests sent to the C2 that determine the next actions and handle exfiltration of victim data. 

After analyzing the JavaScript used across samples, we identified the following requests: 

  1. GET /?s=<b64-encoded victim email> 

A series of initial requests used to pass Cloudflare Turnstile and load the phishing page; the s parameter may be empty. 

  2. POST /api/validate 

The first C2 request, used to validate the supplied email. The request body contains JSON with fields (see earlier): 

  • “key” 
  • “redirect” 
  • “email” 

The server responds with JSON containing: 

  • “status” 
  • “message” (next stage, as HTML) 
  • “data”: {“email”} 

  3. POST /api/validate (variant) 

A second variant of the validation request whose JSON body includes: 

  • “key” 
  • “redirect” 
  • “token” 
  • “server” 
  • “email” 

The response has the same structure as above. 

  4. POST /api/login 

The data-exfiltration request. The JSON body contains: 

  • “key” 
  • “redierct” (sic — note the misspelling) 
  • “token” 
  • “server” 
  • “email” 
  • “password” 

The response JSON instructs how to change the state of the phishing page and includes: 

  • “status” 
  • “d” (HTML payload to render) 
  • “message” 
  • “data”: {“email”} 

  5. POST /x.php 

Likely a debugging/logging endpoint triggered when the previous /api/login response contains “status”: “info”. The JSON body includes: 

  • “id” 
  • “key” 
  • “config” 

The format of the server’s response to this request was not determined during the investigation. 
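The request sequence above can be turned into a simple matcher over proxy-log entries; a sketch, where the log tuple format and field extraction are illustrative assumptions:

```python
# Sketch: match the characteristic C2 requests above against proxy-log entries.
# The (method, path, json_keys) tuple format is an illustrative assumption.
TYKIT_STAGES = [
    ("POST", "/api/validate", {"key", "redirect", "email"}),
    ("POST", "/api/login", {"key", "redierct", "token", "server", "email", "password"}),
]

def matches_tykit(requests):
    """requests: iterable of (method, path, json_keys) extracted from a web proxy."""
    hits = 0
    for method, path, keys in TYKIT_STAGES:
        for m, p, k in requests:
            # The stage matches if the path and method line up and the
            # request body contains at least the expected JSON keys.
            if m == method and p == path and keys <= set(k):
                hits += 1
                break
    return hits == len(TYKIT_STAGES)

observed = [
    ("GET", "/?s=dmljdGlt", []),
    ("POST", "/api/validate", ["key", "redirect", "email"]),
    ("POST", "/api/login", ["key", "redierct", "token", "server", "email", "password"]),
]
print(matches_tykit(observed))  # True
```

Keying on the misspelled "redierct" field is deliberate: it is a low-noise artifact of this kit that legitimate traffic is unlikely to produce.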

Who’s Being Targeted 

We collected several signals about the industries and countries targeted by Tykit. 

Most affected countries: 

  • United States 
  • Canada 
  • South-East Asia 
  • LATAM / South America 
  • EU countries 
  • Middle East 

Targeted industries: 

  • Construction 
  • Professional services 
  • IT 
  • Agriculture 
  • Commerce / Retail 
  • Real estate 
  • Education 
  • Design 
  • Finance 
  • Government & military 
  • Telecom 

There are no unusual TTPs to call out; this is another wave of spear phishing aimed at stealing Microsoft 365 credentials, featuring a multi-stage execution chain and the capability for AitM interception. 

Taken together, the wide geographic and industry spread and the TTPs that match standard phishing-kit behavior suggest the threat has been active for quite some time. It appears to be a typical PhaaS-style framework (hence the name TYpical phishKIT, or Tykit). Time will tell how it evolves. 

How Tykit Affects Organizations 

Tykit is a credential-theft campaign that targets Microsoft 365 accounts via a multi-stage phishing flow. Successful compromises can lead to: 

  • Account takeover (email, collaboration tools, identity tokens) enabling persistent access. 
  • Data exfiltration from mailboxes, drives, and connected SaaS apps. 
  • Lateral movement inside environments where cloud identities map to internal resources. 
  • AitM interception of MFA or session tokens, increasing the chance of bypassing second-factor protections. 
  • Operational and reputational damage (incident response costs, regulatory exposure, loss of client trust). 

Sectors at higher risk reflect the campaign’s targeting: construction, professional services, IT, finance, government, telecom, real estate, education, and others across the US, Canada, LATAM, EMEA, and Southeast Asia. 

How to Prevent Tykit Attacks 

Tykit doesn’t reinvent phishing, but it shows how small technical tweaks, like hiding code in SVGs or layering redirects, can make attacks harder to catch. Still, with better visibility and the right tools, teams can stop it before credentials are stolen. 

Strengthen email and file security 

SVG files may look safe but can hide JavaScript that executes in the browser. Ensure your security gateway actually inspects SVG content, not just extensions. Use sandbox detonation and Content Disarm & Reconstruction (CDR) to uncover hidden payloads. The ANY.RUN Sandbox is particularly effective for detonating such files and exposing their redirects, scripts, and network calls in seconds. 
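As a quick illustration of why extension checks aren't enough, the heuristic below flags SVG content that embeds script or script-like event handlers. This is a minimal triage sketch, not a substitute for full XML parsing or sandbox detonation; the pattern list and sample files are assumptions for demonstration:

```python
import re

# Heuristic patterns for script-capable SVG content: embedded <script>
# tags, javascript: URIs, and common event-handler attributes.
SUSPICIOUS_SVG = re.compile(
    r"<script\b|javascript:|\bonload\s*=|\bonerror\s*=", re.IGNORECASE
)

def svg_is_suspicious(svg_text: str) -> bool:
    """Flag SVG markup that can execute code in the browser."""
    return bool(SUSPICIOUS_SVG.search(svg_text))

benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="10" height="10"/></svg>'
malicious = '<svg onload="window.location=atob(\'aHR0cHM6Ly9leGFtcGxlLmNvbQ==\')"></svg>'
print(svg_is_suspicious(benign), svg_is_suspicious(malicious))  # False True
```

A pattern match like this is a reason to detonate the file, not a verdict on its own: obfuscation can defeat static regexes, which is exactly why dynamic analysis matters.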

Use phishing-resistant MFA 

Tykit highlights how traditional MFA can be bypassed. Switch to phishing-resistant methods like FIDO2 or certificate-based MFA, disable legacy protocols, and enforce Conditional Access in Microsoft 365. Reviewing OAuth app consents and token lifetimes regularly helps minimize exposure. 

Monitor for key indicators 

Watch for outbound requests to domains such as segy* or loginmicr(o|0)s.*.cc, and POST requests to /api/validate, /api/login, or /x.php. ANY.RUN’s Threat Intelligence Lookup can quickly connect these IOCs to other related phishing activity, giving analysts context in minutes. 
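The indicators above can be encoded directly as a matching rule. The sketch below uses the domain regex from this report's IOC section and the observed request paths; the function name and sample domains are illustrative:

```python
import re

# Domain pattern and request paths observed in this campaign.
DOMAIN_RE = re.compile(r"^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$")
SUSPICIOUS_PATHS = {"/api/validate", "/api/login", "/x.php"}

def matches_tykit_infra(domain: str, path: str) -> bool:
    """Check a domain/path pair against the observed Tykit IOCs."""
    return (domain.startswith("segy")
            or bool(DOMAIN_RE.match(domain))
            or path in SUSPICIOUS_PATHS)

print(matches_tykit_infra("loginmicr0s.auth7.cc", "/"))       # True
print(matches_tykit_infra("segy2.cc", "/"))                   # True
print(matches_tykit_infra("login.microsoftonline.com", "/"))  # False
```

The path checks will fire on unrelated traffic in isolation, so in practice they work best combined with the domain checks or with the Base64 query-parameter signal described below.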

Automate detection and threat hunting 

Configure your SIEM or XDR to alert on suspicious Base64 query parameters (like /?s=) or requests following Tykit’s structure. Integrating ANY.RUN’s Threat Intelligence Feeds ensures new indicators (fresh domains, hashes, and URL patterns) are automatically available for detection. 
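Since Tykit passes the victim's email in a Base64-encoded "s" query parameter, an enrichment step can try to recover it from logged URLs. A minimal sketch using only the standard library (the function name and sample URL are illustrative):

```python
import base64
from urllib.parse import urlsplit, parse_qs

def decode_victim_hint(url: str):
    """Try to decode a Base64 's' query parameter from a logged URL.
    Returns the decoded string, or None if absent or not valid Base64."""
    qs = parse_qs(urlsplit(url).query)
    values = qs.get("s")
    if not values:
        return None
    try:
        return base64.b64decode(values[0], validate=True).decode("utf-8")
    except Exception:
        return None

url = "https://segy.example/?s=dXNlckBleGFtcGxlLmNvbQ=="
print(decode_victim_hint(url))  # user@example.com
```

When the decoded value looks like a corporate email address, that is both a strong alert signal and an immediate lead on which account to check for compromise.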

Educate and respond fast 

Regular awareness training helps users recognize that even “image” files can trigger phishing chains. If an incident occurs, isolate affected accounts, revoke sessions, and reset credentials.  

Using ANY.RUN’s Interactive Sandbox during incident response can accelerate this process: analysts can safely replay the infection chain, confirm what data was exfiltrated, and extract accurate IOCs within minutes. This shortens MTTR and helps strengthen detections for the next wave of similar campaigns. 

Conclusion: Lessons from a “Typical” Phishing Kit 

We reviewed another sobering example of how phishing remains front and center in the cyber-threat landscape, and how regularly new tools appear to carry out these attacks, each differing from its predecessors in some way. 

We labeled this example Tykit, examined its technical details, and derived several detection and hunting rules that, taken together, will help detect new samples and monitor the campaign’s evolution. 

Tykit doesn’t include a full arsenal of evasion and anti-detection techniques, but, like its more mature counterparts, it implements AitM-style data interception and methods to bypass multi-factor protections. It also relies on a quasi-distributed network architecture: servers are assigned dynamic domain names and roles are separated between “delivery” and “exfiltration.” 

Empowering Faster Analysis with ANY.RUN 

Investigating campaigns like Tykit can be time-consuming, from detecting a single suspicious SVG to uncovering the entire phishing infrastructure behind it. ANY.RUN helps analysts turn hours of manual work into minutes of interactive analysis. 

Here’s how: 

  1. See the full attack chain in under 60 seconds. 
    Detonate SVGs, phishing pages, or any other file type in real time and instantly observe redirects, scripts, and payload execution. 
  2. Reduce investigation time. 
    With live network mapping, script deobfuscation, and dynamic IOCs, analysts can skip static triage and focus directly on what matters. 
  3. Cut MTTR by more than 20 minutes per case. 
    Quick visibility into C2 communications, credential-capture logic, and data exfiltration flows allows teams to respond faster and with higher confidence. 
  4. Boost proactive defense. 
    Using ANY.RUN Threat Intelligence Lookup, SOC teams can pivot from a single domain or hash to hundreds of related submissions, revealing shared infrastructure and campaign patterns to enrich detection rules for catching future attacks. 
  5. Strengthen detections with fresh intelligence. 
    Automatically enrich your security tools with new indicators via TI Feeds sourced from live sandbox analyses and community contributions. 

For SOC teams, MSSPs, and threat researchers, ANY.RUN provides the speed, depth, and context needed to stay ahead of campaigns like Tykit, and the next one that follows. 

See every stage of the attack. Strengthen your detections. Try ANY.RUN now 

About ANY.RUN  

Over 500,000 cybersecurity professionals and 15,000+ companies in finance, manufacturing, healthcare, and other sectors rely on ANY.RUN to streamline malware investigations worldwide.   

Speed up triage and response by detonating suspicious files in ANY.RUN’s Interactive Sandbox, observing malicious behavior in real time, and gathering insights for faster, more confident security decisions. Paired with Threat Intelligence Lookup and Threat Intelligence Feeds, it provides actionable data on cyberattacks to improve detection and deepen your understanding of evolving threats.   

Explore more of ANY.RUN’s capabilities during a 14-day trial → 

IOCs: 

SHA256 of observed SVG files: 

  • ECD3C834148D12AF878FD1DECD27BBBE2B532B5B48787BAD1BDE7497F98C2CC8 
  • A7184BEF39523BEF32683EF7AF440A5B2235E83E7FB83C6B7EE5F08286731892 

Observed domains & domain patterns: 

  • segy[.]zip 
  • segy[.]xyz 
  • segy[.]cc 
  • segy[.]shop 
  • segy2[.]cc 
  • ^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$ 

Observed URLs: 

  • GET /?s=<b64_victim_email> 
  • POST /api/validate 
  • POST /api/login 
  • POST /x.php 

The post Tykit Analysis: New Phishing Kit Stealing Hundreds of Microsoft Accounts in Finance appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

Reducing abuse of Microsoft 365 Exchange Online’s Direct Send

Overview 

Microsoft 365 Exchange Online’s Direct Send is designed to solve an enterprise-scale operational challenge: certain devices and legacy applications, such as multifunction printers, scanners, building systems, and older line‑of‑business apps, need to send email into the tenant but lack the ability to properly authenticate. Direct Send preserves business workflows by allowing messages from these appliances to bypass more rigorous authentication and security checks. 

Unfortunately, Direct Send’s ability for content to bypass standard security checks makes it an attractive target for exploitation. Cisco Talos has observed increased activity by malicious actors leveraging Direct Send as part of phishing campaigns and business email compromise (BEC) attacks. Public research from the broader community, including reporting by Varonis, Abnormal Security, Ironscales, Proofpoint, Barracuda, Mimecast, Arctic Wolf, and others, agrees with Cisco Talos’ findings: adversaries have actively targeted corporations using Direct Send in recent months. 

Microsoft, for its part, has already introduced a Public Preview of the RejectDirectSend control and signaled future improvements, such as Direct Send-specific usage reports and an eventual “default‑off” posture for new tenants. These ongoing enhancements, layered with existing security controls, are helping organizations strengthen their defenses while still supporting the business-critical workflows that Direct Send was designed to enable.

How Direct Send is exploited 

Direct Send abuse is the opportunistic exploitation of a trusted pathway. Adversaries emulate device or application traffic and send unauthenticated messages that appear to originate from internal accounts and trusted systems. The research cited above describes recurring techniques, such as: 

  • Impersonating internal users, executives, or IT help desks (e.g., observed by Abnormal and Varonis) 
  • Business-themed lures, such as task approvals, voicemail or service notifications, and wire or payment prompts (e.g., Proofpoint’s observations about social engineering payloads) 
  • QR codes embedded in PDFs and low-content or empty-body messages carrying obfuscated attachments used to bypass traditional content filters and drive the user to credential harvesting pages (e.g., highlighted in Ironscales, Barracuda, and Mimecast reporting) 
  • Use of trusted Exchange infrastructure and legitimate SMTP flows to inherit implicit trust and decrease payload scrutiny

“What happens when a feature built for convenience becomes an attacker’s perfect disguise?” – Abnormal Security, framing the dual‑use nature of Direct Send.

Legitimate dependencies still exist. Many enterprises have not fully migrated older scanning or workflow systems to authenticated submission (SMTP AUTH) or to partner connectors. A hasty blanket disablement without visibility and change planning can disrupt invoice processing, document distribution, or facilities notifications. That’s precisely why Microsoft is building reporting to help administrators sequence risk reduction without accidental business impact.

Examples

The examples in Figure 1 (victim information redacted) show clearly malicious messages that were presumed to be internal and therefore bypassed the sender verification that could have convicted these threats. 

Direct Send bypasses sender verification 

There are three key elements to email domain sender verification: 

  1. DomainKeys Identified Mail (DKIM) is a cryptographic signature of message headers and content. This can verify that the message was sent by a server with a key authorized by the owner of the sending domain. 
  2. Sender Policy Framework (SPF) specifies a list of IP ranges that are authorized to send on behalf of the domain. 
  3. Domain-based Message Authentication, Reporting and Conformance (DMARC) defines what to do with a domain’s noncompliant mail when it lacks a DKIM signature and SPF authorization. Senders can choose a DMARC policy that instructs recipients to reject this mail. This is increasingly common, especially with banks.

Had the previous examples in Figure 1 been scanned with DMARC, DKIM, and SPF, they would have been rejected. However, Direct Send prevents this sort of inspection.
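The SPF portion of that check can be sketched in a few lines: given a domain's published SPF TXT record, extract the ip4 mechanisms and test whether a sending IP is authorized. This is a deliberately minimal illustration (the record and IPs below are made up); real SPF evaluation also handles include:, a, mx, redirect=, and qualifier semantics per RFC 7208:

```python
import ipaddress

def spf_authorizes(spf_record: str, sender_ip: str) -> bool:
    """Check a sending IP against the ip4 mechanisms of an SPF record.
    Minimal sketch: ignores include:, a, mx, redirect=, and qualifiers."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

record = "v=spf1 ip4:203.0.113.0/24 ip4:198.51.100.10 ~all"
print(spf_authorizes(record, "203.0.113.45"))  # True
print(spf_authorizes(record, "192.0.2.1"))     # False
```

The point of the sketch is the failure mode: Direct Send traffic never reaches this evaluation at all, so even a correctly published record cannot convict the spoofed message.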

Mitigation and recommendations 

With Direct Send abuse becoming more prevalent, it is critical for organizations to review their security posture related to Direct Send. Aligning with Microsoft’s guidance and community findings, Talos recommends: 

  1. Disable or restrict Direct Send where feasible. 
    1. Inventory current reliance. Although forthcoming Microsoft reporting should make this more streamlined, start by creating or reviewing internal device inventories, SPF records, and connector configurations. 
    2. Enable Set-OrganizationConfig -RejectDirectSend $true once you’ve validated mailflows for legitimate internal traffic.
  2. Migrate devices to authenticated SMTP. 
    1. Prefer authenticated SMTP client submission (port 587) for devices and applications that can store modern credentials or leverage app-specific identities (Microsoft documentation). 
    2. Use SMTP relays with tightly scoped source IP restrictions only for devices that are unable to use authenticated submission.
  3. Implement partner/inbound connectors for approved senders. 
    1. Establish certificate or IP-based partner connectors for third-party services legitimately sending with your accepted domains.
  4. Strengthen authentication and alignment. 
    1. Maintain SPF records listing all authorized sending IPs; adopt Soft Fail (~all) per guidance from the Messaging, Malware and Mobile Anti-Abuse Working Group (M³AAWG) as well as Microsoft. 
    2. Enforce DKIM signing and monitor DMARC aggregate reports for anomalous internal-looking unauthenticated traffic.
  5. Strengthen policy, access, and monitoring. 
    1. Restrict egress on port 25 from general user segments; only designated hosts should originate SMTP traffic. 
    2. Use Conditional Access or equivalent policies to block legacy authentication paths that are no longer justified. 
    3. Alert on unexpected internal domain messages lacking authentication.

“You can’t block what you don’t see.” – Ironscales, on visibility as a prerequisite to confident enforcement

These defenses layer on Microsoft’s platform controls, reducing attacker dwell time and shortening the detection-to-remediation window.

How Talos protects against Direct Send abuse 

Talos leverages advanced AI and machine learning to continuously analyze global email telemetry, campaign infrastructure, and evolving social engineering tactics — ensuring our customers stay ahead of emerging threats. Our security platform goes far beyond basic header checks, using behavioral analytics, deep content inspection, and continually adapting models to identify and neutralize sophisticated malicious actors before they target your organization. 

Contact Cisco Talos Incident Response to learn more about everything from proactively securing critical communications and endpoint protection, to security auditing and incident management. 

Acknowledgments: We appreciate the sustained efforts of Microsoft’s engineering and security teams and the broader research community whose transparent publications inform defenders worldwide. 

Cisco Talos Blog – ​Read More

How to configure privacy and security in ChatGPT | Kaspersky official blog

When we interact with artificial intelligence, we often share a significant amount of personal information without giving it much thought. This information can range from dietary preferences and marital status to our home address and even social security number. To ensure the security and privacy of this highly sensitive information, it’s essential to understand exactly what the AI does with your data: where it stores it and whether it uses it for training.

In this post, we take a closer look at the data collection policy of one of the most popular AI apps, ChatGPT, and explain how to configure it to maximize your privacy and security to the extent that OpenAI allows it. This is a long guide — but a comprehensive one.

Table of contents

What data ChatGPT collects about you

OpenAI, the owner and developer of ChatGPT, maintains two privacy policies. The specific policy that applies to users depends on the region where the individual registered their account.

Because these policies are similar, we’ll first cover the common elements, and then discuss the differences.

By default, OpenAI collects an extensive array of personal information and technical data about devices from all ChatGPT users.

  • Account information: name, login credentials, date of birth, billing information, and transaction history
  • User content: prompts as well as uploaded files, images, and audio
  • Communication information: contact details the user provided when reaching out to OpenAI via email or social media
  • Log data: IP address, browser type and settings, request date and time, and details about how the user interacts with OpenAI services
  • Usage data: information about the user’s interaction with OpenAI services, such as content viewed, features used, actions taken, and technical details like country, time zone, device, and connection type
  • Device information: device name, operating system, device identifiers, and the browser used
  • Location information: the region determined by the IP address, rather than the exact location
  • Cookies and similar technologies: necessary for service operation, user authentication, enabling specific features, and ensuring security; the complete list of cookies and their respective retention periods is available here

What exactly OpenAI does with the data it collects from individual users will be discussed in the next part of this post. Here, we indicate the key difference between the privacy policies for users from the European Economic Area (EEA) and those from other regions. European users have the right to object to the use of their personal data for direct marketing. They may also challenge data processing where the company justifies this by its “legitimate interests”, such as internal administration or improvements to services.

Note that OpenAI’s handling of data for business accounts is governed by separate rules that apply to ChatGPT Business and ChatGPT Enterprise subscriptions, as well as API access.

What OpenAI does with your data, and whether ChatGPT is trained on your chats

By default, ChatGPT can train its models on user prompts and the content that users upload. This policy applies to users of both the free version and the Plus and Pro subscriptions.

For business accounts — specifically ChatGPT Enterprise, ChatGPT Business, and API access — training on user data is disabled by default. However, in the case of the API (the application programming interface that connects OpenAI models to other applications and services — the simplest use case being ChatGPT-based customer support bots), the company provides developers with the option to voluntarily enable data transmission.

OpenAI outlines a comprehensive list of primary purposes for processing users’ personal information:

  • To maintain services: to respond to queries and assist users
  • To improve and develop services: to add new features and conduct research
  • To communicate with users: to notify users about changes and events
  • To protect the security of services: to prevent fraud and ensure security
  • To comply with legal obligations: to protect the rights of users, OpenAI, and third parties

The company also states that it may anonymize users’ personal data, though it does not obligate itself to do so. Furthermore, OpenAI reserves the right to transfer user data to third parties — specifically its contractors, partners, or government agencies — if such transfer is necessary for service operation, compliance with the law, or the protection of rights and security.

As the company notes on its website: “In some cases, models may learn from personal information to understand how elements like names and addresses function in language, or to recognize public figures and well-known entities”.

It’s important to note that all user data is processed and stored on OpenAI servers in the United States. Although the level of personal information protection may vary from country to country, the company asserts that it applies uniform security measures to all users.

How to prevent ChatGPT from using your data for AI training

To disable the collection of your data within the app, click your account name in the lower left corner of the screen. Select Settings, then navigate to Data controls.

ChatGPT application settings for macOS

In the Data controls section of the ChatGPT settings, you can disable the use of your prompts for model training

In Data controls, turn off the toggles next to the following items:

  • Improve the model for everyone: disabling this option prevents the use of your prompts and uploads (text, files, images) for model training. Turning this off deactivates the two items below it
  • Include your audio recordings: disabling this option prevents the voice messages from the dictation feature from being used for model training. It’s disabled by default
  • Include your video recordings: this refers to the feature that allows you to include a video stream from your camera during a voice chat in the ChatGPT apps for iOS and Android. This video stream may also be used for model training. You can also disable this option through the web application. It’s disabled by default

By turning off these settings, you prevent the use of new data for model training. However, it’s important to realize: if your prompts or content were already used for training before you disabled the option, it’s impossible to remove them from the trained model.

In this same section, you can delete or archive all chats, and also request to Export Data from your account. This allows you to check what information OpenAI stores about you. A data archive will be sent to your email. Please note that preparing an export may take some time.

The Delete account option is also available here. When your account is deleted, only your personal data is erased; information already used for model training remains.

Beyond the in-app settings, you can manage your data through the OpenAI Privacy Portal. On the portal, you can:

  • Request and download all your data stored by OpenAI
  • Completely delete your custom GPTs, as well as your ChatGPT account and the personal data associated with it
  • Ask OpenAI not to train the AI on your data. If OpenAI approves your request, the AI will stop training on the data you provided before you disabled the Improve the model for everyone option in the settings
  • Sometimes ChatGPT may also train on personal data from public sources — you can submit a request to stop this as well
  • Request the deletion of personal data from specific conversations or prompts

Users from the European Economic Area, the UK, and Switzerland have additional rights under the GDPR. The law is in effect in European countries and regulates how companies collect and use personal data. These rights are not directly displayed on the OpenAI Privacy Portal, but they can be exercised by submitting a request through the portal, or by writing to dsar@openai.com.

How to clear your data from ChatGPT’s memory

Another critical element of privacy protection is ChatGPT’s memory. Unlike chat history, memory allows the model to recall specific details about you, such as your name, interests, preferences, and communication style. This data persists across sessions and is used to personalize the AI’s responses.

To review exactly what the AI remembers within the app, click your account name in the lower-left corner of the screen. Choose Settings, then navigate to Personalization, and select Manage memories.

ChatGPT memory settings under Personalization

Under Personalization, you can manage saved memories, temporarily disable memory, or prevent the model from referring to chat history when responding

This section displays all stored information. If you wish for ChatGPT to forget a specific detail, click the trash can icon next to that memory. Important: for a memory to be completely erased, you need to also delete the specific chat the information was saved from. If you delete only the chat but not the memory, the data remains stored.

In Personalization, you can also configure what data ChatGPT will store about you in future conversations. To do this, you should familiarize yourself with the two types of memory available in the AI: 

  • Saved memories are fixed recollections about you, such as your name, interests, or communication style, which remain in the system until you manually delete them. These are created when you explicitly ask the chat to remember something
  • Chat history is the model’s ability to consider specific details from past conversations to produce more personalized responses. In this case, ChatGPT doesn’t store every detail; instead, it selects only fragments that it deems useful. These types of memories can change and adapt over time

You can disable one or both of these memory types in the ChatGPT settings. To deactivate saved memories, turn off the toggle next to Reference saved memories. To do the same for chat history, turn off the toggle next to Reference chat history.

Disabling these features doesn’t delete previously saved information. The data remains within the system, but the model ceases to reference it in new responses. To completely delete saved memories, go to the Manage memories section as described above.

The Personalization menu in the web-based version of ChatGPT is slightly different, with an additional option: Record mode. This allows the AI to reference transcripts of your past recordings when generating responses. It is possible to disable this feature within the web interface.

In addition, the web version displays a memory usage indicator, such as 87% full, which shows how much space is occupied by memories.

Memory settings in the web version of ChatGPT

The web version of ChatGPT also includes a memory usage indicator under Personalization

For sensitive conversations, you can utilize special Temporary Chats, which the AI won’t remember.

How to use Temporary Chats in ChatGPT

Temporary Chats in ChatGPT are designed to resemble incognito mode in a web browser. If you want to discuss something particularly intimate or confidential with the AI, this mode helps reduce the risks. The chats are not saved in the history, they don’t become part of the memory, and they’re not used to train the models. This last point holds true for all Temporary Chats regardless of the settings selected in the Data controls section, which was discussed above.

Once a session ends, its contents disappear and cannot be recovered. This means Temporary Chats won’t appear in your history, and ChatGPT won’t remember their content. However, OpenAI warns that for security purposes, a copy of the Temporary Chat may be stored on the company’s servers for up to 30 days.

In June 2025, a court ordered OpenAI to preserve all user chats with ChatGPT indefinitely. The decision has already taken effect, and even though the company plans to appeal it, at the time of this publication, OpenAI is compelled to store Temporary Chat data permanently in a special secure repository that “can only be accessed under strict legal protocols”. This largely nullifies the entire concept of “Temporary Chats”, and confirms the old adage, “There’s nothing more permanent than the temporary”.

It’s important to note that when creating a Temporary Chat, you’re starting a conversation with the AI from a blank slate: the chatbot won’t remember any information from its previous chats with you.

To initiate a Temporary Chat in the web-based version of ChatGPT, open a new chat and click the Turn on temporary chat button in the upper right corner of the page. 

Button for launching a Temporary Chat in ChatGPT

In the web version of ChatGPT, the Turn on temporary chat button is located in the upper right corner of the screen, and launches a new chat that won’t save any history or memory

To activate a Temporary Chat in the ChatGPT applications for macOS and Windows, click the AI model selection, and a Temporary Chat toggle will appear at the bottom of the window that opens.

Temporary Chat toggle in the ChatGPT app for macOS

In the ChatGPT app for macOS, Temporary Chat activation is available in the model selection menu

After a Temporary Chat is activated, a special screen will open, which will look slightly different in the desktop and web versions. If you see this screen, it means things are working correctly.

What the Temporary Chat interface looks like in ChatGPT

Temporary Chats are not saved in history, used to update memory, or utilized for model training

Integrating ChatGPT with your device applications

The ChatGPT application includes a feature named Work with Apps. This allows you to interact with the AI beyond the ChatGPT interface itself, extending its functionality into other apps on your device. Specifically, the model can connect to text editors and various development environments.

When you utilize this feature, you can receive AI suggestions and make edits directly within those apps, eliminating the need to copy text to a separate chat window. The core concept is to embed the AI into your existing, familiar workflows.

However, along with the convenience, this feature introduces privacy risks. By connecting to applications, ChatGPT gains access to the content of the files you’re working on. These files may include personal documents, work projects or reports, notes containing confidential information, and other similar content. A portion of this data may be sent to OpenAI’s servers for analysis and response generation.

Therefore, the more applications you grant access to, the higher the probability that sensitive information will be exposed to OpenAI.

At the time of this post, the ChatGPT application for macOS can connect to the following applications:

  • Text-editing and note-taking apps: Apple Notes, Notion, TextEdit, Quip
  • Development environments: Xcode, Script Editor, VS Code (Code, Code Insiders, VSCodium, Cursor, Windsurf)
  • JetBrains IDEs: Android Studio, IntelliJ IDEA, PyCharm, WebStorm, PHPStorm, CLion, Rider, RubyMine, AppCode, GoLand, DataGrip
  • Command-line interfaces: Terminal, iTerm, Warp, Prompt

No comparable list has been published for the Windows version of the app yet.

To check if this feature is currently enabled on your device, click your account name in the lower-left corner of the screen. Select Settings and scroll down to Work with Apps. If the toggle switch next to Enable Work with Apps is on, the feature is turned on.

Work with Apps in the ChatGPT settings

In Work with Apps, you can check if the feature is enabled, and manage connections to installed apps

It’s important to emphasize that enabling the feature doesn’t immediately give the ChatGPT app access to the applications on your device. For ChatGPT to analyze and make changes to content in other apps, the user must explicitly grant a separate permission to each individual app.

If you’re unsure whether you’ve granted ChatGPT any access permissions, you can verify this within the same section. To do this, select Manage Apps. The window that opens will display every app on your device that ChatGPT can potentially interact with. If each app shows Requires permissions underneath it, and Enable Permission on the right, it signifies that ChatGPT currently has no access to any apps.

List of applications for ChatGPT to connect to

Manage Apps displays the apps ChatGPT can potentially access

On macOS, should you choose to grant ChatGPT access to an application, you must also enable the AI app to control your computer via the accessibility features in the system settings. This permission grants ChatGPT extensive extra capabilities: monitoring your activities, managing other applications, simulating keystrokes, and interacting with the user interface. For this very reason, these permissions are granted only manually and require the user’s explicit confirmation.

If you’re concerned about the uncontrolled sharing of your data with ChatGPT, we recommend you disable the Enable Work with Apps toggle switch and forgo using this feature.

However, if you want ChatGPT to be able to work with applications on your device, you should pay attention to the following three features, and configure them according to your personal balance of privacy and convenience:

  • Automatically pair with apps from chat bar allows ChatGPT to automatically connect to supported applications directly from the chat UI without requiring manual selection each time. This speeds up your workflow, but increases the risk that the model will gain access to an application that the user didn’t intend to connect it to
  • Generate suggested edits allows ChatGPT to propose changes to text or code within the connected application, but you’ll need to apply those changes manually. This is the safer option because the user retains control over changes being made
  • Automatically apply suggested edits allows the model to immediately implement changes to files. While this maximizes process automation, it carries additional risks, as modifications could be applied without confirmation — potentially affecting important documents or work projects

How to connect ChatGPT to third-party online services

ChatGPT can also be connected to third-party online services for greater customization: this allows the AI to offer more precise answers and execute tasks better by considering, for example, your email correspondence in Gmail or schedule in Google Calendar.

Unlike Work with Apps, which enables ChatGPT to interact with locally installed applications, this feature involves external online platforms like GitHub, Gmail, Google Calendar, Teams, and many others.

The exact list of available services depends on your plan. The most extensive selection is available in the Business, Enterprise, and Edu tiers; a slightly more limited set is found in Pro; and the roster of services is significantly more modest in Plus. Free users have no access to this feature. Some regional restrictions also apply. You can view the full list for all plans by following the link.

When connecting to third-party services, it’s crucial to understand exactly what data OpenAI will process, how, and for what purposes. If you haven’t disabled training on your data, information received from connected services may also be used for model training. Furthermore, with the memory option enabled, ChatGPT is capable of remembering details obtained from third-party services and utilizing them in future chats.

To view the list of online services available for connection, click your account name in the bottom left corner of the screen. Then, select Settings and, in the Account section, navigate to Connectors.

Connectors available in the ChatGPT settings

Under Connectors, you’ll see services that are already connected, as well as those that are available for activation. To disconnect ChatGPT from a service, select the service and click Disconnect.

Connected service settings in ChatGPT

The settings for each connector allow you to disable ChatGPT’s access to the service, view the date when it was connected, and allow or disallow the automatic use of its data in chats

To mitigate privacy risks, we recommend connecting only the absolutely necessary services, and configuring the memory and data controls within ChatGPT in advance.

How to set up secure login to ChatGPT

If you are a frequent ChatGPT user, the service likely stores significantly more information about you than even social media. Therefore, if your account is compromised, attackers could gain access to data they can use for doxing, blackmail, fraud, theft of funds, and other types of attacks.

To mitigate these risks, it’s essential to set a complex password, and enable two-factor authentication for logging in to ChatGPT. What we have in mind when we say “complex” is a password that meets all of the following criteria:

  • A minimum length of 16 characters
  • A combination of uppercase and lowercase letters, numbers, and special characters
  • Ideally, no dictionary words, no simple sequences like “12345” or “qwerty”, and no repeating characters
  • Uniqueness: a different password for each website or online service
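For illustration only, the criteria above (except uniqueness, which depends on your other accounts) can be sketched as a simple check. This is our own rough sketch, not the validation logic ChatGPT or any password manager actually uses; the thresholds and the tiny blocklist are assumptions:

```python
import re

def is_strong(password: str) -> bool:
    """Rough check against the criteria listed above (illustrative only)."""
    if len(password) < 16:
        return False                      # minimum length of 16 characters
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
            and re.search(r"\d", password) and re.search(r"[^\w\s]", password)):
        return False                      # must mix cases, digits, and specials
    lowered = password.lower()
    for bad in ("12345", "qwerty", "password"):
        if bad in lowered:
            return False                  # simple sequences and dictionary words
    if re.search(r"(.)\1\1", password):
        return False                      # three identical characters in a row
    return True

print(is_strong("qwerty12345"))           # too short and a known sequence
print(is_strong("T7#km9!Rx2@pL4&z"))      # meets all the listed criteria
```

A real blocklist would be far larger (think of the leaked-password corpora that password managers check against), but the structure of the check is the same.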

If your current ChatGPT password doesn’t satisfy these criteria, we strongly recommend you change it. While there’s no option to change the password as such in the ChatGPT settings, you can use the password reset procedure. To do this, log out of your account, select Forgot password? on the login screen, and follow the instructions to set up a new password.

You may be tempted to use the AI model itself to generate a password. However, we don’t recommend this: our research suggests that chatbots are often not very effective at this task, and frequently generate highly insecure passwords. Furthermore, even if you explicitly ask the neural network to create a random password, it won’t be truly random, and will therefore be more vulnerable.

For additional account protection, we also recommend enabling two-factor authentication: navigate to Settings, select Security, and turn on the Multi-factor authentication toggle switch. After this, scan the QR code in an authenticator application, or manually enter the secret key that appears on the screen, and verify the action with the one-time code.
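The one-time codes that authenticator apps generate from that QR code or secret key follow the TOTP standard (RFC 6238). For the curious, here is a minimal sketch of the algorithm using only Python's standard library; the secret in the example is made up, not one issued by ChatGPT:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s steps)."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((t if t is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical secret; a real authenticator stores the key from the QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both sides derive the code from the shared secret and the current time, the server can verify your six-digit code without it ever being transmitted in advance, which is what makes these codes useless to an attacker after the 30-second window expires.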

In the Security section of the web version, you can also log out of all active sessions on all devices, including your current one. We recommend using this feature if you suspect that someone may have gained unauthorized access to your account. Unfortunately, there is no way to view the login history.

Security section in the ChatGPT settings

In the web version’s security settings, you can enable multi-factor authentication, and also log out of ChatGPT on all devices

Final tips to secure your data

When using AI chatbots, it’s important to remember that these applications create new privacy challenges. To protect our data, we now must account for things that were not a concern when setting up accounts in traditional apps and web services, or even in social media and messaging apps. We hope that this comprehensive guide to privacy and security settings in ChatGPT will help you with this tricky task.

Also, please remember to safeguard your ChatGPT account against hijacking. The best way to do this is by using an app that generates and securely stores strong passwords, while also managing two-factor authentication codes.

Kaspersky Password Manager helps you create unique, complex passwords, autofill them when logging in, and generate one-time codes for two-factor authentication. Passwords, one-time codes, and other data encrypted in Kaspersky Password Manager can be synchronized across all your devices. This will help provide robust protection for your account in ChatGPT and other online services.

If you’re looking for more information on the secure use of artificial intelligence, here are some more useful posts:

Kaspersky official blog – ​Read More

Links to porn and online casinos hidden inside corporate websites

If your corporate website’s search engine rankings suddenly drop for no obvious reason, or if clients start complaining that their security software is blocking access or flagging your site as a source of unwanted content, you might be hosting a hidden block of links. These links typically point to shady websites, such as pornography or online casinos. While these links are invisible to regular users, search engines and security solutions scan and factor them in when judging your website’s authority and safety. Today, we explain how these hidden links harm your business, how attackers manage to inject them into legitimate websites, and how to protect your website from this unpleasantness.

Why hidden links are a threat to your business

First and foremost, hidden links to dubious sites can severely damage your site’s reputation and lower its ranking, which will immediately impact your position in search results. This is because search engines regularly scan websites’ HTML code, and are quick to discover any lines of code that attackers may have added. Using hidden blocks is often viewed by search algorithms as a manipulative practice: a hallmark of black hat SEO (also known simply as black SEO). As a result, search engines lower the ranking of any site found hosting such links.

Another reason for a drop in search rankings is that hidden links typically point to websites with a low domain rating, and content irrelevant to your business. Domain rating is a measure of a domain’s authority — reflecting its prestige and the quality of information published on it. If your site links to authoritative industry-specific pages, it tends to rise in search results. If it links to irrelevant, shady websites, it sinks. Furthermore, search engines view hidden blocks as a sign of artificial link building, which, again, penalizes the victim site’s placement in search results.

The most significant technical issue is the manipulation of link equity. Your website has a certain reputation or authority, which influences the ranking of pages you link to. For example, when you post a helpful article on your site, and link to your product page or contacts section, you’re essentially transferring authority from that valuable content to those internal pages. The presence of unauthorized external links siphons off this link equity to external sites. Normally, every internal link helps search engines understand which pages on your site are most important — boosting their position. However, when a significant portion of this equity leaks to dubious external domains, your key pages receive less authority. This ultimately causes them to rank lower than they should — directly impacting your organic traffic and SEO performance.

In the worst cases, the presence of these links can even lead to conflicts with law enforcement, and entail legal liability for distributing illegal content. Depending on local laws, linking to websites with illegal content could result in fines or even the complete blocking of your site by regulatory bodies.

How to check your site for hidden links

The simplest way to check your website for blocks of hidden links is to view its source code. To do this, open the site in a browser and press Ctrl+U (in Windows and Linux) or Cmd+Option+U (in macOS). A new tab will open with the page’s source code.

In the source code, look for the following CSS properties that can indicate hidden elements:

  • display:none
  • visibility:hidden
  • opacity:0
  • height:0
  • width:0
  • position:absolute

These CSS properties make blocks on the page invisible: hidden outright, reduced to zero size, or — in the case of position:absolute combined with large negative offsets — moved far off-screen. Theoretically, these properties can be used for legitimate purposes — such as responsive design, hidden menus, or pop-up windows. However, if they’re applied to links or entire blocks of link code, it could be a strong sign of malicious tampering.

Additionally, you can search the code for keywords related to the content that hidden links most often point to, such as “porn”, “sex”, “casino”, “card”, and the like.
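The manual check described above can also be automated. The sketch below is a simplified illustration using only Python's standard library: it flags source lines that combine an `<a>` tag with hiding CSS or one of the keywords. A real audit would use a proper HTML parser and a keyword list of your own:

```python
import re

# CSS properties and keywords that often accompany hidden link blocks.
HIDING_CSS = ("display:none", "visibility:hidden", "opacity:0",
              "height:0", "width:0", "position:absolute")
KEYWORDS = ("porn", "sex", "casino", "card")

def find_suspicious(html: str) -> list[str]:
    """Return source lines that pair an <a> tag with hiding CSS or shady keywords."""
    hits = []
    for line in html.splitlines():
        # Strip whitespace so "display : none" still matches "display:none".
        compact = re.sub(r"\s+", "", line.lower())
        if "<a" in compact and (any(p in compact for p in HIDING_CSS)
                                or any(k in compact for k in KEYWORDS)):
            hits.append(line.strip())
    return hits

sample = '<div style="display: none"><a href="https://example.com">casino</a></div>'
print(find_suspicious(sample))
```

Expect false positives — a substring match on "card" or "height:0" will also catch legitimate markup — so treat the output as a shortlist for manual review, not a verdict.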

For a deep dive into the specific methods attackers use to hide their link blocks on legitimate sites, check out our separate, more technical Securelist post.

How do attackers inject their links into legitimate sites?

To add an invisible block of links to a website, attackers first need the ability to edit your pages. They can achieve this in several ways.

Compromising administrator credentials

The dark web is home to a whole criminal ecosystem dedicated to buying and selling compromised credentials. Initial-access brokers will provide anyone with credentials tied to virtually any company. Attackers obtain these credentials through phishing attacks or stealer Trojans, or simply by scouring publicly available data breaches from other websites in the hope that employees reuse the same login and password across multiple platforms. Additionally, administrators might use overly simple passwords, or fail to change the default CMS credentials. In these cases, attackers can easily brute-force the login details.

Gaining access to an account with administrator privileges gives criminals broad control over the website. Specifically, they can edit the HTML code, or install their own malicious plugins.

Exploiting CMS vulnerabilities

We frequently discuss various vulnerabilities in CMS platforms and plugins on our blog. Attackers can leverage these security flaws to edit template files (such as header.php, footer.php, or index.php), or directly insert blocks of hidden links into arbitrary pages across the site.

Compromising the hosting provider

In some cases, it’s the hosting company that gets compromised rather than the website itself. If the server hosting your website code is poorly protected, attackers can breach it and gain control over the site. Another common scenario concerns a server that hosts sites for many different clients. If access privileges are configured incorrectly, compromising one client can give criminals the ability to reach other websites hosted on that same server.

Malicious code blocks in free templates

Not all webmasters write their own code. Budget-conscious and unwary web designers might try to find free templates online and simply customize them to fit the corporate style. The code in these templates can also contain covert blocks inserted by malicious actors.

How do you protect your site from hidden links?

To secure your website against the injection of hidden links and its associated consequences, we recommend taking the following steps:

  • Avoid using questionable third-party templates, themes, or any other unverified solutions to build your website.
  • Promptly update both your CMS engine and all associated themes and plugins to their latest versions.
  • Routinely audit your plugins and themes, and immediately delete the ones you don’t use.
  • Regularly create backups of both your website and database. This ensures you can quickly restore your website’s operation in the event of compromise.
  • Check for unnecessary user accounts and excessive access privileges.
  • Promptly delete outdated or unused accounts, and establish only the minimum necessary privileges for active ones.
  • Establish a strong password policy and mandatory two-factor authentication for all accounts with admin privileges.
  • Conduct regular training for employees on basic cybersecurity principles. The Kaspersky Automated Security Awareness Platform can help you automate this process.

Kaspersky official blog – ​Read More

Ransomware attacks and how victims respond

Ransomware attacks and how victims respond

Welcome to this week’s edition of the Threat Source newsletter. 

I count myself fortunate that I have never been on the receiving end of a ransomware attack. My experiences have been from research and response, never as a victim. It’s a tough scenario: One day you are working or minding your own business when suddenly, threatening notes appear on desktops and systems simply stop working. So much of our survival as humans is tied to our livelihoods, so the amount of stress incurred can be severe. I get it, truly.  

Consequently, I am endlessly academically fascinated by stress responses and how humans… well… human during moments of adversity. A ransomware attack most certainly qualifies as adverse, and my sympathies are with you if you’ve ever had to endure one. But there’s a science to both the personal response and the business response, and to their impacts writ large. 

Over the past year, excellent research has been published on these facets of response to help answer some of these questions, and naturally I dove right in. One of the things that stuck out to me was the impact of these attacks on small businesses as a victim segment. A notable quote from a small business in the U.K. government’s “The experiences and impacts of ransomware attacks on individuals and organisations” states: 

“I’ve started to rebuild, using personal funds and living off personal funds for the last 2 or 3 years… I’ve got 0 savings left… It’s had a total impact on me… I’ve gone from probably nearly a £250,000 business down to about a £20,000 business.”

The impacts described in this quote aren’t unique. Anecdotally, I can tell you small businesses make up a large swath of ransomware victims. It makes sense: small victims likely pay out less, but they also tend to have lower security standards and less security knowledge to defend themselves with. They also lack the cash reserves, legal teams, and dedicated IT security staff that a mid-sized or larger business has. Simply put, they are disproportionately vulnerable.  

So, what about the impacts to health and wellbeing? What, if anything, do we do? And why the hell should any business even care? To paraphrase the Royal United Services Institute (RUSI) report ‘Your Data is Stolen and Encrypted’: The Ransomware Victim Experience, ransomware victims experience trauma, exhaustion, and emotional harm that rival — and often outlast — the financial or operational damage. You can survive the battle, recovering your day-to-day operations from the immediate harm of a cyber attack, only to lose the war as your employees struggle to process the trauma of the event, sapping your business’s ability to compete and survive.  

A cyber attack is both a technical and psychological crisis. Business leadership would be wise to understand this. Lead with empathy and remember that your employees look to you for leadership, especially in these incidents. People follow calm, not commands. Have an incident response plan for how you respond to the technical crises, but also for how to take care of your people. You might find yourself that much stronger at the end, both with a company that handles adversity and employees that are cared for. 

The one big thing 

Cisco Talos discovered a new malware campaign linked to the North Korean threat group Famous Chollima, which targets job seekers with trojanized applications to steal credentials and cryptocurrency. The campaign features two primary tools, BeaverTail and OtterCookie, whose functionalities are merging and now include new modules for keylogging, screenshot capture, and clipboard monitoring. The attackers deliver these threats through malicious NPM packages and even a fake VS Code extension, making detection and prevention more challenging. 

Why do I care? 

This campaign highlights how attackers use social engineering and software supply chain attacks to compromise individuals and organizations, not just targeting companies directly. If you or your organization use development tools, npm packages, or receive unsolicited job offers, you could be at risk of credential or cryptocurrency theft. 

So now what? 

Be vigilant when installing NPM packages, browser extensions, or software from unofficial sources, and verify the legitimacy of job offer communications. Use layered security solutions, such as endpoint protection, multi-factor authentication, and network monitoring tools like those recommended by Cisco, to detect and block these threats. 

Top security headlines of the week 

Harvard is first confirmed victim of Oracle EBS zero-day hack 
Harvard was listed on the data leak website dedicated to victims of the Cl0p ransomware on October 12. The hackers have made available over 1.3 TB of archive files that allegedly contain Harvard data. (SecurityWeek)

Two new Windows zero-days exploited in the wild 
Microsoft released fixes for 183 security flaws spanning its products, including three vulnerabilities that have come under active exploitation in the wild. One affects every version ever shipped. (The Hacker News)

Officials crack down on Southeast Asia cybercrime networks, seize $15B 
The cryptocurrency seizure and sanctions targeting the Prince Group, associates and affiliated businesses mark the most extensive action taken against cybercrime operations in the region to date. (CyberScoop)

Extortion group leaks millions of records from Salesforce hacks 
The leak occurred days after the group, an offshoot of the notorious Lapsus$, Scattered Spider, and ShinyHunters hackers, claimed the theft of data from 39 Salesforce customers, threatening to leak it unless the CRM provider pays a ransom. (SecurityWeek)

Can’t get enough Talos? 

Humans of Talos: Laura Faria and empathy on the front lines 
What does it take to lead through chaos and keep organizations safe in the digital age? Amy sits down with Laura Faria, Incident Commander at Cisco Talos Incident Response, to explore a career built on empathy, collaboration, and a passion for cybersecurity. 

Beers with Talos: Two Marshalls, one podcast 
Talos’ Vice President Christopher Marshall (the “real Marshall,” much to Joe’s displeasure) joins Hazel, Bill, and Joe for a very real conversation about leading people when the world won’t stop moving. 

Upcoming events where you can find Talos 

Most prevalent malware files from Talos telemetry over the past week 

SHA256: d933ec4aaf7cfe2f459d64ea4af346e69177e150df1cd23aad1904f5fd41f44a 
MD5: 1f7e01a3355b52cbc92c908a61abf643  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=d933ec4aaf7cfe2f459d64ea4af346e69177e150df1cd23aad1904f5fd41f44a  
Example Filename: cleanup.bat  
Detection Name: W32.D933EC4AAF-90.SBX.TG 

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507  
MD5: 2915b3f8b703eb744fc54c81f4a9c67f  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507  
Example Filename: e74d9994a37b2b4c693a76a580c3e8fe_1_Exe.exe  
Detection Name: Win.Worm.Coinminer::1201 

SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974  
MD5: aac3165ece2959f39ff98334618d10d9  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974  
Example Filename: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974.exe 
Detection Name: W32.Injector:Gen.21ie.1201  

SHA256: 41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610  
MD5: 85bbddc502f7b10871621fd460243fbc  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610  
Example Filename: 85bbddc502f7b10871621fd460243fbc.exe  
Detection Name: W32.41F14D86BC-100.SBX.TG 

SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91  
MD5: 7bdbd180c081fa63ca94f9c22c457376  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91  
Example Filename: e74d9994a37b2b4c693a76a580c3e8fe_3_Exe.exe  
Detection Name: Win.Dropper.Miner::95.sbx.tg

Cisco Talos Blog – ​Read More

Laura Faria: Empathy on the front lines

Laura Faria: Empathy on the front lines

What does it take to lead through chaos and keep organizations safe in the digital age? This week, Amy sat down with Laura Faria, an incident commander at Cisco Talos Incident Response, to explore a career built on empathy, collaboration, and a passion for cybersecurity.

Laura opens up about her journey through various cybersecurity roles, her leap into incident response, and what it feels like to support customers during their toughest moments — including high-stakes situations impacting critical infrastructure.

Amy Ciminnisi: Laura, it’s great to have you on. You’re an incident commander, like Alex from last episode. When did your time at Talos start, and what did your journey here look like?

Laura Faria: My entire career, I’ve worked in many large cybersecurity vendors – endpoint vendors, firewall vendors, RAV vendors… So I’ve been in a lot of different roles, but they were mostly in sales. I was actually a Cisco employee prior to joining Talos IR. I’ve been at Cisco for a little over a year now, and Talos is one of the best places to work in Cisco, in my opinion. They have a really high reputation because everyone knows the quality of research that Talos provides our customers with.

I’d never been an incident commander before, so it was a really new position to me. But it was definitely something I was interested in, and the more I learned about what the role entailed, the more I was excited to pursue it.

AC: This is a very high-pressure role, and I’m sure you have to deal with a lot of chaos, a lot of moving parts. How do you stay focused and motivated to keep going when you’re tackling these really serious incidents for our clients?

LF: A common phrase in Talos IR is “It’s never a good day when an incident happens.” During very serious incidents, being there to help the customer feels really good, especially if you’re a people person, and especially if you’re empathetic and in tune with people’s emotions.

Recently, I had a very difficult incident where a large health care facility was seeing a lot of outages in different locations throughout the nation. Every time we saw a site outage, it was devastating because we knew what that meant. We actually had people’s lives in our hands. Although it’s a very difficult job, taking the time to look at the big picture and understand the importance of your job is really what keeps you going.


Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos!

Cisco Talos Blog – ​Read More