Microsoft 365 Exchange Online’s Direct Send is designed to solve an enterprise-scale operational challenge: certain devices and legacy applications, such as multifunction printers, scanners, building systems, and older line‑of‑business apps, need to send email into the tenant but lack the ability to authenticate properly. Direct Send preserves these business workflows by allowing messages from such appliances to bypass more rigorous authentication and security checks.
Unfortunately, Direct Send’s ability to let content bypass standard security checks makes it an attractive target for exploitation. Cisco Talos has observed increased activity by malicious actors leveraging Direct Send in phishing campaigns and business email compromise (BEC) attacks. Public research from the broader community, including reporting by Varonis, Abnormal Security, Ironscales, Proofpoint, Barracuda, Mimecast, Arctic Wolf, and others, agrees with Cisco Talos’s findings: adversaries have actively targeted corporations through Direct Send in recent months.
Microsoft, for its part, has already introduced a Public Preview of the RejectDirectSend control and signaled future improvements, such as Direct Send-specific usage reports and an eventual “default‑off” posture for new tenants. These ongoing enhancements, layered with existing security controls, are helping organizations strengthen their defenses while still supporting the business-critical workflows that Direct Send was designed to enable.
How Direct Send is exploited
Direct Send abuse is the opportunistic exploitation of a trusted pathway. Adversaries emulate device or application traffic and send unauthenticated messages that appear to originate from internal accounts and trusted systems. The research cited above describes recurring techniques, such as:
Impersonating internal users, executives, or IT help desks (e.g., observed by Abnormal and Varonis)
Business-themed lures, such as task approvals, voicemail or service notifications, and wire or payment prompts (e.g., Proofpoint’s observations about social engineering payloads)
QR codes embedded in PDFs and low-content or empty-body messages carrying obfuscated attachments used to bypass traditional content filters and drive the user to credential harvesting pages (e.g., highlighted in Ironscales, Barracuda, and Mimecast reporting)
Use of trusted Exchange infrastructure and legitimate SMTP flows to inherit implicit trust and decrease payload scrutiny
“What happens when a feature built for convenience becomes an attacker’s perfect disguise?” – Abnormal Security, framing the dual‑use nature of Direct Send.
Legitimate dependencies still exist. Many enterprises have not fully migrated older scanning or workflow systems to authenticated submission (SMTP AUTH) or to partner connectors. A hasty blanket disablement without visibility and change planning can disrupt invoice processing, document distribution, or facilities notifications. That’s precisely why Microsoft is building reporting to help administrators sequence risk reduction without accidental business impact.
Examples
Figure 1. Spoofed American Express dispute (left), fake ACH payment notice (right).
The examples in Figure 1 (victim information redacted) show blatant attacks that were nonetheless treated as internal messages, and therefore bypassed the sender verification that could have convicted these threats.
Direct Send bypasses sender verification
There are three key elements to email domain sender verification:
DomainKeys Identified Mail (DKIM) is a cryptographic signature over message headers and content. It verifies that the message was sent by a server holding a key authorized by the owner of the sending domain.
Sender Policy Framework (SPF) specifies a list of IP ranges that are authorized to send on behalf of the domain.
Domain-based Message Authentication, Reporting and Conformance (DMARC) defines what to do with a domain’s noncompliant mail when it lacks a DKIM signature and SPF authorization. Senders can choose a DMARC policy that instructs recipients to reject this mail. This is increasingly common, especially with banks.
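To make these checks concrete, here is a minimal Python sketch of how a receiver interprets the policy portions of SPF and DMARC TXT records. The record strings are hypothetical examples rather than live DNS lookups, and a real verifier would of course fetch them from DNS and evaluate every mechanism.

```python
# Minimal sketch: interpret the "all" mechanism in an SPF record and the
# p= policy in a DMARC record. Record strings are hypothetical examples;
# a real verifier would fetch them via DNS TXT lookups.

def spf_all_qualifier(spf_record: str) -> str:
    """Return the meaning of the qualifier on the trailing 'all' mechanism."""
    for term in spf_record.split():
        if term.endswith("all"):
            qualifier = term[:-3] or "+"
            return {"+": "pass", "-": "hard fail",
                    "~": "soft fail", "?": "neutral"}[qualifier]
    return "no 'all' mechanism"

def dmarc_policy(dmarc_record: str) -> str:
    """Return the p= policy (none, quarantine, or reject)."""
    tags = dict(
        tag.strip().split("=", 1)
        for tag in dmarc_record.split(";")
        if "=" in tag
    )
    return tags.get("p", "none")

print(spf_all_qualifier("v=spf1 ip4:203.0.113.0/24 include:spf.protection.outlook.com ~all"))
# soft fail
print(dmarc_policy("v=DMARC1; p=reject; rua=mailto:dmarc@example.com"))
# reject
```

A bank publishing `p=reject`, as described above, is instructing every receiver to drop mail that fails both checks outright.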
Had the previous examples in Figure 1 been scanned with DMARC, DKIM, and SPF, they would have been rejected. However, Direct Send prevents this sort of inspection.
Mitigation and recommendations
With Direct Send abuse becoming more prevalent, it is critical for organizations to review their security posture related to Direct Send. Aligning with Microsoft’s guidance and community findings, Talos recommends:
Disable or restrict Direct Send where feasible.
Inventory current reliance. Forthcoming Microsoft reporting should streamline this, but in the meantime, create or review internal device inventories, SPF records, and connector configurations.
Enable Set-OrganizationConfig -RejectDirectSend $true once you’ve validated mailflows for legitimate internal traffic.
Migrate devices to authenticated SMTP.
Prefer authenticated SMTP client submission (port 587) for devices and applications that can store modern credentials or leverage app-specific identities (Microsoft documentation).
Use SMTP relays with tightly scoped source IP restrictions only for devices that are unable to use authenticated submission.
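As an illustration of the authenticated-submission pattern recommended above, here is a minimal Python sketch using the standard smtplib. The host, account, and password names are placeholders, and a real deployment would pull credentials from a secure store rather than pass them in plain text.

```python
# Sketch of authenticated SMTP client submission (port 587), the pattern
# devices should use instead of unauthenticated Direct Send. The host,
# account, and password values used with it are placeholders.
import smtplib
import ssl
from email.message import EmailMessage

def build_scan_message(sender: str, recipient: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Scanned document"
    msg.set_content("See attached scan.")
    return msg

def send_authenticated(host: str, user: str, password: str, msg: EmailMessage) -> None:
    context = ssl.create_default_context()
    with smtplib.SMTP(host, 587) as server:
        server.starttls(context=context)   # encrypt the session before credentials
        server.login(user, password)       # authenticated submission (SMTP AUTH)
        server.send_message(msg)

# Example (placeholder values):
# send_authenticated("smtp.office365.com", "scanner@example.com", "app-password",
#                    build_scan_message("scanner@example.com", "ap@example.com"))
```

Because the sender authenticates and the session is encrypted, these messages pass the tenant's normal verification path rather than the Direct Send bypass.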
Implement partner/inbound connectors for approved senders.
Establish certificate or IP-based partner connectors for third-party services legitimately sending with your accepted domains.
Strengthen authentication and alignment.
Maintain SPF records that list only your required authorized sending IPs; adopt Soft Fail (~all) per guidance from the Messaging, Malware and Mobile Anti-Abuse Working Group (M³AAWG) as well as Microsoft.
Enforce DKIM signing and monitor DMARC aggregate reports for anomalous internal-looking unauthenticated traffic.
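Monitoring DMARC aggregate reports can be partially automated. The following Python sketch, using only the standard library, flags report records where internal-looking mail failed both SPF and DKIM; the embedded XML is a simplified hypothetical fragment of the standard aggregate-report format, not output from a real receiver.

```python
# Sketch: scan a DMARC aggregate (RUA) report for internal-looking mail that
# failed both SPF and DKIM, the typical signature of unauthenticated Direct
# Send traffic. The XML below is a simplified hypothetical fragment.
import xml.etree.ElementTree as ET

REPORT = """<feedback>
  <record>
    <row>
      <source_ip>198.51.100.7</source_ip>
      <count>12</count>
      <policy_evaluated><dkim>fail</dkim><spf>fail</spf></policy_evaluated>
    </row>
    <identifiers><header_from>example.com</header_from></identifiers>
  </record>
</feedback>"""

def unauthenticated_sources(report_xml: str, our_domain: str):
    """Return (source_ip, message_count) pairs that spoofed our domain."""
    suspicious = []
    root = ET.fromstring(report_xml)
    for record in root.iter("record"):
        header_from = record.findtext("identifiers/header_from", "")
        dkim = record.findtext("row/policy_evaluated/dkim", "")
        spf = record.findtext("row/policy_evaluated/spf", "")
        if header_from == our_domain and dkim == "fail" and spf == "fail":
            ip = record.findtext("row/source_ip", "")
            count = int(record.findtext("row/count", "0"))
            suspicious.append((ip, count))
    return suspicious

print(unauthenticated_sources(REPORT, "example.com"))
# [('198.51.100.7', 12)]
```

Any IP that repeatedly appears here, yet is not in your device inventory or SPF record, deserves immediate investigation.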
Strengthen policy, access, and monitoring.
Restrict egress on port 25 from general user segments; only designated hosts should originate SMTP traffic.
Use Conditional Access or equivalent policies to block legacy authentication paths that are no longer justified.
Alert on unexpected internal domain messages lacking authentication.
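A simple version of such an alert can be sketched in Python: if a message's From domain is one of your accepted domains but its Authentication-Results header shows no passing SPF or DKIM verdict, raise a flag. The header strings below are simplified examples, and production parsing should follow RFC 8601.

```python
# Sketch: flag messages whose From domain is internal but whose
# Authentication-Results header shows no passing SPF or DKIM verdict.
# Header strings are simplified examples; see RFC 8601 for the full syntax.
import re

INTERNAL_DOMAINS = {"example.com"}  # placeholder for your accepted domains

def should_alert(from_addr: str, auth_results: str) -> bool:
    domain = from_addr.rsplit("@", 1)[-1].lower()
    if domain not in INTERNAL_DOMAINS:
        return False  # external senders are handled by normal inbound policy
    verdicts = dict(re.findall(r"(spf|dkim)=(\w+)", auth_results.lower()))
    return verdicts.get("spf") != "pass" and verdicts.get("dkim") != "pass"

print(should_alert("ceo@example.com", "mx.example.com; spf=fail; dkim=none"))   # True
print(should_alert("ceo@example.com", "mx.example.com; spf=pass; dkim=pass"))   # False
```

Wired into a transport rule or SIEM query, this single condition surfaces exactly the internal-looking, unauthenticated traffic that Direct Send abuse produces.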
“You can’t block what you don’t see.” – Ironscales, on visibility as a prerequisite to confident enforcement
These defenses layer on Microsoft’s platform controls, reducing attacker dwell time and shortening the detection-to-remediation window.
How Talos protects against Direct Send abuse
Talos leverages advanced AI and machine learning to continuously analyze global email telemetry, campaign infrastructure, and evolving social engineering tactics — ensuring our customers stay ahead of emerging threats. Our security platform goes far beyond basic header checks, using behavioral analytics, deep content inspection, and continually adapting models to identify and neutralize sophisticated malicious actors before they target your organization.
Contact Cisco Talos Incident Response to learn more about everything from proactively securing critical communications and endpoint protection, to security auditing and incident management.
Acknowledgments: We appreciate the sustained efforts of Microsoft’s engineering and security teams and the broader research community whose transparent publications inform defenders worldwide.
Published 2025-10-21: Reducing abuse of Microsoft 365 Exchange Online’s Direct Send
When we interact with artificial intelligence, we often share a significant amount of personal information without giving it much thought. This information can range from dietary preferences and marital status to our home address and even social security number. To ensure the security and privacy of this highly sensitive information, it’s essential to understand exactly what the AI does with your data: where it stores it and whether it uses it for training.
In this post, we take a closer look at the data collection policy of one of the most popular AI apps, ChatGPT, and explain how to configure it to maximize your privacy and security to the extent that OpenAI allows it. This is a long guide — but a comprehensive one.
OpenAI, the owner and developer of ChatGPT, maintains two privacy policies. The specific policy that applies depends on the region where the individual registered their account.
Because these policies are similar, we’ll first cover the common elements, and then discuss the differences.
By default, OpenAI collects an extensive array of personal information and technical data about devices from all ChatGPT users.
Account information: name, login credentials, date of birth, billing information, and transaction history
User content: prompts as well as uploaded files, images, and audio
Communication information: contact details the user provided when reaching out to OpenAI via email or social media
Log data: IP address, browser type and settings, request date and time, and details about how the user interacts with OpenAI services
Usage data: information about the user’s interaction with OpenAI services, such as content viewed, features used, actions taken, and technical details like country, time zone, device, and connection type
Device information: device name, operating system, device identifiers, and the browser used
Location information: the region determined by the IP address, rather than the exact location
Cookies and similar technologies: necessary for service operation, user authentication, enabling specific features, and ensuring security; the complete list of cookies and their respective retention periods is available here
What exactly OpenAI does with the data it collects from individual users will be discussed in the next part of this post. Here, we indicate the key difference between the privacy policies for users from the European Economic Area (EEA) and those from other regions. European users have the right to object to the use of their personal data for direct marketing. They may also challenge data processing where the company justifies this by its “legitimate interests”, such as internal administration or improvements to services.
Note that OpenAI’s handling of data for business accounts is governed by separate rules that apply to ChatGPT Business and ChatGPT Enterprise subscriptions, as well as API access.
What OpenAI does with your data, and whether ChatGPT is trained on your chats
By default, ChatGPT can train its models on user prompts and the content that users upload. This policy applies to users of both the free version and the Plus and Pro subscriptions.
For business accounts — specifically ChatGPT Enterprise, ChatGPT Business, and API access — training on user data is disabled by default. However, in the case of the API (the application programming interface that connects OpenAI models to other applications and services — the simplest use case being ChatGPT-based customer support bots), the company provides developers with the option to voluntarily enable data transmission.
OpenAI outlines a comprehensive list of primary purposes for processing users’ personal information:
To maintain services: to respond to queries and assist users
To improve and develop services: to add new features and conduct research
To communicate with users: to notify users about changes and events
To protect the security of services: to prevent fraud and ensure security
To comply with legal obligations: to protect the rights of users, OpenAI, and third parties
The company also states that it may anonymize users’ personal data, though it does not obligate itself to do so. Furthermore, OpenAI reserves the right to transfer user data to third parties — specifically its contractors, partners, or government agencies — if such transfer is necessary for service operation, compliance with the law, or the protection of rights and security.
As the company notes on its website: “In some cases, models may learn from personal information to understand how elements like names and addresses function in language, or to recognize public figures and well-known entities”.
It’s important to note that all user data is processed and stored on OpenAI servers in the United States. Although the level of personal information protection may vary from country to country, the company asserts that it applies uniform security measures to all users.
How to prevent ChatGPT from using your data for AI training
To disable the collection of your data within the app, click your account name in the lower left corner of the screen. Select Settings, then navigate to Data controls.
In the Data controls section of the ChatGPT settings, you can disable the use of your prompts for model training
In Data controls, turn off the toggles next to the following items:
Improve the model for everyone: disabling this option prevents the use of your prompts and uploads (text, files, images) for model training. Turning this off deactivates the two items below it
Include your audio recordings: disabling this option prevents the voice messages from the dictation feature from being used for model training. It’s disabled by default
Include your video recordings: this refers to the feature that allows you to include a video stream from your camera during a voice chat in the ChatGPT apps for iOS and Android. This video stream may also be used for model training. You can also disable this option through the web application. It’s disabled by default
By turning off these settings, you prevent the use of new data for model training. However, it’s important to realize: if your prompts or content were already used for training before you disabled the option, it’s impossible to remove them from the trained model.
In this same section, you can delete or archive all chats, and also request to Export Data from your account. This allows you to check what information OpenAI stores about you. A data archive will be sent to your email. Please note that preparing an export may take some time.
The Delete account option is also available here. When your account is deleted, only your personal data is erased; information already used for model training remains.
Beyond the in-app settings, you can manage your data through the OpenAI Privacy Portal. On the portal, you can:
Request and download all your data stored by OpenAI
Completely delete your custom GPTs, as well as your ChatGPT account and the personal data associated with it
Ask OpenAI not to train the AI on your data. If OpenAI approves your request, the AI will stop training on the data you provided before you disabled the Improve the model for everyone option in the settings
Sometimes ChatGPT may also train on personal data from public sources — you can submit a request to stop this as well
Request the deletion of personal data from specific conversations or prompts
Users from the European Economic Area, the UK, and Switzerland have additional rights under the GDPR. The law is in effect in European countries and regulates how companies collect and use personal data. These rights are not directly displayed on the OpenAI Privacy Portal, but they can be exercised by submitting a request through the portal, or by writing to dsar@openai.com.
How to clear your data from ChatGPT’s memory
Another critical element of privacy protection is ChatGPT’s memory. Unlike chat history, memory allows the model to recall specific details about you, such as your name, interests, preferences, and communication style. This data persists across sessions and is used to personalize the AI’s responses.
To review exactly what the AI remembers within the app, click your account name in the lower-left corner of the screen. Choose Settings, then navigate to Personalization, and select Manage memories.
Under Personalization, you can manage saved memories, temporarily disable memory, or prevent the model from referring to chat history when responding
This section displays all stored information. If you wish for ChatGPT to forget a specific detail, click the trash can icon next to that memory. Important: for a memory to be completely erased, you need to also delete the specific chat the information was saved from. If you delete only the chat but not the memory, the data remains stored.
In Personalization, you can also configure what data ChatGPT will store about you in future conversations. To do this, you should familiarize yourself with the two types of memory available in the AI:
Saved memories are fixed recollections about you, such as your name, interests, or communication style, which remain in the system until you manually delete them. These are created when you explicitly ask the chat to remember something
Chat history is the model’s ability to consider specific details from past conversations to produce more personalized responses. In this case, ChatGPT doesn’t store every detail; instead, it selects only fragments that it deems useful. These types of memories can change and adapt over time
You can disable one or both of these memory types in the ChatGPT settings. To deactivate saved memories, turn off the toggle next to Reference saved memories. To do the same for chat history, turn off the toggle next to Reference chat history.
Disabling these features doesn’t delete previously saved information. The data remains within the system, but the model ceases to reference it in new responses. To completely delete saved memories, go to the Manage memories section as described above.
The Personalization menu in the web-based version of ChatGPT is slightly different, with an additional option: Record mode. This allows the AI to reference transcripts of your past recordings when generating responses. It is possible to disable this feature within the web interface.
In addition, the web version displays a memory usage indicator, such as 87% full, which shows how much space is occupied by memories.
The web version of ChatGPT also includes a memory usage indicator under Personalization
For sensitive conversations, you can utilize special Temporary Chats, which the AI won’t remember.
How to use Temporary Chats in ChatGPT
Temporary Chats in ChatGPT are designed to resemble incognito mode in a web browser. If you want to discuss something particularly intimate or confidential with the AI, this mode helps reduce the risks. The chats are not saved in the history, they don’t become part of the memory, and they’re not used to train the models. This last point holds true for all Temporary Chats regardless of the settings selected in the Data controls section, which was discussed above.
Once a session ends, its contents disappear and cannot be recovered. This means Temporary Chats won’t appear in your history, and ChatGPT won’t remember their content. However, OpenAI warns that for security purposes, a copy of the Temporary Chat may be stored on the company’s servers for up to 30 days.
In June 2025, a court ordered OpenAI to preserve all user chats with ChatGPT indefinitely. The decision has already taken effect, and even though the company plans to appeal it, at the time of this publication, OpenAI is compelled to store Temporary Chat data permanently in a special secure repository that “can only be accessed under strict legal protocols”. This largely nullifies the entire concept of “Temporary Chats”, and confirms the old adage: “There’s nothing more permanent than the temporary”.
It’s important to note that when creating a Temporary Chat, you’re starting a conversation with the AI from a blank slate: the chatbot won’t remember any information from its previous chats with you.
To initiate a Temporary Chat in the web-based version of ChatGPT, open a new chat and click the Turn on temporary chat button in the upper right corner of the page.
In the web version of ChatGPT, the Turn on temporary chat button is located in the upper right corner of the screen, and launches a new chat that won’t save any history or memory
To activate a Temporary Chat in the ChatGPT applications for macOS and Windows, click the AI model selection, and a Temporary Chat toggle will appear at the bottom of the window that opens.
In the ChatGPT app for macOS, Temporary Chat activation is available in the model selection menu
After a Temporary Chat is activated, a special screen will open, which will look slightly different in the desktop and web versions. If you see this screen, it means things are working correctly.
Temporary Chats are not saved in history, used to update memory, or utilized for model training
Integrating ChatGPT with your device applications
The ChatGPT application includes a feature named Work with Apps. This allows you to interact with the AI beyond the ChatGPT interface itself, extending its functionality into other apps on your device. Specifically, the model can connect to text editors and various development environments.
When you utilize this feature, you can receive AI suggestions and make edits directly within those apps, eliminating the need to copy text to a separate chat window. The core concept is to embed the AI into your existing, familiar workflows.
However, along with the convenience, this feature introduces privacy risks. By connecting to applications, ChatGPT gains access to the content of the files you’re working on. These files may include personal documents, work projects or reports, notes containing confidential information, and other similar content. A portion of this data may be sent to OpenAI’s servers for analysis and response generation.
Therefore, the more applications you grant access to, the higher the probability that sensitive information will be exposed to OpenAI.
OpenAI publishes a list of applications supported by Work with Apps in the macOS app; no comparable list has been published for the Windows version of the app yet.
To check if this feature is currently enabled on your device, click your account name in the lower-left corner of the screen. Select Settings and scroll down to Work with Apps. If the toggle switch next to Enable Work with Apps is on, the feature is turned on.
In Work with Apps, you can check if the feature is enabled, and manage connections to installed apps
It’s important to emphasize that enabling the feature doesn’t immediately give the ChatGPT app access to the applications on your device. For ChatGPT to analyze and make changes to content in other apps, the user must explicitly grant a separate permission to each individual app.
If you’re unsure whether you’ve granted ChatGPT any access permissions, you can verify this within the same section. To do this, select Manage Apps. The window that opens will display every app on your device that ChatGPT can potentially interact with. If each app shows Requires permissions underneath it, and Enable Permission on the right, it signifies that ChatGPT currently has no access to any apps.
Manage Apps displays the apps ChatGPT can potentially access
On macOS, should you choose to grant ChatGPT access to an application, you must also enable the AI app to control your computer via the accessibility features in the system settings. This permission grants ChatGPT extensive extra capabilities: monitoring your activities, managing other applications, simulating keystrokes, and interacting with the user interface. For this very reason, these permissions are granted only manually and require the user’s explicit confirmation.
If you’re concerned about the uncontrolled sharing of your data with ChatGPT, we recommend you disable the Enable Work with Apps toggle switch and forgo using this feature.
However, if you want ChatGPT to be able to work with applications on your device, you should pay attention to the following three features, and configure them according to your personal balance of privacy and convenience:
Automatically pair with apps from chat bar allows ChatGPT to automatically connect to supported applications directly from the chat UI without requiring manual selection each time. This speeds up your workflow, but increases the risk that the model will gain access to an application that the user didn’t intend to connect it to
Generate suggested edits allows ChatGPT to propose changes to text or code within the connected application, but you’ll need to apply those changes manually. This is the safer option because the user retains control over changes being made
Automatically apply suggested edits allows the model to immediately implement changes to files. While this maximizes process automation, it carries additional risks, as modifications could be applied without confirmation — potentially affecting important documents or work projects
How to connect ChatGPT to third-party online services
ChatGPT can also be connected to third-party online services for greater customization: this allows the AI to offer more precise answers and execute tasks better by considering, for example, your email correspondence in Gmail or schedule in Google Calendar.
Unlike Work with Apps, which enables ChatGPT to interact with locally installed applications, this feature involves external online platforms like GitHub, Gmail, Google Calendar, Teams, and many others.
The exact list of available services depends on your plan. The most extensive selection is available in the Business, Enterprise, and Edu tiers; a slightly more limited set is found in Pro; and the roster of services is significantly more modest in Plus. Free users have no access to this feature. Some regional restrictions also apply. You can view the full list for all plans by following the link.
When connecting to third-party services, it’s crucial to understand exactly what data OpenAI will process, how, and for what purposes. If you haven’t disabled training on your data, information received from connected services may also be used for model training. Furthermore, with the memory option enabled, ChatGPT is capable of remembering details obtained from third-party services and utilizing them in future chats.
To view the list of online services available for connection, click your account name in the bottom left corner of the screen. Then, select Settings and, in the Account section, navigate to Connectors.
Connectors available in the ChatGPT settings
Under Connectors, you’ll see services that are already connected, as well as those that are available for activation. To disconnect ChatGPT from a service, select the service and click Disconnect.
The settings for each connector allow you to disable ChatGPT’s access to the service, view the date when it was connected, and allow or disallow the automatic use of its data in chats
To mitigate privacy risks, we recommend connecting only the absolutely necessary services, and configuring the memory and data controls within ChatGPT in advance.
How to set up secure login to ChatGPT
If you are a frequent ChatGPT user, the service likely stores significantly more information about you than even social media. Therefore, if your account is compromised, attackers could gain access to data they can use for doxing, blackmail, fraud, theft of funds, and other types of attacks.
To mitigate these risks, it’s essential to set a complex password, and enable two-factor authentication for logging in to ChatGPT. What we have in mind when we say “complex” is a password that meets all of the following criteria:
A minimum length of 16 characters
A combination of uppercase and lowercase letters, numbers, and special characters
Ideally, no dictionary words, no simple sequences like “12345” or “qwerty”, and no repeating characters
Uniqueness: a different password for each website or online service
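A password meeting all four criteria can be generated locally with Python's cryptographically secure secrets module. This is a minimal sketch, and the special-character set is an arbitrary example.

```python
# Sketch: generate a password meeting the criteria above (16+ characters,
# mixed case, digits, special characters) with the cryptographically
# secure `secrets` module rather than an AI chatbot.
import secrets
import string

SPECIALS = "!@#$%^&*_-"  # arbitrary example set; adjust to site requirements

def generate_password(length: int = 16) -> str:
    if length < 16:
        raise ValueError("use at least 16 characters")
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # retry until every required character class is present
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SPECIALS for c in candidate)):
            return candidate

print(generate_password())  # a fresh random password on every run
```

Because secrets draws from the operating system's secure random source, the result avoids the predictability problems of chatbot-generated passwords discussed below.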
If your current ChatGPT password doesn’t satisfy these criteria, we strongly recommend you change it. While there’s no option to change the password as such in the ChatGPT settings, you can use the password reset procedure. To do this, log out of your account, select Forgot password? on the login screen, and follow the instructions to set up a new password.
You may be tempted to use the AI model itself to generate a password. However, we don’t recommend this: our research suggests that chatbots are often not very effective at this task, and frequently generate highly insecure passwords. Furthermore, even if you explicitly ask the neural network to create a random password, it won’t be truly random, and will therefore be more vulnerable.
For additional account protection, we also recommend enabling two-factor authentication: navigate to Settings, select Security, and turn on the Multi-factor authentication toggle switch. After this, scan the QR code in an authenticator application, or manually enter the secret key that appears on the screen, and verify the action with the one-time code.
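For the curious, the six-digit codes an authenticator app produces are computed with the standard TOTP algorithm (RFC 6238). The sketch below, using only Python's standard library, reproduces the official RFC test vector; the base32 secret plays the same role as the key behind the QR code ChatGPT shows during setup.

```python
# Sketch of how authenticator-app codes are computed: TOTP (RFC 6238)
# built on HOTP (RFC 4226). The base32 secret below encodes the RFC's
# standard test key "12345678901234567890"; a real account secret differs.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector at T=59 seconds (counter 1):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
```

Because both sides derive the code from the shared secret and the current time, an attacker with only your password still cannot log in.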
In the Security section of the web version, you can also log out of all active sessions on all devices, including your current one. Unfortunately, you cannot view the login history. We recommend using this feature if you suspect that someone may have gained unauthorized access to your account.
In the web version’s security settings, you can enable multi-factor authentication, and also log out of ChatGPT on all devices
Final tips to secure your data
When using AI chatbots, it’s important to remember that these applications create new privacy challenges. To protect our data, we now must account for things that were not a concern when setting up accounts in traditional apps and web services, or even in social media and messaging apps. We hope that this comprehensive guide to privacy and security settings in ChatGPT will help you with this tricky task.
Also, please remember to safeguard your ChatGPT account against hijacking. The best way to do this is by using an app that generates and securely stores strong passwords, while also managing two-factor authentication codes.
Kaspersky Password Manager helps you create unique, complex passwords, autofill them when logging in, and generate one-time codes for two-factor authentication. Passwords, one-time codes, and other data encrypted in Kaspersky Password Manager can be synchronized across all your devices. This will help provide robust protection for your account in ChatGPT and other online services.
If you’re looking for more information on the secure use of artificial intelligence, here are some more useful posts:
Published 2025-10-20: How to configure privacy and security in ChatGPT | Kaspersky official blog
If your corporate website’s search engine rankings suddenly drop for no obvious reason, or if clients start complaining that their security software is blocking access or flagging your site as a source of unwanted content, you might be hosting a hidden block of links. These links typically point to shady websites, such as pornography or online casinos. While these links are invisible to regular users, search engines and security solutions scan and factor them in when judging your website’s authority and safety. Today, we explain how these hidden links harm your business, how attackers manage to inject them into legitimate websites, and how to protect your website from this unpleasantness.
Why hidden links are a threat to your business
First and foremost, hidden links to dubious sites can severely damage your site’s reputation and lower its ranking, which will immediately impact your position in search results. This is because search engines regularly scan websites’ HTML code, and are quick to discover any lines of code that attackers may have added. Using hidden blocks is often viewed by search algorithms as a manipulative practice: a hallmark of black hat SEO (also known simply as black SEO). As a result, search engines lower the ranking of any site found hosting such links.
Another reason for a drop in search rankings is that hidden links typically point to websites with a low domain rating, and content irrelevant to your business. Domain rating is a measure of a domain’s authority — reflecting its prestige and the quality of information published on it. If your site links to authoritative industry-specific pages, it tends to rise in search results. If it links to irrelevant, shady websites, it sinks. Furthermore, search engines view hidden blocks as a sign of artificial link building, which, again, penalizes the victim site’s placement in search results.
The most significant technical issue is the manipulation of link equity. Your website has a certain reputation or authority, which influences the ranking of pages you link to. For example, when you post a helpful article on your site, and link to your product page or contacts section, you’re essentially transferring authority from that valuable content to those internal pages. The presence of unauthorized external links siphons off this link equity to external sites. Normally, every internal link helps search engines understand which pages on your site are most important — boosting their position. However, when a significant portion of this equity leaks to dubious external domains, your key pages receive less authority. This ultimately causes them to rank lower than they should — directly impacting your organic traffic and SEO performance.
In the worst cases, the presence of these links can even lead to conflicts with law enforcement, and entail legal liability for distributing illegal content. Depending on local laws, linking to websites with illegal content could result in fines or even the complete blocking of your site by regulatory bodies.
How to check your site for hidden links
The simplest way to check your website for blocks of hidden links is to view its source code. To do this, open the site in a browser and press Ctrl+U (in Windows and Linux) or Cmd+Option+U (in macOS). A new tab will open with the page’s source code.
In the source code, look for the following CSS properties that can indicate hidden elements:
display:none
visibility:hidden
opacity:0
height:0
width:0
position:absolute
These CSS properties make blocks on the page invisible — either entirely hidden or reduced to zero size. Theoretically, these properties can be used for legitimate purposes — such as responsive design, hidden menus, or pop-up windows. However, if they’re applied to links or entire blocks of link code, it could be a strong sign of malicious tampering.
Additionally, you can search the code for keywords related to the content that hidden links most often point to, such as “porn”, “sex”, “casino”, “card”, and the like.
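The manual checks above can be partially automated. The following is a minimal, illustrative sketch (the function and pattern names are ours, and the regex pass over inline styles is deliberately naive — a thorough audit should parse the DOM and inspect external stylesheets too):

```javascript
// CSS properties commonly used to hide injected link blocks.
const HIDING_PATTERNS = [
  /display\s*:\s*none/i,
  /visibility\s*:\s*hidden/i,
  /opacity\s*:\s*0(?![.\d])/i,
  /height\s*:\s*0/i,
  /width\s*:\s*0/i,
];

// Flag elements that carry a hiding style in their inline "style"
// attribute and contain at least one link.
function findSuspiciousBlocks(html) {
  const findings = [];
  const styled =
    html.match(/<[^>]+style\s*=\s*"[^"]*"[^>]*>[\s\S]*?<\/\w+>/gi) || [];
  for (const block of styled) {
    const hidden = HIDING_PATTERNS.some((re) => re.test(block));
    const hasLinks = /<a\s[^>]*href/i.test(block);
    if (hidden && hasLinks) findings.push(block.slice(0, 120));
  }
  return findings;
}
```

Running this over a saved copy of a page flags blocks like `<div style="display:none"><a href=…>` while ignoring legitimately styled links.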
For a deep dive into the specific methods attackers use to hide their link blocks on legitimate sites, check out our separate, more technical Securelist post.
How do attackers inject their links into legitimate sites?
To add an invisible block of links to a website, attackers first need the ability to edit your pages. They can achieve this in several ways.
Compromising administrator credentials
The dark web is home to a whole criminal ecosystem dedicated to buying and selling compromised credentials. Initial-access brokers will provide anyone with credentials tied to virtually any company. Attackers obtain these credentials through phishing attacks or stealer Trojans, or simply by scouring publicly available data breaches from other websites in the hope that employees reuse the same login and password across multiple platforms. Additionally, administrators might use overly simple passwords, or fail to change the default CMS credentials. In these cases, attackers can easily brute-force the login details.
Gaining access to an account with administrator privileges gives criminals broad control over the website. Specifically, they can edit the HTML code, or install their own malicious plugins.
Exploiting CMS vulnerabilities
We frequently discuss various vulnerabilities in CMS platforms and plugins on our blog. Attackers can leverage these security flaws to edit template files (such as header.php, footer.php, or index.php), or directly insert blocks of hidden links into arbitrary pages across the site.
Compromising the hosting provider
In some cases, it’s the hosting company that gets compromised rather than the website itself. If the server hosting your website code is poorly protected, attackers can breach it and gain control over the site. Another common scenario concerns a server that hosts sites for many different clients. If access privileges are configured incorrectly, compromising one client can give criminals the ability to reach other websites hosted on that same server.
Malicious code blocks in free templates
Not all webmasters write their own code. Budget-conscious and unwary web designers might try to find free templates online and simply customize them to fit the corporate style. The code in these templates can also contain covert blocks inserted by malicious actors.
How do you protect your site from hidden links?
To secure your website against the injection of hidden links and its associated consequences, we recommend taking the following steps:
Avoid using questionable third-party templates, themes, or any other unverified solutions to build your website.
Promptly update both your CMS engine and all associated themes and plugins to their latest versions.
Routinely audit your plugins and themes, and immediately delete the ones you don’t use.
Regularly create backups of both your website and database. This ensures you can quickly restore your website’s operation in the event of compromise.
Check for unnecessary user accounts and excessive access privileges.
Promptly delete outdated or unused accounts, and establish only the minimum necessary privileges for active ones.
Establish a strong password policy and mandatory two-factor authentication for all accounts with admin privileges.
Welcome to this week’s edition of the Threat Source newsletter.
I count myself fortunate that I have never been on the receiving end of a ransomware attack. My experiences have been from research and response, never as a victim. It’s a tough scenario: One day you are working or minding your own business when suddenly, threatening notes appear on desktops and systems simply stop working. So much of our survival as humans is tied to our livelihoods, so the amount of stress incurred can be severe. I get it, truly.
Consequently, I am endlessly academically fascinated by stress responses and how humans… well… human during moments of adversity. A ransomware attack most certainly qualifies as adverse, and my sympathies are with you if you’ve ever had to endure one. But there’s a science to both the personal response and the business response, and its impacts writ large.
Over the past year, excellent research has been published on these facets of response to help answer some of these questions, and naturally I dove right in. One of the things that stuck out to me was the impact of these attacks on small businesses as a victim segment. A notable quote from a small business in the U.K. government’s “The experiences and impacts of ransomware attacks on individuals and organisations” states:
“I’ve started to rebuild, using personal funds and living off personal funds for the last 2 or 3 years… I’ve got 0 savings left… It’s had a total impact on me… I’ve gone from probably nearly a £250,000 business down to about a £20,000 business.”
This quote isn’t unique in its impacts. Anecdotally, I can tell you small businesses are a large swath of victims for ransomware operators. It makes sense: small victims likely pay out less, but they tend to have lower security standards and less security knowledge to defend themselves with. They also do not have the cash reserves, legal teams, or dedicated IT security staff that a mid-sized or larger business has. Simply put, they are disproportionately vulnerable.
So, what about the impacts to health and wellbeing? What, if anything, do we do? And why the hell should any business even care? To paraphrase the Royal United Services Institute (RUSI) report ‘Your Data is Stolen and Encrypted’: The Ransomware Victim Experience, ransomware victims experience trauma, exhaustion, and emotional harm that rival — and often outlast — the financial or operational damage. You can survive the battle of the immediate operational harm of a cyber attack and recover your day-to-day business operations, only to lose the war as your employees cope with and process the trauma of the event, impacting your business’ ability to compete and survive.
A cyber attack is both a technical and psychological crisis. Business leadership would be wise to understand this. Lead with empathy, and remember that your employees look to you for leadership, especially in these incidents. People follow calm, not commands. Have an incident response plan for how you respond to the technical crisis, but also for how to take care of your people. You might find yourself that much stronger at the end, with both a company that can handle adversity and employees who are cared for.
The one big thing
Cisco Talos discovered a new malware campaign linked to the North Korean threat group Famous Chollima, which targets job seekers with trojanized applications to steal credentials and cryptocurrency. The campaign features two primary tools, BeaverTail and OtterCookie, whose functionalities are merging and now include new modules for keylogging, screenshot capture, and clipboard monitoring. The attackers deliver these threats through malicious NPM packages and even a fake VS Code extension, making detection and prevention more challenging.
Why do I care?
This campaign highlights how attackers use social engineering and software supply chain attacks to compromise individuals and organizations, not just targeting companies directly. If you or your organization use development tools, npm packages, or receive unsolicited job offers, you could be at risk of credential or cryptocurrency theft.
So now what?
Be vigilant when installing NPM packages, browser extensions, or software from unofficial sources, and verify the legitimacy of job offer communications. Use layered security solutions, such as endpoint protection, multi-factor authentication, and network monitoring tools like those recommended by Cisco, to detect and block these threats.
Top security headlines of the week
Harvard is first confirmed victim of Oracle EBS zero-day hack Harvard was listed on the data leak website dedicated to victims of the Cl0p ransomware on October 12. The hackers have made available over 1.3 TB of archive files that allegedly contain Harvard data. (SecurityWeek)
Two new Windows zero-days exploited in the wild Microsoft released fixes for 183 security flaws spanning its products, including three vulnerabilities that have come under active exploitation in the wild. One affects every version ever shipped. (The Hacker News)
Officials crack down on Southeast Asia cybercrime networks, seize $15B The cryptocurrency seizure and sanctions targeting the Prince Group, associates and affiliated businesses mark the most extensive action taken against cybercrime operations in the region to date. (CyberScoop)
Extortion group leaks millions of records from Salesforce hacks The leak occurred days after the group, an offshoot of the notorious Lapsus$, Scattered Spider, and ShinyHunters hackers, claimed the theft of data from 39 Salesforce customers, threatening to leak it unless the CRM provider pays a ransom. (SecurityWeek)
Can’t get enough Talos?
Humans of Talos: Laura Faria and empathy on the front lines What does it take to lead through chaos and keep organizations safe in the digital age? Amy sits down with Laura Faria, Incident Commander at Cisco Talos Incident Response, to explore a career built on empathy, collaboration, and a passion for cybersecurity.
Beers with Talos: Two Marshalls, one podcast Talos’ Vice President Christopher Marshall (the “real Marshall,” much to Joe’s displeasure) joins Hazel, Bill, and Joe for a very real conversation about leading people when the world won’t stop moving.
What does it take to lead through chaos and keep organizations safe in the digital age? This week, Amy sat down with Laura Faria, an incident commander at Cisco Talos Incident Response, to explore a career built on empathy, collaboration, and a passion for cybersecurity.
Laura opens up about her journey through various cybersecurity roles, her leap into incident response, and what it feels like to support customers during their toughest moments — including high-stakes situations impacting critical infrastructure.
Amy Ciminnisi: Laura, it’s great to have you on. You’re an incident commander, like Alex from last episode. When did your time at Talos start, and what did your journey here look like?
Laura Faria: My entire career, I’ve worked in many large cybersecurity vendors – endpoint vendors, firewall vendors, RAV vendors… So I’ve been in a lot of different roles, but they were mostly in sales. I was actually a Cisco employee prior to joining Talos IR. I’ve been at Cisco for a little over a year now, and Talos is one of the best places to work in Cisco, in my opinion. They have a really high reputation because everyone knows the quality of research that Talos provides our customers with.
I’d never been an incident commander before, so it was a really new position to me. But it was definitely something I was interested in, and the more I learned about what the role entailed, the more I was excited to pursue it.
AC: This is a very high-pressure role, and I’m sure you have to deal with a lot of chaos, a lot of moving parts. How do you stay focused and motivated to keep going when you’re tackling these really serious incidents for our clients?
LF: A common phrase in Talos IR is “It’s never a good day when an incident happens.” During very serious incidents, being there to help the customer feels really good, especially if you’re a people person and empathize a lot with people’s emotions.
Recently, I had a very difficult incident where a large health care facility was seeing a lot of outages in different locations throughout the nation. Every time we saw a site outage, it was devastating because we knew what that meant. We actually had people’s lives in our hands. Although it’s a very difficult job, taking the time to look at the big picture and understand the importance of your job is really what keeps you going.
Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos!
Cisco Talos has uncovered a new attack linked to Famous Chollima, a threat group aligned with North Korea (DPRK). This group is known for impersonating hiring organizations to target job seekers, tricking them into installing information-stealing malware to obtain cryptocurrency and user credentials.
In this incident, although the organization was not directly targeted, one of its systems was compromised, likely because a user was deceived by a fake job offer and installed a trojanized Node.js application called “Chessfi.”
The malicious software was distributed via a Node.js package named “node-nvm-ssh” on the official NPM repository.
Famous Chollima often uses two malicious tools, BeaverTail and OtterCookie, which started as separate but complementary programs. Recent campaigns have seen their functions merging, and Talos has identified a new module for keylogging and taking screenshots.
While searching for related threats, Talos also found a malicious VS Code extension containing BeaverTail and OtterCookie code. Although attribution to Famous Chollima is not certain, this suggests the group may be testing new methods for delivering their malware.
Introduction
In a previous Cisco Talos blog post, we described one side of the Contagious Interview (Deceptive Development) campaigns, where the threat actor utilized fake employment websites, ClickFix social engineering techniques and payload variants of credential and cryptocurrency remote access trojans (RATs) known as GolangGhost and PylangGhost.
Talos is actively monitoring other clusters of these campaigns, which are attributed to the threat actor group Famous Chollima, a subgroup of Lazarus, and aligned with the economic interests of DPRK. This post discusses some of the tactics, techniques and procedures (TTPs) and changes in tooling developed over time by another large cluster of Contagious Interview activities. These campaigns center around tools known as BeaverTail and OtterCookie.
Famous Chollima frequently uses BeaverTail and OtterCookie, with many individual sub-clusters of activities installing InvisibleFerret, a Python based modular payload. Although BeaverTail and OtterCookie originated as separate-but-complementary entities, their functionality in some recent campaigns started to merge, along with the inclusion of new functional OtterCookie modules.
Talos detected a Famous Chollima campaign in an organization headquartered in Sri Lanka. The organization was not deliberately targeted by the attackers, but one of the systems on its network was infected. It is likely that a user fell for a fake job offer instructing them to install a trojanized Node.js application called Chessfi as part of a fake job interview process.
Once Talos conducted the initial analysis, we realized that the tools used in the attack had characteristics of both BeaverTail and OtterCookie, blurring the distinction between the two. The code also contained some additional functionality we had not previously encountered.
BeaverTail and OtterCookie combine
This blog focuses on OtterCookie modules and will not provide a deep dive into well-known BeaverTail and OtterCookie functionality. While some of these modules are already known, at least one was not previously documented. The examples we show are already deobfuscated, and with the help of an LLM, the function and variable names are replaced by names that correspond to their actual functionality.
Keylogging and screenshotting module
Talos encountered a keylogging and screenshotting module in this campaign that has not been previously documented. We were able to find earlier OtterCookie samples containing the module that were uploaded to VirusTotal in April 2025.
The keylogging module uses the packages “node-global-key-listener” for keylogging, “screenshot-desktop” for taking desktop screenshots and “sharp” for converting the captured screenshots into web-friendly image formats.
The module configures the packages to listen for keystrokes and periodically take a screenshot of the current desktop session, uploading both to the OtterCookie command and control (C2) server.
Figure 1. The keylogger listens for the keyboard and mouse key presses and saves them into a file.
The keystrokes are saved in the user’s temporary sub-folder windows-cache with the file name “1.tmp” and screenshots are saved in the same sub-folder with the file name “2.jpeg”. While the keylogger runs in a loop and flushes the buffer every second, a screenshot is taken every four seconds.
Talos also discovered one instance of the module where the clipboard monitoring was included in the module code, extending its functionality to stealing clipboard content.
The keylogging data and the captured screenshots are uploaded to the OtterCookie C2 server on TCP port 1478, using the URL “hxxp[://]172[.]86[.]88[.]188:1478/upload”.
Figure 2. Keystrokes saved as “1.tmp” and screenshots as “2.jpeg”, then uploaded to C2 server.
OtterCookie VS Code extension
During the search for similar samples on VirusTotal, Talos discovered a recently-uploaded VS Code extension, which may attempt to run OtterCookie if installed in the victim’s editor environment. The extension is a fake employment onboarding helper, supposedly allowing the user to track and manage candidate tests.
While Talos cannot attribute this VS Code extension to Famous Chollima with high confidence, this may indicate that the threat actor is experimenting with different delivery vectors. The extension could also be a result of experimentation from another actor, possibly even a researcher, who is not associated with Famous Chollima, as this stands out from their usual TTPs.
Figure 3. VS Code extension configuration pretends to be Mercer Onboarding Helper but contains OtterCookie code.
Other OtterCookie modules
The OtterCookie section of code starts with the definition of a JSON object that contains configuration values, such as a unique campaign ID and the C2 server IP address. The OtterCookie portion of the code constructs additional modules from strings, which are executed as child processes. In the attack we analyzed, we observed three modules, but we also found one additional module while hunting for similar samples in our repositories and on VirusTotal.
Remote shell module
The first module is fundamental for OtterCookie and begins with the detection of the infected system platform and a virtual machine check, followed by reporting the collected user and host information to the OtterCookie C2 server.
Figure 4. The main OtterCookie module starts with machine checks, including a virtual machine check.
Once the system information is submitted, the module installs the “socket.io-client” package, which is used to connect to a specific port on the OtterCookie C2 server to wait for the commands and execute them in a loop. socket.io-client first uses HTTP and then switches to WebSocket protocol to communicate with the server, which we observed listening on the TCP port 1418.
Figure 5. socket.io-client package used for communication with C2 server.
Finally, depending on the operating system, this module periodically checks the clipboard content using the commands “pbpaste” on macOS or “powershell Get-Clipboard” on Windows. It sends the clipboard content to the C2 server URL specifically used for logging OtterCookie activities at “hxxp[://]172[.]86[.]88[.]188/api/service/makelog”.
File uploading module
This module enumerates all drives and traverses the file system in order to find files to be uploaded to the OtterCookie C2 IP address at a specific port and URL (in this case, “hxxp[://]172[.]86[.]88[.]188:1476/upload”).
This module contains a list of folder and file names to be excluded from the search, and another list with target file name extensions and file name search patterns to select files to be uploaded.
Figure 6. The list of excluded folders and patterns for files uploaded to C2.
The “interesting” file list contains the following search patterns:
While not present in the campaign Talos analyzed, an additional cryptocurrency-targeting module was found while hunting for similar files on VirusTotal. In addition to the cryptocurrency browser extensions targeted by the BeaverTail code, this OtterCookie module targets extensions from a list that partially overlaps with the list of cryptocurrency wallet extensions in the BeaverTail portion of the payload.
Table 1. Cryptocurrency modules targeted by OtterCookie.
The cryptocurrency module targets Google Chrome and Brave browsers. If any extensions are found in any of the browser profiles, the extension files as well as the saved Login and Web data are uploaded to a C2 server URL. In the discovered sample Talos found, the uploading C2 URL was “hxxp[://]138[.]201[.]50[.]5:5961/upload”.
OtterCookie evolution
OtterCookie malware samples were first observed by NTT Security Holdings around November 2024, leading to a blog article published in December 2024. However, it is believed that the malware has been in use since approximately September 2024. The name OtterCookie seems to come from the early samples, which used the content of HTTP response cookies to transfer the malicious code executed by the response handler. This remote code loading mechanism evolved over time to include additional functionality.
However, in April 2025, Talos started seeing additional modules included within the OtterCookie code and the usage of the C2 server, mostly for downloading a simple OtterCookie configuration and uploading stolen data.
Figure 7. OtterCookie modules evolution timeline.
OtterCookie evolved from its initial basic data-gathering capabilities to a more modular design for data theft and remote command execution. The modules are stored within OtterCookie as strings and executed on the fly.
The earliest versions, corresponding to what NTT researchers refer to as v1, contain code for remote command execution (RCE) and use a socket.IO package to communicate with a C2 server. Over time, OtterCookie modules evolved by adding code to steal and upload files, with the end goal of stealing cryptocurrency wallets from a list of hardcoded browser extensions and saved browser credentials. Targeted browsers include Brave, Google Chrome, Opera and Mozilla Firefox.
The next iteration, referred to as v2, included clipboard-stealing code using the Clipboardy package to send clipboard contents to the remote server. This version also handles the loading of JavaScript code from the server slightly differently. Instead of evaluating the returned cookie header as v1 does, the server generates an error which is handled by the error handler on the client side. The error handler simply passes the error response data to the eval function, where it gets executed. The loader code is small and easy to miss, which, along with the risk of false positive detections, may be why detection of the OtterCookie loaders on VirusTotal is not very successful.
Figure 8. C2 server generates an error but the code is still executed by OtterCookie. Figure 9. OtterCookie loader error handler evaluates the response data.
The v3 variant, observed in February 2025, includes a function to send specific files (documents, image files and cryptocurrency-related files) to the C2 server. OtterCookie v4, observed since April 2025, includes a virtual environment detection code to help attackers discern logs from sandbox environments from those of actual infections, indicating a focus on evading analysis. The code also contains some anti-debugging and anti-logging functionality.
The v4 variant improves on the previous version’s code and updates the clipboard content-stealing method. It no longer uses the Clipboardy library and instead it uses standard macOS or Windows commands for retrieving clipboard content.
It is important to note that over time the difference between BeaverTail and OtterCookie became blurred and in some attacks their code was merged into a single tool.
OtterCookie v5
The campaign Talos observed in August 2025 uses the most recent version of OtterCookie, which we call v5, demonstrated by the addition of a keylogging module. The keylogging module contains code to capture screenshots, which are uploaded to the C2 server together with keyboard keystrokes.
Figure 10. Node-nvm-ssh infection path.
The initial infection vector was a modified Chessfi application hosted on Bitbucket. Chessfi is a Web3-based multiplayer chess platform where players can challenge each other and bet cryptocurrency on the outcome of their matches. The choice of a cryptocurrency-related application to lure victims is consistent with previous reporting on Famous Chollima targeting.
The first sign of the attack was the user installing the source code of the application. Based on the folder name of the project, we assess with moderate confidence that the victim was approached by the threat actor through the freelance marketplace site Fiverr, which is consistent with previous reporting. While hunting for similar samples, we also discovered code repositories that were delivered to victims as attachments in Discord conversations.
The infection process started with the victim running Git to clone the repository:
Figure 11. The initial infection vector.
The Development section of the application’s readme document gives developers instructions on how to install and run the project. After cloning the repository, users are told to run npm install to install dependencies, which, in this campaign, also included a malicious npm package named “node-nvm-ssh”.
During the installation of dependencies, the malicious package is downloaded from the repository and installed. The npm installer parses the package.json file of the malicious package and finds instructions to run commands after the installation, executed by parsing the “postinstall” value of the JSON object named “scripts”. At first glance, it seems like the postinstall scripts are there to run tests, transpile TypeScript files to JavaScript, and possibly run other test scripts.
Figure 13. Malicious package.json file contains the instruction that will cause the malicious code to run.
However, the package.json postinstall instruction “npm run skip” causes npm to call the command node test/fixtures/eval specified as the value of “skip”. When given a directory, Node.js module resolution tries a number of default file names, one of them being index.js.
The test/fixtures/eval/index.js content contains code to spawn a child process using the file “test/fixtures/eval/node_modules/file15.js”.
Figure 14. index.js spawning a child process to execute file15.js.
Eventually, file15.js loads the file test.list, which is the final payload. This somewhat complex process to reach the payload code makes it quite difficult for an unsuspecting software engineer to discover that the installation of the Chessfi application will eventually lead to execution of malicious code.
Figure 15. file15.js reads and calls eval on the content of the file test.list.
With test.list, we have finally reached the last piece of the puzzle of how the malicious code is run. The test.list file is over 100 KB and obfuscated using Obfuscator.io. Thankfully, the obfuscation in this case is not configured to make analysis very difficult, and with the help of a deobfuscator and an LLM, Talos was able to deobfuscate most of its functionality, revealing a combination of BeaverTail and OtterCookie.
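The loading chain described above (package.json postinstall → npm run skip → node test/fixtures/eval → index.js → file15.js → test.list) could look roughly like the following package.json fragment. This is a reconstruction for illustration based on the behavior described, not the actual malicious file:

```json
{
  "name": "node-nvm-ssh",
  "scripts": {
    "postinstall": "npm run skip",
    "skip": "node test/fixtures/eval"
  }
}
```

npm resolves test/fixtures/eval to that directory’s index.js, which spawns file15.js, which in turn evaluates the obfuscated test.list payload.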
Standard BeaverTail functionality
There seem to be two distinguishable parts in the code. The first is associated with BeaverTail, including enumeration of various browser profiles and extensions as well as the download of a Python distribution and Python client payload from the C2 server “23.227.202[.]244” using the common BeaverTail/InvisibleFerret TCP port 1224. The second part of the code is associated with OtterCookie.
The BeaverTail portion starts with a function that disables the console logging, moving toward loading the required modules and calling functions in order to steal data from a list of browser extensions, cryptocurrency wallets and browser credentials storage.
BeaverTail has been observed since at least May 2023, and originally was a relatively small downloader component designed to be included with Node.js-based JavaScript applications. BeaverTail was also used in supply chain attacks affecting packages in the NPM package repository, which has been extensively covered in previous research and is outside the scope of this post.
From the beginning, BeaverTail supported Windows, Linux and macOS, taking advantage of the fact that Node.js applications can be run on different operating system platforms.
Figure 16. Early BeaverTail OS platform check.
The other major functionalities within BeaverTail are the download of InvisibleFerret Python stealer payload modules and the installation of a remote access module, typically an AnyDesk client, which allows the attacker to take over control of the infected machine remotely. Information stealing and remote access have remained recurring BeaverTail operational techniques over time.
Soon after the initial samples were discovered in June 2023, BeaverTail started to use simple base64 encoding of strings and renaming of variables to make the detection and analysis more difficult. This also included a scheme used to encode the C2 URL as a shuffled string whose slices are base64 decoded individually and then concatenated in a correct order to generate the final URL.
Figure 17. C2 URL encoding scheme used from early BeaverTail variants until the present.
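The scheme described above can be sketched in Python as follows. This is a hypothetical reimplementation for illustration only: the chunking, the shuffle order, and the URL (a documentation IP) are invented, not taken from an actual sample.

```python
import base64

def encode_c2(url: str, order: list[int]) -> list[str]:
    """Split the URL into chunks, base64-encode each, store them shuffled."""
    n = len(order)
    size = -(-len(url) // n)  # ceiling division: chunk size
    chunks = [url[i * size:(i + 1) * size] for i in range(n)]
    encoded = [base64.b64encode(c.encode()).decode() for c in chunks]
    shuffled = [""] * n
    for store_pos, chunk_idx in enumerate(order):
        shuffled[store_pos] = encoded[chunk_idx]  # slice stored out of order
    return shuffled

def decode_c2(shuffled: list[str], order: list[int]) -> str:
    """Pick slices back in the correct order, base64-decode, concatenate."""
    decoded = [base64.b64decode(shuffled[order.index(i)]).decode()
               for i in range(len(order))]
    return "".join(decoded)

order = [2, 0, 3, 1]  # which original chunk sits at each stored position
slices = encode_c2("http://192.0.2.1:1224/pdown", order)
print(decode_c2(slices, order))  # → http://192.0.2.1:1224/pdown
```

The point of the scheme is that no single stored string contains a recognizable URL fragment, which defeats naive string-matching detections while remaining trivial for the malware to reverse.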
Although BeaverTail is typically written in Javascript, Talos has also discovered several C2 server IP addresses shared between the Javascript variants and C++ compiled binary variants created with the help of the Qt framework.
Figure 18. Qt-based BeaverTail setting QThread parameters.
From its early beginnings in mid-2023 to the last quarter of 2024, BeaverTail C2 URL patterns stabilized around the most commonly used TCP ports 1224 and 1244, rather than the port 3306 used by early variants. It seems the threat actors quickly realized that, unlike Linux distributions and macOS, most Windows installations do not come with a preinstalled Python interpreter. To tackle this issue, they included code which installs a Python distribution, typically from the “/pdown” URL path, required to run the Python InvisibleFerret modules. This TTP remains in use today.
In terms of detection evasion, Famous Chollima uses several methods to obfuscate code, most frequently utilizing different configurations of the free Javascript tool Obfuscator.io, which makes analysis, and especially detection, of the malicious code more challenging.
In addition to obfuscating the Javascript code, they also regularly use various modes of XOR-based obfuscation of downloaded modules. XORed Python InvisibleFerret modules start with a unique user-based string assignment followed by a reversed base64-encoded string, which contains the final Python module's code and can itself also be XORed for obfuscation.
Figure 19. A typical InvisibleFerret self-decoding Python module.
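A minimal Python sketch of the self-decoding layout described above: a reversed base64 string whose decoded bytes may carry an additional XOR layer. The key, payload, and function name are invented for illustration and do not come from an actual sample.

```python
import base64

def unwrap_module(reversed_b64: str, xor_key: bytes = b"") -> bytes:
    """Undo the reversal, base64-decode, then strip the optional XOR layer."""
    raw = base64.b64decode(reversed_b64[::-1])
    if xor_key:
        raw = bytes(b ^ xor_key[i % len(xor_key)] for i, b in enumerate(raw))
    return raw

# Build a sample blob the same way an obfuscator would, then unwrap it.
plain = b"print('hello from the inner module')"
key = b"k3y"
xored = bytes(b ^ key[i % len(key)] for i, b in enumerate(plain))
blob = base64.b64encode(xored).decode()[::-1]
print(unwrap_module(blob, key))  # → b"print('hello from the inner module')"
```

In the real modules the unwrapped bytes are handed straight to `exec`, which is why the plaintext module code never touches disk.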
Thankfully, by combining a deobfuscation tool with an LLM to rename variables and decode the base64-encoded strings, it is possible to analyse new samples with relative ease. However, the operational tempo of groups attributed to Famous Chollima is high, and detection of completely new samples and code on VirusTotal remains unreliable, giving the threat actors enough time to successfully attack some victims.
BeaverTail, OtterCookie and InvisibleFerret functional overlaps
All additional modules present in OtterCookie code correspond well to the functionality that is traditionally associated with InvisibleFerret and its Python-based modules, as well as some parts of the BeaverTail code. This move of the functionality to Javascript may allow the threat actors to remove the reliance on Python code, eliminating the requirement for installation of full Python distributions on Windows.
Table 3. Functional similarities between Famous Chollima tools.
Coverage
Ways our customers can detect and block this threat are listed below.
Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.
Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.
Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device.
Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.
Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent and secure access to the internet, cloud services or private applications no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access.
Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.
Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.
Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.
Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.
Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.
Snort2 rules are available for this threat: 65336
The following Snort3 rules are also available to detect the threat: 301315, 65336
ClamAV detections are also available for this threat: Js.Infostealer.Ottercookie-10057842-0, Js.Malware.Ottercookie-10057860-0
IOCs
IOCs for this research can also be found at our GitHub repository here.
BeaverTail and OtterCookie evolve with a new Javascript module (published 2025-10-16)
Modern server processors feature a trusted execution environment (TEE) for handling especially sensitive information. There are many TEE implementations, but two are most relevant to this discussion: Intel Software Guard eXtensions (SGX), and AMD Secure Encrypted Virtualization (SEV). Almost simultaneously, two separate teams of researchers — one in the U.S. and one in Europe — independently discovered very similar (though distinct) methods for exploiting these two implementations. Their goal was to gain access to encrypted data held in random access memory. The scientific papers detailing these results were published just days apart:
WireTap: Breaking Server SGX via DRAM Bus Interposition, by the U.S. researchers, details a successful hack of the Intel Software Guard eXtensions (SGX) system, achieved by intercepting the data exchange between the processor and the DDR4 RAM module.
In Battering RAM, scientists from both Belgium and the UK also successfully compromise Intel SGX, as well as AMD’s comparable security system, SEV-SNP, by manipulating the data-transfer process between the processor and the DDR4 RAM module.
Hacking a TEE
Both the technologies mentioned — Intel SGX and AMD SEV — are designed to protect data even if the system processing it is completely compromised. Therefore, the researchers began with the premise that the attacker would have complete freedom of action: full access to both the server’s software and hardware, and the confidential data they seek residing, for instance, on a virtual machine running on that server.
In that scenario, certain limitations of both Intel SGX and AMD SEV become critical. One example is the use of deterministic encryption: an algorithm where a specific sequence of input data always produces the exact same sequence of encrypted output data. Since the attacker has full access to the software, they can input arbitrary data into the TEE. If the attacker also had access to the resulting encrypted information, comparing these two data sets would allow them to calculate the private key used. This, in turn, would enable them to decrypt other data encrypted by the same mechanism.
The challenge, however, is how to read the encrypted data. It resides in RAM, and only the processor has direct access to it. The theoretical malware only sees the original information before it gets encrypted in memory. This is the main challenge, which the researchers approached in different ways. One straightforward, head-on solution is hardware-level interception of the data being transmitted from the processor to the RAM module.
How does this work? The memory module is removed and then reinserted using an interposer, which is also connected to a specialized device: a logic analyzer. The logic analyzer intercepts the data streams traveling across all the data and address lines to the memory module. This is quite complex. A server typically has many memory modules, so the attacker must find a way to force the processor to write the target information specifically to the desired range. Next, the raw data captured by the logic analyzer must be reconstructed and analyzed.
But the problems don’t end there. Modern memory modules exchange data with the processor at tremendous speeds, performing billions of operations per second. Intercepting such a high-speed data flow requires high-end equipment. The hardware that was used to prove the feasibility of this type of attack in 2021 cost hundreds of thousands of dollars.
The features of WireTap
The U.S. researchers behind WireTap managed to slash the cost of their hack to just under a thousand dollars. Their setup for intercepting data from the DDR4 memory module looked like this:
Test system for intercepting the data exchange between the processor and the memory module
They spent half of the budget on an ancient, quarter-century-old logic analyzer, which they acquired through an online auction. The remainder covered the necessary connectors, and the interposer (the adapter into which the target memory module was inserted) was custom-soldered by the authors themselves. An obsolete setup like this could not possibly capture the data stream at its normal speed. However, the researchers made a key discovery: they could slow down the memory module’s operation. Instead of the standard DDR4 effective speeds of 1600–3200 megahertz, they managed to throttle the speed down to 1333 megahertz.
From there, the steps are… well, not really simple, but clear:
Ensure that the data from the target process was written to the hacked memory module and then intercept it, still encrypted at this stage.
Input a custom data set into Intel SGX for encryption.
Intercept the encrypted version of the known data, compare the known plaintext with the resulting ciphertext, and compute the encryption key.
Decrypt the previously captured data belonging to the target process.
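The four steps above can be modeled with a toy cipher. Real SGX memory encryption is AES-based and tweaked by the physical address, so this sketch only illustrates the single property being abused: encryption at a fixed address is deterministic, so XORing a known plaintext with its observed ciphertext reveals material that decrypts other data at that address. All values are invented.

```python
import os

# Secret, per-address keystream; in the real attack this is never seen
# directly, only inferred from plaintext/ciphertext pairs.
KEYSTREAM = os.urandom(32)

def encrypt_at_addr(plaintext: bytes) -> bytes:
    """Deterministic: same plaintext at same address -> same ciphertext."""
    return bytes(p ^ k for p, k in zip(plaintext, KEYSTREAM))

# Step 1: intercept the victim's (still encrypted) data on the memory bus.
victim_ct = encrypt_at_addr(b"secret attestation key material!")

# Steps 2-3: write a chosen plaintext to the same address, intercept its
# ciphertext, and recover the keystream by XOR.
chosen = b"A" * 32
recovered_ks = bytes(c ^ p for c, p in zip(encrypt_at_addr(chosen), chosen))

# Step 4: decrypt the victim's captured data with the recovered keystream.
print(bytes(c ^ k for c, k in zip(victim_ct, recovered_ks)))
```

With a deterministic XOR-style model the recovery is exact; against the real AES-based scheme the same known-plaintext principle applies per encrypted block rather than per byte.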
In summary, the WireTap work doesn't fundamentally change our understanding of the inherent limitations of Intel SGX. It does, however, demonstrate that the attack can be made drastically cheaper.
The features of Battering RAM
Instead of the straightforward data-interception approach, the researchers from Belgium’s KU Leuven university and their UK colleagues sought a more subtle and elegant method to access encrypted information. But before we dive into the details, let’s look at the hardware component and compare it to the American team’s work:
The memory module interposer used in Battering RAM
In place of a tangle of wires and a bulky data analyzer, this setup features a simple board designed from scratch, into which the target memory module is inserted. The board is controlled by an inexpensive Raspberry Pi Pico microcontroller board. The hardware budget is negligible: just 50 euros! Moreover, unlike the WireTap attack, Battering RAM can be conducted covertly; continuous physical access to the server isn't needed. Once the modified memory module is installed, the required data can be stolen remotely.
What exactly does this board do? The researchers discovered that by grounding just two address lines (which dictate where information is written or read) at the right moment, they could create a data mirroring situation. This causes information to be written to memory cells that the attacker can access. The interposer board acts as a pair of simple switches controlled by the Raspberry Pi Pico. While manipulating contacts on live hardware typically leads to a system freeze or data corruption, the researchers achieved stable operation by disconnecting and reconnecting the address lines only at the precise moments required.
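A minimal model of the aliasing effect described above, assuming a single grounded address line. The bit position and addresses are made up; the point is only that clearing one address bit makes two addresses that differ in that bit resolve to the same physical cell.

```python
# One address line held low by the interposer (illustrative bit position).
GROUNDED_BIT = 12
memory: dict[int, bytes] = {}

def effective_addr(addr: int) -> int:
    """A grounded address line always reads as 0, collapsing address pairs."""
    return addr & ~(1 << GROUNDED_BIT)

def write(addr: int, value: bytes) -> None:
    memory[effective_addr(addr)] = value

def read(addr: int) -> bytes:
    return memory[effective_addr(addr)]

# The "protected" write (bit 12 set) and the attacker's alias (bit 12 clear)
# land in the same cell once the line is grounded.
write(0x41000, b"victim ciphertext")
print(read(0x40000))  # → b'victim ciphertext'
```

In the real attack the switching is timed so the line is grounded only for the targeted accesses, which is what keeps the rest of the system stable.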
This method gave the authors the ability to select where their data was recorded. Crucially, this means they didn’t even need to compute the encryption key! They first captured the encrypted information from the target process. Next, they ran their own program within the same memory range and requested the TEE system to decrypt the previously captured information. This technique allowed them to hack not only Intel SGX but also AMD SEV. Furthermore, this control over data writing helped them circumvent AMD’s security extension called SEV-SNP. This extension, using Secure Nested Paging, was designed to protect the virtual machine from compromise by preventing data modification in memory. Circumventing SEV-SNP theoretically allows attackers not only to read encrypted data but also to inject malicious code into a compromised virtual machine.
The relevance of physical attacks on server infrastructure
It’s clear that while the practical application of such attacks is possible, they’re unlikely to be conducted in the wild. The value of the stolen data would need to be extremely high to justify hardware-level tampering. At least, this is the stance taken by both Intel and AMD regarding their security solutions: both chipmakers responded to the researchers by stating that physical attacks fall outside their security model. However, both the American and European research teams demonstrated that the cost of these attacks is not nearly as high as previously believed. This potentially expands the list of threat actors willing to utilize such complex vulnerabilities.
The proposed attacks do come with their own restrictions. As we already mentioned, the information theft was conducted on systems equipped with DDR4 standard memory modules. The newer DDR5 standard, finalized in 2020, has not yet been compromised, even for research purposes. This is due both to the revised architecture of the memory modules and their increased operating speeds. Nevertheless, it’s highly likely that researchers will eventually find vulnerabilities in DDR5 as well. And that’s a good thing: the declared security of TEE systems must be regularly subjected to independent audits. Otherwise, it could turn out at some point that a supposedly trusted protection system unexpectedly becomes completely useless.
WireTap and Battering RAM: attacks on TEEs | Kaspersky official blog (published 2025-10-15)
Cisco Talos’ Vulnerability Discovery & Research team recently disclosed one vulnerability in the OpenPLC logic controller and four vulnerabilities in the Planet WGR-500 router.
For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.
OpenPLC denial-of-service vulnerability
Discovered by a member of Cisco Talos.
OpenPLC is an open-source programmable logic controller intended to provide a low-cost industrial solution for automation and research.
Talos researchers found TALOS-2025-2223 (CVE-2025-53476), a denial-of-service vulnerability in the ModbusTCP server functionality of OpenPLC_v3. A specially crafted series of network connections can prevent the server from processing subsequent Modbus requests. An attacker can open a series of TCP connections to trigger this vulnerability.
Planet WGR-500 stack-based buffer overflow, OS command injection, format string vulnerabilities
Discovered by Francesco Benvenuto of Cisco Talos.
The Planet Networking & Communication WGR-500 is an industrial router designed for Internet of Things (IoT) networks, particularly industrial networks such as transportation, government buildings, and other public areas. Talos found four vulnerabilities in the router software.
TALOS-2025-2226 (CVE-2025-54399 – CVE-2025-54402) includes multiple stack-based buffer overflow vulnerabilities in the formPingCmd functionality. A specially crafted series of HTTP requests can lead to a stack-based buffer overflow.
TALOS-2025-2227 (CVE-2025-54403 – CVE-2025-54404) includes multiple OS command injection vulnerabilities in the swctrl functionality. A specially crafted network request can lead to arbitrary command execution.
TALOS-2025-2228 (CVE-2025-48826) is a format string vulnerability in the formPingCmd functionality of Planet WGR-500. A specially crafted series of HTTP requests can lead to memory corruption.
TALOS-2025-2229 (CVE-2025-54405 – CVE-2025-54406) includes multiple OS command injection vulnerabilities in the formPingCmd functionality. A specially crafted series of HTTP requests can lead to arbitrary command execution.
Cybersecurity is not just about defense, it is about protecting profits. Organizations without modern threat intelligence (TI) face escalating breach costs, wasted resources, and operational inefficiencies that hit the bottom line.
Here is how actionable intel can help businesses cut costs, optimize workflows, and neutralize risks before they escalate.
Key Takeaways
TI turns security into a cost-saving engine by preventing breaches that could otherwise drain millions in recovery and reputational damage.
Automation eliminates labor waste, allowing SOC teams to focus on high-value tasks instead of drowning in false positives.
TI drives faster response which minimizes disruptions, reducing downtime and the cascading financial losses that follow.
Continuous intelligence future-proofs defenses, keeping organizations ahead of evolving threats without constant manual updates.
Seamless integration protects existing investments, embedding TI into current workflows without costly overhauls.
3 Hidden Costs of Ignoring Threat Intelligence
1. SOC Inefficiency and Burnout
When SOC analysts lack high-fidelity, context-rich threat intelligence, they are forced to manually investigate thousands of alerts, many of which turn out to be false positives. This relentless cycle wastes time, drains budgets, increases turnover, and leaves critical threats unaddressed.
Without automation and precise data, teams operate in a constant state of reactive chaos, where even minor incidents consume disproportionate resources.
False positives cost enterprises $1.3M annually in wasted labor.
Analyst burnout makes people over two times more likely to look for a new job.
2. Undetected Threats Escalate into Financial Disasters
Lack of threat intelligence is just one of the drivers of low detection rates
Cyberattacks exploit gaps in visibility and slow response times. Organizations relying on outdated or generic TI feeds often miss targeted, evasive threats until it is too late. By the time a breach is detected, the damage in terms of downtime, regulatory fines, and lost customer trust has already begun. The financial effects of a single incident can cripple budgets and erode market position for years.
$4.4M is the average breach cost for companies today.
3. Compliance Failures and Regulatory Fallout
Regulatory bodies do not accept “we didn’t see it coming” as an excuse. Without real-time, comprehensive TI, organizations struggle to detect, document, and mitigate threats in ways that satisfy auditors. The result is hefty fines, legal battles, and mandatory security overhauls that could have been avoided with proactive intelligence.
5 Ways Threat Intelligence Saves Money and Resources
1. Helps Stop Breaches Before They Start
Threat Intelligence Feeds: data source, integration options
The financial impact of a cyberattack extends far beyond the immediate incident. Downtime, regulatory penalties, and reputational harm can accumulate into millions in losses even for a single event. Most organizations do not realize how many attacks slip through their defenses until it’s too late. The difference between a near-miss and a full-blown crisis often comes down to how quickly and accurately threats are identified.
ANY.RUN’s Threat Intelligence Feeds provide actionable, real-time intelligence needed to block threats at the earliest stage. Instead of reacting to breaches after the fact, teams can neutralize risks before they execute, turning potential disasters into routine intercepts.
How ANY.RUN Helps:
TI Feeds and Threat Intelligence Lookup deliver 24× more IOCs per incident from 15,000+ SOCs’ real-world investigations, offering instant, deep context on emerging threats, so analysts confirm and contain attacks in seconds.
2. Eliminates Wasteful Spending on False Positives
SOC teams are overwhelmed by alert fatigue, with analysts spending hours each day chasing down irrelevant or duplicate threats. This becomes both a productivity issue and a financial drain, as organizations pay for overtime, burnout, and unnecessary tooling that does not address real risks. The problem compounds when teams lack the context to prioritize threats effectively, leading to misallocated resources and missed critical alerts.
ANY.RUN’s solutions filter out the noise, ensuring teams focus only on verified, high-impact threats. This shift saves time and redirects budgets from wasteful investigations to proactive and fast incident handling.
How ANY.RUN Helps:
TI Feeds cut through irrelevant alerts, delivering only filtered, malicious IOCs, which saves hours of work and speeds up response.
TI Lookup enriches alerts with threat context, including TTPs and additional indicators so teams prioritize based on actual risk.
3. Cuts Labor Costs with Automated Triage
ANY.RUN’s TI solutions can be integrated into existing workflows
Manual threat triage is one of the biggest hidden expenses in cybersecurity. Analysts stuck in repetitive, low-value tasks burn out and cost more in overtime and turnover. Delayed responses increase breach risks and force costly retraining.
Thanks to plug-and-play integrations and API/SDK support, ANY.RUN’s TI solutions connect seamlessly with SOCs’ current software and enhance existing workflows. This reduces unnecessary escalations from Tier 1 to Tier 2 analysts, cutting labor costs and increasing the alert handling capacity without extra hiring.
How ANY.RUN Helps:
TI Lookup can be used to automatically enrich alerts and artifacts, reducing triage time to seconds and giving analysts the context they need to act independently.
TI Feeds stream live IOCs via STIX/TAXII directly into SIEM/SOAR/firewall/EDR and other solutions, eliminating manual data entry.
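As a rough illustration of that hand-off, the sketch below parses a STIX 2.1 bundle of the kind a TAXII collection returns and pulls out atomic indicators for a SIEM blocklist. The bundle contents are invented, and a real integration would use a TAXII client and the feed's actual object schema rather than this simplified pattern matching.

```python
import json
import re

# A made-up STIX 2.1 bundle standing in for one page of a TAXII poll.
bundle = json.loads("""
{
  "type": "bundle",
  "id": "bundle--11111111-1111-4111-8111-111111111111",
  "objects": [
    {"type": "indicator",
     "id": "indicator--22222222-2222-4222-8222-222222222222",
     "pattern": "[ipv4-addr:value = '198.51.100.7']",
     "valid_from": "2025-10-16T00:00:00Z"},
    {"type": "indicator",
     "id": "indicator--33333333-3333-4333-8333-333333333333",
     "pattern": "[domain-name:value = 'malicious.example']",
     "valid_from": "2025-10-16T00:00:00Z"}
  ]
}
""")

def extract_iocs(bundle: dict) -> list[tuple[str, str]]:
    """Pull (type, value) pairs out of simple single-comparison patterns."""
    iocs = []
    for obj in bundle.get("objects", []):
        if obj.get("type") != "indicator":
            continue
        # Matches patterns like [ipv4-addr:value = '...'].
        m = re.search(r"\[([\w-]+):value\s*=\s*'([^']+)'\]", obj["pattern"])
        if m:
            iocs.append((m.group(1), m.group(2)))
    return iocs

print(extract_iocs(bundle))
# → [('ipv4-addr', '198.51.100.7'), ('domain-name', 'malicious.example')]
```

Once extracted, pairs like these are what a SIEM or firewall ingests as blocklist entries; the STIX/TAXII layer exists so that this exchange needs no manual data entry.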
4. Accelerates Response to Minimize Financial Fallout
Threat intelligence from ANY.RUN can be traced to sandbox analyses for full attack view
Every minute counts during a cyber incident. Slow detection and response prolong downtime and amplify financial losses, from regulatory fines to customer churn. Organizations without real-time, context-rich TI often struggle to collect actionable insights, delaying critical decisions and letting attacks spread unchecked.
ANY.RUN’s TI Lookup provides instant, deep context, including a full attack view based on a single indicator, so teams can quickly understand the threat they are dealing with and respond decisively without guesswork. Faster responses limit damage, preserve revenue, and protect customer trust, turning potential crises into manageable events.
How ANY.RUN Helps:
TI Lookup provides sandbox detonation context for threats, so SOCs can see how malware acts on a real system and use the findings to contain it in their infrastructure.
TI Feeds supply links to sandbox reports for each indicator, which immediately provides security teams with full visibility into the detected threat’s actions.
5. Keeps SOCs Up-to-date on Evolving Threats without Manual Work
TI Lookup provides fresh indicators for the threats active right now
Cyber threats evolve daily, but most TI feeds update weekly or monthly, leaving gaps that attackers exploit. Organizations stuck with static, generic IOCs are forced into reactive, costly fixes every time a new attack emerges. This approach poses a direct financial risk, increasing the likelihood of malware slipping through outdated defenses.
ANY.RUN’s TI Feeds update continuously with data from live investigations by 500,000 security analysts, ensuring defenses adapt automatically to new threats. TI Lookup’s MITRE ATT&CK integration helps teams anticipate attacker moves, turning security from a cost center into a strategic advantage.
How ANY.RUN Helps:
TI Feeds are continuously updated in real time, delivering 99% unique IOCs, so SOCs can stay ahead of emerging threats and detect attacks that are missed by other tools.
TI Lookup’s Query Updates help SOCs get new indicators and samples for threats of interest, keeping up with evolving attacker infrastructure and enriching proactive defense.
Success Story: International Transport Company
Challenge
A transportation company faced constant cyber threats, especially through email phishing and malware attacks. Attackers frequently changed their infrastructure, making it hard to track and block threats in time. The security team struggled to manually monitor evolving attacks, which risked exposing sensitive communications and disrupting operations.
Solution
The company used ANY.RUN’s Threat Intelligence Lookup to automate threat tracking. They set up custom search queries for specific threats like geo-targeted attacks, CVEs, and phishing domains and subscribed to real-time updates. This allowed them to focus on active threats, convert new threat data into detection rules, and respond faster without manual searches.
Results
Faster Threat Detection: Automated alerts helped the team spot and block attacks like phishing and malware campaigns before they caused damage.
Better Resource Use: The team saved time by reducing manual research, letting them focus on high-priority threats and improve overall security.
Proactive Defense: Real-time updates on active threats allowed the company to strengthen defenses and stay ahead of attackers.
Conclusion
Threat intelligence solutions like ANY.RUN’s TI Feeds and TI Lookup both improve security and deliver measurable cost savings, resource optimization, and risk reduction. By automating triage, eliminating false positives, and accelerating response, businesses can:
Optimize security budgets (focus on high-impact threats).
Future-proof defenses (adapt to evolving attacks).
About ANY.RUN
ANY.RUN is built to help security teams detect threats faster and respond with greater confidence. Our Interactive Sandbox delivers real-time malware analysis and threat intelligence, giving analysts the clarity they need when it matters most.
With support for Windows, Linux, and Android environments, our cloud-based sandbox enables deep behavioral analysis without the need for complex setup. Paired with Threat Intelligence Lookup and TI Feeds, ANY.RUN provides rich context, actionable IOCs, and automation-ready outputs, all with zero infrastructure burden.
Microsoft has released its monthly security update for October 2025, addressing 175 Microsoft CVEs and 21 non-Microsoft CVEs. Among these, 17 vulnerabilities are considered critical and 11 are flagged as important and considered more likely to be exploited. Current intelligence shows that three of the important vulnerabilities have already been detected in the wild.
In the following notes we provide a concise overview of the most significant issues, focusing on the vulnerabilities that could impact the widest user base or carry the highest severity.
Exploited in the Wild
Three vulnerabilities were confirmed to have been exploited in the wild.
CVE‑2025‑24990: Windows Agere Modem Driver Elevation of Privilege Vulnerability Microsoft identified a flaw in the third‑party Agere Modem driver that ships with supported Windows operating systems. The driver was permanently removed in the October cumulative update. Users who rely on fax modem hardware that depends on this driver should uninstall any remaining components, as the affected driver is no longer supported.
CVE‑2025‑59230: Windows Remote Access Connection Manager Elevation of Privilege Vulnerability An improper access‑control check in Windows Remote Access Connection Manager allows an authorized attacker to gain elevated local privileges when accessing the service.
CVE‑2025‑47827: Secure Boot Bypass in IGEL OS before 11 This vulnerability permits a crafted root file-system to bypass Secure Boot on IGEL OS versions before 11 due to incorrect cryptographic signature verification performed by the igel-flash-driver module.
Critical Vulnerabilities
Microsoft marked 17 vulnerabilities as critical in this release. While these have not been observed exploited in the wild, their severity warrants prompt remediation.
CVE‑2025‑59287 Windows Server Update Service (WSUS) Remote Code Execution Vulnerability – Deserialization of untrusted data in WSUS allows an attacker to remotely execute code, potentially compromising the update service on vulnerable servers.
CVE‑2025‑59246, CVE‑2025‑59218 Azure Entra ID Elevation of Privilege Vulnerabilities – An attacker could exploit Azure Entra ID to elevate privileges, affecting the identity platform’s access control.
CVE‑2025‑0033 RMP Corruption During SNP Initialization – A race condition during Reverse Map Table initialization in AMD EPYC SEV‑SNP processors can allow a hypervisor with privileged control to modify RMP entries before they are locked. Azure Confidential Computing products contain multiple safeguards to prevent host compromise.
CVE‑2025‑59234 Microsoft Office Remote Code Execution Vulnerability – A use‑after‑free bug in Microsoft Office enables an attacker to execute code locally on an affected system, contingent on the presence of vulnerable content.
CVE‑2025‑49708 Microsoft Graphics Component Elevation of Privilege Vulnerability – An unauthenticated network attacker can manipulate the Graphics component through use‑after‑free logic to elevate privileges on a target machine.
CVE‑2025‑59291 Confidential Azure Container Instances Elevation of Privilege Vulnerability – External control of file names or paths in Confidential Azure Container Instances allows a privileged attacker to elevate privileges locally within the container environment.
CVE‑2025‑59292 Azure Compute Gallery Elevation of Privilege Vulnerability – Misuse of file names or paths can enable a privileged attacker to gain elevated rights in an Azure Compute Gallery context.
CVE‑2025‑59227 Microsoft Office Remote Code Execution Vulnerability – Exploitation of this vulnerability would allow remote execution on Office applications across multiple Windows versions.
CVE‑2025‑59247 Azure PlayFab Elevation of Privilege Vulnerability – PlayFab services can be manipulated by an unauthorized actor to elevate privileges, impacting the underlying Azure infrastructure.
CVE‑2025‑59252, CVE‑2025‑59272, CVE‑2025‑59286 Copilot Spoofing Vulnerabilities – Improper sanitization and encoding of user‑supplied data in Microsoft 365 Copilot leads to spoofing attacks.
CVE‑2025‑59271 Redis Enterprise Elevation of Privilege Vulnerability – Redis Enterprise servers may allow privileged escalation through a configuration oversight, impacting managed Azure Redis services.
CVE‑2025‑55321 Azure Monitor Log Analytics Spoofing Vulnerability – Cross‑site scripting (XSS) in Azure Monitor allows a network attacker to perform spoofing attacks within the Log Analytics portal.
CVE‑2025‑59236 Microsoft Excel Remote Code Execution Vulnerability – An unauthorized attacker could trigger a use‑after‑free in Microsoft Excel, causing local code execution on the target system.
CVE‑2016‑9535 LibTIFF Heap Buffer Overflow – The libtiff library contains a heap‑buffer‑overflow that can be triggered by malformed TIFF files, potentially allowing an attacker to execute arbitrary code under the user context.
Security teams are encouraged to examine the detailed advisory documents for each CVE to understand the exact scope and mitigations. A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page.
In response to these vulnerability disclosures, Talos is releasing a new Snort ruleset that detects attempts to exploit some of them. Please note that additional rules may be released at a future date, and current rules are subject to change pending additional information. Cisco Secure Firewall customers should update their ruleset with the latest SRU. Open-source Snort Subscriber Ruleset customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.
Snort 2 rules included in this release that protect against the exploitation of many of these vulnerabilities are: 65391 – 65410, 65420 – 65422.
The following Snort 3 rules are also available: 301325 – 301334.
Microsoft Patch Tuesday for October 2025 — Snort rules and prominent vulnerabilities (published 2025-10-14)