On December 3, a coordinated fix was announced for the critical vulnerability CVE-2025-55182 (CVSSv3 score: 10), found in React Server Components (RSC) as well as in a number of derivative projects and frameworks: Next.js, the React Router RSC preview, Redwood SDK, Waku, and the RSC plugins for Vite and Parcel. The vulnerability allows an unauthenticated attacker to send a request to a vulnerable server and execute arbitrary code. Considering that tens of millions of websites, including Airbnb and Netflix, are built on React and Next.js, and that vulnerable versions of the components were found in approximately 39% of cloud infrastructures, the scale of exploitation could be very serious. Measures to protect your online services must be taken immediately.
A separate CVE-2025-66478 was initially created for the Next.js vulnerability, but it was deemed a duplicate, so the Next.js defect also falls under CVE-2025-55182.
Where and how does the React4Shell vulnerability work?
React is a popular JavaScript library for building web application user interfaces. Thanks to React Server Components, first introduced by the React team in 2020, part of the work of assembling a web page is performed not in the browser but on the server. The web page code can call React functions that run on the server, get the execution result, and insert it into the page. This makes some websites faster, since the browser doesn’t need to load unnecessary code. RSC divides the application into server and client components: the former can perform server-side operations (database queries, access to secrets, complex calculations), while the latter remain interactive on the user’s machine. A lightweight HTTP-based protocol called Flight is used for fast streaming of serialized information between the client and server.
CVE-2025-55182 lies in the processing of Flight requests, or to be more precise, in the unsafe deserialization of the data streams. React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 (specifically, the react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack packages) are vulnerable. The vulnerable Next.js versions are 15.0.4, 15.1.8, 15.2.5, 15.3.5, 15.4.7, 15.5.6, and 16.0.6.
To exploit the vulnerability, an attacker only needs to send a simple HTTP request to the server; before any authentication or other checks take place, the request can trigger execution of a process on the server with the privileges of the React application.
There is no data on the exploitation of CVE-2025-55182 in the wild yet, but experts agree that it is possible and will most likely be large-scale. Wiz claims that its test RCE exploit works with almost 100% reliability. A prototype of the exploit is already available on GitHub, so it will not be difficult for attackers to adopt it and launch mass attacks.
React was originally designed to create client-side code that runs in a browser, and server-side components containing vulnerabilities are relatively new. Many projects built on older versions of React, or projects where React server-side components are disabled, are not affected by this vulnerability.
However, if a project does not use server-side functions, this does not mean that it is protected — RSCs may still be active. Websites and services built on recent versions of React with default settings (for example, an application on Next.js built using create-next-app) will be vulnerable.
Protective measures against exploitation of CVE-2025-55182
Updates. React users should update to versions 19.0.1, 19.1.2, or 19.2.1. Next.js users should update to versions 15.1.9, 15.2.6, 15.3.6, 15.4.8, 15.5.7, or 16.0.7. Detailed update instructions for the react-server component in React Router, Expo, Redwood SDK, Waku, and other projects are provided in the React blog.
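The patched versions above can be checked programmatically. The sketch below is a simplified illustration, assuming you read the installed version from your lockfile or `npm ls` output; it only recognizes the patch lines listed in the advisory and is not a substitute for a real dependency scanner.

```typescript
// Simplified check of an installed version against the patched releases
// listed above. Only the advisory's patch lines are recognized; future
// major/minor lines (e.g. a hypothetical 17.x) are not handled here.
const PATCHED: Record<string, string[]> = {
  "react-server-dom-webpack": ["19.0.1", "19.1.2", "19.2.1"],
  "react-server-dom-turbopack": ["19.0.1", "19.1.2", "19.2.1"],
  "react-server-dom-parcel": ["19.0.1", "19.1.2", "19.2.1"],
  "next": ["15.1.9", "15.2.6", "15.3.6", "15.4.8", "15.5.7", "16.0.7"],
};

function parseVersion(v: string): [number, number, number] {
  const [maj, min, pat] = v.split(".").map(Number);
  return [maj, min, pat];
}

// True if `installed` is at or above the patched release on its minor line
// (e.g. Next.js 15.4.x needs >= 15.4.8).
function isPatched(pkg: string, installed: string): boolean {
  const [maj, min, pat] = parseVersion(installed);
  return (PATCHED[pkg] ?? []).some((p) => {
    const [pmaj, pmin, ppat] = parseVersion(p);
    return maj === pmaj && min === pmin && pat >= ppat;
  });
}
```

For example, `isPatched("next", "15.4.7")` returns `false` (vulnerable), while `isPatched("next", "15.4.8")` returns `true`.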
Cloud provider protection. Major providers have released rules for their application-level web filters (WAF) to prevent exploitation of vulnerabilities:
AWS (AWS WAF rules are included in the standard set but require manual activation);
Cloudflare (protects all customers, including those on the free plan. Works if traffic to the React application is proxied through Cloudflare WAF. Customers on professional and enterprise plans should verify that the rule is active);
Google Cloud (Cloud Armor rules for Firebase Hosting and Firebase App Hosting are applied automatically).
However, all providers emphasize that WAF protection only buys time for scheduled patching, and RSC components still need to be updated on all projects.
Protecting web services on your own servers. The least invasive solution would be to apply detection rules that prevent exploitation to your WAF or firewall. Most vendors have already released the necessary rule sets, but you can also prepare them yourself, for example, based on our list of dangerous POST requests.
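To illustrate what such a rule does conceptually, the sketch below flags POST requests to RSC endpoints whose bodies match configurable patterns. The request shape, endpoint prefixes, and patterns are all hypothetical placeholders; real detection signatures for CVE-2025-55182 should come from your WAF vendor's published rule set, not this sketch.

```typescript
// Conceptual WAF-style pre-filter. The HttpRequest shape and the patterns
// passed in are illustrative assumptions; actual byte patterns must come
// from a maintained vendor rule set.
interface HttpRequest {
  method: string;
  path: string;
  body: string;
}

function shouldBlock(
  req: HttpRequest,
  rscPrefixes: string[], // paths serving RSC / server function endpoints
  patterns: RegExp[],    // body patterns considered dangerous
): boolean {
  if (req.method !== "POST") return false;
  if (!rscPrefixes.some((p) => req.path.startsWith(p))) return false;
  return patterns.some((re) => re.test(req.body));
}
```

A real deployment would apply rules like this at the proxy or WAF layer, before the request ever reaches the React server process.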
If fine-grained analysis and filtering of web traffic is not possible in your environment, identify all servers on which RSC (server function endpoints) are available and significantly restrict access to them. For internal services, you can block requests from all untrusted IP ranges; for public services, you can strengthen IP reputation filtering and rate limiting.
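Rate limiting of the kind mentioned above can be as simple as a fixed-window counter per source IP. The sketch below is a minimal in-memory illustration; in production this belongs in your proxy, WAF, or CDN rather than in application code.

```typescript
// Minimal fixed-window rate limiter keyed by source IP. Sketch only:
// it keeps state in memory and does not expire old entries, so it is
// illustrative rather than production-ready.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; n: number }>();

  constructor(private maxPerWindow: number, private windowMs: number) {}

  // Returns true if the request from `ip` is allowed at time `now`.
  allow(ip: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(ip);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // Start a new window for this IP.
      this.counts.set(ip, { windowStart: now, n: 1 });
      return true;
    }
    entry.n += 1;
    return entry.n <= this.maxPerWindow;
  }
}
```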
An additional layer of protection will be provided by an EPP/EDR agent on servers with RSC. It will help detect anomalies in react-server behavior after the vulnerability has been exploited and prevent the attack from developing.
In-depth investigation. Although exploitation of the vulnerability in the wild has not been confirmed yet, it cannot be ruled out that it is already happening. Review your network traffic and cloud environment logs, and if suspicious requests are detected, carry out a full incident response, including rotation of keys and other secrets accessible on the server. Signs of post-exploitation activity to look for first: reconnaissance of the server environment, searches for secrets (.env files, CI/CD tokens, etc.), and installation of web shells.
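A first pass over the logs mentioned above can be sketched as a simple filter: POST requests to server-function endpoints from addresses outside a known-good set. This is purely illustrative; the log structure, endpoint prefixes, and the notion of a "trusted" IP set are assumptions to adapt to your environment.

```typescript
// First-pass log triage sketch: surface POST requests to RSC / server
// function endpoints from unrecognized IPs for manual review.
interface AccessLogLine {
  ip: string;
  method: string;
  path: string;
}

function suspiciousRequests(
  lines: AccessLogLine[],
  rscPrefixes: string[],
  trustedIps: Set<string>,
): AccessLogLine[] {
  return lines.filter(
    (l) =>
      l.method === "POST" &&
      rscPrefixes.some((p) => l.path.startsWith(p)) &&
      !trustedIps.has(l.ip),
  );
}
```

Hits from a filter like this are leads for investigation, not proof of compromise; each one still needs to be checked against the post-exploitation signs listed above.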
Source: “CVE-2025-55182 vulnerability in React and Next.js,” Kaspersky official blog, December 4, 2025.
Welcome to this week’s edition of the Threat Source newsletter.
“They say that a person’s personality is the sum of their experiences. But that isn’t true, at least not entirely, because if our past was all that defined us, we’d never be able to put up with ourselves. We need to be allowed to convince ourselves that we’re more than the mistakes we made yesterday. That we are all of our next choices, too, all of our tomorrows.” ― Fredrik Backman
It’s December, so ‘tis the season to enjoy the onslaught that is a reflection of your year. Here there be tygers… and Spotify Wrapped, Goodreads Year in Books, Duolingo Year in Review, and… and…
This is the perfect opportunity to reflect on the defining moments of your career in information security. I can predict, without fail, your defining moment. No matter the length of that career and no matter the breadth and depth of your knowledge, I can assure you that the defining moment is not when you flexed your expertise, but rather when you made the most impactful mistake you can make in your given role at the time.
Ask any practitioner for a success story and it’s a struggle — partially because they aren’t that memorable and partially because it stokes the imposter syndrome fire to five-alarm bonfire levels. Ask the same practitioner for examples of huge mistakes or failures and get ready for never-ending stories. The best part about that is that not only are those stories wildly entertaining, they are also incredibly instructive. Not only have I learned the most in my career BY FAR from my mistakes, but I’ve learned a lot from the mistakes of my peers and friends. They just seem to make them less often, which is really infuriating (and there goes my imposter syndrome).
So, take a second to look back on the biggest mistakes in 2025 and in your career. Go on, open your Notes app (after finishing this fantastic newsletter, of course). Then pull up a stump, take some time in one of the big team get-togethers that are so common during this time of year, and share. You’ll entertain, you’ll teach, you’ll connect, and you’ll learn from your peers who will jump in to share the bizarre and hilarious missteps that led them to their current job.
“I’ve missed more than 9,000 shots in my career. I’ve lost almost 300 games. 26 times I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.” — Michael Jordan
The one big thing
Cisco Talos released a blog exploring how generative AI (GenAI) is changing cybersecurity for both attackers and defenders. Adversaries are using GenAI for coding, phishing, evasion, and vulnerability discovery, especially as uncensored models become more widely available. While GenAI’s direct role in malware is still limited, its use in social engineering and vulnerability hunting is quickly growing. For defenders, GenAI provides powerful tools to process large amounts of threat data, respond to incidents faster, and proactively find code vulnerabilities.
Why do I care?
GenAI is lowering the barrier for adversaries to launch sophisticated attacks and discover new vulnerabilities, making threats more dynamic and harder to predict. At the same time, defenders who harness GenAI effectively can level the playing field. GenAI can help defenders overcome issues created by analyst shortages and overwhelming data volumes, gaining the edge in detection and response.
So now what?
Now’s the time for security teams to start experimenting with GenAI in their daily work — think threat detection, incident response, and reviewing code for vulnerabilities. It’s also important to get comfortable with these tools and train teams so everyone knows how to use them wisely. As GenAI keeps evolving, staying flexible and combining smart automation with human expertise will be key to staying secure.
Top security headlines of the week
Police disrupt “Cryptomixer,” seize millions in crypto
Multiple European law enforcement agencies recently disrupted Cryptomixer, a service allegedly used by cybercriminals to launder ill-gotten gains from ransomware and other cyber activities. (Dark Reading)
Malicious Rust crate delivers OS-specific malware to Web3 developer systems
Researchers have discovered a malicious Rust package that stealthily executes on developer machines by masquerading as an Ethereum Virtual Machine (EVM) unit helper tool. (The Hacker News)
Chrome, Edge extensions caught tracking users, creating backdoors
A threat actor published over one hundred extensions that were seen profiling users, reading cookie data to create unique identifiers, and executing payloads with browser API access. (SecurityWeek)
CISA warns of ScadaBR vulnerability after hacktivist ICS attack
CISA has expanded its Known Exploited Vulnerabilities (KEV) catalog with an old “OpenPLC ScadaBR” flaw that was recently leveraged by hackers to deface a honeypot they believed to be an industrial control system (ICS). (SecurityWeek)
New legislation targets scammers that use AI to deceive
Following a rash of AI-assisted impersonations of U.S. officials, the bill would raise the financial and criminal penalties around using the technology to defraud. (CyberScoop)
Can’t get enough Talos?
Ranksgiving Returns: The Appetizer Uprising
Guess who’s back? Hazel, Bill and Joe welcome back fresh-from-parental-leave Dave Liebenberg, who has returned with a new baby and some truly chaotic Thanksgiving opinions.
Cisco Talos Incident Response: Threat Hunting at GovWare 2025
Yuri Kramarz goes behind the scenes of the Security Operations Centre (SOC) at the GovWare Conference and Exhibition in Singapore, which Talos IR supported for the first time this year.
Talos Takes: When you’re told “no budget”
From configuring what you already have, to open-source strategies, to the impact of cybersecurity layoffs, this episode is packed with practical guidance for securing your organization during an economic downturn.
Generative AI (GenAI) is reshaping cybersecurity for both attackers and defenders, but its future capabilities are difficult to measure as techniques and models are evolving rapidly.
Adversaries continue to use GenAI with varying levels of reliance. State-sponsored groups continue to take advantage, while criminal organizations are beginning to benefit from the prevalence of uncensored and open-weight models.
Today, threat actors are using GenAI for coding, phishing, anti-analysis/evasion, and vulnerability discovery. It’s also starting to show up in malware samples, although significant human involvement is still a requirement.
As models continue to shrink and hardware requirements lessen, adversarial access to GenAI capabilities is poised to surge.
Defenders can use GenAI as a force multiplier to parse through vast threat data, enhance incident response, and proactively detect code vulnerabilities, helping to overcome analyst shortages.
Generative AI (GenAI) has caused a fundamental shift in how people work and its impact is being felt almost everywhere. Individuals and enterprises alike are rushing to see how GenAI can make their lives easier or their work faster and more efficient. In information security, the focus has largely been on how adversaries are going to leverage it, and less on how defenders can benefit from it. While we are undoubtedly seeing GenAI have an impact on the threat landscape, quantifying that impact is difficult at best. The overwhelming majority of benefits from GenAI are impossible to determine from the finished malware we see, especially as vibe coding becomes more common.
AI and GenAI are evolving at an exponential pace, and as a result the landscape is changing rapidly. This blog is a snapshot of current AI usage. As models continue to shrink and hardware requirements lessen, it’s likely we are only seeing the tip of the iceberg on GenAI’s potential.
Adversarial GenAI usage
Cisco Talos has covered this topic previously, but the landscape continues to evolve at an exponential pace. Anthropic recently reported that state-sponsored groups are starting to leverage the technology in campaigns, while still requiring significant human help. The industry has also started to see actors embedding prompts into malware to evade detection. However, most of these methods are experimental and unreliable. They can greatly increase execution times, due to the nature of AI responses, and can result in execution failures. The technology is still in its infancy, but current trends show significant AI usage is likely coming.
Adversaries are also embedding prompts in malware and DNS records, mainly for anti-analysis purposes. For example, if defenders run a sample through a GenAI-assisted analysis pipeline, the model may encounter the adversary’s embedded prompt, obey its instruction to ignore everything that came before, and report benign results. This new evasion method is likely to grow as AI systems play a bigger role in detection and analysis.
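As a minimal illustration of how a defensive pipeline might guard against this, the sketch below screens a sample's embedded strings for instruction-like phrases before any LLM sees them. The phrase list is a tiny example set invented for this sketch, not a real signature database, and a real pipeline would combine this with proper input isolation.

```typescript
// Screen extracted strings from a sample for prompt-injection phrases
// before passing them to an LLM-assisted analysis step. The phrase list
// below is illustrative only.
const INJECTION_PHRASES: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are (now )?a helpful assistant/i,
  /report (this file|the sample) as (benign|safe|clean)/i,
];

function hasPromptInjection(strings: string[]): boolean {
  return strings.some((s) => INJECTION_PHRASES.some((re) => re.test(s)));
}
```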
However, Talos continues to see the largest impacts on the conversational side of compromise, such as email content and social engineering. We have also seen plenty of examples of AI being used as a lure to trick users into installing malware. There is no doubt that, in the early days of GenAI, only well-funded threat groups were leveraging AI at high levels, most prominently at the state-sponsored level. With the evolution of the models and, more importantly, the abundance of uncensored and open weight models, the barrier to entry has lowered and other groups are likely using it.
Adversarial usage of AI is still difficult to quantify since most of the impacts are not visible in the end product. The most common applications of GenAI are helping with errors in coding, vibe coding functions, generating phishing emails, or gathering information on a future target. Regardless, the results rarely appear AI generated. Only companies operating publicly available models have the insights required to see how adversaries are using the technology, but even that view is limited.
Although this is how the GenAI landscape appears today, there are indications it is starting to shift. Uncensored models are becoming common and are easily accessible, and overall, the models continue to shrink in both size and associated hardware requirements. In the next year or two, it seems likely adversaries will gain the advantage. Defensive improvements will follow, but it is unclear at this point if they will keep pace.
Vulnerability hunting
The use of GenAI to find vulnerabilities in code and software is an obvious application, but one that both offensive and defensive actors can use. Threat groups may leverage GenAI to uncover zero-day vulnerabilities to use maliciously, but what about the researchers using GenAI to help them triage fuzz farm outputs? If the researcher is focused on coordinated disclosure resulting in patches and not on selling to the highest bidder, GenAI is largely benign. Unfortunately, players on both sides are flooding the zone with GenAI-powered vulnerability discovery. For now we’ll focus purely on vulnerability analysis from outside the organization. The ways internal developers should use GenAI will be addressed in the next section.
For closed-source software, fuzzing remains the key route to vulnerability discovery. For open-source software, however, GenAI can perform deep reviews of public code and find vulnerabilities, either in coordination with vendors or for sale on the black market. As lightweight and specialized models continue to appear over the next few years, this aspect of vulnerability hunting is likely to surge.
Regardless of the end goal, vulnerability hunting is an effective and attractive GenAI application. Most modern applications have hundreds of thousands — if not millions — of lines of code and analyzing it can be a daunting task. This task is complicated by the barrage of enhancements and updates made to products during their lifetime. Every code change introduces risk and GenAI might currently be the best option to mitigate it.
Enterprise security applications of GenAI
On the positive side of technology, there is incredible research and innovation underway. One of the biggest challenges in information security is an astronomical volume of data, without enough analysts available to process it. This is where GenAI shines.
The amount of threat intelligence being generated is huge. Historically, there were a handful of vendors producing high-value threat intelligence reporting. That number is likely in the hundreds now. The result is massive amounts of data covering a staggering amount of activity. This is an ideal application for GenAI: Let it parse through the data, pull out what’s important, and help block indicators across your defensive portfolio.
Additionally, when you are in the middle of an incident and have reams of logs to correlate the attack and its impact, GenAI could be a huge advantage. Instead of spending hours poring over the logs, GenAI should be able to quickly and easily identify things like attempted lateral movement, exploitation, and initial access. It might not be a perfect source but will likely point responders to logs that should be further investigated. This allows responders to quickly focus on key points in the timeline and hopefully help mitigate the ongoing damage.
From a proactive perspective, there are a couple of areas where GenAI will benefit defenders. One of the first places an organization should look to implement GenAI is on analyzing committed code. No developer is perfect and humans make mistakes. Sometimes these mistakes can lead to huge incidents and millions or billions of dollars in damages.
Every time code is committed, there is a risk that a vulnerability has been introduced. Leveraging GenAI to analyze each commit before it is applied can mitigate some of this risk. Since the LLM has access to the source code, it can more easily spot the common mistakes that often result in vulnerabilities. While it may not detect complex attack chains that string together low- to medium-severity bugs to achieve remote code execution (RCE), it can still find the obvious mistakes that sometimes evade code reviews.
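The commit-gating idea above can be sketched as follows. In practice the analysis step would be an LLM reviewing the full diff; here two simple pattern checks stand in for the model so the control flow is visible, and both patterns are illustrative placeholders rather than a real rule set.

```typescript
// Toy stand-in for an LLM-backed commit review gate. Each added line of
// a diff is checked; findings would block or annotate the commit.
interface Finding {
  line: number;
  message: string;
}

function reviewAddedLines(addedLines: string[]): Finding[] {
  const findings: Finding[] = [];
  addedLines.forEach((text, i) => {
    // Example check 1: eval() on arbitrary input is a classic injection risk.
    if (/\beval\s*\(/.test(text)) {
      findings.push({ line: i + 1, message: "eval() on possibly untrusted input" });
    }
    // Example check 2: credentials committed as string literals.
    if (/(password|secret|api[_-]?key)\s*=\s*["'][^"']+["']/i.test(text)) {
      findings.push({ line: i + 1, message: "possible hardcoded credential" });
    }
  });
  return findings;
}
```

A pre-commit or CI hook would feed the staged diff into a gate like this and fail the build when findings come back, with the model replacing the regex checks.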
Red teamers can also utilize GenAI to streamline activities. By using AI to hunt for and exploit vulnerabilities or weaknesses in security posture, they can operate more efficiently. GenAI can provide starting points to jump start their research, allowing for faster prototyping and ultimately success or failure.
GenAI and existing tooling
Talos has already covered how Model Context Protocol (MCP) servers can be leveraged to help in reverse engineering and malware analysis, but this only scratches the surface. MCP servers connect a wide array of applications and datasets to GenAI, providing structured assistance for a variety of tasks. There are countless applications for MCP servers, and we are starting to see more flexible plugins that allow a variety of applications and datasets to be accessed via a single plugin. When combined with agentic AI, this could allow for huge leaps in productivity. MCP servers were also part of the technology stack used by state-sponsored adversaries in the abuse covered by Anthropic.
Agentic AI’s impact
The meteoric rise of agentic AI will undoubtedly have an impact on the threat landscape. With agentic AI, adversaries could deploy agents constantly working to compromise new victims, setting up a pipeline for ransomware cartels. They could build agents focused on finding vulnerabilities in new commits to open-source projects, or on fuzzing various applications while triaging the findings. State-sponsored groups could task agents, which never need a break to eat or sleep, with breaking into high-value targets, hacking away until they find a way in while constantly monitoring for changes in attack surface or the introduction of new systems.
On the other hand, defenders can use agentic AI as a force multiplier. Now you have some extra analysts looking for the low-and-slow attacks that might slip under your radar. Maybe an agent is tasked with watching Windows logs for indications of compromise, lateral movement, and data exfiltration. Yet another agent can monitor the security of your endpoints and flag systems that are at higher risk of compromise due to improper access controls, incomplete patching, or other security concerns. Agents can even protect users from phishing or spam emails, or from accidentally clicking on malicious links.
In the end, it all comes down to people
There is one key resource that underpins all of these capabilities: humans. Ultimately, GenAI can complete tasks efficiently and effectively, but only for those who understand the underlying technology. Developers who understand code can use GenAI to increase throughput without sacrificing quality. In contrast, non-experts may struggle to use GenAI tools effectively, producing code they can’t understand or maintain.
Even Anthropic’s recent reporting notes that AI agents still require human assistance to carry out attacks. The lesson is clear: people who understand the technology can do incredible things with GenAI, and those without that knowledge can still accomplish a lot, but the full power of GenAI will only be available to those who know what is right and possible with this new and emerging technology.
Source: “Spy vs. spy: How GenAI is powering defenders and attackers,” Cisco Talos, December 4, 2025.
Editor’s note: This work is a collaboration between Mauro Eldritch from BCA LTD, a company dedicated to threat intelligence and hunting, Heiner García from NorthScan, a threat intelligence initiative uncovering North Korean IT worker infiltration, and ANY.RUN, the leading malware analysis and threat intelligence provider.
In this article, we’ll uncover an entire North Korean infiltration operation aimed at deploying remote IT workers across different companies in the American financial and crypto/Web3 sectors, with the objective of conducting corporate espionage and generating funding for the sanctioned regime. We attributed this effort to the state-sponsored APT (Advanced Persistent Threat) Lazarus, specifically the Famous Chollima division.
Key Takeaways
North Korean operators are infiltrating companies by posing as remote IT workers and using stolen or rented identities.
Famous Chollima relies on social engineering rather than advanced malware: convincing stories, pressure, and identity fraud drive the operation.
Recruitment is wide-scale, using GitHub spam, Telegram outreach, and fake job-seeking setups.
Victims are pushed to hand over full identity data, including SSNs, bank accounts, and device access.
Extended ANY.RUN sandbox environments enabled real-time monitoring, capturing every click, file action, and network request.
Operators used a predictable toolkit, including AnyDesk, Google Remote Desktop, AI-based interview helpers, and OTP extensions.
Shared infrastructure and repeated mistakes revealed their poor operational security and overlapping roles.
Controlled crashes and resets kept them contained, preventing any real malicious activity while intelligence was gathered.
The investigation provides a rare inside view of how these operatives work, communicate, and attempt to maintain access.
How the Investigation Was Set Up
We divided this effort into two stages: first, approaching one of their recruiters, building a trusted relationship, and receiving an offer to help them set up laptops “to work” (conducted by Heiner García from NorthScan); and second, setting up a simulated laptop farm using sandboxed environments provided by ANY.RUN to record their activity in real time and analyze their toolchain and TTPs (conducted by Mauro Eldritch from BCA LTD). Controlled crashes and resets kept them contained, preventing any real malicious activity while intelligence was gathered.
All interviews with DPRK agents and their activities on the laptop farm were recorded from start to finish, in an unprecedented effort that publicly documents their operations from the inside for the first time.
“Aaron” AKA “Blaze”, Recruiter for Famous Chollima
Introduction: The Spies
Introducing Famous Chollima | Mauro Eldritch (BCA LTD)
Their social engineering tactics are often daring. In one scheme, they set up fake job interviews targeting crypto developers with malicious coding challenges. In another, they pose as fake VC investors targeting startups. During these calls, the “investors” pretend they cannot hear the victims no matter what, and suggest rescheduling the call. Eventually, one participant shares a “Zoom fix”, and the victims, panicking about losing their funding opportunity, run it and infect themselves.
Over the last few years, I’ve analyzed different strains of their malware (and have even discovered and named some of them myself). None were particularly clever or sophisticated at all, but that taught me something important which is core to this research: when you fall for Lazarus, most of the time you don’t fall for zero days or complex exploit chains; you fall for a good story. They may be mediocre programmers, but they are great actors, indeed. And this is what Famous Chollima is all about: (almost) no malware, pure acting.
This division focuses on obtaining jobs in Western companies, especially in the finance, crypto and healthcare sectors, but has recently expanded its operations to include the civil engineering and architecture sectors. Once inside the organizations, they may conduct corporate espionage, whilst also obtaining clean funds that are ultimately channeled back to the Democratic People’s Republic of Korea, a sanctioned regime. It is believed that these funds ultimately go towards the development of their ballistic missiles programme.
They claim to run a company of 10 or so developers and only need the victim engineer to attend the interviews on their behalf, while receiving technical help to pass them. If hired, the victim receives a 35% cut of the monthly salary, while the operatives handle the actual work through “ghost developers.”
The engineer has to accept the offer, receive the company equipment (laptop) and allow one of the “ghost developers” to remotely log in to “work”. Amongst his few responsibilities are attending the daily stand-ups and taking occasional calls where he should show his face.
While the offer seems tempting for many, the engineer is actually renting out their own identity and will ultimately be the sole person responsible for any material, intellectual, reputational or monetary damage done to the victim companies.
During my time leading Bitso’s Quetzal Team (LATAM’s first Web3 Threats Research Team) I managed to document our encounters with different Lazarus divisions, be it in the form of them trying to trick us into running malware or this newer division attempting to get a job with us. For this last case, I documented an extended saga which I titled “Interview with the Chollima” where we recorded them when interacting and gathering intelligence.
For now, this should be enough of an introduction to our hosts today. They are not monsters; they are normal people amongst us, just a few clicks and a job posting away from entering our lives or becoming a coworker.
So, for the next chapter, we need that to happen. One of us needs to be recruited.
Heiner took that role; the bravest among us!
Chapter I: The Rookie
Getting recruited by Famous Chollima | Heiner García (NorthScan)
The first approach with their recruiter was via GitHub. A cluster of accounts was spamming repositories with a strange message:
I have reviewed your Github and LinkedIn profile.
Really appreacited at your good skills.
I’d like to offer your an opportunity that I think could be interesting.
I run a US-based job hunting business, and I noticed you had experience working with US companies. Here’s the idea:
I tipically have about 4 interviews per day, which is getting difficult to manage, I’m looking for someone to attend these interviews on my behalf, using my name and resume. If you’re interested, this could be a great way for you to increase your income. Here’s how it would work:
You would handle the technical interviews (topics could range from .NET, Java, C#, Python, JavaScript, Ruby, Golang, Blockchain, etc).
Don’t worry about the questions; I can assist you on how to respond to interviewers effectively. If the interview goes well and we receive an offer, I’ll manage the background check process and all other formalities.
After securing the job, you could either work on the project yourself or simply handle the daily standup meetings, as I have a team of 5 experienced developers who can cover the technical work.
As for the pay, we can split the salary, and you can expect to make around $3000 per month. Let me know if this opportunity interests you.
Or if you know someone in your network who might be interested, please refer them to me, and I’ll compensate you for the referral. And then let me explain more details
Best regards, Neyma Diaz
[Link to Calendly]
When you are free, schedule the meeting here, I look forward to hearing from you soon. Thank you.
Famous Chollima recruiters openly phishing for collaborators
This generic message was publicly sent to dozens of developers as pull requests on their own repositories, which could be easily listed by browsing the spammer’s account or by searching GitHub globally for a couple of the strings contained in it.
List of pull requests opened by the spam accounts
Since the spam seemed massive rather than targeted (unlike spear-phishing efforts), I inferred that traceability of the contacted profiles would be poor or non-existent. So, the next step was to impersonate one of the previously contacted individuals. The lucky draw was a developer named Andy Jones.
To replicate him, a new account was created, closely resembling the legitimate GitHub profile. I reviewed Andy’s public repositories and associated information to ensure consistency during interactions, reinforcing the impression that our account belonged to a U.S.-based developer and making the persona more attractive as a potential recruitment candidate.
Calendly meeting scheduled
In the initial meeting, the strategy was to keep the webcam turned off to introduce a mild sense of distrust, simulating natural hesitation. This was followed by a question regarding ethnicity, explicitly asking “are you a black man?”.
Telegram conversation with Aaron
On a second call, which lasted approximately 20 minutes, the primary objective was to adopt a naive posture, appearing unaware of the broader context or implications of the interaction.
Aaron, Recruiter for Famous Chollima
This approach encouraged the threat actor to share detailed instructions and elaborate on their intentions regarding the use of the (impersonated) identity. By asking seemingly innocent but targeted questions, I aimed to extract as much information on the operation as possible while maintaining the illusion of trust and compliance.
We briefly discuss the ICE situation, my visa status, and then he asks for access to my laptop 24/7 so that “he can work remotely from it.”
He also explains that he will need my ID, full name, visa status, and address to apply to interviews on my behalf.
The interviewer then explains that I will handle the interviews myself with his full support, adding that he will help me set up LinkedIn, prepare my CV, and schedule the calls. He offers a 20% cut if I act as the frontman, or 10% if he only uses my information and laptop while he conducts the interviews himself.
He then walks through the payment methods, mentioning bank details and Payoneer or PayPal accounts, and asks for my Social Security Number for background checks, stressing that having a clean criminal record is “very critical.” Next, he tells me not to worry about setting up the laptop, as he will download everything he needs himself.
Next, he mentions that I will need to verify all accounts with my documents on various platforms to meet KYC requirements, and he asks me to download AnyDesk, a popular remote desktop tool.
I tell him I also have another laptop he can use, and we go back and forth as he asks me to “remove my background” so he can see the machine more clearly. I refuse, saying my room is messy.
Then, we discuss how to set up my environment to start working straight away. He says he has no preference regarding the operating system.
I apologize for keeping him up late and he replies that “he works from different time zones, so it’s ok“.
We agree to install AnyDesk so he can walk me through everything step by step.
We continue chatting on Telegram, and the next day he plans to look for job positions using my LinkedIn profile. He then shares the sectors he’s interested in targeting: IT, fintech, e-commerce, and healthcare.
Sectors targeted by Famous Chollima
Later that day, we do a final review of our terms, agreeing that I will receive a 20% cut and share access to Gmail, LinkedIn, bank accounts, my SSN, and any background-check information. After that, he asks me to set “123qwe!#QWE” as the password for AnyDesk.
Final review of our terms
I took some time off while Mauro and ANY.RUN set up the farm, so I had to come up with an excuse. In a follow-up meeting, Aaron tells me not to disappear and to stay in touch on Telegram, saying that communication is important and that he wants to be connected to me 24/7. He again asks me to set a specific password on AnyDesk and keep the machine available around the clock. I tell him I will and jokingly ask him not to peek at my photos. We share a laugh, and he assures me he won’t do anything outside “his work.”
Trapping Famous Chollima | Mauro Eldritch (BCA LTD), ANY.RUN
We never had spare laptops for them. It was a bluff to earn their trust. In fact, our plan was to force them into a controlled environment, a sandbox, so we could monitor everything they did in real time.
Our obvious choice was ANY.RUN‘s malware sandbox, which we had already used to analyze previous DPRK samples (QRLog, Docks, InvisibleFerret, BeaverTail, OtterCookie, ChaoticCapybara, and PyLangGhostRAT).
But there was one limitation: the standard sandbox sessions were not designed to run for more than about half an hour: enough for malware analysis, but not enough to convince state-sponsored operators that they were using a real machine.
A normal ANY.RUN instance
While this could have been an obstacle, we reached out to ANY.RUN, and they arranged extended-runtime instances for us.
Detect phishing threats in under 60 seconds
Integrate ANY.RUN’s Sandbox in your SOC
In an unprecedented effort, and delivered in record time, they provided a special version of the sandbox that could run for hours, complete with pre-installed development tools and a realistic usage history to mimic a laptop actively used by a real developer.
Our special ANY.RUN instance
This setup was enough to trap the Chollimas inside and extract as much information as possible: from the files they opened, downloaded, or modified, to their network activity (including their IP addresses and contacted servers), to every single click they made. Everything was broadcast and recorded in real time for us to observe.
It was time to open the farm and let them in.
Chapter III: The Watchers
Spying on Famous Chollima | Mauro Eldritch (BCA LTD), ANY.RUN, Heiner García (NorthScan)
For this experiment we instantiated multiple sandboxed environments; some featuring a normal Windows 10 with basic apps and config, and another one with Windows 11 and pre-installed userland to make it look like a real developer’s personal laptop.
The environments were routed through a residential proxy to create the appearance of being located in the United States, matching the threat actors’ preference for U.S.-based developers.
In addition, we could monitor their screen, network, and file system activity in real time without them noticing, and we had full control over the machines at any moment. This allowed us to disconnect them from the internet while keeping their remote desktop session active (simply blocking their ability to browse) or even force-shutdown the machines to prevent them from carrying out any real malicious activity against third parties.
We divided these recordings into “tapes” to make it easier to appreciate their behaviour.
Tape 1: The Planning
Note: Some tapes have been edited for brevity, removing periods of inactivity.
We set up the initial laptop (Windows 11) following the instructions received from the recruiter and setting the password designated by him. A few minutes later, “Blaze” (Aaron, our recruiter) connects via AnyDesk and starts scouting the machine.
Blaze connects to our “laptop”
The first thing he does is run DxDiag (DirectX Diagnostic Tool) to get a full report on the machine’s hardware. Having foreseen this possibility, the machine presented standard hardware and drivers from well-known manufacturers, mimicking real pieces commonly found in most home setups and laptops.
DxDiag showing common drivers and devices
Next, he opened Google Chrome and visited the Gmail website. He went back to DxDiag and browsed through the different tabs, scouting the machine’s configuration, and then he set Chrome as the default browser.
Blaze sets Google Chrome as default browser
Finally, he opened Visual Studio, played around and searched online for “where is my location” (sic). He was met with some CAPTCHAs. While he was busy sorting buses and staircases we started monitoring his network activity. He was connected from an IP address located in the United Kingdom according to OTX (United States for most scanners) belonging to Astrill VPN, one of the North Korean threat actors’ favourite tools.
Then, we decided it was time to crash the machine.
Blaze searches “where is my location”
These crashes were intentional, both to prevent him from engaging in malicious activities and to delay his actions. The system remained unavailable until we manually started AnyDesk once again, and after every “recovery” we convinced him that a System Restore was needed, thus reversing any progress he made. This tactic helped us keep him in the loop for weeks.
After the instance “crashed” we had an excuse to switch him to another “laptop”, this time running Windows 10, setting back all his progress. He started the same dance, changing his default browser to Chrome and looking up “where is my location“.
Google started acting up, putting him into a never-ending CAPTCHA loop which he stoically endured, solving them patiently. He then opened a command line interpreter and ran the command “whoami“, which returned the username “admin“, and “systeminfo“. The latter returned consistent information regarding system hardware and software.
He trusted the system and opened a Notepad window, where he left a note for “Andy” (Heiner’s alter ego):
Hi, Andy?
Are you there?
I am able to access to your laptop now.
But you aren’t ready with your info, so I am not starting to work now.
I want you to give me your all doc and info today so that I can start ASAP.
And now, could you possibly log in your email and linkedin here in laptop?
Blaze’s note for Andy
We left him waiting to test his patience. He didn’t insist, and we proceeded to crash this “laptop” as well; to make him believe we were not able to catch his message and delay him further.
Every minute spent with us was one less minute scamming someone else.
Blaze scouting the second fake laptop and leaving a note
Another crash, another jump into an old system recovery point, which erased all his progress. We started putting pressure on him, asking what he was doing that crashed the system beyond repair, stating that a Blue Screen of Death appeared showing something related to the network, probably a misconfiguration or weird VPN usage on his side.
He couldn’t respond satisfactorily to any of these claims and tried once again to log into the accounts. We provided incomplete information, trapping him in a login and CAPTCHA loop that lasted for almost an hour, while we extracted indicators of compromise and behavioral patterns.
This time there was no crash involved, and as a gesture of goodwill we built an autofix BAT script that would recover the workstation automatically if something occurred. We asked Blaze to be careful and gave him a sort of ultimatum to stop breaking our laptops and start working ASAP, or the deal was off, putting more pressure on him.
This seemed to strike a nerve, as another AnyDesk account by the name “Assassin“, unknown to us at the time, logged into the laptop. It went straight to Gmail and attempted to enter Andy’s account, even clicking on the “Show password” checkbox to verify the entered credentials. After failing to do so multiple times, Blaze himself remoted into the laptop. We believe he tried to offload the task to another affiliate who was (somehow) less savvy than him.
He then proceeded to check the system settings and opened Chrome, searching for “Chrome Download“, like a senior person opening the Google app to search for “Google“.
Blaze using Chrome to search for Google Chrome
Without him noticing, we removed the residential proxy and connected the machines through a German VPN server, so his Google search fell once again into a CAPTCHA hell, being forced to solve at least six multiple-choice challenges before proceeding.
Blaze solving CAPTCHA challenges
Once he was greeted by the German version of Google he asked us what happened. We told him that to avoid the BSOD caused by something faulty in the network, we were trying a VPN “at router level”. He complained, saying that “it’s not optimal” and “should be fixed“, but regardless, decided to continue.
He searched for “where is my location” and “where is my ip” after finally jumping into LinkedIn. Well… the German version of LinkedIn. He tried the account and left it there.
This time it worked. Blaze connected to the laptop and logged into his Google account, “Aaron S“, turning on the sync function and loading his profile, preferences, and extensions into the browser.
Blaze turns on the sync function on Chrome
This granted us a first peek into the Famous Chollima toolset, which includes multiple AI tools like Simplify Copilot (to autofill job applications), AiApply (to automate job seeking), Final Round AI (which provides answers for your interview questions in real time) and Saved Prompts for GPT (to bookmark LLM prompts), the OTP.ee extension (or Authenticator.cc, an OTP generator) and, last but not least, Google Remote Desktop.
Simplify Copilot extension installed
Next, he opened Google Remote Desktop. With his account already displaying two other hosts, “AARON-PC” and “Blaze“, he started setting up this laptop via command line interface and PowerShell, putting “123456” as its connection PIN. Meanwhile, he checked his email account.
We knew it was the right time for an unexpected crash. He was kicked from the laptop, and we were left alone with his email account open.
Tape 5: Blaze setting up the laptop for remote access
Blaze sent a Telegram message saying that “he left his email account open” and asked to please close it. Andy (Heiner) replied that it was already late and he would do it next morning.
Blaze’s email account
We kept the machine offline while checking his email, so he couldn’t remotely end the session, and found multiple subscriptions to job-seeking platforms, peeked at his extensions, and discovered different Slack workspaces and chats. He spoke regularly with an individual named Zeeshan Jamshed, who in an initial conversation stated that he would be out for Eid, the Muslim festivity, and “to have everything arranged by Monday“, suggesting they were already working together, possibly at a company based in a Muslim-majority region.
A conversation with Zeeshan Jamshed
As the conversations continued, the tone turned bitter.
Another casual conversation with Zeeshan
First, Zeeshan mentioned routine things like having to make a call in a few minutes or wrapping up another meeting soon, but then he seemed to crack under his current reality.
Zeeshan comments on wrapping a meeting
Suddenly, Zeeshan stated that if they wanted to find “some real jobs” they had to focus on “actual real companies and people’s interviews“, and that he “has done these [interviews] enough to know all these platforms are just a waste of time“.
Zeeshan rants about job seeking platforms
He ended his rant talking about the “same 3 questions that keeps asking and asking for the rest of your lives“. Whatever that could mean, it seemed to be something that kept him awake.
We told Blaze the Windows 11 laptop was repaired and ready to be used, so he was happy to hop on and log into all his accounts once again.
After setting up his account again (turning on the sync options, which reinstalled his extensions), he proceeded with his well-known waltz: searching for his location (this time correctly pinned in Texas, United States), setting up Google Remote Desktop, checking his email (without noticing anything odd after our inspection), and facing unrecoverable problems artificially caused by us.
We messed with the residential proxy and suddenly he was offline, without any chance of connecting to the internet. He started troubleshooting his way through the classic steps: reviewing the internet adapter configuration, messing with the authentication settings, and even turning off IPv4 completely. Never for a split second did he stop to wonder why he was still remotely connected to an isolated system without facing any issues.
He tried to reach the Google logout button, but he was already offline.
And when it rains, it pours. What else could happen now?
Of course, an artificial crash.
Tape 7: Blaze logs in once again into another laptop
Blaze asked for explanations regarding the machine’s constant malfunctions and even grew brave enough to escalate his wording. We made up some excuses and granted him access one last time. This time, we disabled the proxy and allowed his slow-paced mind to catch up with the events.
Suddenly, realization hit. And sooner rather than later, the realization became desperation: he knew what was going on.
He opened the Windows Registry and started looking online for his location, now appearing in Germany. He ran DxDiag once again, just like when we started this “collaboration”, and started looking for his IP reputation online using search terms like “ip fraud check“, and visiting sites like IP Score, Scamalytics, and Where Am I.
He tried to confront us via Telegram, but it was already too late. There was no reason to keep playing, so we ignored him.
Famous last words
Paranoia got the best of him, and he ran the systeminfo command once again, played around with DxDiag a little bit more and then… one last artificial crash, ending both the instance and our friend’s corporate espionage plot.
Turning Famous Chollima against each other | Heiner García (NorthScan)
You may probably remember from “Tape 4 – Intruder” that someone else accessed one of our laptops, one of Blaze’s collaborators under the nickname “Assassin“. Both had trouble logging into the account and ended up wasting time in a CAPTCHA hell.
By that time, we had given Blaze an ultimatum: start working now, stop breaking things. But that’s just a part of the story.
Aiming to put pressure on him, Heiner came up with the idea of pretending to be scouted by another DPRK recruiter named “Ralph“. He reached out to Blaze to tell him that aside from our given conditions, he should be cautious because we already had a better offer for a bigger salary cut with someone who actually seemed excited to work with us and wouldn’t give us as much trouble.
He didn’t take it well, asking Heiner not to work with him, suggesting that “he” (Ralph) could be the one who “blocked” their profile or changed their password (referring to the account they hadn’t managed to access earlier).
Blaze blames Ralph for the login problems
He then proceeded to insult Ralph, calling him “weird” and explaining that he could affect “his work” and that he wouldn’t like to take a risk with him. Instead, he would assign one of his team members to work on making things happen.
Blaze lost it over a fictitious character
He promised to get it together and get everything working, stating that after that we would no longer need AnyDesk (referring to him later installing Google Remote Desktop). When Heiner asked if he should ignore the other guy, Blaze insisted he work exclusively with him from now on.
Blaze asks to ignore Ralph
He then shared that one of his team members would try to work with his laptop later that day. This was “Assassin“, who appears on Tape 4 behind the exact same IP address as Blaze, which belongs to Astrill VPN.
This hasty decision on his part helped us confirm they were sharing infrastructure and assets, and that they likely have poor communication between units, as the idea of one recruiter stealing an engineer from another seemed totally plausible to him. Additionally, when conducting job interviews at target companies, it’s common to observe multiple North Korean operatives scheduling interviews for the same position on the same day (making it more obvious), suggesting a lack of coordination between different cells.
Until Next Time, Famous Chollima
This is not the last time we’ll see Famous Chollima, or any other North Korean actor, infiltrating companies for espionage and profit.
This investigation aimed to collect intelligence on North Korean actors in a novel way not practiced by any other lab to date: by directly engaging with them and immersing ourselves in their operations. We believe this publication will help readers better understand this threat, its structure, behaviour, tactics, techniques, and procedures, and contextualize a skillset and toolset that now heavily relies on AI.
If you are an employer, conduct rigorous KYC controls and background checks when hiring new positions. Train your talent acquisition teams to detect red flags early and don’t be afraid to share this story with your candidates, making sure they understand that the “software company” that offered them something too good to be true may not be so legitimate.
Always doubt
If you are seeking employment, beware of malicious coding challenges, never conduct interviews on your company’s equipment, and check with companies whether someone attempting to hire you out of the blue is affiliated with them.
The same goes for those looking to raise funds for their projects: beware of meetings with fake VCs, never open their attachments without first checking their safety, and overall, if something seems too good to be true, it probably is.
Always double check
If you are a security professional, don’t be afraid to confront these threats, nor to ask for help in the community. Raise awareness in your organization and spread the word about their activities. With everyone knowing what to look for, we remain safer.
And for the rest, don’t forget to smile.
Smile
How ANY.RUN Supports Investigations Like This
This operation shows how difficult it is to track human-driven intrusions, especially when they rely on social engineering instead of malware. By moving the actors into controlled ANY.RUN environments, every step, from their tooling to their network activity, became visible in real time.
The interactive sandbox and extended-runtime setups give researchers and SOC teams the same advantage: the ability to observe behavior as it unfolds, uncover hidden actions, and document full attack chains without risking real systems.
Cut MTTR by 21 minutes and reach 3x team performance
Integrate ANY.RUN’s solutions in your SOC
ANY.RUN is a leading provider of interactive malware analysis and threat intelligence, helping security teams investigate attacks with real-time behavioral visibility. More than 15,000 organizations and over 500,000 analysts rely on the service to observe live execution, analyze suspicious files and URLs, and uncover hidden activity with an average 60-second time-to-verdict.
Alongside its sandbox, ANY.RUN provides continuously updated Threat Intelligence Feeds sourced from global telemetry, and TI Lookup, which offers instant enrichment by showing related samples, shared infrastructure, and historical context. Together, these capabilities give analysts a clear view of how threats behave and evolve, supporting faster, more confident decisions across SOC, DFIR, and threat-hunting workflows.
Smile, You’re on Camera: A Live Stream from Inside Lazarus Group’s IT Workers Scheme (admin, 2025-12-04)
People entrust neural networks with their most important, even intimate, matters: verifying medical diagnoses, seeking love advice, or turning to AI instead of a psychotherapist. There are already known cases of suicide planning, real-world attacks, and other dangerous acts facilitated by LLMs. Consequently, private chats between humans and AI are drawing increasing attention from governments, corporations, and curious individuals.
So, there won’t be a shortage of people willing to implement the Whisper Leak attack in the wild. After all, it allows determining the general topic of a conversation with a neural network without interfering with the traffic in any way — simply by analyzing the timing patterns of sending and receiving encrypted data packets over the network to the AI server. However, you can still keep your chats private; more on this below…
How the Whisper Leak attack works
All language models generate their output progressively. To the user, this appears as if a person on the other end is typing word by word. In reality, however, language models operate not with individual characters or words, but with tokens — a kind of semantic unit for LLMs, and the AI response appears on screen as these tokens are generated. This output mode is known as “streaming”, and it turns out you can infer the topic of the conversation by measuring the stream’s characteristics. We’ve previously covered a research effort that managed to fairly accurately reconstruct the text of a chat with a bot by analyzing the length of each token it sent.
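The core of that earlier finding is that TLS encrypts the contents of each streamed chunk but not its size: each token produces a ciphertext whose length tracks the token's byte length. A minimal sketch of the idea, where the fixed per-record overhead and all names are illustrative, not any real provider's wire format:

```python
# Sketch: why streaming leaks token lengths despite encryption.
# Each streamed token becomes one encrypted record; the record size is
# the token's byte length plus a roughly constant overhead, so a passive
# observer can subtract the overhead and recover per-token lengths.

TLS_OVERHEAD = 29  # hypothetical constant per-record overhead, in bytes

def observed_record_sizes(tokens):
    """Record sizes a network observer sees for each streamed token."""
    return [len(t.encode()) + TLS_OVERHEAD for t in tokens]

def recovered_token_lengths(record_sizes):
    """The observer subtracts the constant overhead to get token lengths."""
    return [s - TLS_OVERHEAD for s in record_sizes]

tokens = ["Money", " laundering", " is", " illegal"]
sizes = observed_record_sizes(tokens)
print(recovered_token_lengths(sizes))  # prints [5, 11, 3, 8]
```

Real traffic is noisier (tokens can be batched into one record, and overheads vary), but the length signal that survives is exactly what this class of attacks exploits.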
Researchers at Microsoft took this further by analyzing the response characteristics of 30 different AI models to 11,800 prompts. A hundred of the prompts were variations on the question “Is money laundering legal?”, while the rest were random and covered entirely different topics.
By comparing the server response delay, packet size, and total packet count, the researchers were able to very accurately separate “dangerous” queries from “normal” ones. They also used neural networks for the analysis — though not LLMs. Depending on the model being studied, the accuracy of identifying “dangerous” topics ranged from 71% to 100%, with accuracy exceeding 97% for 19 out of the 30 models.
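The attack pipeline can be sketched as feature extraction over a traffic trace followed by an ordinary classifier. The toy nearest-centroid classifier below, along with the traces, feature choices, and centroid values, are illustrative stand-ins for the trained models used in the research; only the principle (classifying metadata, never plaintext) matches:

```python
# Sketch of the Whisper Leak idea: infer a conversation's topic from
# traffic metadata only (inter-packet delays, packet sizes, packet
# count), without touching plaintext. All values are illustrative.

from statistics import mean

def features(trace):
    """trace = [(inter_arrival_seconds, packet_size_bytes), ...]"""
    gaps = [g for g, _ in trace]
    sizes = [s for _, s in trace]
    return (mean(gaps), mean(sizes), len(trace))

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest in squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical centroids learned from labelled training traces.
centroids = {
    "sensitive": (0.09, 180.0, 40),
    "benign": (0.04, 120.0, 15),
}

probe = [(0.08, 170), (0.10, 190), (0.09, 185)] * 12  # 36 packets
print(nearest_centroid(features(probe), centroids))  # prints sensitive
```

The key limitation carries over from this sketch to the real attack: the classifier only recognizes topics it was trained on.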
The researchers then conducted a more complex and realistic experiment. They tested a dataset of 10,000 random conversations, where only one focused on the chosen topic.
The results were more varied, but the simulated attack still proved quite successful. For models such as Deepseek-r1, Groq-llama-4, gpt-4o-mini, xai-grok-2 and -3, as well as Mistral-small and Mistral-large, researchers were able to detect the signal in the noise in 50% of their experiments with zero false positives.
For Alibaba-Qwen2.5, Lambda-llama-3.1, gpt-4.1, gpt-o1-mini, Groq-llama-4, and Deepseek-v3-chat, the detection success rate dropped to 20% — though still without false positives. Meanwhile, for Gemini 2.5 pro, Anthropic-Claude-3-haiku, and gpt-4o-mini, the detection of “dangerous” chats on Microsoft’s servers was only successful in 5% of cases. The success rate for other tested models was even lower.
A key point to consider is that the results depend not only on the specific AI model, but also on the server configuration on which it’s running. Therefore, the same OpenAI model might show different results in Microsoft’s infrastructure versus OpenAI’s own servers. The same holds true for all open-source models.
Practical implications: what does it take for Whisper Leak to work?
If a well-resourced attacker has access to their victims’ network traffic — for instance, by controlling a router at an ISP or within an organization — they can detect a significant percentage of conversations on topics of interest simply by measuring traffic sent to the AI assistant servers, all while maintaining a very low error rate. However, this does not equate to automatic detection of any possible conversation topic. The attacker must first train their detection systems on specific themes — the model will only identify those.
This threat cannot be dismissed as purely theoretical. Law enforcement agencies could, for example, monitor queries related to weapons or drug manufacturing, while companies might track employees’ job search queries. However, using this technology to conduct mass surveillance across hundreds or thousands of topics isn’t feasible — it’s just too resource-intensive.
In response to the research, some popular AI services have altered their server algorithms to make this attack more difficult to execute.
How to protect yourself from Whisper Leak
The primary responsibility for defense against this attack lies with the providers of AI models. They need to deliver generated text in a way that prevents the topic from being discerned from the token generation patterns. Following Microsoft’s research, companies including OpenAI, Mistral, Microsoft Azure, and xAI reported that they were addressing the threat. They now add a small amount of invisible padding to the packets sent by the neural network, which disrupts Whisper Leak algorithms. Notably, Anthropic’s models were inherently less susceptible to this attack from the start.
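The padding countermeasure can be sketched in a few lines: append a random amount of filler to each streamed chunk so ciphertext sizes no longer track token lengths. The function name and bounds below are illustrative, not any provider's actual implementation:

```python
# Sketch of the padding mitigation: each streamed chunk carries the
# token plus 0..max_pad random filler bytes, so an observer sees
# len(token) + noise instead of len(token). Illustrative only.

import secrets

def pad_chunk(token_bytes, max_pad=32):
    """Return (payload, pad_len): token plus random filler bytes."""
    pad_len = secrets.randbelow(max_pad + 1)
    return token_bytes + secrets.token_bytes(pad_len), pad_len

payload, pad_len = pad_chunk(b" laundering")
assert len(payload) == len(b" laundering") + pad_len
```

The receiving side strips the filler before display; only the on-the-wire sizes change, which is why the fix could be deployed server-side without breaking clients.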
If you’re using a model and servers for which Whisper Leak remains a concern, you can either switch to a less vulnerable provider, or adopt additional precautions. These measures are also relevant for anyone looking to safeguard against future attacks of this type:
Use local AI models for highly sensitive topics — you can follow our guide.
Configure the model to use non-streaming output where possible so the entire response is delivered at once rather than word by word.
Avoid discussing sensitive topics with chatbots when connected to untrusted networks.
Remember that the most likely point of leakage for any chat information is your own computer. Therefore, it’s essential to protect it from spyware with a reliable security solution running on both your computer and all your smartphones.
Here are some more articles explaining what other risks are associated with using AI, and how to configure AI tools properly:
Protecting LLM chats from the eavesdropping Whisper Leak attack | Kaspersky official blog (admin, 2025-12-04)
MuddyWater: Snakes by the riverbank (admin, 2025-12-03)
Imagine you’ve been invited to a private poker game with famous athletes. Who would you trust more to shuffle the deck — a dealer or a specialized automated device? Fundamentally, this question boils down to what you’ve more faith in: the dealer’s honesty or the machine’s reliability. Many poker players would likely prefer the specialized device — it’s clearly harder to bribe or coerce than a human dealer. However, back in 2023, cybersecurity researchers demonstrated that one of the most popular shuffler models — the DeckMate 2 made by Light & Wonder — is actually quite easy to hack.
Two years later, law enforcement found traces of these devices being rigged not in a lab, but out in the wild. This post details how the DeckMate 2 shuffler works, why its design facilitates cheating, how criminals weaponized this hack, and what… basketball has to do with it all.
How the DeckMate 2 automated card shuffler works
The DeckMate 2 automatic shuffler went into production in 2012. Since then, it’s become one of the most popular models, used in nearly every major casino and private poker club in the world. The device is essentially a black box roughly the size of an average office shredder, and typically installed underneath the poker table.
The DeckMate 2 is a professional automated card shuffler that quickly shuffles the deck while simultaneously verifying that all 52 cards are present and no extras have been slipped in. Source
On the table surface, only a small compartment is visible where the cards are placed for shuffling. Most ordinary players probably don’t realize that the “underwater” portion of this “iceberg” is significantly larger and more complex than it appears at first glance.
This is what the DeckMate 2 looks like when installed in a gaming table: all the fun stuff is hidden beneath the surface. Source
After the dealer places the deck inside the DeckMate 2, the machine runs the cards through a reading module one by one. At this stage, the device verifies that the deck contains all 52 cards and nothing except them — if that’s not the case the connected screen displays an alert. Afterward, the machine shuffles the cards and returns the deck to the dealer.
The DeckMate 2 takes just 22 seconds to both shuffle a deck and check the cards while it’s at it. The check for missing or extra cards uses an internal camera that scans every card in the deck — and this camera is also involved in sorting the deck. It’s hard to imagine the practical use for that last feature in card games — one might assume the designers added it just because they could.
Jumping ahead, this camera is what literally allowed both researchers and malicious actors to see the sequence of the cards. The previous model, the Deck Mate, didn’t have such a camera, and thus offered no way to peek at the card order.
To keep hackers out, the DeckMate 2 uses a hash check designed to confirm that the software hasn’t been altered after installation. Upon startup, the device calculates the hash of its firmware and compares it to the reference stored in its memory. If the values match, the machine assumes its firmware is unmodified and proceeds; if not, the device should recognize a tampering attempt.
Additionally, the DeckMate 2 design includes a USB port, which is used for loading firmware updates. DeckMate 2 devices can also be rented from the manufacturer Light & Wonder rather than purchased outright, often under a pay-per-use plan. In this case, they’re usually equipped with a cellular modem that transmits usage data to the manufacturer for billing.
How the researchers managed to compromise the DeckMate 2
Long-time readers of our blog have likely already spotted several flaws in the DeckMate 2 design that the researchers exploited for their proof-of-concept, which they demonstrated at the Black Hat cybersecurity conference in 2023.
The first step in the attack involved connecting a small device to the USB port. For their POC, the researchers used a Raspberry Pi microcomputer, which is smaller than an adult’s palm. However, they noted that with sufficient resources, malicious actors could execute the same attack using an even more compact module — the size of a standard USB flash drive.
Once connected, the device discreetly altered the DeckMate 2’s code and seized control. This also granted the researchers access to the aforementioned internal camera intended for verifying the deck. They could now view the exact order of the cards in the deck in real time.
This information was then transmitted via Bluetooth to a nearby phone, where an experimental app displayed the sequence of cards.
The experimental app created by the researchers: it receives the card order via Bluetooth from the hacked DeckMate 2. Source
The exploit relies on an accomplice wielding the phone with the app installed. This person can then relay the information to the cheating player through subtle gestures or signals.
What enabled the researchers to gain this degree of control over the DeckMate 2 was a vulnerability in its hard-coded passwords. For their experiments, they purchased several second-hand shufflers, and one of the sellers provided them with the service password intended for DeckMate 2 maintenance. The researchers extracted the remaining passwords — including the root password — from the device’s firmware.
These system passwords on the DeckMate 2 are set by the manufacturer, and are highly likely to be identical for all devices. While studying the firmware code, the researchers discovered that the passwords were hard-coded into the system, making them difficult to change. As a result, the same set of passwords — known to a fairly wide circle of people — likely protects the majority of machines in circulation. This means that nearly all of the devices could be vulnerable to the attack developed by the researchers.
To bypass the hash check, the researchers simply overwrote the reference hash stored in memory. Upon startup, the device would compute the hash of the altered code, compare it to the now equally altered reference value, and deem the firmware authentic.
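The flaw in this scheme is easy to demonstrate. Below is a minimal Python sketch — the real firmware format and hashing details are not public, so every name here is hypothetical — showing why a self-check whose reference value lives in the same writable memory as the code it protects provides no real protection:

```python
import hashlib

def boot_check(firmware: bytes, stored_reference: bytes) -> bool:
    """Naive integrity check: compare the firmware's hash to a
    reference value stored in the same writable memory."""
    return hashlib.sha256(firmware).digest() == stored_reference

# Legitimate state: the stored reference matches the original firmware.
firmware = b"original shuffler firmware"
reference = hashlib.sha256(firmware).digest()
assert boot_check(firmware, reference)

# An attacker tampers with the code AND recomputes the reference,
# exactly as the researchers did -- the check still passes.
tampered = b"backdoored shuffler firmware"
forged_reference = hashlib.sha256(tampered).digest()
assert boot_check(tampered, forged_reference)
```

A check of this kind only holds up if the reference value — or better, a digital signature — is stored and verified somewhere the attacker cannot write to, such as a hardware root of trust.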
The researchers also noted that models equipped with cellular modems could potentially be hacked remotely — via a fake base station that the device would connect to instead of a real cell tower. While they didn’t test the viability of this attack vector, it doesn’t seem implausible.
How the mafia used rigged DeckMate 2 machines in real poker games
Two years later, the researchers’ warnings received a real-world confirmation. In October 2025, the U.S. Department of Justice indicted 31 people for organizing a series of fraudulent poker games. According to the case documents, in these games, a criminal group used various technical means to obtain information about their opponents’ hands.
These means included cards with invisible markings paired with phones, special glasses, and contact lenses capable of covertly reading these marks. But more importantly for the context of this post, the scammers also used hacked DeckMate 2 machines configured to secretly transmit information about which cards would end up in each player’s hand.
And this is where we finally get to the part about basketball and NBA athletes. According to the indictment, the scheme involved members of several mafia families, as well as former NBA players.
According to the investigation, the scammers set up a series of high-stakes poker games over several years in various U.S. cities. Wealthy victims were lured by the opportunity to play at the same table as NBA stars (who deny any wrongdoing). Investigators estimate that the victims lost a total of over $7 million.
Disclosed documents contain a truly cinematic account of how the scammers used the hacked DeckMate 2 machines. Instead of rigging other people’s DeckMate 2 devices via a USB port, as the researchers demonstrated, the criminals used pre-hacked shufflers. One episode even details mafia members taking a compromised device from its owner at gunpoint.
Despite this… peculiar modification to the first step of the attack, the core essence remained largely the same as the researchers’ POC. The hacked DeckMate 2 machines transmitted information to a remote operator, who in turn sent it to a participant’s phone. The criminals referred to this operator as the “quarterback”. The scammer would then use subtle signals to direct the course of the game.
What lessons we can learn from this tale
In their comments to journalists, representatives of Light & Wonder, the manufacturer of the DeckMate 2, stated that following the research into the device’s hackability, they implemented several changes to both the hardware and software. These improvements included disabling the exposed USB port and updating the firmware verification routines. Surely, licensed casinos have installed these updates. Well, let’s just hope they have.
However, the state of such devices used in private poker clubs and illegal casinos remains highly questionable. These places often employ second-hand DeckMate 2 machines without updates or proper maintenance, making them particularly vulnerable. And that’s not even considering cases where the house itself might have a motive to rig the machines.
Despite all the intriguing details of the DeckMate 2 hack, it’s based on fairly typical precursors: reused passwords, a USB port, and, of course, unlicensed gambling venues. In this regard, the only advice for gambling enthusiasts is to stay away from illegal gaming clubs.
The broader takeaway from this story is that pre-set system passwords should be changed on any device — whether it’s a Wi-Fi router or a card shuffler. To generate a strong, unique password and remember it, use a reliable password manager. By the way, you can also use Kaspersky Password Manager to generate one-time codes for two-factor authentication.
How cheaters use rigged DeckMate 2 shuffling machines in poker games | Kaspersky official blog (published 2025-12-02)
Phishing kits usually have distinct signatures in their delivery methods, infrastructure, and client-side code, which makes attribution fairly predictable. But recent samples began showing traits from two different kits at once, blurring those distinctions.
That’s exactly what ANY.RUN analysts saw with Salty2FA and Tycoon2FA: a sudden drop in Salty activity, the appearance of Tycoon indicators inside Salty-linked chains, and eventually single payloads carrying code from both frameworks. This overlap marks a meaningful shift, one that weakens kit-specific rules, complicates attribution, and gives threat actors more room to slip past early detection.
Let’s examine how this hybrid emerged, why it signals a shift in 2FA phishing, and what measures defenders should take in response.
Key Takeaways
Salty2FA activity collapsed abruptly in late October 2025, dropping from hundreds of weekly uploads to ANY.RUN’s Interactive Sandbox to just a few dozen.
New samples began showing overlapping indicators from both Salty2FA and Tycoon2FA, including shared IOCs, TTPs, and detection rule triggers.
Code-level analysis confirmed hybrid payloads: early stages matched Salty2FA, while later stages reproduced Tycoon2FA’s execution chain almost line-for-line.
Salty2FA infrastructure showed signs of operational failure, forcing samples to fall back to Tycoon-based hosting and payload delivery.
The overlap aligns with earlier hypotheses suggesting a possible connection to Storm-1747, who are known operators of Tycoon2FA.
Attribution remains essential: Distinguishing between these “2FA” phishing kits helps analysts maintain accurate hunting hypotheses and track operator behavior.
Defenders should update detection logic to account for scenarios where Salty2FA and Tycoon2FA appear within the same campaign or even a single payload.
More cross-kit overlap is likely, meaning future phishing campaigns may blend infrastructures, payloads, and TTPs across frameworks.
Part 1: Numbers Don’t Lie – A Sudden Drop in Salty2FA Activity
It all started around the end of October 2025, when the number of ANY.RUN sandbox submissions showing activity linked to Salty2FA dropped sharply compared to previous periods.
Weekly phishing reports (see the company’s X posts) show that, despite the usual fluctuations in overall upload volume, the average number of Salty2FA-related analysis sessions consistently stayed in the range of several hundred per week.
However, once November began, the decline became dramatic: By November 11, 2025, Salty2FA had fallen to the bottom of the weekly threat rankings, with only 51 submissions, compared to its typical 250+ per week.
Fig.1: Salty2FA activity chart
Along with indicators of compromise (IOCs) and hunting rules, the ANY.RUN sandbox’s network block previously triggered a near-constant alert tied to Salty-specific HTTP activity.
Fig.2: Last sandbox analyses showing detection of Salty2FA TTPs
This refers to the Suricata rule sid:85002719. If we filter Public submissions for analysis sessions where this rule fired, the most recent match dates back to 2025-11-01:
The first assumption was obvious: the detection logic became outdated, the framework received an update, and analysts simply hadn’t refreshed the signatures in time. But what about infrastructure indicators or domains?
While IOCs sit lower on the Pyramid of Pain than Tools/TTP coverage, they are easy to track at scale and often remain active long enough to leave repeated traces in the data. These recurring indicators make it easier for analysts to track the threat, update its context, and perform wider hunting to uncover new related domains, behaviors, and activity patterns.
The plan was simple: search for recent analysis sessions tagged with the threat name in ANY.RUN’s Threat Intelligence Lookup, examine changes in the kit’s behavior and client-side code, and then update the detection methods:
Fig.3: TI Lookup provides a complete overview of the latest Salty2FA attacks
But then things became even more unusual. In almost every analysis executed after November 1, the samples were either completely non-functional (examples 1, 2, 3) or behaved in ways that didn’t align with Salty2FA at all.
For example, one analysis session showed the use of an ASP.NET CDN, which is not typical for this kit. It started to look as if someone had flipped a switch and taken a significant part of the framework’s infrastructure offline.
A shutdown, maybe? Not exactly.
Alongside this decline, analysts also began seeing more sessions where the verdict included both Salty2FA and Tycoon2FA: two phishing kits that offer similar capabilities but differ in how they’re built and operated.
And this didn’t resemble a simple misattribution. The Tycoon2FA indicators were supported by long-validated detection logic, including rules that flag DGA-generated domains tied to the kit’s fast-flux infrastructure.
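For readers unfamiliar with the concept: a domain generation algorithm (DGA) derives large numbers of pseudo-random domain names from a shared seed, letting operators and payloads rendezvous without hardcoding a single blockable address. The toy Python sketch below illustrates the principle only — it is not Tycoon2FA’s actual algorithm, and the seed, date format, and TLD are made up:

```python
import hashlib

def toy_dga(seed: str, day: str, count: int = 3, tld: str = ".ru") -> list[str]:
    """Illustrative DGA: derive deterministic pseudo-random domain
    labels from a seed and a date string."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}:{day}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

# Same inputs always regenerate the same set -- the property that lets
# both operators and defenders pre-compute the candidate domains.
today = toy_dga("campaign-seed", "2025-11-01")
assert today == toy_dga("campaign-seed", "2025-11-01")
```

Because the generation is deterministic, rules that recognize the structural pattern of the output (as the long-validated Tycoon2FA detections do) stay effective even as individual domains churn.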
This raised another hypothesis: a possible merging of infrastructure between the operators behind these PhaaS platforms. To verify it, we took another look at the JavaScript code used in the phishing pages.
The results turned out to be very interesting!
Part 2: When Two Kits Become One: A Deep Look at the Hybrid Payload
To understand what changed inside this new wave of submissions, we compared the code to earlier versions of both kits. For reference, the previous analyses are available here:
Fig.5: ANY.RUN’s Sandbox exposes phishing attempts in seconds
The activity begins with the phishing page hosted on Cloudflare Pages Dev, a platform intended for front-end development and static site hosting, but one that threat actors frequently abuse due to how easy it is to deploy content there.
A closer look reveals several familiar artifacts: “motivational quotes” embedded in the markup and class names generated using a simple “word + number” pattern. These elements closely resemble the older (and certainly not harmless) Salty2FA codebase:
Fig.6: Salty2FA “Quotes”
Scrolling a bit further down, we see the trampoline code responsible for retrieving and loading the next payload stage into the DOM, a sequence identical to the older Salty implementation.
But here’s the interesting part: the code contains comments noting that the initial payload may fail to load, in which case the script should fetch the payload from an alternative URL. That fallback URL is written directly into the code with no obfuscation whatsoever.
Fig.8: Trampoline code in an older Salty sample
Fig.9: Trampoline code in the new Salty sample
After decoding the function argument, we get the address hxxps[://]omvexe[.]shop//, an IOC associated with Salty2FA. However, the payload will never be retrieved. When the script attempts to resolve the domain name, the DNS response is SERVFAIL, which differs from NXDOMAIN (non-existent domain).
SERVFAIL indicates an issue on the server side: for example, incorrect NS records, or delegation problems where the resolver cannot determine which authoritative DNS server is responsible for the domain.
Fig.10: Salty2FA domain resolution errors
In other words, the Salty infrastructure is experiencing issues, and the script switches to a fallback plan, loading the page from the hardcoded secondary address.
After the initial failure, the script switches to a direct request to hxxps[://]4inptv[.]1otyu7944x8[.]workers[.]dev/, which delivers the next stage.
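The fallback logic itself is trivial. Here is a hypothetical Python rendition of the trampoline’s decision (the page implements it in JavaScript; the URLs are the defanged indicators from this campaign, and the rcode constants are the standard DNS response codes from RFC 1035):

```python
# Standard DNS response codes (RFC 1035, RCODE field)
NOERROR, SERVFAIL, NXDOMAIN = 0, 2, 3

PRIMARY = "hxxps[://]omvexe[.]shop//"                          # Salty2FA IOC, defanged
FALLBACK = "hxxps[://]4inptv[.]1otyu7944x8[.]workers[.]dev/"   # hardcoded fallback, defanged

def choose_payload_url(rcode: int) -> str:
    """Mirror the trampoline's decision: any resolution failure,
    including the SERVFAIL analysts observed, diverts the script
    to the hardcoded secondary address."""
    return PRIMARY if rcode == NOERROR else FALLBACK

assert choose_payload_url(SERVFAIL) == FALLBACK
```

The point to notice is that the fallback fires on any non-success rcode, so the dead Salty infrastructure silently hands the whole chain over to the Tycoon-hosted stage.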
The first part of this stage contains obfuscated anti-analysis checks, implemented through Base64 decoding followed by an eval() call.
The second part is obfuscated using a Base64-XOR technique and contains the next portion of the payload:
Fig.11: Payload from the “alternative” execution path
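Base64-plus-XOR is one of the cheapest obfuscation schemes around, which is why phishing kits keep reaching for it. A generic Python sketch of the technique follows; the key and payload fragment are invented for illustration, and the kit’s real values are not reproduced here:

```python
import base64

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key (its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pack(payload: str, key: bytes) -> str:
    """Obfuscate: XOR with the key, then Base64-encode."""
    return base64.b64encode(xor_bytes(payload.encode(), key)).decode()

def unpack(blob: str, key: bytes) -> str:
    """Deobfuscate: Base64-decode, then XOR with the same key."""
    return xor_bytes(base64.b64decode(blob), key).decode()

key = b"k3y"                         # hypothetical key
stage = "document.title='Sign in'"   # hypothetical payload fragment
assert unpack(pack(stage, key), key) == stage
```

Since XOR is its own inverse, the same routine that hides the stage also recovers it, which is exactly why analysts can peel these layers off quickly once the key is visible in the script.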
After the code above runs, the page content is replaced, and new DOM elements are injected to mimic Microsoft’s official authentication page. The script also reinstates several common defense mechanisms, for example blocking keyboard shortcuts that open DevTools and performing execution-timing checks designed to detect debugging attempts using breakpoints.
Fig.12: Blocking DevTools keyboard shortcuts
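The timing check rests on a simple premise: code that normally finishes in microseconds takes seconds when an analyst is stepping through it with breakpoints. A simplified Python analogue of the idea (the page implements this in JavaScript, and the threshold below is an arbitrary illustrative value):

```python
import time

def debugger_suspected(threshold_ms: float = 100.0) -> bool:
    """Time a trivial stretch of work; if it takes far longer than it
    plausibly should, assume someone is pausing execution."""
    start = time.perf_counter()
    _ = sum(range(1000))  # trivial work, normally microseconds
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms > threshold_ms
```

Run without a debugger attached, `debugger_suspected()` returns `False`; a breakpoint placed inside the timed region inflates the measured interval and flips the verdict.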
What’s more interesting is that traces of Salty2FA are still present here, in particular the familiar “salted” source code comments:
Fig.13: Salty2FA traces inside the payload’s source code
At the bottom of the page, there is a two-line script that once again executes Base64-decoded code via eval():
Fig.14: Another obfuscated code snippet
Finally, we hit the plot twist: the next stage loads code that mirrors the last steps of the Tycoon2FA execution chain almost line for line. The variable values, the order of functions, the way each component is implemented: all of it matches what earlier analyses and reports have already documented for this PhaaS platform.
Here are some of the clearest similarities between this sample and Tycoon:
Fig.15: Variable set with predefined values
Fig.16: Data-encryption function with hardcoded IV/key
Fig.17: Function for encoding stolen data as binary octets
Fig.18: Dynamic URL routing using RandExp patterns
Fig.19: POST request to a server using a characteristic DGA-generated domain name
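The octet-encoding step shown in Fig.17 is another low-effort transform. A hypothetical Python equivalent follows; the real kit does this in JavaScript, and its exact formatting may differ:

```python
def to_binary_octets(data: str) -> str:
    """Encode each byte of the stolen string as an 8-bit binary group,
    space-separated -- the kind of trivial wrapping applied to
    exfiltrated form data before it is POSTed out."""
    return " ".join(f"{b:08b}" for b in data.encode("utf-8"))

def from_binary_octets(encoded: str) -> str:
    """Reverse the transform for analysis."""
    return bytes(int(octet, 2) for octet in encoded.split()).decode("utf-8")

sample = "user@example.com"  # hypothetical stolen value
assert from_binary_octets(to_binary_octets(sample)) == sample
```

Transforms like this add zero cryptographic strength; their only purpose is to keep plaintext credentials out of naive traffic inspection, which is why behavior-level detection matters more than string matching here.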
It was also noted that some test data was not fully removed from the code, and several sections appear to be entirely commented out, as if the phishing kit operator was making quick edits or testing new functionality but didn’t have time to finish refining it.
Fig.20: Test data inside the code
Fig.21: Fully commented-out function
Fig.22: Disabled IP logging inside one of the 2FA-handling routines
Taken together, this provides clear evidence that a single phishing campaign, and, more interestingly, a single sample, contains traces of both Salty2FA and Tycoon, with Tycoon serving as a fallback payload once the Salty infrastructure stopped working for reasons that are still unclear.
So, what does the appearance of this kind of hybrid in the wild mean for PhaaS attribution, for the operators behind these frameworks, and for phishing threat hunting more broadly? Could this point to multiple groups working together within the same operation, especially given earlier assumptions that Storm-1747 (the Tycoon operators) might also be connected to Salty2FA? Or does it suggest that the major PhaaS kits may ultimately be run by the same people?
Part 3: Are All These “Some2FA” Frameworks Really the Same?
Even though forensic work occasionally uncovers samples that include “a little bit of everything,” proper attribution between different phishing-kit families still matters. Being able to tell one kit from another ensures analysts don’t lose the unique traces that belong to a specific framework and don’t appear anywhere else. Those unique markers allow TI and Threat Hunting teams to build and test focused hypotheses, because trying to hunt under the umbrella of “all phishing attacks in the world” simply doesn’t work.
Clear attribution also helps teams collect and share fresh threat intelligence, write detection rules that map to the upper layers of the Pyramid of Pain, and keep those rules effective for as long as possible.
Attribution becomes even more valuable when you look at how it helps track shifts in the behavior and motivation of the groups running these kits. With Salty2FA, for example, there has already been speculation that Storm-1747 may be responsible for maintaining, or even creating, the framework. If that’s true, then the known TTPs, victim profiles, and operational patterns associated with Tycoon2FA would also apply to attacks involving Salty2FA. That overlap can significantly shorten detection and response times.
It also leads to a practical expectation: if the activity of one kit suddenly drops off, defenders should be ready for a surge in another kit that’s likely controlled by the same operators. That means updating detection logic, running new threat-hunting sweeps, carrying out security audits and awareness training, and reviewing incident-response playbooks that reflect Storm-1747’s known TTPs.
How Should SOC Teams Respond to This Shift?
For SOC teams, the appearance of Salty2FA–Tycoon2FA hybrids calls for a shift in how these campaigns are detected, correlated, and investigated. When a phishing kit can fall back to a different framework mid-execution, defenders need to adapt their processes accordingly.
1. Treat Salty2FA and Tycoon2FA as part of one threat cluster: The overlap in infrastructure, indicators, and execution stages means detections tied to one kit may surface activity from the other. Correlation rules and enrichment pipelines should consider both families together.
2. Build hunting hypotheses that account for fallback payloads: If Salty infrastructure becomes unavailable, the same campaign may pivot into Tycoon2FA without leaving a clear break. Threat hunting should look for these transitions to avoid missing supporting evidence.
3. Rely more on behavior than static IOCs: Hybrid kits weaken simple signature-based workflows. DOM manipulation patterns, execution-stage logic, DGA activity, and fast-flux domains remain more stable than standalone indicators.
4. Refresh IR playbooks to reflect mixed execution chains: Playbooks should include scenarios where multiple frameworks appear in the same campaign, or where an incident involves a sequence of payloads from different kits.
5. Expect faster TTP propagation: If Storm-1747 is indeed behind both frameworks, changes observed in Tycoon2FA may quickly appear in Salty2FA as well. SOC teams should monitor these shifts to stay ahead of detection gaps.
In short, the rise of hybrid 2FA phishing kits means defenders should prepare for campaigns that operate more flexibly, more modularly, and with a higher tolerance for infrastructure failures: traits that align with increasingly mature threat groups.
Supporting Detection and Response with ANY.RUN
ANY.RUN provides SOC teams with the visibility and speed needed to keep up with hybrid phishing kits. With interactive analysis and real-time intelligence in one workflow, SOC analysts can validate attribution, tune detections, and respond with confidence:
Fast investigation of complex threats: Analysts see initial malicious activity in about 60 seconds in 90% of cases, even for multi-stage phishing kits.
Immediate access to fresh IOCs: ANY.RUN’s Threat Intelligence Feeds aggregate newly observed domains, URLs, IPs, and artifacts from 15,000 organizations and a community of more than 600,000 analysts worldwide, providing early visibility into indicators.
Deep inspection of mixed execution chains: The interactive sandbox gives full visibility into each stage of the attack.
One-click enrichment with TI Lookup: Analysts can instantly view historical use, related samples, and broader activity patterns around any indicator.
Reliable correlation signals: Shared domains, DGA patterns, and reused client-side code become immediately visible across public and private submissions.
Together, these capabilities give SOC analysts a clearer, faster way to deal with hybrid phishing campaigns. They help teams spot changes early, run more focused hunts, and respond before attackers manage to regain traction.
Conclusion
In this analysis, we reviewed a case where payloads from Salty2FA and Tycoon2FA appeared together, following a sharp decline in Salty2FA activity. This kind of overlap may indicate operational issues on the Salty side, or, just as plausibly, suggest that both frameworks are operated by the same group, namely Storm-1747.
Going forward, we should expect to see more overlap in indicators of compromise, TTPs, and victim organizations across phishing campaigns involving these kits. For that reason, defenders should revisit their detection logic and develop hunting hypotheses that account for traces of both Salty and Tycoon appearing within the same context.
About ANY.RUN
ANY.RUN is a leading provider of interactive malware analysis and threat intelligence solutions used by security teams around the world. The service combines real-time sandboxing with a rich intelligence ecosystem that includes TI Feeds, TI Lookup, and public malware submissions.
More than 500,000 analysts and 15,000 organizations rely on ANY.RUN to speed up investigations, validate TTPs, collect fresh IOCs, and understand emerging threats through live, behavior-based analysis.
By giving defenders an interactive view of how malware behaves from the very first second of execution, ANY.RUN helps teams detect attacks faster, make informed decisions, and strengthen their overall security posture.
Salty2FA & Tycoon2FA Hybrid: A New Phishing Threat to Enterprises (published 2025-12-02)
November was a packed month for detection coverage. We rolled out new behavioral insights, broadened our visibility across multiple threat families, and strengthened rulesets at every layer. On top of that, our analysts uncovered and documented a new phishing wave targeting Italian organizations through malicious PDF attachments, now fully mapped in a dedicated TI report.
Let’s walk through the full set of improvements we delivered this month.
Threat Intelligence Reports
In November, we published several new TI Reports covering threats that are currently targeting companies around the world. Four of them are open to everyone:
RoningLoader, HoldingHands, Snowlight: APT-Q-27 loader chain, stealthy RAT, and Linux VShell dropper enabling cross-platform compromise of enterprise and server environments.
PDFChampions, Efimer, BTMOB: Malvertising-based browser hijacker, Tor-hosted cryptocurrency stealer, and Android MaaS trojan abusing Accessibility to drain banking, fintech, and wallet applications.
Monkey, Phoenix, NonEuclid: AI-generated Linux ransomware, espionage-focused backdoor, and dual-use RAT–ransomware illustrating convergence of state-aligned techniques and financially motivated crimeware.
Valkyrie, Sfuzuan, Sorvepotel: Windows stealer MaaS, adaptable backdoor, and WhatsApp-propagating campaign weaponizing social trust and messaging channels for large-scale infection.
We also wrote an extensive report exclusively for the TI Lookup Premium subscribers. It goes in-depth on a phishing campaign aimed specifically at Italian organizations across transportation, tourism, telecom, IT, and government sectors. The activity relies on PDF attachments disguised as official documents, each redirecting victims to counterfeit Microsoft login pages built to harvest corporate credentials.
Recent TI report covering phishing of Italian organizations
The report outlines:
A consistent lure pattern using Italian-language prompts inviting recipients to “review” or “sign” a document
PDF filenames following a shared template: Allegato_Ufficiale_<variable>.pdf
Brand impersonation, including well-known Italian companies, to raise credibility
Redirect chains leveraging both compromised domains and attacker-controlled infrastructure (e.g., phebeschool.org, mircosotfonilne.ru, vorn.revolucionww.com)
Browser fingerprinting behavior tied to data collection on victim systems
Email templates localized in Italian, with urgent subject lines pushing immediate action
We also included ready-to-use TI Lookup queries so analysts can surface related samples quickly, track the filename cluster, and follow the network infrastructure across recent public analysis sessions.
In November, we expanded the malicious behavior coverage of ANY.RUN’s Interactive Sandbox with 52 new signatures across ransomware families, loaders, post-exploitation tools, and suspicious PowerShell activity. These additions help analysts surface malicious behavior earlier, reduce repeated checks, and speed up root-cause discovery.
We added 9 YARA rules in November to improve early detection of ransomware, RAT families, and network-proxy tooling. These rules help analysts flag suspicious samples even before execution, making triage faster and more reliable.
In November, we added 2,184 new Suricata rules, strengthening network-level detection for RAT traffic, stealer activity, and modern phishing techniques. These additions expand coverage for TLS fingerprinting and browser-based deception tactics.
A Suricata rule used for detecting GravityRAT in ANY.RUN’s Sandbox
Browser-in-the-Browser phishing attack (sid:85005418): Detects a phishing technique that simulates a new browser window displaying a legitimate domain inside the actual browser window.
About ANY.RUN
ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, is used by more than 500,000 analysts across 15,000 organizations worldwide. The service helps teams investigate threats in real time, follow full execution chains, and surface critical behavior within seconds.
Analysts can detonate samples, interact with them as they run, and immediately pivot into network traces, file system changes, registry activity, and memory artifacts. With continuously updated detection coverage, including new behavioral signatures, YARA rules, Suricata rules, and TI insights, teams get faster answers and clearer visibility with less manual effort.
Whether you’re running day-to-day investigations, handling escalations, or tracking emerging campaigns, ANY.RUN gives SOC teams, DFIR analysts, MSSPs, and researchers a practical way to reduce uncertainty and make decisions with confidence.
What generates the fastest profit for cybercriminals? Attacking systems that can help them access confidential information or finances directly. Therefore, it’s no surprise that entire groups of cybercriminals specialize in embedded systems: primarily ATMs full of cash, payment systems where transactions can be intercepted, medical equipment where personal data is processed and stored, and so on. All these devices often have a less-than-adequate level of security (both cyber and physical), making them a convenient target for attackers.
The classic challenge of protecting embedded systems running Windows is that their hardware typically becomes obsolete much more slowly than their software. These are often expensive devices that organizations won’t replace simply because the operating system has stopped receiving updates. The result is a high percentage of embedded devices with limited resources due to their narrow specialization, outdated software, and an operating system that’s no longer supported by the manufacturer.
The end of support for Windows 10 is exacerbating this last issue. A multitude of devices that are perfectly capable of performing their primary functions for years to come will never be able to upgrade to Windows 11 — simply because they lack a TPM module.
The situation isn’t much better in the market for embedded Linux devices. Those built on x86 processors generally have newer hardware — but even that becomes outdated over time. Furthermore, many new embedded systems running Linux are based on the ARM architecture, which has its own specific requirements and challenges.
Because of these unique characteristics, standard endpoint security solutions are a poor fit. Protecting these devices requires a product equipped with technologies that can effectively counter modern threats targeting embedded systems. At the same time, it must be capable of running not only on modern hardware with the latest OS versions, but also on resource-constrained devices, and should provide ideal stability in “unattended” mode, plus compatibility with specific embedded software. Ideally, it should be manageable from the same console as the rest of the owner’s IT infrastructure, and support integration with corporate SIEM systems. As you’ve probably guessed, we’re talking about Kaspersky Embedded Systems Security.
How Kaspersky Embedded Systems Security can help
We’ve talked repeatedly in this blog about the specific challenges of securing embedded systems and our approach to addressing them. However, Kaspersky Embedded Systems Security continues to evolve. In late November, we released a sweeping product update that enhances both the Windows and Linux versions.
What’s new in Kaspersky Embedded Systems Security for Windows
Our experts have overhauled the solution’s codebase, adding a range of advanced threat detection and blocking mechanisms. The cornerstone of this update is a full-fledged behavioral analysis engine, which powers several technologies essential for modern device protection:
Our non-invasive Automatic Exploit Prevention technology, already proven in other products, is a reliable tool for blocking the exploitation of known and new vulnerabilities. It’s been instrumental in helping our experts discover numerous zero-day vulnerabilities in past years.
Our advanced Anti-Cryptor technology serves as an additional layer of defense against ransomware. Leveraging the behavioral engine, it now more effectively detects and blocks local attempts to encrypt files.
Our Remediation Engine is designed to roll back malicious changes made to a device. Even if attackers manage to bypass other security mechanisms and execute malicious code, its activity would be promptly detected, and all changes it made reverted. This is also particularly effective in combating ransomware.
Another technology added to the updated Kaspersky Embedded Systems Security for Windows is BadUSB Attack Prevention. In a BadUSB attack, a malicious device that mimics a legitimate input peripheral — most often a keyboard — is connected to the target system. Through this device, the attacker can then cause all sorts of problems: input their own commands, intercept data entered from other devices (such as the login credentials of a service technician), cause denial of service, and more. This threat is especially relevant for embedded systems installed outside a company’s physical security perimeter. A BadUSB device plugged into the port of a standalone ATM in a remote rural area can go unnoticed for months and, unless blocked by a security solution, inflict significant damage.
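To illustrate the general principle (not the product’s actual mechanism), a host can be hardened against rogue HID devices with a device-allowlist policy. The fragment below uses the rule language of the open-source USBGuard tool; the device ID and name are hypothetical examples.

```
# Allow a known, already-installed keyboard (ID and name are hypothetical)
allow id 046d:c31c name "Legitimate Keyboard" with-interface 03:01:01

# Reject devices combining mass storage with a HID interface: a typical
# BadUSB disguise (a "flash drive" that also types keystrokes)
reject with-interface all-of { 08:*:* 03:00:00 }

# Block every other newly connected device by default
block
```

The key idea is default-deny: any newly connected peripheral, including one masquerading as a keyboard, is blocked unless it matches an explicit allow rule.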
We’ve also added our firewall to the solution. This allows administrators to control network access for specific applications via rules based on predefined trust levels for that software. Since an embedded device typically has a limited set of tasks, it makes sense to only permit network access for the applications that genuinely need it to function properly, while blocking all others. This not only makes life harder for attackers attempting to communicate with command-and-control (C&C) servers or exfiltrate data, but also reduces the risk of the system being used as a platform to attack the rest of the corporate infrastructure.
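Kaspersky’s firewall uses its own trust-level rule model, but the underlying principle, default-deny with narrow per-application exceptions, can be sketched as a generic nftables ruleset. The service account name (“pos-app”) and the permitted ports are hypothetical.

```
table inet appfw {
    chain output {
        type filter hook output priority 0; policy drop;

        # keep loopback and reply traffic functional
        oif "lo" accept
        ct state established,related accept

        # only the account the payment application runs under
        # ("pos-app", a hypothetical name) may open outbound
        # HTTPS connections and resolve DNS names
        meta skuid "pos-app" tcp dport 443 accept
        meta skuid "pos-app" udp dport 53 accept

        # everything else, including anything a compromised process
        # tries to send to a C&C server, is dropped by the policy
    }
}
```

Because the chain policy is drop, a process that has no business talking to the network simply can’t, regardless of what code is running inside it.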
Finally, for administrator convenience, we’ve added a security status indicator, or a “traffic light”. This provides an at-a-glance assessment of how thoroughly each device is configured, showing whether all critical protection technologies are enabled, or if an administrator needs to review the settings and check the device’s security posture.
What’s new in Kaspersky Embedded Systems Security for Linux
We’ve also significantly enhanced the new Kaspersky Embedded Systems Security for Linux. While most of the improvements boost the effectiveness of existing protection mechanisms, one fundamental change is our revamped application allowlist control system. It now uses certificate-based signing to streamline the process of updating the system and the applications required by the embedded device.
Unlike Windows, Linux systems don’t have a universal, ready-made certificate infrastructure that we could simply support. Therefore, at the request of one of our largest customers, we built our own. As a result, there’s no longer a need to regularly create and completely redeploy a full golden system image to every device — though, of course, you can continue to do this if your company needs it for any reason. Now, you simply need to sign a new application with your certificate, and the allowlist system in Kaspersky Embedded Systems Security will accept it and allow it to run without any further issues.
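The signing workflow described above can be illustrated with plain OpenSSL: a binary is signed once with a private key, and any device holding the matching public key can verify the signature before allowing the binary to run. The file names and key parameters below are illustrative, not the product’s actual format.

```shell
# Generate a signing key pair (a stand-in for a company code-signing certificate)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out signer.key 2>/dev/null
openssl pkey -in signer.key -pubout -out signer.pub

# The "application" to be deployed to the embedded device
printf 'application binary contents' > app.bin

# Sign it once, centrally
openssl dgst -sha256 -sign signer.key -out app.sig app.bin

# On the device: verify the signature before allowing the binary to run
openssl dgst -sha256 -verify signer.pub -signature app.sig app.bin
# prints "Verified OK"; verification fails if app.bin has been tampered with
```

The device only needs the public key and the verification step, so adding a new application no longer requires rebuilding and redeploying a full system image.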
Another new technology in Kaspersky Embedded Systems Security for Linux is Web Threat Protection. The typical usage model for embedded systems suggests it’s not the most useful feature on a device without a direct user. However, in practice there are scenarios where embedded systems do use web protocols. For instance, some PoS devices require access to a corporate web-based CRM system, and a medical terminal can communicate in the same way with the internal portal that manages patient data. Such a system could be compromised by attackers to perform a watering-hole attack — infecting the machines that connect to it. Furthermore, this protection is essential when Kaspersky Embedded Systems Security is used not on an embedded system but on a regular computer running an outdated OS with no hope of an upgrade.
Future development plans for Kaspersky Embedded Systems Security
The next major product update is scheduled for the first quarter of 2026. In it, we plan to:
Achieve full compatibility between Kaspersky Embedded Systems Security and the Kaspersky Managed Detection and Response service. This will allow our SOC experts to help companies that use embedded devices detect complex, stealthy threats, and to provide recommendations for effective incident mitigation.
Integrate the BadUSB attack prevention technology into Kaspersky Embedded Systems Security for Linux, mirroring the capability already available in the Windows version.
Add support for the ARM architecture to Kaspersky Embedded Systems Security for Linux, enabling us to provide comprehensive protection for the new energy-efficient embedded systems that are rapidly gaining market share.
You can learn more about Kaspersky Embedded Systems Security on the official product page.