Taiwan Security Firm Confirms Flaw Flagged by CISA Likely Exploited by Chinese APTs

The vulnerability in TeamT5 ThreatSonar Anti-Ransomware was recently added to CISA’s KEV catalog.

The post Taiwan Security Firm Confirms Flaw Flagged by CISA Likely Exploited by Chinese APTs appeared first on SecurityWeek.

SecurityWeek – ​Read More

UnsolicitedBooker Targets Central Asian Telecoms With LuciDoor and MarsSnake Backdoors

The threat activity cluster known as UnsolicitedBooker has been observed targeting telecommunications companies in Kyrgyzstan and Tajikistan, marking a shift from prior attacks aimed at Saudi Arabian entities.
The attacks involve the deployment of two distinct backdoors codenamed LuciDoor and MarsSnake, according to a report published by Positive Technologies last week.
“The group used several

The Hacker News – ​Read More

Anonymous Fénix Members Arrested in Spain

The group’s administrator and moderator were arrested last year, and two other members were arrested this month.

The post Anonymous Fénix Members Arrested in Spain appeared first on SecurityWeek.

SecurityWeek – ​Read More

Moonrise RAT: A New Low-Detection Threat with High-Cost Consequences

Security professionals rely on early detection signals to prioritize and contain incidents. But what happens when a fully capable RAT generates none? 

In a recent investigation, ANY.RUN experts uncovered a new Go-based remote access trojan, which we named Moonrise. At the time of analysis, it was not detected on VirusTotal and had no vendor signatures tied to it. 

That’s the problem teams can’t ignore: credential theft, remote command execution, and persistence can be active while static checks stay silent. The result is slower triage and more escalations. 

Let’s break down Moonrise’s full attack chain and show how you can detect similar threats earlier, before they turn into longer investigations and real business impact. 

Key Takeaways 

  • Moonrise operated without early static detection, establishing active C2 communication before any vendor alerts were triggered. 
  • The RAT supports credential theft, remote command execution, persistence, and user monitoring, enabling full remote control of an infected endpoint. 
  • Silent C2 activity increases business exposure, extending dwell time and raising the risk of data loss, operational disruption, and financial impact. 
  • Static reputation checks alone are not enough. Behavior-based analysis is critical to confirm real attacker activity quickly. 

What Moonrise Means for Organizations 

Moonrise isn’t just a remote access tool. Its command set shows how an attacker can move from access to impact. 

  • Credential theft and clipboard monitoring can expose passwords, session tokens, and sensitive data copied between systems. 
  • Remote command execution and process control let operators run scripts, interfere with defenses, and manipulate business applications. 
  • File upload and execution creates a clean path to drop additional payloads, including stealers or ransomware. 
  • Screen capture, webcam, and microphone access can reveal what’s happening inside finance workflows, admin panels, and internal communications. 
  • Persistence and privilege-related functions increase dwell time and make removal harder. 

One compromised endpoint can disrupt operations and lead to financial and reputational damage, especially when the malware stays below static detection thresholds long enough to expand access. 

Attack Details Exposed: What We Observed in Execution 

You can follow the full Moonrise chain in real time, from execution to C2 control, and note the behaviors you can use for detection and triage. 

Check analysis session with Moonrise 

Moonrise RAT detected inside ANY.RUN sandbox, revealing its full attack chain

Within minutes of execution, Moonrise established outbound communication and began responding to operator-driven commands. What looked harmless in static checks immediately revealed interactive control once behavior was observed. 

1. Session Registration and Persistent Communication 

The communication begins with: 

  • client_hello 
  • connected 
  • ping/pong 

These commands handle client identification and keep the WebSocket session alive. This confirms that the infected system is actively connected and ready to receive instructions. 

At this stage, traditional static checks still show nothing suspicious. But behaviorally, the endpoint is already under remote control. 

C2 communication overview of Moonrise RAT 
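These keepalive strings are themselves a usable detection signal. Below is a minimal, illustrative Python sketch for flagging decoded WebSocket payloads that carry Moonrise-style registration traffic. The function name, log format, and threshold are assumptions for illustration, not ANY.RUN tooling or the malware's own code.

```python
# Command strings observed during Moonrise session registration and keepalive.
REGISTRATION_COMMANDS = {"client_hello", "connected", "ping", "pong"}

def flag_c2_session(frames: list[str], threshold: int = 3) -> bool:
    """Return True when enough distinct registration/keepalive commands
    appear in one WebSocket session to suggest Moonrise-like C2 traffic.
    Simple substring matching is used here, so expect false positives
    (e.g. 'ping' inside unrelated words) in a real deployment."""
    seen = {cmd for frame in frames for cmd in REGISTRATION_COMMANDS
            if cmd in frame}
    return len(seen) >= threshold

# Example: a session that registers and then keeps the socket alive.
session = ['{"cmd": "client_hello", "id": "host-1"}',
           '{"cmd": "connected"}',
           '{"cmd": "ping"}',
           '{"cmd": "pong"}']
print(flag_c2_session(session))  # True
```

In practice this logic would sit on top of decoded proxy or NDR output, where WebSocket frame payloads are already extracted per session.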

2. Visibility Into the Host Environment 

Once the session is established, the operator starts requesting information about the system. 

Observed commands include: 

  • process_list 
  • file_list 
  • webcam_list 
  • monitors_list
  • screenshot  

This allows the attacker to inspect running processes, review directory structures, identify connected displays, and check for available multimedia devices. Even when screen capture fails in a headless environment, the attempt itself signals active operator-driven interaction. 

YARA rule match confirming screenshot functionality inside the Moonrise process 

This stage provides the attacker with enough context to determine what data is accessible and which actions to take next. 

3. Direct System Interaction and Control 

Moonrise supports active command execution and process manipulation: 

  • cmd 
  • process_kill 
  • file_upload 
  • file_run 
  • file_execute 
  • file_delete 
  • mkdir 
  • explorer_restart 

Through these commands, the operator can run system commands remotely, terminate selected processes, upload additional payloads, execute them, modify directories, and restart system components. 

svchost.exe spawning cmd.exe to execute system commands inside the ANY.RUN sandbox 

This shifts the attack from observation to full control. At this point, the endpoint is no longer just compromised. It can be used to deploy further tools or prepare deeper access. 
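The svchost.exe-to-cmd.exe spawn seen in the sandbox is a classic triage heuristic. A hedged sketch of that check against process-creation telemetry follows; the event field names mirror Sysmon Event ID 1 (`Image`, `ParentImage`), but map them to whatever schema your EDR emits.

```python
# Parent/child pairs that rarely occur legitimately. Only the
# svchost.exe -> cmd.exe pair was observed in this analysis; extend the
# set for your own environment.
SUSPICIOUS_PAIRS = {("svchost.exe", "cmd.exe")}

def is_suspicious_spawn(event: dict) -> bool:
    """Flag a process-creation event whose parent/child image pair
    matches a known-suspicious combination."""
    parent = event.get("ParentImage", "").lower().rsplit("\\", 1)[-1]
    child = event.get("Image", "").lower().rsplit("\\", 1)[-1]
    return (parent, child) in SUSPICIOUS_PAIRS

evt = {"ParentImage": r"C:\Windows\System32\svchost.exe",
       "Image": r"C:\Windows\System32\cmd.exe"}
print(is_suspicious_spawn(evt))  # True
```

A hit on this pair is a lead, not a verdict; confirm it by detonating the sample and observing the full behavior chain, as shown in the analysis session above.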

4. Credential Access and Data Extraction 

The sample includes commands associated with data theft and credential harvesting: 

  • stealer 
  • steam 
  • file_download 
  • keylogger_logs 
  • clipboard_history 

These functions enable collection of stored credentials, extracted files, logged keystrokes, and clipboard content. If sensitive data is copied between applications, such as passwords or financial details, it becomes accessible to the operator. 

This is where technical compromise transitions into business exposure. 

5. Active User Monitoring 

Moonrise includes extensive user interaction monitoring capabilities: 

  • keylogger_start 
  • keylogger_stop 
  • keylogger_logs 
  • input 
  • clipboard_monitor_start 
  • clipboard_monitor_stop 
  • clipboard_history 
  • clipper_get_addresses 
  • clipper_set_address 
  • screenshot 
  • screen_stream_start 
  • screen_stream_stop 
  • webcam_capture 
  • microphone_record 

These commands allow the operator to monitor user input, track clipboard changes, capture screen content, and access audio or video devices. 

The infected endpoint effectively becomes a live surveillance point. 

Moonrise RAT actively checks for available and operational camera hardware before attempting capture

6. Privilege and System-Level Capabilities 

Moonrise also contains commands related to privilege handling and system configuration: 

  • uac_bypass 
  • rootkit_enable 
  • rootkit_disable 
  • watchdog_status 
  • protection_config 
  • uxlocker_trigger 
  • voltage_drop 

These suggest support for privilege manipulation, system configuration changes, and persistence-related behavior. While not all commands may be triggered in every session, their presence indicates extended control options. 

7. Lifecycle Management and Disruption 

Moonrise includes lifecycle management functions: 

  • update 
  • uninstall 

These allow the operator to modify or remove the deployed version of the malware. This indicates support for maintaining or adjusting the infection over time. 

The command set also contains user-facing system interaction functions: 

  • fun 
  • fun_message 
  • fun_wallpaper 
  • fun_openurl 
  • fun_shake 
  • fun_sound 
  • fun_restart 
  • fun_shutdown 
  • fun_bsod 

These commands suggest the ability to trigger visible system actions, including restarts or shutdown events, depending on operator intent. 

Their presence reinforces that Moonrise provides broad remote interaction capabilities beyond silent monitoring. 

Early Detection: 3-Step Loop That Works for Stealth RATs 

Moonrise is a good example of an annoying reality: sometimes a RAT shows up with no clean static verdict, no reputation you can trust, and nothing obvious to latch onto. In those cases, early detection comes down to how quickly your team can move from unclear signals to evidence-based containment. 

1. Monitoring: Catch the First Weak Signal Early 

A lot of RAT incidents start with infrastructure: a fresh IP, a new domain, traffic that doesn’t match your baseline. 

This is where ANY.RUN’s Threat Intelligence Feeds help. They continuously surface newly observed indicators and patterns based on telemetry and submissions from 15,000+ organizations and 600,000+ security professionals.  

100% actionable IOCs delivered by TI Feeds to your existing stack 

For SOC managers, that means fewer blind spots in day-to-day monitoring and earlier detection of suspicious infrastructure before it becomes a bigger incident. 

2. Triage: Enrich Fast, Then Confirm with Behavior 

When static checks don’t help, teams often lose time debating severity. That’s where MTTR grows and escalation pressure builds. 

A cleaner path is enrich → execute → decide. Use Threat Intelligence Lookup to pull immediate context around a hash, URL, domain, or IP (relationships, related samples, historical sightings). Then run the artifact in the ANY.RUN Sandbox to confirm what it actually does in a safe environment. 

ANY.RUN’s sandbox detected the full attack chain of Moonrise, including the implemented TTPs, in minutes instead of hours 

This is how teams replace uncertainty with evidence, reduce unnecessary Tier-1 escalations, and contain earlier, before a RAT turns into credential loss or broader access. 

3. Threat Hunting: Turn One Confirmed Case into Wider Coverage 

Once you confirm a RAT-like incident, the next step is making sure it doesn’t repeat under a slightly different wrapper. Threat Intelligence Lookup helps you pivot from confirmed indicators to related infrastructure and nearby samples, so hunting stays tied to what’s active now. 

From there, you can pivot into related IPs/domains, cluster similar samples, and validate behavior in the sandbox to decide whether it’s the same activity or a lookalike. 

Below is an example of a TI Lookup query for the Moonrise C2 IP observed in the attack: 

destinationIP:"193.23.199.88" 

TI Lookup displays sandbox analyses related to the IP address used in the Moonrise attack 

When these three motions (monitoring, fast triage, and targeted hunting) run as a loop, stealth RATs stop being “late discoveries” and become manageable security events with lower response cost and less business exposure. 

Conclusion: Reducing Exposure Starts with Faster Clarity 

Moonrise is a reminder that the biggest risk isn’t the RAT itself but the time lost before it’s clearly identified. When static checks stay silent, attackers can steal credentials, stage more payloads, and lock in persistence while teams are still debating severity. 

Reducing exposure comes down to one thing: faster clarity. Feed fresh infrastructure signals into monitoring, enrich quickly with TI Lookup, and confirm behavior in the sandbox before the case grows into a costly incident. 

Bring speed and clarity to your SOC with ANY.RUN ➜

About ANY.RUN 

ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, fits naturally into modern SOC workflows and supports investigations from initial alert to final containment. 

It allows teams to safely execute suspicious files and URLs to observe real behavior, enrich indicators with immediate context through TI Lookup, and continuously monitor emerging infrastructure using Threat Intelligence Feeds. Together, these capabilities help reduce uncertainty, accelerate triage, and limit unnecessary escalations. 

Today, more than 600,000 security professionals across 15,000+ organizations rely on ANY.RUN to make faster decisions, strengthen detection coverage, and stay ahead of evolving phishing and malware campaigns. 

To stay informed about newly discovered threats and real-world attack analysis, follow ANY.RUN’s team on LinkedIn and X, where weekly updates highlight the latest research, detections, and investigation insights. 

Indicators of Compromise (IOCs)  

  • 193[.]23[.]199[.]88 
  • c7fd265b23b2255729eed688a211f8c3bd2192834c00e4959d1f17a0b697cd5e 
  • 8a422b8c4c6f9a183848f8d3d95ace69abb870549b593c080946eaed9e5457ad 
  • 7609c7ab10f9ecc08824db6e3c3fa5cbdd0dff2555276e216abe9eebfb80f59b 
  • ed5471d42bef6b32253e9c1aba49b01b8282fd096ad0957abcf1a1e27e8f7551 
  • 082fdd964976afa6f9c5d8239f74990b24df3dfa0c95329c6e9f75d33681b9f4 
  • 8d7c1bbdb6a8bf074db7fc1185ffd59af0faffb08e0eb46a373c948147787268 
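The IP above is defanged for safe sharing. A small illustrative helper for converting between the defanged and usable forms when feeding indicators into blocklists or TI Lookup queries (this is a generic utility, not part of ANY.RUN's tooling):

```python
def refang(ioc: str) -> str:
    """Turn a defanged indicator (193[.]23[.]199[.]88, hxxps://...)
    back into its usable form for automated lookups."""
    return ioc.replace("[.]", ".").replace("hxxp", "http")

def defang(ioc: str) -> str:
    """Make an indicator safe to paste into reports or chat."""
    return ioc.replace(".", "[.]").replace("http", "hxxp")

print(refang("193[.]23[.]199[.]88"))  # 193.23.199.88
print(defang("193.23.199.88"))        # 193[.]23[.]199[.]88
```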

The post Moonrise RAT: A New Low-Detection Threat with High-Cost Consequences appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

How to take full-page screenshots in Chrome on any device – it’s easy and free

Want to capture a scrolling screenshot of a web page in Chrome? Here’s how to quickly take one on a desktop or your phone.

Latest news – ​Read More

Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

Anthropic on Monday said it identified “industrial-scale campaigns” mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude’s capabilities to improve their own models.
The distillation attacks generated over 16 million exchanges with its large language model (LLM) through about 24,000 fraudulent accounts in violation of its terms

The Hacker News – ​Read More

Try this tiny Linux distro when nothing else will fit – here’s why

Tiny Core Linux is an incredibly small, modular distro that can be customized to your specifications. Here’s how to get started.

Latest news – ​Read More

Faking it on the phone: How to tell if a voice call is AI or not

Can you believe your ears? Increasingly, the answer is no. Here’s what’s at stake for your business, and how to beat the deepfakers.

WeLiveSecurity – ​Read More

SURXRAT: From ArsinkRAT roots to LLM Module Downloads Signaling Capability Expansion


Executive Summary

SURXRAT is an actively developed Android Remote Access Trojan (RAT) commercially distributed through a Telegram-based malware-as-a-service (MaaS) ecosystem under the SURXRAT V5 branding.

The malware is marketed using structured reseller and partner licensing tiers, allowing affiliates to generate and distribute customized builds while the operator maintains centralized infrastructure and operational control.

This distribution model reflects the increasing professionalization of the Android threat landscape, where malware developers focus on scalability and monetization through affiliate-driven campaigns.

Technical analysis shows that SURXRAT operates as a full-featured surveillance and device-control platform capable of extensive data exfiltration, real-time remote command execution, and ransomware-style device locking.

The malware abuses accessibility permissions for persistent control and communicates with a Firebase-based command-and-control infrastructure to manage infected devices. Code similarities suggest that it evolved from the ArsinkRAT family.

We have identified the latest samples that conditionally download a large LLM module, indicating experimentation with AI-assisted capabilities, device performance manipulation, and alternative monetization strategies alongside traditional surveillance and extortion activities.

While it may not always be possible to avoid these threats entirely, prompt action can help reduce the impact of compromise. Threat intelligence tools such as Cyble Vision provide users with a real-time view of their digital threat landscape, alerting them to any compromise and enabling them to take corrective action.

Key Takeaways

  • SURXRAT is sold openly via Telegram, with reseller and partner licensing tiers, enabling scalable distribution through affiliate operators rather than centralized campaigns.
  • Source code references and functional overlap indicate SURXRAT likely evolved from ArsinkRAT, highlighting continued reuse and rapid enhancement of Android RAT frameworks.
  • The malware collects sensitive data, including SMS messages, contacts, call logs, device information, location data, and browser activity, enabling credential theft and financial fraud operations.
  • Use of Firebase Realtime Database infrastructure allows attackers to blend malicious communications with legitimate cloud traffic, improving reliability and complicating detection.
  • SURXRAT conditionally downloads a large LLM module from external repositories, suggesting experimentation with AI-driven functionality, device performance manipulation, or evasion techniques.
  • The integrated ransomware-style screen locker enables attackers to deny device access and demand payment, allowing flexible monetization through surveillance, fraud, or extortion.

Overview

Cyble Research and Intelligence Labs (CRIL) identified a new variant of SURXRAT, an actively developed Android Remote Access Trojan (RAT) being openly commercialized through a dedicated Telegram-based distribution ecosystem. Unlike opportunistic commodity malware, SURXRAT is positioned as a subscription-style cybercrime product, indicating an increasing level of professionalization in the Android malware-as-a-service (MaaS) landscape.

The Indonesian threat actor (TA) operates a Telegram channel through which the malware is marketed, regularly updated, and distributed to resellers and partners. The channel was created in late 2024, suggesting that active malware development likely began in early 2025. At the time of analysis, we identified more than 180 related samples, indicating continuous development activity and demonstrating that the threat actor is actively maintaining and evolving the malware.

Figure 1 – SURXRAT V5 advertisement on Telegram Channel

The structured pricing tiers, operational announcements, and feature updates demonstrate a mature commercialization model similar to underground SaaS platforms, suggesting the operator is targeting aspiring cybercriminals rather than conducting attacks directly.

SURXRAT is marketed under a structured licensing scheme branded as SURXRAT V5, indicating active development and ongoing version iteration by the operator. The threat actor offers two primary purchase tiers within a “Ready Plan” model designed to attract both individual operators and larger resellers.

Figure 2 – Pricing Plan for SURXRAT posted on Telegram channel

The Reseller Plan, advertised at a one-time payment of 200k, provides permanent access, allows buyers to generate up to three malware builds per day, includes free server upgrades, and permits users to create and sell SURXRAT builds while adhering to the operator’s predefined market pricing.

The Partner Plan, priced at 500k as a permanent license, expands these capabilities by increasing the daily build limit to ten accounts, maintaining free server upgrades, and granting buyers the ability to establish their own reseller networks, effectively enabling further distribution.

Both tiers emphasize a one-time payment structure (“anti pt pt”), suggesting no recurring subscription fees. This tiered commercialization approach demonstrates the operator’s deliberate attempt to scale malware adoption through affiliate-style distribution, decentralizing infection operations while retaining centralized control over infrastructure and ecosystem governance.

The threat actor periodically posts operational statistics to reinforce legitimacy and attract buyers. One such announcement revealed:

  • Bot Status: Active
  • Total Users: 1,318 registered accounts within the system
  • Operational confirmation timestamp: January 2026

Figure 3 – Telegram post indicating the registered accounts

While these figures cannot be independently verified, public disclosure of user metrics is a common underground marketing tactic intended to establish credibility and demonstrate adoption among cybercriminal customers. If accurate, the numbers suggest a growing ecosystem of operators leveraging SURXRAT for Android surveillance and financial fraud operations.

SURXRAT V5 provides a comprehensive surveillance and remote-control feature set consistent with modern Android RATs. The functionality indicates a strong emphasis on data harvesting, device monitoring, and full remote manipulation.

Data Collection and Surveillance Features

The malware enables extensive extraction of sensitive user information, including:

  • SMS monitoring
  • Contact list and call logs
  • System information and installed applications
  • Gmail account data
  • Device location tracking
  • Network and connectivity information
  • Notification interception
  • Clipboard monitoring
  • Web browsing history
  • Cellular tower intelligence
  • WiFi scanning and connection history
  • Full file manager access

This level of visibility allows attackers to perform credential harvesting, OTP interception, profiling, and reconnaissance for secondary fraud operations.

Remote Device Control Capabilities

SURXRAT extends beyond passive surveillance by enabling attackers to manipulate compromised devices actively:

  • Remote device unlocking
  • Triggering phone calls
  • Wallpaper modification via remote URL
  • Remote audio playback
  • Network lag manipulation
  • Push notification delivery
  • Forced website opening
  • Flashlight activation
  • Device vibration control
  • On-screen text overlays
  • Device locking using attacker-defined PIN
  • Complete storage wipe functionality

During analysis of the SURXRAT sample, references to ArsinkRAT were found in the source code, suggesting a developmental relationship between the two malware families. In January 2026, Zimperium reported an increase in activity associated with ArsinkRAT campaigns targeting Android devices.

A comparative analysis indicates notable functional and structural similarities between SURXRAT and ArsinkRAT, suggesting that the threat actor likely leveraged the ArsinkRAT source code. Using this foundation, an enhanced variant incorporating additional capabilities and updated features was subsequently developed.

Figure 4 – ArsinkRAT string mentioned in SURXRAT malware

This evolution highlights how existing Android RAT frameworks continue to be repurposed and expanded by threat actors, accelerating malware development cycles and enabling rapid introduction of new surveillance and control functionalities.

During our analysis of the latest SURXRAT variant, we identified a deliberate mechanism to manipulate network lag. The malware initiates the download of a large LLM module (>23GB) hosted on Hugging Face, an approach that is highly atypical for a mobile device.

Notably, this download is conditionally triggered when specific gaming applications are active on the victim’s device, namely Free Fire MAX x JUJUTSU KAISEN (com.dts.freefiremax) and Free Fire x JUJUTSU KAISEN (com.dts.freefireth), or when the malware receives alternative target package names dynamically from the threat actor–controlled server.

This indicates that the download behavior is remotely configurable, allowing operators to initiate the module retrieval based on applications specified through backend commands.

Figure 5 – Downloads LLM module from Hugging Face

While downloading a model of this size on a mobile device may initially appear impractical, the observed behavior indicates intentional implementation rather than a misconfiguration. The LLM module appears to be under active development and may be leveraged to:

  • Deliberately introduce device or network latency during gameplay, potentially supporting paid cheating or disruption services
  • Mask malicious background activity by degrading overall device performance, leading users to attribute abnormal behavior to system issues rather than malware
  • Enable future AI-driven capabilities, such as automated interactions or adaptive social engineering techniques

The selective and conditional deployment of this module suggests that the threat actor is actively experimenting with AI-based components to enhance monetization strategies, improve evasion techniques, and expand operational capabilities.

Technical Analysis

Upon execution, the malware prompts the victim to grant multiple high-risk permissions, including access to location services, contacts, SMS messages, and device storage.

Following initial permission approval, the malware displays additional prompts guiding the user to enable Accessibility Services. This commonly abused Android feature allows applications to monitor screen content and perform automated actions. The abuse of accessibility permissions significantly increases attacker control, enabling surveillance and facilitating further malicious operations without continuous user interaction.

Figure 6 – Malware prompting to enable permissions

After acquiring the required permissions, SURXRAT establishes communication with a backend infrastructure hosted on a Firebase Realtime Database:

hxxps://xrat-sisuriya-default-rtdb.firebaseio[.]com

The malware connects using a database reference labeled “arsinkRAT,” further reinforcing the developmental linkage between SURXRAT and the previously observed ArsinkRAT malware family.

Once connectivity is established, the malware performs device registration by generating a random UUID, which serves as a unique identifier for tracking infected devices. Following registration, SURXRAT immediately begins exfiltrating sensitive information to the Firebase backend.

Figure 7 – Device registration

The malware collects and transmits a wide range of victim data, enabling comprehensive device profiling. Exfiltrated information includes:

  • Contact lists
  • SMS messages
  • Call logs
  • Device brand and model
  • Android OS version
  • Battery level and status
  • SIM card details
  • Network information
  • Public IP address

This dataset allows attackers to uniquely identify victims, monitor communications, and prepare follow-on fraud or surveillance activities such as OTP interception and account takeover.
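Because the C2 rides on Firebase Realtime Database infrastructure, one practical lead for defenders is enumerating which Firebase projects devices on the network actually contact. A hedged sketch follows; the project name comes from the C2 URL reported above, while the log format is an assumption. Legitimate apps also use firebaseio.com heavily, so treat matches as triage leads, not verdicts.

```python
import re

# Firebase RTDB endpoints follow https://<project>.firebaseio.com/...
RTDB_PATTERN = re.compile(r"https?://([\w-]+)\.firebaseio\.com", re.I)

def firebase_projects(log_lines: list[str]) -> set[str]:
    """Return the set of Firebase project names contacted in the logs,
    for comparison against known-bad projects like the SURXRAT C2."""
    return {m.group(1) for line in log_lines
            for m in RTDB_PATTERN.finditer(line)}

logs = ["GET https://xrat-sisuriya-default-rtdb.firebaseio.com/devices.json"]
print(firebase_projects(logs))  # {'xrat-sisuriya-default-rtdb'}
```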

After successful device registration, SURXRAT launches a persistent background service that maintains continuous communication with the Firebase command-and-control (C&C) infrastructure and receives commands. The malware initializes multiple internal manager classes that handle surveillance, device control, and data collection.

Figure 8 – Background service

The infected device periodically sends status updates to the backend while simultaneously polling for incoming commands issued by the operator. This near real-time synchronization enables attackers to execute actions on compromised devices remotely with minimal delay.

Analysis of command handlers revealed several instructions received from the Firebase backend that allow attackers to perform surveillance and active device manipulation:

Spy Commands:

  • accounts – Collects Google account information associated with the device
  • apps_list – Retrieves the list of installed applications
  • device_info – Collects detailed device metadata
  • audio_record – Records audio
  • file_list – Enumerates files and extracts metadata
  • flashlight – Remotely controls the device flashlight
  • camera_photo – Captures images using the device camera
  • contacts – Collects contacts
  • call_log – Collects call logs
  • sms_read – Collects SMS messages
  • Sms_send – Sends SMS messages from the infected device
  • tts – Executes text-to-speech
  • call – Makes a call from the infected device
  • toast – Displays a toast message
  • vibrate – Remotely vibrates the device
  • file_delete – Deletes files
  • location – Collects the victim’s location
  • file_upload – Sends files to the server

RAT Commands:

  • access – Collects clipboard data
  • unlock – Removes locks
  • app – Syncs the app list
  • Cal – Dials calls
  • fla – Handles the flashlight
  • for – Wipes data
  • Mus – Plays music
  • Not – Sends a system update notification
  • url – Opens a URL
  • vib – Vibrates the device
  • voi – Executes text-to-speech
  • wal – Changes the wallpaper
  • Brow – Collects browser history
  • Cell – Collects the device’s cell info
  • Lock – Executes the screen locker feature
  • wifih – Collects Wi-Fi history
  • wifis – Executes text-to-speech
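Short command strings like these are typically resolved through a dispatch table in the malware's handler service. The sketch below models a subset of the documented command set in Python as an aid for building detections or a C2 emulator; SURXRAT itself implements these handlers in its Android code, so this is a schematic, not the malware's implementation.

```python
# Subset of the documented SURXRAT spy commands, modeled as a
# command -> behavior dispatch table (handler bodies are stubs).
SPY_COMMANDS = {
    "contacts": "Collects contacts",
    "sms_read": "Collects SMSs",
    "location": "Collects the victim's location",
    "file_upload": "Sends file to the server",
}

def dispatch(command: str) -> str:
    """Resolve an incoming C2 command to its documented behavior;
    unknown commands are surfaced rather than silently dropped."""
    return SPY_COMMANDS.get(command, "unknown command")

print(dispatch("sms_read"))     # Collects SMSs
print(dispatch("selfdestruct")) # unknown command
```

Modeling the handler set this way also gives analysts a coverage checklist: each key is a behavior that sandbox or network telemetry should be able to confirm or rule out.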

The figure below shows the admin panel image shared on the threat actor’s Telegram account, highlighting the various actions and controls available through SURXRAT.

Figure 9 – SURXRAT admin panel

Screen Locker Activity

The SURXRAT sample also contains a ransomware-style screen locker module that allows a remote attacker to seize control of the victim’s device and temporarily deny access to it. When activated, the malware forces the device to display a persistent full-screen lock message that the user cannot easily dismiss. The attacker can remotely customize both the displayed message and the unlock PIN, enabling them to demand a ransom payment directly from the victim.

Figure 10 – Screen Locker activity

The malware continuously reports user interactions back to the attacker’s server. Each incorrect PIN entry is transmitted to the backend, allowing the operator to monitor victim behavior and response attempts in real time. The lock screen can also be remotely removed by the attacker, giving them complete control over when the device becomes usable again. Overall, this functionality appears intended to coerce victims through disruption and intimidation, ultimately facilitating ransom-based monetization.

Figure 11 – Malware sends a wrong attempts log

The integration of ransomware-style locking into a surveillance RAT indicates hybrid monetization, allowing operators to switch between espionage, fraud, and direct extortion based on the value of the victim.

Conclusion

SURXRAT represents a notable evolution in Android malware, combining MaaS-style commercialization, cloud-based command infrastructure, and modular capabilities into a single adaptable threat platform. The malware’s extensive surveillance features, real-time remote control functions, and ransomware-style device locking demonstrate a shift toward multi-functional mobile threats designed for flexible monetization.

The observed experimentation with large AI model integration further indicates that threat actors are actively exploring emerging technologies to enhance operational effectiveness and evade detection. As Android malware ecosystems continue to mature, threats like SURXRAT highlight the increasing accessibility of advanced mobile attack capabilities to a broader cybercriminal audience, reinforcing the need for improved mobile threat visibility, behavioral detection, and user awareness.

Prevention is ideal, but it isn’t always an option. Threat Intelligence platforms such as Cyble Vision provide users with insight into their digital risk profile and can notify them of any breaches or unauthorized access, enabling them to take immediate corrective action.

Our Recommendations

The following essential cybersecurity best practices serve as the first line of defense against attackers. We recommend that our readers follow them:

  • Install Apps Only from Trusted Sources:
    Download apps exclusively from official platforms, such as the Google Play Store. Avoid third-party app stores or links received via SMS, social media, or email.
  • Be Cautious with Permissions and Installs:
    Never grant permissions or install an application unless you’re certain of its legitimacy.
  • Watch for Phishing Pages:
    Always verify the URL and avoid suspicious links and websites that ask for sensitive information.
  • Enable Multi-Factor Authentication (MFA):
    Use MFA for banking and financial apps to add an extra layer of protection, even if credentials are compromised.
  • Report Suspicious Activity:
    If you suspect you’ve been targeted or infected, report the incident to your bank and local authorities immediately. If necessary, reset your credentials and perform a factory reset.
  • Use Mobile Security Solutions:
    Install a mobile security application that includes real-time scanning.
  • Keep Your Device Updated:
    Ensure your Android OS and apps are updated regularly. Security patches often address vulnerabilities exploited by malware.

MITRE ATT&CK® Techniques

| Tactic | Technique ID | Procedure |
|---|---|---|
| Persistence (TA0028) | Event Triggered Execution: Broadcast Receivers (T1624.001) | SURXRAT registers the BOOT_COMPLETED broadcast receiver to activate the screen locker activity |
| Persistence (TA0028) | Foreground Persistence (T1541) | SURXRAT uses foreground services by showing a notification |
| Defense Evasion (TA0030) | Impair Defenses: Prevent Application Removal (T1629.001) | SURXRAT prevents its own uninstallation |
| Defense Evasion (TA0030) | Obfuscated Files or Information (T1406) | SURXRAT uses Base64 encoding to encode stolen files before sending them to the Telegram bot |
| Credential Access (TA0031) | Access Notifications (T1517) | SURXRAT collects device notifications |
| Credential Access (TA0031) | Clipboard Data (T1414) | SURXRAT collects clipboard data |
| Discovery (TA0032) | Software Discovery (T1418) | SURXRAT collects the list of installed applications |
| Discovery (TA0032) | System Information Discovery (T1426) | SURXRAT collects device information |
| Discovery (TA0032) | System Network Connections Discovery (T1421) | SURXRAT collects cell and Wi-Fi information |
| Discovery (TA0032) | File and Directory Discovery (T1420) | SURXRAT enumerates external storage |
| Collection (TA0035) | Audio Capture (T1429) | SURXRAT can capture audio |
| Collection (TA0035) | Data from Local System (T1533) | SURXRAT collects files from external storage |
| Collection (TA0035) | Location Tracking (T1430) | SURXRAT can collect location data |
| Collection (TA0035) | Protected User Data: Call Log (T1636.002) | SURXRAT collects call logs |
| Collection (TA0035) | Protected User Data: Contact List (T1636.003) | SURXRAT collects contact data |
| Collection (TA0035) | Protected User Data: SMS Messages (T1636.004) | SURXRAT collects SMS data |
| Collection (TA0035) | Protected User Data: Accounts (T1636.005) | SURXRAT collects Gmail account data |
| Collection (TA0035) | Video Capture (T1512) | SURXRAT captures photos using the device camera |
| Command and Control (TA0037) | Application Layer Protocol: Web Protocols (T1437.001) | The malware uses the HTTPS protocol |
| Exfiltration (TA0036) | Exfiltration Over C2 Channel (T1646) | SURXRAT sends collected data to the C&C server |
| Impact (TA0034) | SMS Control (T1582) | SURXRAT can send SMS messages from the infected device |
| Impact (TA0034) | Call Control (T1616) | SURXRAT can make calls |
| Impact (TA0034) | Data Destruction (T1662) | SURXRAT can wipe external storage |

Indicators of Compromise (IOCs)

The IOCs have been added to this GitHub repository. Please review and integrate them into your Threat Intelligence feed to enhance protection and improve your overall security posture.

The post SURXRAT: From ArsinkRAT roots to LLM Module Downloads Signaling Capability Expansion appeared first on Cyble.

Cyble – ​Read More

Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

Anthropic dropped a bombshell on the artificial intelligence industry Monday, publicly accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of orchestrating coordinated, industrial-scale campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts.

The San Francisco-based company said the three labs collectively generated more than 16 million exchanges with Claude through approximately 24,000 fake accounts, all in violation of Anthropic’s terms of service and regional access restrictions. The campaigns, Anthropic said, are the most concrete and detailed public evidence to date of a practice that has haunted Silicon Valley for months: foreign competitors systematically using a technique called distillation to leapfrog years of research and billions of dollars in investment.

“These campaigns are growing in intensity and sophistication,” Anthropic wrote in a technical blog post published Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

The disclosure marks a dramatic escalation in the simmering tensions between American and Chinese AI developers — and it arrives at a moment when Washington is actively debating whether to tighten or loosen export controls on the advanced chips that power AI training. Anthropic, led by CEO Dario Amodei, has been among the most vocal advocates for restricting chip sales to China, and the company explicitly connected Monday’s revelations to that policy fight.

How AI distillation went from obscure research technique to geopolitical flashpoint

To understand what Anthropic alleges, it helps to understand what distillation actually is — and how it evolved from an academic curiosity into the most contentious issue in the global AI race.

At its core, distillation is a process of extracting knowledge from a larger, more powerful AI model — the “teacher” — to create a smaller, more efficient one — the “student.” The student model learns not from raw data, but from the teacher’s outputs: its answers, reasoning patterns, and behaviors. Done correctly, the student can achieve performance remarkably close to the teacher’s while requiring a fraction of the compute to train.

As Anthropic itself acknowledged, distillation is “a widely used and legitimate training method.” Frontier AI labs, including Anthropic, routinely distill their own models to create smaller, cheaper versions for customers. But the same technique can be weaponized. A competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system — capturing capabilities that took years and hundreds of millions of dollars to develop.
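The teacher–student mechanics described above can be sketched with the standard soft-target objective from the distillation literature. This is a toy illustration only — the logits, the three-class setup, and the temperature value are all invented for the example, and nothing here reflects any lab's actual training pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    # A temperature above 1 softens the distribution, exposing the
    # teacher's relative preferences between classes, not just its top pick.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Invented logits for a single input in a 3-class toy problem.
teacher_logits = [4.0, 1.0, 0.2]
student_logits = [2.0, 1.5, 0.5]

T = 2.0  # distillation temperature (illustrative choice)
teacher_soft = softmax(teacher_logits, T)
student_soft = softmax(student_logits, T)

# The distillation loss the student would minimize over many such examples,
# driving its outputs toward the teacher's behavior.
loss = kl_divergence(teacher_soft, student_soft)
print("teacher soft targets:", [round(p, 3) for p in teacher_soft])
print("distillation loss:", round(loss, 4))
```

The key point the example makes concrete: the student never needs the teacher's weights or training data — only its outputs, which is exactly why API access is the attack surface.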

The technique burst into public consciousness in January 2025 when DeepSeek released its R1 reasoning model, which appeared to match or approach the performance of leading American models at dramatically lower cost. Databricks CEO Ali Ghodsi captured the industry’s anxiety at the time, telling CNBC: “This distillation technique is just so extremely powerful and so extremely cheap, and it’s just available to anyone.” He predicted the technique would usher in an era of intense competition for large language models.

That prediction proved prescient. In the weeks following DeepSeek’s release, researchers at UC Berkeley said they recreated OpenAI’s reasoning model for just $450 in 19 hours. Researchers at Stanford and the University of Washington followed with their own version built in 26 minutes for under $50 in compute credits. The startup Hugging Face replicated OpenAI’s Deep Research feature as a 24-hour coding challenge. DeepSeek itself openly released a family of distilled models on Hugging Face — including versions built on top of Qwen and Llama architectures — under the permissive MIT license, with the model card explicitly stating that the DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, “including, but not limited to, distillation for training other LLMs.”

But what Anthropic described Monday goes far beyond academic replication or open-source experimentation. The company detailed what it characterized as deliberate, covert, and large-scale intellectual property extraction by well-resourced commercial laboratories operating under the jurisdiction of the Chinese government.

Anthropic traces 16 million fraudulent exchanges to researchers at DeepSeek, Moonshot, and MiniMax

Anthropic attributed each campaign “with high confidence” through IP address correlation, request metadata, infrastructure indicators, and corroboration from unnamed industry partners who observed the same actors on their own platforms. Each campaign specifically targeted what Anthropic described as Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.

DeepSeek, the company that ignited the distillation debate, conducted what Anthropic described as the most technically sophisticated of the three operations, generating over 150,000 exchanges with Claude. Anthropic said DeepSeek’s prompts targeted reasoning capabilities, rubric-based grading tasks designed to make Claude function as a reward model for reinforcement learning, and — in a detail likely to draw particular political attention — the creation of “censorship-safe alternatives to policy sensitive queries.”

Anthropic alleged that DeepSeek “generated synchronized traffic across accounts” with “identical patterns, shared payment methods, and coordinated timing” that suggested load balancing to maximize throughput while evading detection. In one particularly notable technique, Anthropic said DeepSeek’s prompts “asked Claude to imagine and articulate the internal reasoning behind a completed response and write it out step by step — effectively generating chain-of-thought training data at scale.” The company also alleged it observed tasks in which Claude was used to generate alternatives to politically sensitive queries about “dissidents, party leaders, or authoritarianism,” likely to train DeepSeek’s own models to steer conversations away from censored topics. Anthropic said it was able to trace these accounts to specific researchers at the lab.

Moonshot AI, the Beijing-based creator of the Kimi models, ran the second-largest operation by volume at over 3.4 million exchanges. Anthropic said Moonshot targeted agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. The company employed “hundreds of fraudulent accounts spanning multiple access pathways,” making the campaign harder to detect as a coordinated operation. Anthropic attributed the campaign through request metadata that “matched the public profiles of senior Moonshot staff.” In a later phase, Anthropic said, Moonshot adopted a more targeted approach, “attempting to extract and reconstruct Claude’s reasoning traces.”

MiniMax, the least publicly known of the three but the most prolific by volume, generated over 13 million exchanges — more than three-quarters of the total. Anthropic said MiniMax’s campaign focused on agentic coding, tool use, and orchestration. The company said it detected MiniMax’s campaign while it was still active, “before MiniMax released the model it was training,” giving Anthropic “unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch.” In a detail that underscores the urgency and opportunism Anthropic alleges, the company said that when it released a new model during MiniMax’s active campaign, MiniMax “pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system.”

How proxy networks and ‘hydra cluster’ architectures helped Chinese labs bypass Anthropic’s China ban

Anthropic does not currently offer commercial access to Claude in China, a policy it maintains for national security reasons. So how did these labs access the models at all?

The answer, Anthropic said, lies in commercial proxy services that resell access to Claude and other frontier AI models at scale. Anthropic described these services as running what it calls “hydra cluster” architectures — sprawling networks of fraudulent accounts that distribute traffic across Anthropic’s API and third-party cloud platforms. “The breadth of these networks means that there are no single points of failure,” Anthropic wrote. “When one account is banned, a new one takes its place.” In one case, Anthropic said, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.

The description suggests a mature and well-resourced infrastructure ecosystem dedicated to circumventing access controls — one that may serve many more clients than just the three labs Anthropic named.

Why Anthropic framed distillation as a national security crisis, not just an IP dispute

Anthropic did not treat this as a mere terms-of-service violation. The company embedded its technical disclosure within an explicit national security argument, warning that “illicitly distilled models lack necessary safeguards, creating significant national security risks.”

The company argued that models built through illicit distillation are “unlikely to retain” the safety guardrails that American companies build into their systems — protections designed to prevent AI from being used to develop bioweapons, carry out cyberattacks, or enable mass surveillance. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” Anthropic wrote, “enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”

This framing directly connects to the chip export control debate that Amodei has made a centerpiece of his public advocacy. In a detailed essay published in January 2025, Amodei argued that export controls are “the most important determinant of whether we end up in a unipolar or bipolar world” — a world where either only the U.S. and its allies possess the most powerful AI, or one where China achieves parity. He specifically noted at the time that he was “not taking any position on reports of distillation from Western models” and would “just take DeepSeek at their word that they trained it the way they said in the paper.”

Monday’s disclosure is a sharp departure from that earlier restraint. Anthropic now argues that distillation attacks “undermine” export controls “by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means.” The company went further, asserting that “without visibility into these attacks, the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective.” In other words, Anthropic is arguing that what some observers interpreted as proof that Chinese labs can innovate around chip restrictions was actually, in significant part, the result of stealing American capabilities.

The murky legal landscape around AI distillation may explain Anthropic’s political strategy

Anthropic’s decision to frame this as a national security issue rather than a legal dispute may reflect the difficult reality that intellectual property law offers limited recourse against distillation.

As a March 2025 analysis by the law firm Winston & Strawn noted, “the legal landscape surrounding AI distillation is unclear and evolving.” The firm’s attorneys observed that proving a copyright claim in this context would be challenging, since it remains unclear whether the outputs of AI models qualify as copyrightable creative expression. The U.S. Copyright Office affirmed in January 2025 that copyright protection requires human authorship, and that “mere provision of prompts does not render the outputs copyrightable.”

The legal picture is further complicated by the way frontier labs structure output ownership. OpenAI’s terms of use, for instance, assign ownership of model outputs to the user — meaning that even if a company can prove extraction occurred, it may not hold copyrights over the extracted data. Winston & Strawn noted that this dynamic means “even if OpenAI can present enough evidence to show that DeepSeek extracted data from its models, OpenAI likely does not have copyrights over the data.” The same logic would almost certainly apply to Anthropic’s outputs.

Contract law may offer a more promising avenue. Anthropic’s terms of service prohibit the kind of systematic extraction the company describes, and violation of those terms is a more straightforward legal claim than copyright infringement. But enforcing contractual terms against entities operating through proxy services and fraudulent accounts in a foreign jurisdiction presents its own formidable challenges.

This may explain why Anthropic chose the national security frame over a purely legal one. By positioning distillation attacks as threats to export control regimes and democratic security rather than as intellectual property disputes, Anthropic appeals to policymakers and regulators who have tools — sanctions, entity list designations, enhanced export restrictions — that go far beyond what civil litigation could achieve.

What Anthropic’s distillation crackdown means for every company running a frontier AI model

Anthropic outlined a multipronged defensive response. The company said it has built classifiers and behavioral fingerprinting systems designed to identify distillation attack patterns in API traffic, including detection of chain-of-thought elicitation used to construct reasoning training data. It is sharing technical indicators with other AI labs, cloud providers, and relevant authorities to build what it described as a more holistic picture of the distillation landscape. The company has also strengthened verification for educational accounts, security research programs, and startup organizations — the pathways most commonly exploited for setting up fraudulent accounts — and is developing model-level safeguards designed to reduce the usefulness of outputs for illicit distillation without degrading the experience for legitimate customers.

But the company acknowledged that “no company can solve this alone,” calling for coordinated action across the industry, cloud providers, and policymakers.

The disclosure is likely to reverberate through multiple ongoing policy debates. In Congress, the bipartisan No DeepSeek on Government Devices Act has already been introduced. Federal agencies including NASA have banned DeepSeek from employee devices. And the broader question of chip export controls — which the Trump administration has been weighing amid competing pressures from Nvidia and national security hawks — now has a new and vivid data point.

For the AI industry’s technical decision-makers, the implications are immediate and practical. If Anthropic’s account is accurate, the proxy infrastructure enabling these attacks is vast, sophisticated, and adaptable — and it is not limited to targeting a single company. Every frontier AI lab with an API is a potential target. The era of treating model access as a simple commercial transaction may be coming to an end, replaced by one in which API security is as strategically important as the model weights themselves.

Anthropic has now put names, numbers, and forensic detail behind accusations that the industry had only whispered about for months. Whether that evidence galvanizes the coordinated response the company is calling for — or simply accelerates an arms race between distillers and defenders — may depend on a question no classifier can answer: whether Washington sees this as an act of espionage or just the cost of doing business in an era when intelligence itself has become a commodity.

Security | VentureBeat – ​Read More