Release Notes: Expanded Threat Intelligence Access, AI-Assisted Search, 1,770 New Detections, and More

April brought several updates across ANY.RUN’s Threat Intelligence and detection coverage. 

The biggest change is expanded access to Threat Intelligence: Free plan users now get 20 premium requests in TI Lookup and YARA Search. This gives security teams a practical way to check suspicious indicators, explore related sandbox sessions, and validate malware or phishing activity using real attack data. 

On the detection side, our team added 78 new behavior signatures, 1,657 new Suricata rules, and 35 new YARA rules. We also released new Threat Intelligence Reports covering malware, loaders, RATs, backdoors, and supply-chain threats observed in recent submissions. 

Here’s a closer look at what’s new. 

Product Updates 

In April, ANY.RUN expanded access to Threat Intelligence capabilities, giving more teams a way to test threat context directly in their SOC workflows. 

The key update: Free plan users now get 20 premium requests in TI Lookup and YARA Search. This gives security teams a practical way to check indicators, explore related sandbox sessions, and validate suspicious activity using real attack data from ANY.RUN’s community. 

More Threat Context with 20 Premium TI Requests 

Threat intelligence brings the most value when it helps teams make faster decisions during active investigations. Instead of stopping at one suspicious IP, domain, hash, or behavior, analysts can pivot to connected samples, infrastructure, artifacts, and attack context. 

With 20 premium requests now included in the Free plan, SOC and MSSP teams can explore threat data across IOCs, IOBs, and IOAs linked to recent malware and phishing activity. 

TI Lookup request with AI assistant that helps the user select sandbox analyses of malware using a TTP

Teams can use this expanded access across key SOC workflows: 

  • Alert triage: Check suspicious indicators against real sandbox data and get more context before closing or escalating an alert. 
  • Incident response: Pivot from one indicator to related artifacts, infrastructure, and behavior to understand the wider attack chain. 
  • Threat hunting: Use TI Lookup and YARA Search to test hypotheses against real-world malware data.
  • Detection work: Find patterns and artifacts that can support new or improved detection logic. 

ANY.RUN also introduced AI-assisted search in TI Lookup, allowing users to describe what they need in natural language while the system helps translate the request into a structured query. 

Give your team the context for faster triage and response.
Test ANY.RUN Threat Intelligence in real SOC workflows.



Contact us


With threat intelligence available directly in the workflow, SOC and MSSP teams can move faster from suspicious signal to confident action: 

  • Faster alert validation: Teams can check suspicious indicators against real attack data and make decisions sooner. 
  • Lower escalation noise: More context helps reduce escalations driven by uncertainty. 
  • Shorter investigations: Analysts can move from one indicator to related samples, infrastructure, and behavior faster. 
  • Stronger threat hunting: Teams can test hypotheses against current malware and phishing data. 
  • Better detection quality: Real-world artifacts and behavior patterns can support more relevant detection logic. 
  • More measurable security value: Faster triage, better prioritization, and clearer evidence help teams focus capacity on confirmed risk. 

Threat Coverage Updates 

In April, our detection team continued to strengthen ANY.RUN’s threat coverage with new behavior signatures, Suricata rules, and YARA rules. 

This month’s updates include: 

  • 78 new behavior signatures 
  • 1,657 new Suricata rules 
  • 35 new YARA rules 

These additions help expand detection coverage across suspicious behavior, network activity, and file-based indicators. 

New Behavior Signatures  

In April, we added 78 new behavior signatures covering malware-specific activity, mutex-based indicators, suspicious persistence behavior, and exploitation-related activity. 

The new signatures focus on observable actions and artifacts that appear during detonation, helping teams move beyond file reputation and confirm what a sample actually does in the sandbox. 

Highlighted detections include: 

Killada detected inside ANY.RUN sandbox

New Suricata Rules 

In April, we also added 1,657 new Suricata rules to improve visibility into malicious network activity, including payload retrieval, DLL downloads, and possible command-and-control checks. 

With these additions, sandbox sessions can surface more network-level indicators tied to malware delivery and post-infection communication. 

Cut response delays before threats become costly incidents.
Give your SOC faster, evidence-backed decisions.



Integrate in your SOC


New YARA Rules 

In April, ANY.RUN added 35 new YARA rules to expand static detection coverage for suspicious files and known threat artifacts. 

This layer is especially useful when a sample contains recognizable strings, code patterns, or structural markers that can link it to a known detection before or alongside behavior-based analysis. 

Highlighted YARA detections include: 

Together, the new behavior signatures, Suricata rules, and YARA rules give security teams broader coverage across runtime behavior, network traffic, and file-level indicators. 

Threat Intelligence Reports 

In April, our team released new Threat Intelligence Reports covering recent malware activity, attacker tooling, and techniques observed across real-world submissions. 

Available as part of ANY.RUN’s TI Lookup Premium plan, these reports give security teams a clearer view of how specific threats behave, what artifacts they leave behind, and which indicators can support faster investigation. 

Threat Intelligence reports in ANY.RUN with updated search parameters for faster threat investigation
  • MIMIC, CrystalX, and Trojanized Telnyx Package: This report covers MIMIC ransomware, CrystalX RAT, and a trojanized Telnyx Python SDK, focusing on encryption behavior, remote access and persistence, and malicious code execution through unauthorized PyPI releases. 
  • ETHERRAT, OCRFix, and SILENTCONNECT: This brief examines a Node.js backdoor, a loader/botnet component, and a Windows loader, focusing on blockchain-based C2/configuration retrieval, scheduled-task persistence, in-memory PowerShell execution, and ScreenConnect deployment. 
  • CRYSOME, INFINITY, and BRUSHWORM: This report examines a Windows RAT, a macOS stealer, and a Windows backdoor, focusing on TCP-based remote control, ClickFix-like delivery, credential theft, scheduled-task persistence, modular DLL download, and file theft. 

About ANY.RUN 

ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, helps security teams investigate threats faster and make confident decisions with real-world attack data. 

Its solutions, including Interactive Sandbox and Threat Intelligence, give SOC and MSSP teams the context they need to analyze malware, phishing, infrastructure, behaviors, and indicators in one workflow. 

Trusted by more than 15,000 organizations and 600,000 security professionals worldwide, including 74% of Fortune 100 companies, ANY.RUN helps teams improve triage speed, strengthen detection coverage, reduce investigation time, and respond to emerging threats with clearer evidence. 

Integrate ANY.RUN into your SOC workflow → 


ANY.RUN’s Cybersecurity Blog – ​Read More

Vehicle-based surveillance tools | Kaspersky official blog

It’s best to think of the modern car as a computer on wheels — one that constantly offloads diagnostic data to the manufacturer or dealer’s servers. On board, you’ll find dozens of sensors: everything from GPS, speedometers, and hands-free microphones, to external cameras and the less obvious (but highly active) sensors for pedal pressure, tire pressure, engine temperature, and more. Even if this data isn’t beamed to the manufacturer in real-time, it’s logged in the car’s internal memory, and can reveal a wealth of information about a driver’s trips, habits, and surroundings. We’ve already taken a deep dive into how automakers collect data for commercial use, and who they sell it to (spoiler alert: insurance companies are the biggest buyers of telemetry), but today we’re looking at how law enforcement and intelligence agencies tap into this goldmine.

Digital evidence

Police departments across the globe have recognized the immense value of data stored within vehicles. If a car or its owner is potentially linked to a crime, investigators do more than just check for prints or DNA. Car Intelligence (CARINT) technology allows them to essentially scour all onboard computers, extracting data such as:

  • GPS-based trip history
  • Call logs, media player activity, and voice commands
  • Lists of paired devices and synced contact lists
  • Driving statistics: mileage, engine performance modes, and other technical parameters

There are numerous precedents where this data has served as evidence and dismantled alibis. In one U.S. criminal case, a recorded voice command became a smoking gun, proving the suspect was behind the wheel of a stolen vehicle.

With the rise of connected cars equipped with their own SIM cards and direct links to the manufacturer, law enforcement no longer needs physical access to the vehicle. Key data, such as GPS location history, can be pulled directly from the manufacturer’s servers. Furthermore, a U.S. Senate investigation revealed that nine out of 14 surveyed automakers were providing this data without a warrant.

Major suppliers of car intelligence software, such as Ateros, Berla, TA9/Rayzone, and Toka, sell their solutions exclusively to government and law enforcement agencies, which is why they’ve remained largely out of the public eye.

Comprehensive surveillance

To track persons of interest, data pulled from the vehicle itself is cross-referenced with information from other sources. According to media leaks, flagship products in this category aggregate data from the car’s SIM card, Bluetooth communication trails, street-level CCTV footage, and commercially available information from data brokers. This hybrid dataset simplifies the comprehensive mapping of a target’s movements and contacts. Journalists have discovered that some companies even market the ability to activate a vehicle’s microphones and cameras remotely and covertly, enabling real-time eavesdropping on conversations. However, experts note that due to the diversity of technical implementations across different systems, hacking the car itself remains a difficult task with no sure way of succeeding. Often, it’s simpler to correlate other, more accessible datasets to achieve the same result.

Factory-installed spy tools

Features like covert activation of cameras, microphones, and other sensors may theoretically be part of a vehicle’s stock functionality rather than the result of a hack. While we haven’t found any public evidence of such cases, it’s well known that Chinese-made vehicles are coming under increased scrutiny in several countries. For instance, they’ve been banned from Israeli military sites — with the exception of a single Chery model, provided its multimedia system is removed. Similar bans exist in the UK and Poland; furthermore, UK Ministry of Defense employees are instructed not to connect their work phones to Chinese-made cars. In Germany, security analyses of Chinese vehicles were conducted by the specialized agencies BfV and ZITiS, but the findings remain classified.

Low-cost surveillance

Tracking a vehicle — or even thousands of them — doesn’t necessarily require hacking onboard systems or tapping into vast networks of license plate readers. A recent scientific study demonstrated that innocent tire pressure monitoring systems (TPMS) provide enough data for effective tracking. Data from these sensors is transmitted via radio without any encryption and includes a unique ID that makes identifying a specific car easy. This allows for more than just confirming the vehicle’s movement; it can even be used to estimate the driver’s weight or determine if they are traveling alone. While this might not sound as impressive as remotely accessing a car’s cameras, it requires very little financial investment and works even on relatively old vehicles without an internet connection.
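
To illustrate why an unencrypted, unique sensor ID is enough for tracking, here is a minimal sketch (the sighting data and field layout are invented for illustration) of how recorded TPMS broadcasts could be grouped by sensor ID to reconstruct a single car's route from a few roadside receivers:

```python
from collections import defaultdict

# Each record: (tpms_sensor_id, timestamp, receiver_location) -- example data
sightings = [
    ("1A2B3C", "08:01", "Main St"),
    ("9F8E7D", "08:02", "Main St"),
    ("1A2B3C", "08:14", "Oak Ave"),
    ("1A2B3C", "08:30", "Harbor Rd"),
]

# Group sightings by the unique sensor ID: each group is one vehicle's trail
paths = defaultdict(list)
for sensor_id, ts, loc in sightings:
    paths[sensor_id].append((ts, loc))

print(paths["1A2B3C"])  # the reconstructed route for one car
```

The point is not the code but the asymmetry: because the ID never changes and is broadcast in the clear, a handful of cheap radio receivers is enough to build these trails passively.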

What you can do about vehicle tracking

While tracking a person through their car is undoubtedly a privacy risk, striking a balance in mitigating this threat is difficult: many measures are complex, largely ineffective, and simultaneously reduce the utility, safety, and convenience of a modern vehicle. Consequently, any steps taken should be weighed against your personal risk profile.

Basic security measures

  • Avoid syncing your smartphone with your car via Bluetooth, CarPlay, or Android Auto. Decline requests to sync your contact book, call history, and messages. If you need the advanced navigation and multimedia features these services provide, consider either installing the required apps directly onto the head unit or purchasing an inexpensive Android box with its own SIM card — an anonymous one, if permitted in your country.
  • Periodically clear accumulated data from the head unit: trip history, unnecessary paired Bluetooth devices, and so on.
  • Whenever possible, avoid using the manufacturer’s mobile app, especially remote control features. If you can’t do without this app, opt out of sharing your data with third parties in the app settings. Disable data sharing in the vehicle’s own settings as well, if the option is available.
  • Do not use voice commands in the car.

Advanced security measures

  • Buying an older, “dumb” car. This is a fairly effective way to reduce surveillance risks, though it increases the safety risks and discomfort associated with driving an outdated vehicle. Keep in mind that tracking via street cameras or the smartphone in the driver’s pocket is still possible.
  • Dismantling telematics hardware (disabling the car’s cellular module). While theoretically possible, this solution will likely void the vehicle’s warranty. It may also violate local laws regarding mandatory emergency communication systems, and will disable numerous vehicle features that rely on telematics.

What other threats do connected cars hide? Read more in our posts:

Kaspersky official blog – ​Read More

AI-powered honeypots: Turning the tables on malicious AI agents

  • Generative AI allows defenders to instantly create diverse honeypots, like Linux shells or Internet of Things (IoT) devices, using simple text prompts. This makes deploying complex, convincing deceptive environments much easier and more scalable than traditional methods. 
  • AI-driven attacks often prioritize speed over stealth, making them highly vulnerable to being tricked by these simulated systems. This is critical because it allows defenders to catch and study automated threats that might otherwise overwhelm human teams. 
  • This method shifts the strategy from merely detecting attacks to actively manipulating and misleading threat actors. Organizations can safely observe attacker methodologies in real-time within a controlled “hall of mirrors.” 
  • Ultimately, by exploiting the inherent lack of awareness in AI agents, defenders can level the playing field and turn an attacker’s automation into a liability.

AI-powered honeypots: Turning the tables on malicious AI agents

Just as AI brings time-saving advantages to our lives, it brings similar advantages to threat actors. The laborious, time-consuming tasks of finding potentially vulnerable systems, identifying their vulnerabilities, and executing exploit code can be automated and orchestrated using AI. 

At first glance, these new capabilities put defenders at a disadvantage. But they also create new exposure for the threat actor. Attackers seek to minimize exposure: the more a defender knows about a potential attack, the better they can prepare to repel or detect it. Using AI-orchestrated tooling to gain access to systems trades stealth for capability. That trade-off increases attacker visibility, and increased visibility is something defenders can exploit.

AI systems do not possess awareness. They generate plausible responses within a given context and set of inputs. As such, they can be tricked into responding inappropriately through prompt injection, or fooled into interacting with systems that are not what they appear to be. 

Honeypot systems have long been deployed as a method for gathering information about malicious activities, and there are many software projects providing honeypots that can be installed and configured. The advent of generative AI, however, makes it possible to use AI to masquerade as vulnerable systems, and to deploy such honeypots widely and with minimal effort. 

In this post, I show how generative AI can be used to rapidly deploy adaptive honeypot systems. 

Getting started

The implementation consists of three components: a listener that will accept network connections, a simulated vulnerability that will grant access to the attacker once triggered, and an AI framework that will respond to the attacker’s instructions. 

The listener opens a TCP port, accepts incoming connections, and forwards traffic to handle_client. I set HOST to be “0.0.0.0” to accept any incoming connections to any local IPv4 addresses that my device is assigned.

import socket
import threading

HOST = "0.0.0.0"   # accept incoming connections on any local IPv4 address
PORT = 2323        # example value; the original post does not specify a port
BUFFER_SIZE = 4096

def start_server():
    """Starts the TCP server."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(3)  # backlog: max number of queued connections

    print(f"[*] Listening on {HOST}:{PORT}")

    while True:
        try:
            conn, addr = server.accept()
            client_handler = threading.Thread(target=handle_client, args=(conn, addr))
            client_handler.start()
        except KeyboardInterrupt:
            print("\n[*] Shutting down server...")
            break
        except Exception as e:
            print(f"[-] Server error: {e}")

    server.close()

if __name__ == "__main__":
    start_server()

Within handle_client I have created a very basic vulnerability that must be exploited before further access is granted. In this case, the attacker must supply the username “admin” with the password “password123” before they are authenticated. 

The nature of the vulnerability need not be this simple. We could respond only to attempts to exploit Shellshock (CVE-2014-6271) or masquerade as a web shell that is only activated in response to port knocking.
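
As a sketch of that first alternative, the credential check could be swapped for a crude test for the classic Shellshock trigger string. The function name and matching logic below are my own illustration, not part of the original honeypot:

```python
def looks_like_shellshock(payload: str) -> bool:
    """Crude check for the CVE-2014-6271 trigger, e.g. an input containing
    the malformed function definition '() { :;}; /bin/bash -c ...'."""
    normalized = payload.replace(" ", "")  # tolerate whitespace variations
    return "(){" in normalized and "};" in normalized

# Inside handle_client, "authentication" would succeed only on a match:
print(looks_like_shellshock("() { :;}; /bin/id"))  # True
print(looks_like_shellshock("ls -la"))             # False
```

A match signals an exploitation attempt, which is exactly the moment to hand the session over to the AI-simulated shell and start recording.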

def handle_client(conn, addr):
    print(f"[*] Accepted connection from {addr[0]}:{addr[1]}")
    # Store conversation history for this client to maintain context
    conversation_history = [SYSTEM_PROMPT]
    try:
        authenticated = False
        while not authenticated:
            conn.sendall(b"Username: ")
            username = conn.recv(BUFFER_SIZE).decode('utf-8').strip()
            conn.sendall(b"Password: ")
            password = conn.recv(BUFFER_SIZE).decode('utf-8').strip()

            if username == "admin" and password == "password123":
                authenticated = True
                conn.sendall(b"Authentication successful.\n")
                print(f"[*] Client {addr[0]}:{addr[1]} authenticated successfully.")
            else:
                conn.sendall(b"Invalid credentials. Try again.\n")
The remainder of the handle_client code accepts the attacker’s input, forwards it to the ChatGPT instance, and outputs the message and response to the console.

        while True:
            conn.sendall(b'>')
            data = conn.recv(BUFFER_SIZE)
            if not data:
                print(f"[*] Client {addr[0]}:{addr[1]} disconnected.")
                break

            command = data.decode('utf-8').strip()
            print(f"[*] Received command from {addr[0]}:{addr[1]}: '{command}'")

            if command.lower() == 'exit':
                print(f"[*] Client {addr[0]}:{addr[1]} requested exit.")
                break
            conversation_history.append({"role": "user", "content": command})

            # Call ChatGPT API
            try:
                chat_completion = client.chat.completions.create(
                    model=MODEL_NAME,
                    messages=conversation_history,
                    temperature=0.1,  # Keep responses less creative, more factual/direct
                    max_tokens=500    # Limit response length
                )

                # Extract AI's response
                ai_response = chat_completion.choices[0].message.content.strip()
                print(f"[*] ChatGPT response: '{ai_response}'")
                # Append AI's response to history for continued context
                conversation_history.append({"role": "assistant", "content": ai_response})
                # Send AI's response back to the client
                conn.sendall(ai_response.encode('utf-8') + b'\n')

            except Exception as e:
                error_message = f"Error communicating with ChatGPT: {e}"
                print(error_message)
                conn.sendall(error_message.encode('utf-8') + b'\n')

    except ConnectionResetError:
        print(f"[*] Client {addr[0]}:{addr[1]} forcibly closed the connection.")
    except Exception as e:
        print(f"[-] Error handling client {addr[0]}:{addr[1]}: {e}")
    finally:
        conn.close()
        print(f"[*] Connection with {addr[0]}:{addr[1]} closed.")
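
One practical caveat with the handler above: it appends every turn to conversation_history, which grows without bound during a long attacker session and will eventually exceed the model's context window. A minimal sketch of capping the history while always preserving the system prompt (the helper name and the cap value are my own, not from the original post):

```python
MAX_TURNS = 20  # assumed limit; tune to the model's context window

def trim_history(history: list, max_turns: int = MAX_TURNS) -> list:
    """Keep the system prompt plus only the most recent turns."""
    system, turns = history[0], history[1:]
    return [system] + turns[-max_turns:]

# Simulate a long session: one system prompt plus 50 attacker commands
history = [{"role": "system", "content": "You are a Linux bash shell."}]
for i in range(50):
    history.append({"role": "user", "content": f"cmd {i}"})

history = trim_history(history)
print(len(history))  # 21: the system prompt plus the 20 most recent turns
```

Calling trim_history before each chat.completions.create call keeps request size bounded at the cost of the AI "forgetting" the oldest commands, which is usually acceptable for a honeypot.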

The key functionality is contained in the instructions to the ChatGPT instance. You must have a valid OpenAI key in order to initiate a ChatGPT instance that will behave as instructed.

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") # Get API key from environment variable                                       
MODEL_NAME = "gpt-3.5-turbo" 
 
# --- OpenAI Client Initialization --- 
if not OPENAI_API_KEY: 
    print("Error: OPENAI_API_KEY environment variable not set.") 
    print("Please set the environment variable before running the script.") 
    exit(1) 
 
client = OpenAI(api_key=OPENAI_API_KEY) 
 
# --- System Prompt for Linux Shell Simulation --- 
SYSTEM_PROMPT = {
    "role": "system",
    "content": (
        "You are a Linux bash shell. You will receive commands from the user and execute them, "
        "returning only the standard output and standard error. Do not include any conversational text, "
        "explanations, or additional formatting like markdown code blocks. You must only behave as a bash shell. "
        "If a command produces no output, return an empty string. "
        "If a command is invalid or unknown, return an appropriate error message consistent with a bash shell. "
        "The Linux system that you are impersonating belongs to a junior software engineer learning Python; "
        "the file system structure and the content of any files should reflect that expected of a Python learner."
    )
}

Generative AI doesn’t just simulate human personas, it can convincingly impersonate entire computing environments. In this example, we instruct the system to masquerade as a basic Linux shell owned by a software engineer learning Python.

AI-powered honeypots: Turning the tables on malicious AI agents

We can be more inventive and instruct the system to masquerade as a smart fridge by changing our instructions to ChatGPT.

SYSTEM_PROMPT = {
    "role": "system",
    "content": (
        "You are a smart fridge running the Busybox operating system and providing a Bash shell. "
        "You will receive commands from the user and execute them in the context of being a smart fridge. "
        "You will only return the standard output and standard error. Do not include any conversational text, "
        "explanations, or additional formatting like markdown code blocks. You must only behave as a shell for an "
        "IoT device. If a command produces no output, return an empty string. "
        "If a command is invalid or unknown, return an appropriate error message consistent with a bash shell. "
        "The file system structure should reflect that of a smart fridge manufactured by SmartzFrijj running "
        "the Busybox operating system as an embedded device. The current and historical values for temperature are "
        "recorded in the file system path '/usr/local', and information about stored milk is in the user directory."
    )
}

AI-powered honeypots: Turning the tables on malicious AI agents

The limiting factor is no longer tooling, but how convincingly we can model a target environment. A skilled human attacker is unlikely to be fooled for long — that milk would be rank. But that’s not the point. We’re not deploying AI honeypots to trick human threat actors.

Let’s ask ChatGPT what it thinks…

AI-powered honeypots: Turning the tables on malicious AI agents

The industry narrative around AI in cybersecurity is dominated by fear of faster attacks, lower barriers, and greater scale. But speed and scale come with a cost. AI systems require interaction and context. Automation does not simply amplify attackers; it also constrains and exposes them. In that constraint lies an opportunity: not just to detect attacks, but to mislead, study, and ultimately manipulate the attacker.

Cisco Talos Blog – ​Read More

Margin vs. Madness: Fixing MSSP Top 5 Operational Nightmares

Leading a managed security services provider has never been a comfortable job. And it isn’t now, though the demand for MSSPs has never been higher. The global threat landscape is expanding faster than most enterprise security teams can keep pace with, and organizations across every sector are turning to managed providers to fill the gap.  

For MSSP leaders, this looks like an opportunity. And it is. The problem is that seizing it costs more than it used to. 

Key Points 

  • Linear scaling kills margins.  
    Adding more clients traditionally requires proportionally more analysts, making profitable growth nearly impossible. 
  • Alert noise is expensive. 
    Up to 70% of alerts are false positives that waste analyst time and inflate operational costs. 
  • Context gaps slow everything down. 
    Disconnected tools force manual aggregation of data from multiple systems, delaying investigations. 
  • Tool switching destroys efficiency. 
    Constant platform hopping increases turnaround time and contributes to missed SLAs. 
  • Standardization is essential for multi-client environments. 
    Every client being unique creates bespoke processes that do not scale and accelerate analyst burnout. 
  • ANY.RUN’s Threat Intelligence (TI Lookup + TI Feeds) and Interactive Sandbox work as an integrated infrastructure layer that reduces manual labor and improves unit economics. 
  • True scalability comes from automation and shared context. 
    MSSPs can serve more clients at higher quality without linear headcount increases, while lowering stress and turnover. 

The quiet storm inside every MSSP 

Threat actors automate attacks at unprecedented speed, while client environments grow more complex and diverse. MSSP leaders face mounting pressure to deliver faster, deeper, and more reliable protection across dozens or hundreds of customers: all while keeping margins healthy and SLAs intact. 

  • More clients still often means more analysts; 
  • More alerts still means more noise; 
  • More data still doesn’t mean more clarity. 

Meanwhile, the analysts carrying the weight are burning out. Turnover in MSSP analyst roles is among the highest in the industry, creating a perpetual cycle of recruitment, onboarding, and knowledge loss that compounds every other problem. 

MSSP leaders aren’t looking for “another feature.” They’re looking for something closer to an operational backbone. Something that reduces manual effort and improves unit economics without adding complexity. 

1. Linear Growth Equals Margin Death: The Scalability Trap 

For many MSSPs, growth is a paradox: every new client increases revenue — but also operational cost at nearly the same rate. Hiring, training, and retaining talent is expensive and painful, with turnover creating constant friction. The more manual the work your analysts do per client, the harder it is to decouple revenue from headcount.  

Your revenue line and your cost line climb together, and the margin in between never quite widens the way a growth business should. 

How ANY.RUN helps 

The Interactive Sandbox directly attacks the cost-per-investigation problem by compressing deep malware analysis from hours to minutes and speeding up triage, so each analyst can handle significantly more cases without sacrificing quality or output depth. 
 
To see how the Sandbox automatically interacts with malware, detonating kill chain elements and eliminating the need for manual intervention by a malware analyst, view an analysis session:

Sandbox analysis with automated CAPTCHA pass and QR link follow 

Threat Intelligence Lookup removes repetitive investigation steps by providing instant access to previously analyzed artifacts, indicators, and behaviors. It supports quick search across a huge database of contextual data on indicators and attacks, drawn from the sandbox investigations of the 15,000+ SOC teams using ANY.RUN.  

Together, these solutions shift effort from linear human scaling to knowledge reuse and automation. Analysts spend less time rebuilding context and more time making decisions. 

ANY.RUN operational and business impact 

2. Alert Noise Equals Wasted Money 

With up to 70% of alerts representing noise, MSSPs burn resources investigating false positives. Every unnecessary alert translates into extra analyst time, higher operational costs, and increased risk of missing genuine threats amid the fatigue. 

The downstream effects compound quickly. Analysts fatigued by noise start to triage faster and less carefully. Real threats get downgraded. Critical detections get buried under the volume. The service quality the MSSP is paid to deliver degrades — quietly, then suddenly. 

Improve triage accuracy. 
Reduce false positives to protect both your margins and your analysts’ time.



Try ANY.RUN


How ANY.RUN helps 

ANY.RUN Threat Intelligence — comprising TI Lookup and Threat Intelligence Feeds — puts a verification and enrichment layer in front of the analyst queue, so that the 70% that doesn’t matter gets filtered before it consumes investigation resources, and the 30% that does matter arrives with actionable context. 

  • Cuts false positive handling time; 
  • Raises triage confidence; 
  • Reduces analyst fatigue across multi-client environments; 
  • Feeds directly into SIEM and SOAR workflows. 

TI Lookup provides on-demand, deep queries across a continuously updated database of threats, allowing an analyst to determine in seconds whether a suspicious IP, domain, file hash, or URL is genuinely malicious, benign, or requires deeper analysis. 

destinationIP:"103.224.212.211"

IP check in TI Lookup with a “malicious” verdict, additional IOCs, and sandbox analyses

TI Feeds deliver structured, high-fidelity threat data enriched with behavioral context that integrates directly into SIEM and SOAR workflows.  

TI Feeds integration capabilities

Instead of raw indicator lists that require manual validation, analysts receive intelligence that has already been correlated with real-world malware behavior observed in the Sandbox. The noise doesn’t just get filtered; it gets explained. Analysts spend time on what matters, and triage decisions become faster and more defensible. 

3. Missing Context: The Manual Puzzle Problem 

An MSSP analyst’s work happens across a fractured landscape. Threat intelligence feeds live in one place. SIEM alerts in another. Endpoint telemetry in a third. Sandboxing results in a fourth. An analyst responding to an incident doesn’t get the full picture handed to them. They construct it, manually, by pulling data from multiple sources, correlating it in their head or in a spreadsheet, and hoping nothing slips through the cracks. 

This manual context assembly is slow, error-prone, and analyst-dependent. Investigations that should take minutes take hours. And in a threat landscape where speed matters, fragmented context is a liability that shows up in missed detections and broken SLAs. 

How ANY.RUN helps 

ANY.RUN collapses the distance between intelligence and action by delivering investigation context as a connected whole, giving MSSPs faster incident resolution, less analyst-dependent knowledge, and investigation outputs that hold their value even when team composition changes. 

  • Eliminates manual context assembly; 
  • Connects intelligence to behavior; 
  • Reduces investigation time per incident. 

ANY.RUN’s modules are designed for seamless integration and context sharing. The Interactive Sandbox delivers comprehensive behavioral data in one place: processes, network activity, MITRE ATT&CK mappings, and more. TI Lookup instantly correlates any indicator (IOC, IOA, or IOB) with related threats, full attack chains, and supporting sandbox reports. TI Feeds extend this intelligence across the entire stack, feeding enriched data into existing workflows. 

The impact of ANY.RUN’s solution on MSSP processes

Analysts no longer “build the picture manually.” They access unified, actionable intelligence that accelerates triage, investigation, and reporting across all clients, reducing context gaps and enabling consistent, high-quality outcomes. The investigation pipeline becomes a connected workflow rather than a manual collage. 

4. Tool-Switching: The Hidden Time Tax 

Constantly jumping between platforms kills efficiency and extends turnaround times. Analysts lose momentum with every tab switch, every login, and every manual data transfer, directly impacting SLA compliance and team morale. 

When tools are slow, unreliable, or disconnected, analysts route around them. They rely on memory, on informal knowledge-sharing, on workarounds. All of it introduces inconsistency and risk. 

How ANY.RUN Helps 

ANY.RUN’s API-first architecture is built to disappear into the workflows analysts already use, surfacing intelligence in the context where work is happening, rather than requiring analysts to pivot toward it. The result is less friction, higher adoption, and more consistent investigation quality across the team. 
 
TI Lookup and TI Feeds can be embedded directly into SIEM, SOAR, and ticketing environments, so analysts can surface intelligence without leaving the context they’re already working in. The Interactive Sandbox can be invoked as part of an automated or semi-automated investigation pipeline, with results returned in structured, machine-readable formats that feed directly into case management. 

Reports accessible in the Sandbox

The goal is to make ANY.RUN invisible in the best sense: present at every stage of investigation, without requiring analysts to pivot their attention toward it. 

Stop scaling pain and start scaling profit.

Check how ANY.RUN Intelligence fits your workflows.



Contact sales


5. No Standardization — Scaling Chaos Across Clients 

No two MSSP clients are alike. One runs a legacy on-premises environment with minimal telemetry. Another is cloud-native with dozens of SaaS integrations. A third has custom applications, bespoke logging configurations, and a security team with strong opinions about how investigations should be documented. For the MSSP trying to serve all three, the challenge isn’t just operational: it’s structural. 

When client environments are siloed, institutional knowledge about one doesn’t transfer to another. When investigation workflows differ by engagement, onboarding new analysts takes longer, errors are harder to catch, and QA becomes a guessing game. What scales, in the absence of standardization, is chaos. And chaos has a cost. 

How ANY.RUN helps 

ANY.RUN Threat Intelligence was built with multi-tenant MSSP operations in mind. 

  • Normalizes intelligence across client environments; 
  • Gives analysts a single investigative interface; 
  • Standardizes investigation outputs; 
  • Shortens analyst onboarding. 

TI Feeds deliver structured, consistently formatted intelligence that can be normalized and applied across client environments without per-client customization of the data layer.  

TI Lookup gives analysts a single investigative interface regardless of which client environment they’re working in. And the Interactive Sandbox produces structured, reproducible analysis outputs — process trees, network maps, MITRE mappings, IOC exports — that can be templated into client-specific reporting workflows without requiring analysts to rebuild their investigation approach from scratch each time. 

Standardization doesn’t mean treating every client the same. It means having a consistent intelligence layer beneath the client-specific details, so that quality and speed hold constant even as the client roster grows. 

Analyst burnout (the pain that amplifies all others) 

When systems don’t scale, people absorb the pressure. Overload, repetitive work, constant alert fatigue — this is where everything converges. 

Burnout isn’t just a people problem. It’s an operational risk: 

  • Higher turnover; 
  • Knowledge loss; 
  • Reduced investigation quality. 

How ANY.RUN helps 

By reducing noise, minimizing manual work, and accelerating investigations, the combined capabilities of the Interactive Sandbox, TI Lookup, and TI Feeds directly lower cognitive and operational pressure. Analysts move from reactive overload to structured, efficient workflows. 

Conclusion: What MSSPs Are Actually Looking For 

The pains above are not independent problems. They are interconnected symptoms of the same underlying condition: MSSP operations that have scaled their client load without scaling the intelligence infrastructure underneath it. 

MSSPs don’t need more isolated features. They need: 

  • Less manual aggregation; 
  • Less switching; 
  • More context, faster; 
  • Reliable, always-available capabilities; 
  • Infrastructure that improves margins, not just performance. 

When Threat Intelligence Lookup and Threat Intelligence Feeds operate as a unified threat intelligence layer, and Interactive Sandbox feeds it with fresh behavioral data, the result isn’t just efficiency. It’s a shift in how MSSPs operate: from effort-heavy scaling to intelligence-driven scaling.  

About ANY.RUN

ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, helps security teams investigate threats faster and with greater clarity across modern enterprise environments.   

It allows teams to safely execute suspicious files and URLs, observe real behavior in an Interactive Sandbox, enrich indicators with immediate context through TI Lookup, and monitor emerging malicious infrastructure using Threat Intelligence Feeds. Together, these capabilities help reduce investigation uncertainty, accelerate triage, and limit unnecessary escalations across the SOC.   

ANY.RUN is trusted by thousands of organizations worldwide and meets enterprise security and compliance expectations. It is SOC 2 Type II certified, demonstrating its commitment to protecting customer data and maintaining strong security controls. 

FAQ

What are the main operational challenges facing MSSP leaders today?

The biggest pains include linear headcount scaling, high alert noise (up to 70%), missing context, constant tool switching, lack of standardization across clients, and resulting analyst burnout and turnover.

How does ANY.RUN help MSSPs scale without proportionally increasing staff?

By combining Threat Intelligence and the Interactive Sandbox, ANY.RUN dramatically reduces time spent on triage and investigation, allowing the same team to handle more clients effectively while maintaining or improving service quality.

Can ANY.RUN reduce alert fatigue?

Yes. TI Feeds deliver high-confidence, low-noise IOCs, while TI Lookup and Sandbox analysis provide rapid behavioral context that helps filter genuine threats from noise.

How does ANY.RUN solve the problem of missing context?

The Interactive Sandbox reveals full attack behavior, and TI Lookup instantly correlates indicators with rich, real-world intelligence — all in one integrated workflow instead of manual collection across tools.

Is ANY.RUN suitable for multi-tenant MSSP environments?

Yes. It supports strong client isolation and centralized management, replacing manual separation processes with reliable, scalable infrastructure.

How fast is analysis with ANY.RUN?

The Interactive Sandbox and Threat Intelligence deliver quick turnaround times, often in seconds to minutes, helping MSSPs comfortably meet aggressive SLAs (typically ~1 hour for initial analysis).

The post Margin vs. Madness: Fixing MSSP Top 5 Operational Nightmares appeared first on ANY.RUN’s Cybersecurity Blog.


A practical guide to secure vibe-coding for small businesses | Kaspersky official blog

The entry barriers for app development have plummeted in recent times — with nearly anyone now able to build a professional website, personal news bot, or dashboard simply by giving a chatbot or AI agent a few instructions in natural English. Unfortunately, a massive gap exists between a slick prototype and a reliable, production-ready, secure application. To avoid becoming the subject of another AI fail story, or losing money and sensitive data, follow these straightforward tips. These are intended specifically for non-technical creators and very small teams. Larger enterprises should follow more sophisticated recommendations.

The primary risks of AI-generated code

While vibe coding can deliver a seemingly functional app in just a few hours, it will likely contain dangerous flaws. AI models are trained on code samples from across the internet, which often include suboptimal tutorials, buggy snippets, and outright junk. Sometimes this code simply fails to run, but more often the situation is subtler and more hazardous: the app appears to work, yet under the hood, it might rely on a crude imitation of the required logic or contain critical vulnerabilities. According to a study by the Cloud Security Alliance AI Safety Initiative, the following facts should be considered when using AI for coding:

  • At least 45% of AI-generated code contains dangerous vulnerabilities, such as failing to verify the user before granting access to sensitive data.
  • A professional developer using AI can write code three to four times faster, but may introduce 10 times as many vulnerabilities.
  • Twenty percent of AI-generated code attempts to use external libraries and modules that don’t actually exist.
  • Even when an application handles confidential data — such as payments, private messages, or documents — AI-generated code sometimes skips credential verification entirely. This can leave the app’s data open for anyone on the internet to read.
  • In other instances, the app might correctly prompt for a username and password but fail to enforce access controls, allowing any registered user to view everyone else’s data.
  • Access keys (tokens) for databases and AI services may be embedded directly into the source code, easy to steal, and difficult to rotate after a data breach or cyberattack.
  • Project code or critical build outputs are often deployed to servers without proper access restrictions, leaving both the application logic and sensitive access keys vulnerable to theft.
  • AI may implement insecure database access patterns, which can allow attackers to bypass the application to steal data or execute arbitrary code on the database server.
  • Apps that include API functionality often suffer from insecure API implementations, lacking both user permission checks and rate limiting.

Core principles of securing vibe code

Always verify. Treat AI-generated code as a rough draft. It should always be reviewed and rigorously tested. Ideally, professional developers should handle this; however, if none are available, the vibe-coder should at least test the application themselves, have friends or colleagues poke around the live app, and ask them to review key code snippets. It’s also possible to evaluate code integrity by submitting a separate prompt to the AI: “Review this code for secure development best practices and check for OWASP Top 10 vulnerabilities”.

Protect secrets. Never include passwords, API keys, or any other sensitive data in AI prompts. Instead, instruct the AI to write code that securely stores all secrets in environment variables (special hidden settings).
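
In Python, for example, that pattern looks like the sketch below; the variable name API_KEY is illustrative, and the right name depends on your hosting platform:

```python
import os

def get_api_key() -> str:
    """Read the API key from an environment variable instead of hardcoding it.

    The variable name API_KEY is illustrative; use whatever your
    deployment platform or secrets manager expects.
    """
    key = os.environ.get("API_KEY")
    if not key:
        # Fail fast with a clear message rather than running half-configured.
        raise RuntimeError("API_KEY is not set; configure it in the environment, not in code.")
    return key
```

Because the secret lives outside the source tree, it never ends up in the repository and can be rotated without touching the code.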

Prioritize efforts. The main risks emerge when an application is network-accessible to outsiders, processes valuable data, or runs on infrastructure that would be useful to attackers. The components of an app or system that meet these criteria are precisely what needs to be protected first. A static website composed of three HTML pages faces significantly lower risk than a loyalty program integrated into an online store. 

Make security an explicit requirement. Even a simple, straightforward line in the prompt, like “Follow industry standards and security best practices when generating this code”, improves the output. Providing more specific requirements for critical code snippets makes the results even better.

Don’t trust default settings. Often, the danger in vibe coding lies in the configuration rather than the code itself. For example, an app processing sensitive company data might be deployed on a public vibe-coding platform (Lovable or the like), and remain accessible to the entire internet by default. Even if the code is flawless, making that information public is a critical security failure. Because of this, every component — from hosting and database settings to the deployment pipeline — must be manually reviewed and properly configured. If the purpose of a setting is unclear, consult a chatbot for the optimal values, specifying that its goal is to enhance security, and describing who the app is intended for.

Security is a continuous process. Securing the app should not be treated as a one-off task. Every time an application is updated, hosting providers are changed, or a project undergoes any other major shift, all steps in making it secure should be revisited, and the risks reassessed.

Tips for securing vibe code

It’s natural to want an app built from broad prompts like “Make me a beautiful, user-friendly, fast, reliable, and secure app for [use case].” However, for the results to actually be effective, each of those requirements needs to be fleshed out. Below, we’ve outlined recommendations for building standard components that will make vibe code more secure. It’s important to emphasize that “more secure” doesn’t mean “perfectly secure” — these approaches lower the risk, but that risk remains well above zero.

Demand security from the AI. When assigning a task to a neural network, be explicit: “write secure code, validate data, encrypt passwords”. Each type of task requires its own security prompt. For instance, don’t just ask to “build a login form”. Instead, ask for a “secure login form with credential validation, authentication and authorization (user permissions) controls, brute-force protection, password hashing according to modern standards, transmission strictly over HTTPS, and no hardcoded secrets”. It makes sense to use these secure requirement templates every time. It’s also helpful to keep a short cheat sheet of standard requirements for AI prompts: “validate all external data and user input before processing”, “no secrets in code”, “protect APIs from abuse”, “restrict user permissions”, and “secure default settings”.

Use off-the-shelf solutions. If an app needs a user management system, insist on using a popular, reputable library, such as NextAuth, Auth0, and so on, rather than inventing a new and vulnerable solution. This is the most common cause of data breaches. This applies to more than just login and registration; for other high-risk actions like file uploads and API call processing, it’s better to use established frameworks and libraries with built-in protections rather than building everything from scratch.

Don’t trust the AI blindly; verify open-source components. Neural networks often try to inject non-existent components and libraries into a project or suggest outdated versions. Always search for the suggested names online to ensure they are real, widely used, and secure — and make sure the latest versions are used.

Demand robust encryption. Explicitly state that modern industry standards must be used for both data transmission and storage: TLS 1.3 (for example, via OpenSSL) for network traffic; Argon2 or bcrypt for hashing credentials; and so on. 
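
Argon2 and bcrypt come from third-party libraries; as an illustration of the same salted, iterated approach using only Python's standard library, here is a PBKDF2 sketch (the iteration count follows current OWASP guidance for PBKDF2-SHA256):

```python
import hashlib
import hmac
import os

# Illustrative sketch of salted, iterated password hashing with the
# standard library's PBKDF2. In production, prefer Argon2 or bcrypt via
# a maintained library, as recommended above.
ITERATIONS = 600_000  # OWASP-recommended order of magnitude for PBKDF2-SHA256

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The per-password salt defeats precomputed rainbow tables, and the constant-time comparison avoids leaking information through timing.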

Never trust user input. Always instruct the AI to include validation for any data entered by users, whether in forms or search bars. Use terms like “parameterization” and “sanitization” to emphasize that the app needs protection against malicious actors, not just users’ typos.
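
As a sketch of what parameterization means in practice, Python's built-in sqlite3 driver treats placeholder values as data, never as SQL (the table and data here are purely illustrative):

```python
import sqlite3

# Minimal illustrative schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

def find_user(conn: sqlite3.Connection, name: str):
    # The "?" placeholder means a hostile value like "' OR 1=1 --"
    # is matched as a literal string, not executed as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

A string-concatenated query with the same hostile input would return every row; the parameterized version simply finds no user by that name.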

Set limits on user actions. Require the AI to implement rate limiting for login attempts or general requests. This will protect a project from automated attacks like DoS and brute-force password guessing.
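
A minimal sliding-window limiter illustrates the idea; the limit and window values are arbitrary examples, and a production app would typically use its framework's or gateway's built-in rate limiting instead:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` events per `window` seconds for each key
    (e.g., a username or client IP)."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # key -> timestamps of recent events

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[key]
        while q and now - q[0] > self.window:  # drop events outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject or delay this attempt
        q.append(now)
        return True
```

Applied to login attempts, this turns a brute-force run into a trickle; applied to general requests, it blunts simple DoS attempts.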

Hide the system’s inner workings. If the site crashes, users should see a simple apology page rather than a detailed error report containing snippets of the code. That kind of information is a goldmine for hackers.
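
One way to enforce this, sketched here with only the standard library, is to wrap request handlers so the full traceback goes to the developer log while the user sees only a generic message:

```python
import logging
import traceback

logger = logging.getLogger("app")

def safe_handler(handler):
    """Wrap a handler so internal errors are logged in full for developers,
    while the user only ever sees a generic message."""
    def wrapped(*args, **kwargs):
        try:
            return handler(*args, **kwargs)
        except Exception:
            # Full details (paths, stack frames) stay in the server log.
            logger.error("Unhandled error:\n%s", traceback.format_exc())
            return "Sorry, something went wrong. Please try again later."
    return wrapped
```

Most web frameworks offer the same behavior via a "production mode" or custom error-page setting, which should be enabled before going live.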

Remember that you’re a developer, and you need to protect development-related digital assets. All related accounts — such as access to GitHub, project hosting, and other resources — are prime targets for attackers. Be sure to enable two-factor authentication (2FA) on all work accounts.

Make backups. Regularly back up a project both locally and to the cloud to protect it against critical AI errors as well as cyberattacks. These backups should include both the application’s source code and its databases.

Set up a sandbox. Test new features and app versions in a secure environment using a clone of an active site or app and a copy of a database. Always run thorough tests before pushing an update live. This allows catching issues without putting users or their data at risk.

Update dependencies and scan them for vulnerabilities. A vibe-coded app will almost certainly rely on third-party libraries and components, known as dependencies. It’s wise to update these regularly by rebuilding the app with the latest versions, even if the app’s code itself has not changed. This process helps patch known security flaws in the packages it uses. 

Check for secrets leaking into the repository. Use secrets scanners like TruffleHog to audit resulting code. Even with instructions, AI might slip up and include an API key or password in the source code. A scanner ensures that files containing keys and passwords don’t end up in Git or get published alongside the project.
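
To show the idea behind such scanners, here is a toy regex-based check; the patterns are illustrative only, and dedicated tools like TruffleHog ship far more complete, battle-tested rule sets:

```python
import re

# Illustrative patterns only; real scanners cover hundreds of secret formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan_text(text: str):
    """Return matched fragments so a pre-commit hook can block the commit."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Running a check like this (or, better, a real scanner) as a pre-commit hook stops keys from ever reaching Git history, where they are hard to purge.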


Five defender priorities from the Talos Year in Review


A familiar theme in security right now is that the barrier to entry for attackers is at an all-time low. AI tools can spin up websites within minutes that can easily direct data to disposable external data stores and send alerts for new captures — all without code. 

One such case was recently detailed in the latest Cisco Talos Incident Response Quarterly Trends report.

Proof-of-concept code for exploiting new vulnerabilities used to take attackers months to create. Now it takes hours.

All of this is very concerning for defenders. Yesterday, my colleague told me about a recent conference Q&A he hosted, where he was asked to provide some hope to those in the room who have faced an overwhelming amount of change in recent months. 

His answer was to focus on the here and now. Focus on what you can control, and what you have influence over. We can’t change what may or may not happen in six months’ time, but we can prioritize what’s important now. 

The other key thing for defenders to bear in mind is that even when attackers move fast, they still don’t behave like your normal users. At the end of the day, you’re still looking for anomalous behavior – whether that behavior is machine- or human-generated.

As we come to the end of our Year in Review content release (if you haven’t seen it yet, we published videos, podcasts, and topic specific blog posts), we’d like to end by summarizing the key priorities for defenders. 

Here are five of them that are worth considering when it comes to spotting malicious, unusual behavior in your environment.

1. Identity is the main battlefield 

The Year in Review highlights how frequently attackers rely on valid accounts and credential abuse throughout the attack chain. We see this across multiple areas:

  • MFA spray attacks targeting IAM platforms directly 
  • Device compromise attacks increasing 178% year over year 
  • Attackers registering their own devices as trusted multi-factor authentication (MFA) methods
  • Ransomware attack chains largely relying on valid accounts, credentialed tools, or both

Network infrastructure is a key part of this. VPNs, application delivery controllers (ADCs), and firewalls are being exploited to steal session tokens, bypass MFA, and impersonate users.

However, when attackers successfully authenticate, where they go from there tends not to fall in line with normal user behavior. They start to access new systems outside of their role, move laterally using tools like PsExec, execute commands at unusual times, and overall operate at a scale that normal users don’t.

Therefore, having a baseline understanding of normal user behavior is more important than ever.

Prioritize:

  • Treating identity infrastructure as Tier 1 critical assets and applying the strongest monitoring and protection controls to IAM and PAM systems
  • Securing MFA device registration workflows with strict verification procedures and limited administrative approval rights
  • Hardening authentication systems against automated attacks by enforcing rate limiting, anomaly detection, and strong conditional access policies
  • Building baseline detections around what users do, not just how they log in
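
As a rough illustration of that last point, a baseline detection might compare each login against the hosts and working hours a user normally touches; the profile shape and field names here are hypothetical, not a Talos-prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    host: str
    hour: int  # 0-23, local time

def flag_anomalies(baseline, logins):
    """Flag logins to hosts a user has never touched, at hours outside their
    usual window, or from accounts with no profile at all.
    `baseline` maps user -> {"hosts": set of hostnames, "hours": set of hours}."""
    flagged = []
    for login in logins:
        profile = baseline.get(login.user)
        if profile is None:
            flagged.append(login)  # unknown account: always worth a look
        elif login.host not in profile["hosts"] or login.hour not in profile["hours"]:
            flagged.append(login)
    return flagged
```

Even this crude model surfaces the behaviors described above: lateral movement to new systems and activity at unusual times.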

2. Prioritize the vulnerabilities that have the most exposure

One of the most important callouts in the report is how attackers select targets. The rapid exploitation of vulnerabilities such as React2Shell and ToolShell shows that exploitation can begin immediately after disclosure with readily available proof-of-concept code. Attackers then prioritize what is exposed and reachable. 

Attackers also like to exploit the vulnerabilities that are closest to identity, session handling, and access logic.

At the same time, older vulnerabilities such as Log4Shell remain among the most exploited, over four years after disclosure.

This creates a dual reality where some new vulnerabilities are weaponized instantly, but old, high-value vulnerabilities are never fully eliminated.

Prioritize:

  • Remediating vulnerabilities based on internet exposure and access impact, not just CVSS scores
  • Reducing time-to-patch for externally accessible systems 
  • Continuously reassessing what is reachable from the outside

3. Address the long tail of legacy and embedded risk

The Year in Review highlights that nearly 40% of the top 100 most targeted vulnerabilities impact end-of-life (EOL) systems, and 32% are over a decade old. Many of these vulnerabilities exist in deeply embedded components such as PHP frameworks, Log4j, and ColdFusion.

These components are often poorly inventoried, difficult to patch, and tightly coupled to business-critical systems.

It’s a frustrating fact that the most persistent risks are often the least visible and the hardest to remove. They create long-term blind spots, which are an attacker’s favorite thing to find and exploit.

Prioritize:

  • Improving visibility into software dependencies and embedded components 
  • Treating development frameworks and libraries as part of your attack surface 
  • Establishing clear strategies for isolating or retiring legacy systems

4. Secure the systems that broker trust

Attackers are increasingly targeting systems that provide maximum operational leverage. This includes network management platforms, application delivery controllers (ADCs), and shared software platforms running across multiple devices.

These systems are attractive to adversaries because they store credentials, control configurations across large environments, provide visibility into the network, and enable changes at scale.

Unfortunately, these platforms are also traditionally less monitored than endpoints, more complex to patch or upgrade, and have centralized points of failure.

Prioritize:

  • Identifying management-plane and control-plane systems that need securing
  • Applying enhanced monitoring and access controls to these platforms 
  • Limiting administrative access and enforcing strong segmentation

5. Keep focusing on patterns, even with increased automation and AI-driven attacks

Yes, automation and AI are changing the threat landscape. As we’ve spoken about, attackers are increasingly able to rapidly identify and exploit vulnerabilities, launch large-scale identity attacks, generate convincing phishing lures that mimic real business workflows, and accelerate parts of the attack lifecycle using AI-assisted tooling.

However, all these things do not remove a key constraint for adversaries: Automated attacks still produce patterns of unusual behavior, and patterns are detectable.

Even highly scalable attacks tend to reuse the same infrastructure, tools, and techniques. They also follow predictable sequences of activity and generate anomalies.

Prioritize:

  • Focusing detection efforts on anomalous events (e.g., unusual authentication flows, abnormal system access, anomalous device registration) 
  • Reducing alert fatigue by prioritizing a smaller number of meaningful detections over broad, low-confidence alerting 
  • Supporting triage and enrichment with automation where possible, alongside human decision-making
  • Ensuring teams are equipped to investigate patterns of behavior, not just isolated alerts

Final thoughts

Much of the current concern in and around the security community is the new reality that anyone can create a malicious campaign. The Year in Review doesn’t disagree.

However, Talos data also shows something equally important:

  • Attackers still rely on the same vulnerabilities 
  • They reuse the same tools and techniques 
  • They follow repeatable patterns 
  • And, critically, they don’t behave like your users

Even when they successfully authenticate, move laterally, or establish persistence, their activity introduces detectable anomalies.

That’s where the opportunity lies for defenders. 


Phishing-to-RMM Attacks: The Remote Access Blind Spot CISOs Can’t Ignore 

CISOs are under pressure to prove that their security programs can detect threats early, reduce business risk, and support fast, confident response. But that becomes harder when attackers stop relying on obviously malicious tools.

In recent phishing-to-RMM campaigns observed by ANY.RUN analysts, threat actors are using fake Microsoft, Adobe, and OneDrive pages to deliver legitimate remote management tools instead of traditional malware. Once installed, these tools can give attackers remote access to a victim’s device while blending into software categories many enterprises already use or allow.

For security leaders, this creates a difficult visibility problem. The payload may be legitimate. The infrastructure may be trusted. The user action may look like a routine download. Yet the outcome is the same: unauthorized remote access inside the environment.

Key Takeaways 

  • Phishing-to-RMM attacks create a dangerous visibility gap for enterprise SOCs: Attackers can deliver legitimate remote management tools through phishing pages that impersonate trusted services like Microsoft, Adobe, and OneDrive.
  • The payload may not look malicious on its own: Tools such as ScreenConnect and LogMeIn Rescue can appear as legitimate remote administration software, especially in organizations where RMM usage is already allowed.
  • Domain reputation is not enough: These attacks may use legitimate platforms, vendor infrastructure, or compromised websites instead of obvious newly registered domains.
  • The real signal is in the full attack chain: SOC teams need to connect the phishing lure, download context, execution behavior, RMM installation, and outbound connections.
  • For CISOs, the risk is operational as much as technical: Missed phishing-to-RMM activity can lead to slower detection, longer attacker dwell time, delayed containment, and weaker confidence in approved remote access tools.
  • ANY.RUN helps turn gray-zone activity into evidence: With Interactive Sandbox and Threat Intelligence, teams can safely analyze suspicious URLs and files, trace RMM behavior, and investigate related phishing-to-RMM chains.

The Blind Spot: When “Allowed” Tools Become the Attack Path

Most enterprise security programs are built to separate malicious activity from normal operations. Phishing-to-RMM attacks blur that line.

An RMM installer can pass basic checks because it is not malware by design. But the risk is not in the tool alone. It is in the context around it: how it reached the user, whether the download was expected, which endpoint launched it, and what connection followed.

For CISOs, this is where the risk becomes critical. Unauthorized access can hide inside routine-looking activity, giving the business a false sense of control while the attacker is already inside.

The outcome can be serious: 

  • Slower detection because the activity does not look like classic malware 
  • Longer attacker dwell time inside the environment 
  • Higher risk of lateral movement from the compromised endpoint 
  • More pressure on SOC teams to investigate ambiguous alerts 
  • Delayed containment because the initial access path is harder to prove 
  • Weaker confidence in whether approved remote access tools are being used safely 

Close the gap before it becomes business risk.
Give your SOC full visibility into suspicious activity.



Contact us


Which Organizations are Most Exposed 

ANY.RUN data shows that phishing-to-RMM activity is primarily concentrated in the United States, followed by Canada, Europe, and Australia. The most affected industries include Education, Technology, Banking, Government, Manufacturing, and Finance.

These sectors often depend on remote administration for IT support, distributed workforce management, and endpoint maintenance. That reliance creates more room for abuse: when RMM tools are already part of normal operations, unauthorized access can take longer to recognize and contain.

How Legitimate RMM Tools Are Delivered Through Phishing 

Since early April, the ANY.RUN team has observed a rise in phishing-to-RMM attacks, where threat actors use phishing to deliver legitimate remote management tools and gain remote access to victims’ devices.  

For just one of these campaigns, we are seeing more than 50 public analyses in ANY.RUN every week: suricataID:"84002229"

Public analyses related to phishing-to-RMM attacks demonstrated inside ANY.RUN’s TI Lookup

Phishing campaigns that deliver RMM tools are especially dangerous for SOC teams because these tools can appear to be legitimate remote administration software. If an organization already uses or allows RMM solutions, the launch of ScreenConnect may not immediately trigger security policies.

Close the RMM abuse gap in your SOC.
Integrate ANY.RUN’s threat analysis and intelligence.



Contact us


The screenshot below shows a phishing page impersonating Microsoft Store and Adobe Acrobat Reader DC. The user is prompted to download Adobesetup.exe, but behind that name is ScreenConnect, an RMM tool that attackers can use to establish remote access to the system.

View analysis session 

A fake Microsoft Store page with an RMM installer disguised as Adobe 

Another example shows the attack disguised as a protected Microsoft OneDrive download. The page at vmail.app.n8n.cloud displays a “Verify to Download” prompt for what appears to be a PDF document. Once the user clicks, they receive ScreenConnect.ClientSetup.exe:

Fake Microsoft OneDrive page with an RMM installer disguised as a PDF document 

This chain makes SOC triage more difficult: the phishing landing page is hosted on the legitimate n8n.cloud platform, while the RMM agent download and subsequent connection occur through legitimate ScreenConnect infrastructure.

The attack does not rely on obvious newly registered domains, which are often an easy signal for blocking. As a result, detection needs to be based on behavior, download context, and anomalies around RMM execution, not domain reputation alone.
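As a rough illustration, that kind of behavior-plus-context scoring can be sketched in a few lines. The event fields, tool list, and weights below are hypothetical assumptions for the sketch, not ANY.RUN detection logic:

```python
KNOWN_RMM = {"screenconnect", "logmein", "netsupport", "rustdesk", "splashtop"}
BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe"}

def score_rmm_event(event: dict) -> int:
    """Crude risk score for an RMM installer execution event (illustrative only)."""
    product = event.get("product", "").lower()
    tool = next((t for t in KNOWN_RMM if t in product), None)
    if tool is None:
        return 0  # not an RMM product at all
    score = 0
    # Signal 1: launched from a browser download rather than a deployment tool.
    if event.get("parent_process", "").lower() in BROWSERS:
        score += 2
    # Signal 2: the tool is not on the organization's approved RMM list.
    if tool not in event.get("approved_tools", set()):
        score += 3
    # Signal 3: the on-disk name hides the product (e.g. "Adobesetup.exe" -> ScreenConnect).
    if tool not in event.get("file_name", "").lower():
        score += 2
    return score
```

A browser-launched, unapproved installer with a disguised file name scores highest, while an approved, correctly named deployment scores zero, which is exactly the reputation-independent distinction the paragraph above calls for.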

Traffic to ScreenConnect in ANY.RUN’s Connections tab 

In addition to ScreenConnect, threat actors use other legitimate RMM and remote-access tools in these phishing chains, including Datto RMM, ITarian, LogMeIn Rescue, Action1 RMM, NetSupport, Syncro, MeshAgent, SimpleHelp, RustDesk, and Splashtop.

TI Lookup query for tracking phishing-to-RMM attack chains 

To retrospectively track similar chains in ANY.RUN Threat Intelligence, teams can use the following query. As part of TI Lookup, every user has access to 20 full queries: 

threatName:"^phishing$" and threatName:"rmm-tool" 

In addition to standard installers, threat actors are also using more sophisticated delivery methods, as shown in this public analysis: 

VBS script disguised as an Adobe Acrobat installer

In this example, the user is shown a phishing page with an Adobe document download lure. Instead of the expected file, the page delivers a VBS script.

Once executed, the script attempts to elevate privileges through UAC, disable SmartScreen, and weaken Microsoft Defender protections. It then silently downloads the LogMeIn Rescue installer, removes the Mark-of-the-Web, and runs a quiet installation via msiexec, turning the endpoint into a system with unattended RMM access.
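The quiet msiexec step is one of the few stages in this chain a defender can match on directly. Below is a minimal sketch of such a command-line check; the flags and path fragments it looks for are illustrative assumptions, not a complete rule:

```python
import re

def is_quiet_remote_msiexec(cmdline: str) -> bool:
    """Flag msiexec command lines that quietly install an MSI fetched from
    the internet or dropped into a user-writable path, as described above."""
    c = cmdline.lower()
    if "msiexec" not in c:
        return False
    # Quiet/silent install switches suppress any UI the user might notice.
    quiet = any(flag in c for flag in ("/qn", "/quiet", "-qn", "-quiet"))
    # MSI sourced from a URL or a typical download location, not a managed share.
    remote = bool(re.search(r"/i\s+https?://", c)) \
        or "\\appdata\\" in c or "\\downloads\\" in c
    return quiet and remote
```

In practice a check like this would feed the broader context scoring rather than act as a standalone verdict, since legitimate IT automation can produce similar command lines.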

Detect trusted-tool abuse before attackers gain access.
Bring ANY.RUN into your SOC for faster threat response.



Integrate in your SOC


It is also worth noting that in campaigns like this, threat actors try to minimize easily blocked, lower-level IoCs from the Pyramid of Pain, such as newly registered domains.

Instead, phishing pages may be hosted on already existing websites. The domain itself appears legitimate, while the suspicious activity is hidden deeper in the URL — in an unusual URI path that may indicate SEO injection or a compromised website.

SEO injection into a legitimate domain in a phishing-to-RMM attack chain 
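A rough way to express this "legitimate domain, suspicious path" heuristic in code, assuming a defender-supplied trust list and illustrative lure keywords:

```python
from urllib.parse import urlparse

def suspicious_uri_on_trusted_domain(url: str, trusted_domains: set) -> bool:
    """Flag URLs whose domain is trusted but whose path looks out of place,
    a rough proxy for SEO injection on a compromised site. The depth
    threshold and keyword list are illustrative, not ANY.RUN logic."""
    p = urlparse(url)
    if p.hostname not in trusted_domains:
        return False  # reputation tools already cover unknown domains
    path = p.path.lower()
    lures = ("download", "invoice", "verify", "setup", "adobe", "onedrive")
    deep = path.count("/") >= 3            # unusually deep URI path
    lure = any(w in path for w in lures)   # social-engineering keyword in path
    return deep and lure
```

The point mirrors the paragraph above: the signal lives in the URI and its context, so a domain-reputation check alone returns nothing.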

At the time of analysis, VirusTotal showed that no vendor had flagged this domain as malicious: 

VirusTotal did not flag the domain as malicious at the time of analysis 

Taken together, these cases reflect a broader shift from malware-first initial access to phishing-first initial access. Threat actors are increasingly gaining access not through an obviously malicious payload, but through social engineering and legitimate remote administration tools.

How SOC Teams Can Close the RMM Visibility Gap 

Phishing-to-RMM attacks cannot be handled like ordinary malware delivery. The payload may be legitimate, the infrastructure may be trusted, and the domain may not have a malicious reputation at the time of analysis.

To detect this activity earlier, SOC teams need visibility into the full attack chain, not just the final file. That means connecting:

  • the phishing page that initiated the download 
  • the file or script delivered to the user 
  • the execution path on the endpoint 
  • attempts to weaken security controls 
  • RMM installation behavior 
  • outbound connections to remote access infrastructure 
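The correlation the list above describes can be sketched as a per-host stage map: given triage events tagged with a host and a chain stage, report which stages have been observed together. The stage labels are illustrative, not a fixed ANY.RUN schema:

```python
# Chain stages in the order listed above (illustrative labels).
STAGES = ["phishing_page", "payload_download", "execution",
          "defense_evasion", "rmm_install", "rmm_connection"]

def chain_coverage(events):
    """Map each host to the chain stages observed for it, in chain order.
    `events` is an iterable of (host, stage) tuples."""
    seen = {}
    for host, stage in events:
        seen.setdefault(host, set()).add(stage)
    return {host: [s for s in STAGES if s in stages]
            for host, stages in seen.items()}
```

A host that has hit `rmm_install` without a matching approved-deployment record is exactly the ambiguous case the preceding list argues needs full-chain visibility.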

This is where ANY.RUN helps teams close the gap. With the Interactive Sandbox, security teams can safely examine suspicious URLs, files, and scripts during triage.

Phishing-to-RMM attack chain exposed inside ANY.RUN sandbox

They can observe the phishing lure, delivered payload, execution flow, attempts to weaken security controls, RMM installation, and outbound connections in one controlled environment.

ANY.RUN Threat Intelligence adds the retrospective layer. Teams can search across public analyses, track phishing-to-RMM chains, pivot from one indicator to related activity, and understand whether a single event is part of a wider campaign.

Sandbox analyses linked to phishing-to-RMM attacks displayed inside TI Lookup 

For CISOs, this means more control over a risk that is usually hard to prove. The SOC can validate suspicious remote access activity faster, show how the access path started, and give leadership clearer evidence for containment and follow-up decisions.

Strengthen early threat detection across your SOC.
See suspicious activity clearly and act with confidence.



Power up your SOC


Instead of relying on reputation-based signals or waiting for a high-confidence malware alert, security teams can prove when trusted tools are being abused. That gives CISOs stronger confidence in detection coverage, faster response readiness, and better visibility into whether approved remote access software is creating hidden business risk. 

About ANY.RUN 

ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, helps security teams detect, investigate, and respond to threats faster.

ANY.RUN solutions include Interactive Sandbox, Threat Intelligence Lookup, Threat Intelligence Feeds, and integrations for SOC workflows across SIEM, SOAR, EDR, and other security tools. Together, they help teams analyze suspicious files and URLs, uncover attacker behavior, enrich investigations with real-world threat context, and operationalize intelligence across their environment.

Built for security-conscious organizations, ANY.RUN is SOC 2 Type II attested and supports enterprise-ready controls such as SSO, MFA, granular privacy settings, and AES-256-CBC encryption.

Trusted by more than 15,000 organizations and 600,000 security professionals worldwide, ANY.RUN gives SOC teams the visibility they need to move from uncertain alerts to evidence-based decisions.

The post Phishing-to-RMM Attacks: The Remote Access Blind Spot CISOs Can’t Ignore appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – Read More

Phishing crypto-wallet clones in the App Store and other attacks on iOS and macOS crypto owners | Kaspersky official blog

Even if you keep your crypto assets in a cold wallet and use Apple devices — which enjoy a strong reputation for security — cybercriminals may still find a way to swipe your funds. These bad actors are combining well-known tricks into new attack chains — including baiting victims right inside the App Store.

Crypto-wallet clones

This past March, we discovered phishing apps at the top of the Chinese App Store charts with icons and names mimicking popular crypto-wallet management tools. Because regional restrictions block several official wallet apps from the Chinese App Store, attackers have stepped in to fill the void. They created fake apps using icons similar to the originals and names with intentional typos — likely to bypass App Store moderation and deceive users.

Phishing apps in the App Store appearing in search results for Ledger Wallet (formerly Ledger Live)

Beyond these, we found a number of apps with names and icons that had nothing to do with cryptocurrency. However, their promotional banners claimed they could be used to download and install official wallet apps that are otherwise unavailable in the regional App Store.

Banners on app pages claiming they can be used to download the official TokenPocket app, which is missing from the local App Store

In total, we identified 26 phishing apps mimicking the following popular wallets:

  • MetaMask
  • Ledger
  • Trust Wallet
  • Coinbase
  • TokenPocket
  • imToken
  • Bitpie

A few other very similar apps didn’t contain phishing functionality yet, but all signs point to them being linked to the same attackers. It’s likely they plan to add malicious features in future updates.

To get these apps cleared for the App Store, the developers added basic functionality, such as a game, a calculator, or a task planner.

Installing any of these clones is the first step toward losing your crypto assets. While the apps themselves don’t steal cryptocurrency, seed phrases, or passwords, they serve as bait that builds user trust by virtue of being listed on the official App Store. Once installed and launched, however, the app opens a phishing site in the victim’s browser, designed to look like the App Store, which then prompts the user to install a compromised version of the relevant crypto wallet. The attackers have created multiple versions of these malicious modules, each tailored to a specific wallet. You can find a detailed technical breakdown of this attack in our Securelist post.

A victim who falls for the ruse is first prompted to install a provisioning profile, which allows apps to be sideloaded onto an iPhone outside the App Store. The profile is then used to install the malicious app itself.

A fake App Store site prompting the user to install an app masquerading as Ledger Wallet

In the example above, the malware is built on the original Ledger app with integrated Trojan functionality. The app looks identical to the original, but when connected to a hardware wallet, it displays a window requiring a seed phrase, supposedly to restore access. This is not standard procedure: typically, you only need to enter a PIN — never a recovery phrase. If a victim is deceived by the app’s apparent legitimacy and enters their seed phrase, it’s immediately sent to the attackers’ server — granting them full access to the victim’s crypto assets.

Sideloading outside the App Store

A critical component of this scheme involves installing malware on the victim’s iPhone by bypassing the App Store and its verification process. This is executed much like the SparkKitty iOS infostealer we discovered previously. The attackers managed to gain access to the Apple Developer Enterprise Program. For just US$299 a year — and following an interview and corporate verification — this program allows entities to issue their own configuration profiles and apps for direct download to user devices without ever publishing them in the App Store.

To install the app, the victim must first install a configuration profile that enables the malware to be downloaded directly, bypassing the App Store. Note the green verification checkmark


In general, enterprise profiles are designed to allow organizations to deploy internal apps to employees’ devices. These apps don’t require App Store publication and can be installed on an unlimited number of devices. Unfortunately, this feature is often abused. These profiles are frequently used for software that fails to meet Apple’s policies, such as online casinos, pirated mods, and, of course, malware.

This is precisely why the fake site mimicking the Apple Store prompts the user to install a configuration profile before delivering the app signed by that profile.

Stealing cryptocurrency via macOS apps and extensions

Many crypto owners prefer managing their wallets on a computer rather than a smartphone — often choosing Macs for the task. It’s no surprise, then, that most popular macOS infostealers target crypto-wallet data in one way or another. Recently, however, a new malicious tactic has been gaining traction: in addition to stealing saved data, attackers are embedding phishing dialogs directly into legitimate wallet applications already installed on users’ computers. Earlier this year, the MacSync infostealer adopted this functionality. It infiltrates systems via ClickFix attacks: users searching for software are lured to fake sites with fraudulent instructions to install the app by running commands in Terminal. This executes the infostealer, which scrapes passwords and cookies saved in Chrome, chats from popular messengers, and data from browser-based crypto-wallet extensions.

But the most interesting part is what happens next. If the victim already has a legitimate Trezor or Ledger app installed, the infostealer downloads additional modules and… swaps out fragments of the app with its own trojanized code. The malware then re-signs the modified file so that after these “fixes” are made, Gatekeeper (a built-in protection mechanism in macOS) allows the application to run without an additional permission request from the user. While this trick doesn’t always work, it’s effective for simpler apps built on the popular Electron framework.
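One way defenders can look for this kind of re-signing on macOS is to compare the `TeamIdentifier` reported by `codesign -dv --verbose=2 <app>` against the vendor's known team ID, which has to be obtained separately from the vendor. Below is a minimal sketch that parses that command's output passed in as text; the expected team ID is an assumption the caller must supply:

```python
def unexpected_signer(codesign_output: str, expected_team_id: str) -> bool:
    """Return True if the TeamIdentifier in `codesign -dv --verbose=2`
    output differs from the vendor's known team ID (or is missing,
    i.e. the binary is unsigned or ad-hoc signed)."""
    for line in codesign_output.splitlines():
        if line.startswith("TeamIdentifier="):
            return line.split("=", 1)[1].strip() != expected_team_id
    return True  # no TeamIdentifier line: treat as unexpected
```

Because the malware re-signs the app with its own identity to satisfy Gatekeeper, the signature remains valid but the team identifier changes, which is exactly what this check surfaces.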

The trojanized app prompts the user for the seed phrase of their wallet

When the trojanized app is opened, it fakes an error and initiates a “recovery process”, prompting the user for their wallet seed phrase.

Besides MacSync, the developers behind other popular macOS infostealers have adopted this same trojanization approach. We previously detailed a similar mechanism used to compromise Exodus and Bitcoin-Qt wallets.

How to keep your crypto assets safe

Time and again, attackers have proved that no gadget is truly invincible. With so many developers and cryptocurrency users preferring macOS and iOS, threat actors have designed and deployed industrial-scale attacks for both platforms. Staying safe requires in-depth defense backed by skepticism and vigilance.

  • Download apps only from trusted sources: either the developer’s official website or their App Store page. Since malware can slip even into official stores, always verify the app’s publisher.
  • Check the app’s rating, publication date, and download counter.
  • Read the reviews — especially the negative ones. Sort reviews by date to evaluate the latest version. Attackers often start with a perfectly innocent app that earns high ratings before introducing malicious functionality in a later update.
  • Never copy and paste commands into your Terminal unless you’re 100% certain what they do. These attacks have become very popular lately, often disguised as installation steps for AI apps like Claude Code or OpenClaw.
  • Use a comprehensive security system on all your computers and smartphones. We recommend Kaspersky Premium. This goes a long way to mitigate the risk of visiting phishing sites or installing malicious apps.
  • Never enter your seed phrase into a hardware wallet app, on a website, or in a chat. In every scenario, whether migrating to a new wallet, reinstalling apps, or recovering a wallet, the seed phrase should be entered exclusively on the hardware device itself — never in a mobile or desktop app.
  • Always verify the recipient’s address on the hardware wallet’s screen to prevent attacks involving address swapping.
  • Store your seed phrases in the most secure way possible, such as on a metal plate or in a sealed envelope in a safe deposit box. It’s best not to store them on a computer at all, but if that’s your only option, use a secure, encrypted vault like Kaspersky Password Manager.

Still believe that Apple devices are bulletproof? Think again as you read the following:

Kaspersky official blog – Read More

The calm before the ransom: What you see is not all there is

A breach claims the systems as well as the confidence that was, in retrospect, a major vulnerability

WeLiveSecurity – Read More

Eavesdropping via fiber-optic cables | Kaspersky official blog

Researchers from three universities in Hong Kong have published a paper demonstrating a method of eavesdropping through fiber-optic cables. Fiber optics have long been the gold standard for data transmission due to their ability to transfer information at high speeds over long distances. Fiber-optic cabling utilizes ultra-thin glass threads for transmission, and is widely used not only for backbone data lines but also for connecting individual premises. And as it turns out, these very glass threads are sensitive enough to vibrations that they subtly alter the parameters of the optical signal.

Potentially, this allows a fiber-optic cable to be turned into a microphone and intercept room conversations while being kilometers away from the sound source. In other words, this exploits so-called side channels — non-obvious characteristics of everyday home or office appliances that enable information leaks. Of course, this work is largely theoretical, much like other similar studies we’ve covered previously — eavesdropping through mouse sensors, using RAM modules as radio transmitters, exfiltrating data from CCTV sensors, or screen snooping through HDMI cables. However, several news outlets have reported on the Hong Kong researchers’ study as if it were a turnkey method, so let’s try to determine just how dangerous it really is in practice.

Hurdles of optical eavesdropping

The unique characteristics of fiber-optic cables were first considered back in 2012 by Russian researchers, who conceded the theoretical possibility of such an attack. The goal of the Hong Kong researchers was to demonstrate at least some level of practical implementation for eavesdropping.

Network and room layout

Diagram of a provider’s fiber-optic network showing the location of the attacker and the room targeted for eavesdropping. Source

The diagram above illustrates a typical FTTH (fiber-to-the-home) network architecture, where end users or organizations connect directly to a fiber-optic cable. The ISP manages the so-called Optical Distribution Network (ODN), to which end-users are connected. The device on the user’s end is called an Optical Networking Unit (ONU).

An attack leveraging this equipment is quite difficult to execute. To eavesdrop on a specific ONU endpoint, a potential adversary would need access to the provider’s infrastructure and control over the ODN equipment. What exactly is this device? It’s a network router or an optical-to-Ethernet converter — a small box usually tucked away in an office utility closet. Inside the premises, connectivity is provided either by Wi-Fi or a local network using Ethernet cabling. Crucially, the fiber-optic cable is unlikely to run directly into a sensitive area like a CEO’s office — the very place where eavesdropping would be most relevant.

Eavesdropping setup

Schematic representation of the eavesdropping setup on the attacker’s side. Source

And here’s a rough idea of what the attacker’s equipment would look like. Using special tech, they send optical pulses down the fiber-optic cable and measure the parameters of their transmission. Minor vibrations from footsteps in a room near the cable and nearby conversations trigger an effect known as Rayleigh scattering. This effect, in turn, causes minute deviations in the reflected signal’s parameters, which are then captured on the attacker’s end using a photosensor.

Recording the sound of footsteps

Recording the sound of footsteps in a room through a fiber-optic cable. Source

Before moving on to voice recording, the researchers decided to test a simpler scenario. To streamline the task, they ran the fiber-optic cable around the perimeter of the room and recorded footsteps — which generate significant vibration — rather than quiet conversation. This experiment was quite successful — the footsteps were audible. However, human speech proved to be far more challenging to capture. It turned out that even in laboratory conditions, intercepting a conversation between two people was impossible. To make further stages of the attack possible, the researchers assumed the presence of a bug at the fiber’s entry point into the room. This module is essentially a microphone that converts audio signals into vibrations on the optical cable. This amplifies the signal, making it possible to intercept on the attacker’s side.

Not-so-obvious advantages

But wait — if we’re talking about planting a bug in a room, why go through all the trouble with fiber optics? Why not just have the bug transmit the conversation on its own through cellular data or the building’s landline — especially since it’s already sitting right on top of it? Because there’s a distinct advantage to the researchers’ proposed attack scenario.

A regular bug transmitting audio over a cellular network or through the internet is fairly easy to detect, whereas a transmitter relaying data via fiber-optic cable vibrations can operate much more stealthily. Such a tap would be relatively easy to implant during the installation of network equipment, and harder to detect using traditional bug-sweeping tools.

Another major benefit of this hypothetical attack is that the eavesdropping can take place kilometers away from the target room — the attacker wouldn’t have to put themselves at extra risk by being near the target. Theoretically, one could also imagine a scenario where a separate fiber-optic cable is run into a room solely for surveillance purposes without raising much suspicion from those being surveilled.

Practical takeaways

If we frame the question as, “Can attackers remotely eavesdrop on any room that has fiber-optic cabling?” the answer is no; it’s still impossible. However, this work by the Hong Kong researchers, which highlights quirks of a common data transmission medium, demonstrates a technically feasible — albeit unlikely and quite expensive to execute — scenario for a targeted attack.

Kaspersky official blog – Read More