The Olympic Games are more than just a massive celebration of sports; they’re a high-stakes business. Officially, the projected economic impact of the Winter Games — which kicked off on February 6 in Italy — is estimated at 5.3 billion euros. A lion’s share of that revenue is expected to come from fans flocking in from around the globe, with over 2.5 million tourists predicted to visit Italy. Meanwhile, those staying home are tuning in via TV and streaming. According to the platforms, viewership ratings are already hitting their highest peaks since 2014.
But while athletes are grinding for medals and the world is glued to every triumph and heartbreak, a different set of “competitors” has entered the arena to capitalize on the hype and the trust of eager fans. Cyberscammers of all stripes have joined an illegal race for the gold, knowing full well that a frenzy is a fraudster’s best friend.
Kaspersky experts have tracked numerous fraudulent schemes targeting fans during these Winter Games. Here is how to avoid frustration in the form of fake tickets, non-existent merch, and shady streams, so you can keep your cash and personal data safe.
Tickets to nowhere
The most popular scam on this year’s circuit is the sale of non-existent tickets. Usually, there are far fewer seats at the rinks and slopes than there are fans dying to see the main events. In a supply-and-demand crunch, people scramble for any chance to snag those coveted passes, and that’s when phishing sites — clones of official vendors — come to the “rescue”. Using these, bad actors fish for fans’ payment details to either resell them on the dark web or drain their accounts immediately.
This is what a fraudulent site selling fake Olympic tickets looks like
Remember: tickets for any Olympic event are sold only through the authorized Olympic platform or its listed partners. Any third-party site or seller outside the official channel is a scammer. We’re putting that play in the penalty box!
A fake goalie mitt, a counterfeit stick…
Dreaming of a Sydney Sweeney — sorry, Sidney Crosby — jersey? Or maybe you want a tracksuit with the official Games logo? Scammers have already set up dozens of fake online stores just for you! To pull off the heist, they use official logos, convincing photos, and padded rave reviews. You pay, and in return, you get… well, nothing but a transaction alert and your card info stolen.
A fake online store for Olympic merchandise
Naive shoppers are being lured with gifts: “free” mugs and keychains featuring the Olympic mascot
And a hefty “discount” on pins
I want my Olympic TV!
What if you prefer watching the action from the comfort of your couch rather than trekking from stadium to stadium, but you’re not exactly thrilled about paying for a pricey streaming subscription? Maybe there’s a free stream out there?
The bogus streaming service warns you right away that you can’t watch just like that — you have to register. But hey, it’s free!
Another “media provider” fishes for emails to build spam lists or for future phishing…
…But to watch the “free” broadcast, you have to provide your personal data and credit card info
Sure thing! Five seconds of searching and your screen is flooded with dozens of “cheap”, “exclusive”, or even “free” live streams. They’ve got everything from figure skating to curling. But there’s a catch: for some reason — even though it’s supposedly free — a pop-up appears asking for your credit card details.
You type them in, hit “Play”, but instead of the long-awaited free skate program, you end up on a webcam ad site or somewhere even sketchier. The result: no show for you. At best, you were just used for traffic arbitrage; at worst, they now have access to your bank account. Either way, it’s a major bummer.
Defensive tactics
Scammers have been playing sports fans for years, and their payday depends entirely on how well they can mimic official portals. To stay safe, fans should mount a tiered defense: install reliable security software to block phishing, keep a sharp eye on every URL you visit, and if something feels even slightly off, never, ever enter your personal or payment info.
Stick to authorized channels for tickets. Steer clear of third-party resellers and always double-check info on the official Olympic website.
Use legitimate streaming services. Read the reviews and don’t hand over your credit card details to unverified sites.
Be wary of Olympic merch and gift vendors. Don’t get baited by “exclusive” offers or massive discounts from unknown stores. Only buy from official retail partners.
Avoid links in emails, direct messages, texts, or ads offering free tickets, streams, promo codes, or prize giveaways.
Deploy a robust security solution. For instance, Kaspersky Premium automatically shuts down phishing attempts and blocks dangerous websites, malicious ads, and credit card skimmers in real time.
Want to see how sports fans were targeted in the past? Check out our previous posts:
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2026-02-12 16:06:362026-02-12 16:06:36I bought, I saw, I attended: a quick guide to staying scam-free at the Olympics | Kaspersky official blog
Cyble Research and Intelligence Labs (CRIL) observed large-scale, systematic exposure of ChatGPT API keys across the public internet. Over 5,000 publicly accessible GitHub repositories and approximately 3,000 live production websites were found leaking API keys through hardcoded source code and client-side JavaScript.
GitHub has emerged as a key discovery surface, with API keys frequently committed directly into source files or stored in configuration and .env files. The risk is further amplified by public-facing websites that embed active keys in front-end assets, leading to persistent, long-term exposure in production environments.
CRIL’s investigation further revealed that several exposed API keys were referenced in discussions mentioning the Cyble Vision platform. The exposure of these credentials significantly lowers the barrier for threat actors, enabling faster downstream abuse and facilitating broader criminal exploitation.
These findings underscore a critical security gap in the AI adoption lifecycle. AI credentials must be treated as production secrets and protected with the same rigor as cloud and identity credentials to prevent ongoing financial, operational, and reputational risk.
Key Takeaways
GitHub is a primary vector for the discovery of exposed ChatGPT API keys.
Public websites and repositories form a continuous exposure loop for AI secrets.
Attackers can use automated scanners and GitHub search operators to harvest keys at scale.
Exposed AI keys are monetized through inference abuse, resale, and downstream criminal activity.
Most organizations lack monitoring for AI credential misuse.
AI API keys are production secrets, not developer conveniences. Treating them casually is creating a new class of silent, high-impact breaches.
Richard Sands, CISO, Cyble
Overview, Analysis, and Insights
“The AI Era Has Arrived — Security Discipline Has Not”
We are firmly in the AI era. From chatbots and copilots to recommendation engines and automated workflows, artificial intelligence is no longer experimental. It is production-grade infrastructure with end-to-end workflows and pipelines. Modern websites and applications increasingly rely on large language models (LLMs), token-based APIs, and real-time inference to deliver capabilities that were unthinkable just a few years ago.
This rapid adoption has also given rise to a development culture often referred to as “vibe coding.” Developers, startups, and even enterprises are prioritizing speed, experimentation, and feature delivery over foundational security practices. While this approach accelerates innovation, it also introduces systemic weaknesses that attackers are quick to exploit.
One of the most prevalent and most dangerous of these weaknesses is the widespread exposure of hardcoded AI API keys across both source code repositories and production websites.
A rapidly expanding digital risk surface is likely to increase the likelihood of compromise; a preventive strategy is the best approach to avoid it. Cyble Vision provides users with insight into exposures across the surface, deep, and dark web, generating real-time alerts for them to view and take action.
SOC teams will be able to leverage this data to remediate compromised credentials and their associated endpoints. With Threat Actors potentially weaponizing these credentials to carry out malicious activities (which will then be attributed to the affected user(s)), proactive intelligence is paramount to keeping one’s digital risk surface secure.
“Tokens are the new passwords — they are being mishandled.”
AI platforms use token-based authentication. API keys act as high-value secrets that grant access to inference capabilities, billing accounts, usage quotas, and, in some cases, sensitive prompts or application behavior. From a security standpoint, these keys are equivalent to privileged credentials.
Despite this, ChatGPT API keys are frequently embedded directly in JavaScript files, front-end frameworks, static assets, and configuration files accessible to end users. In many cases, keys are visible through browser developer tools, minified bundles, or publicly indexed source code. An example of the keys hardcoded in popular reputable websites is shown below (see Figure 1)
Figure 1 – Public Websites exposing API keys
This reflects a fundamental misunderstanding: API keys are being treated as configuration values rather than as secrets. In the AI era, that assumption is dangerously outdated. In some cases, this happens unintentionally, while in others, it’s a deliberate trade-off that prioritizes speed and convenience over security.
When API keys are exposed publicly, attackers do not need to compromise infrastructure or exploit vulnerabilities. They simply collect and reuse what is already available.
CRIL has identified multiple publicly accessible websites and GitHub Repositories containing hardcoded ChatGPT API keys embedded directly within client-side code. These keys are exposed to any user who inspects network requests or application source files.
A commonly observed pattern resembles the following:
The prefix “sk-proj-“ typically represents a project-scoped secret key associated with a specific project environment, inheriting its usage limits and billing configuration. The “sk-svcacct-“ prefix generally denotes a service account–based key intended for automated backend services or system integrations.
Regardless of type, both keys function as privileged authentication tokens that enable direct access to AI inference services and billing resources. When embedded in client-side code, they are fully exposed and can be immediately harvested and misused by threat actors.
GitHub as a High-Fidelity Source of AI Secrets
Public GitHub repositories have emerged as one of the most reliable discovery surfaces for exposed ChatGPT API keys. During development, testing, and rapid prototyping, developers frequently hardcode OpenAI credentials into source code, configuration files, or .env files—often with the intent to remove or rotate them later. In practice, these secrets persist in commit history, forks, and archived repositories.
CRIL analysis identified over 5,000 GitHub repositories containing hardcoded OpenAI API keys. These exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. In many cases, the repositories were actively maintained or recently updated, increasing the likelihood that the exposed keys were still valid at the time of discovery.
Notably, the majority of exposed keys were configured to access widely used ChatGPT models, making them particularly attractive for abuse. These models are commonly integrated into production workflows, increasing both their exposure rate and their value to threat actors.
Once committed to GitHub, API keys can be rapidly indexed by automated scanners that monitor new commits and repository updates in near real time. This significantly reduces the window between exposure and exploitation, often to hours or even minutes.
Public Websites: Persistent Exposure in Production Environments
Beyond source code repositories, CRIL observed widespread exposure of ChatGPT API keys directly within production websites. In these cases, API keys were embedded in client-side JavaScript bundles, static assets, or front-end framework files, making them accessible to any user inspecting the application.
CRIL identified approximately 3,000 public-facing websites exposing ChatGPT API keys in this manner. Unlike repository leaks, which may be removed or made private, website-based exposures often persist for extended periods, continuously leaking secrets to both human users and automated scrapers.
These implementations frequently invoke ChatGPT APIs directly from the browser, bypassing backend mediation entirely. As a result, exposed keys are not only visible but actively used in real time, making them trivial to harvest and immediately abuse.
As with GitHub exposures, the most referenced models were highly prevalent ChatGPT variants used for general-purpose inference, indicating that these keys were tied to live, customer-facing functionality rather than isolated testing environments. These models strike a balance between capability and cost, making them ideal for high-volume abuse such as phishing content generation, scam scripts, and automation at scale.
Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.
Kautubh Medhe, CPO, Cyble
From Exposure to Exploitation: How Attackers Monetize AI Keys
Threat actors continuously monitor public websites, GitHub repositories, forks, gists, and exposed JavaScript bundles to identify high-value secrets, including OpenAI API keys. Once discovered, these keys are rapidly validated through automated scripts and immediately operationalized for malicious use.
Compromised keys are typically abused to:
Execute high-volume inference workloads
Generate phishing emails, scam scripts, and social engineering content
Drain victim billing accounts and exhaust API credits
In certain cases, CRIL, using Cyble Vision, also identified several of these keys that originated from exposures and were subsequently leaked, as noted in our spotlight mentions. (see Figure 2 and Figure 3)
Figure 2 – Cyble Vision indicates API key exposure leakFigure 3 – API key leak content
Unlike traditional conventions, AI API activity is often not integrated into centralized logging, SIEM monitoring, or anomaly detection frameworks. As a result, malicious usage can persist undetected until organizations encounter billing spikes, quota exhaustion, degraded service performance, or operational disruptions.
Conclusion
The exposure of ChatGPT API keys across thousands of websites and tens of thousands of GitHub repositories highlights a systemic security blind spot in the AI adoption lifecycle. These credentials are actively harvested, rapidly abused, and difficult to trace once compromised.
As AI becomes embedded in business-critical workflows, organizations must abandon the perception that AI integrations are experimental or low risk. AI credentials are production secrets and must be protected accordingly.
Failure to secure them will continue to expose organizations to financial loss, operational disruption, and reputational damage.
SOC teams should take the initiative to proactively monitor for exposed endpoints using monitoring tools such as Cyble Vision, which provides users with real-time alerts and visibility into compromised endpoints.
This, in turn, allows them to take corrective action to identify which endpoints and credentials were compromised and secure any compromised endpoints as soon as possible.
Our Recommendations
Eliminate Secrets from Client-Side Code
AI API keys must never be embedded in JavaScript or front-end assets. All AI interactions should be routed through secure backend services.
Enforce GitHub Hygiene and Secret Scanning
Prevent commits containing secrets through pre-commit hooks and CI/CD enforcement
Continuously scan repositories, forks, and gists for leaked keys
Assume exposure once a key appears in a public repository and rotate immediately
Maintain a complete inventory of all repositories associated with the organization, including shadow IT projects, archived repositories, personal developer forks, test environments, and proof-of-concept code
Enable automated secret scanning and push protection at the organization level
Apply Least Privilege and Usage Controls
Restrict API keys by project scope and environment (separate dev, test, prod)
Apply IP allowlisting where possible
Enforce usage quotas and hard spending limits
Rotate keys frequently and revoke any exposed credentials immediately
Avoid sharing keys across teams or applications
Implement Secure Key Management Practices
Store API keys in secure secret management systems
Avoid storing keys in plaintext configuration files
Use environment variables securely and restrict access permissions
Do not log API keys in application logs, error messages, or debugging outputs
Ensure keys are excluded from backups, crash dumps, and telemetry exports
Monitor AI Usage Like Cloud Infrastructure
Establish baselines for normal AI API usage and alert on anomalies such as spikes, unusual geographies, or unexpected model usage.
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2026-02-12 12:06:402026-02-12 12:06:40When AI Secrets Go Public: The Rising Risk of Exposed ChatGPT API Keys
Cisco Talos is back with another inside look at the people who keep the internet safe. This time, Amy chats with Ryan Liles, who bridges the gap between Cisco’s product teams and the third-party testing labs that put Cisco products through their paces. Ryan pulls back the curtain on the delicate dance of technical diplomacy, how he keeps his cool when the stakes are high, and how speaking up has helped him reshape industry standards. Plus, get a glimpse of the hobbies that keep him recharged when he’s off the clock.
Amy Ciminnisi: Ryan, you shared that you are on the Vulnerability Research and Discovery team, but you work in a little bit of a different niche. Can you talk a little bit about what you do?
Ryan Liles: My primary role is to work with all of the Cisco product teams. So anybody that Talos feeds security intelligence to — Firewall, Email, Endpoint — anybody that we write content for, I work with their product teams to help get their products tested externally. Cisco can come out all day and say our products are the best at what they do, but no one’s going to take our word for it. So we have to get someone else to say that for us, and that’s where I come in.
AC: Third-party testing involves coordinating with external organizations and standards groups. You mentioned it can be difficult sometimes and you have to choose your words carefully. What are some of the biggest challenges you face when working across these various groups? Do you have a particular method of overcoming them?
RL: The reason I fell into this role at Cisco is because of all the contacts I made while working at NSS Labs. The third-party testing industry for security appliances is like a lot of the rest of the security industry — very small. Even though there’s a large dollar amount tied to it in the marketplace, the number of people in it is very small. So you’re going to run into the same personalities over and over again throughout your career in security. Because I try to generally be friendly with those people and keep my network alive, I have a lot of personal relationships that I can leverage when it comes to having difficult conversations.
By difficult conversations, I mean if we’ve found a bug in the product or if a third-party test lab acquired our product through means not involving us and did some testing that didn’t turn out great, I can have the conversations with them where we discuss both technically what was their testing methodology and how did they deploy the products. If there were instances where we feel maybe they didn’t deploy the product correctly or there’s some flaws in their methodology, being able to have that kind of discussion with a test lab, while not frustrating them, takes a lot of diplomatic skills. I think that’s the biggest contributor to my success in this role — being able to have those conversations, leaving emotion out of things, and just sticking to the technical facts and saying, here’s what went wrong, here’s what went right, let’s figure out the best way to fix this. That has really contributed to how Cisco and Talos interface with third-party testing labs and maintain those relationships.
Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos.
In enterprise SaaS, unclear security decisions carry real cost. False positives disrupt customers, while missed threats expose the business.
A Fortune 500 cloud provider addressed this risk by embedding ANY.RUN into SOC investigations, giving analysts the behavioral evidence needed to reduce escalations, improve triage confidence, and make proportionate response decisions at scale.
Company Context and Security Scope
The organization is a Fortune 500 enterprise SaaS provider headquartered in North America, supporting enterprise customers across multiple regions and regulatory environments, with a workforce in the tens of thousands.
Industry: Enterprise cloud software and SaaS, where customers expect strong security, high availability, and strict data protection.
Environment: Not endpoint-centric; security coverage spans a large multi-tenant SaaS platform, internal corporate environments, and a broad ecosystem of integrations, partners, and third-party access, each introducing distinct threat characteristics
Security organization: A mature, multi-tier structure with dedicated SOC, incident response, threat hunting, and security engineering functions operating across regions.
Core Challenges: Volume, Ambiguity, and Escalation Friction
When we spoke with the security engineer, we expected the usual story, missing visibility, gaps in tooling, not enough telemetry. But the discussion quickly showed the real problem was somewhere else.
The issue wasn’t seeing what was happening. The team already had plenty of signals coming in every day: authentication events, API activity, admin actions, and a constant flow of partner and integration traffic. The issue was that most of it was legitimate, which made the dangerous moments harder to prove early.
On the surface, nothing looked wrong. But unclear alerts were consuming more and more of our time. We were drowning in uncertainty. For a company serving global customers, that level of ambiguity wasn’t acceptable.
During our discussion, it became clear that the pressure point was volume + ambiguity.
Key challenges:
Too many alerts that were suspicious, but not provably malicious
Tier-1 escalations driven by incomplete signals
Tier-2 time lost on validation and confirmation work
Uneven triage speed across regions and shifts
Extra rework from low-confidence early decisions
Constant need to balance customer impact vs. security risk
Defining the Right Direction for Triage and Response
Once we clarified the challenges, the priority became clear: make early triage decisions more certain, without increasing operational risk in a multi-tenant SaaS environment.
The team focused on:
Reducing uncertainty during triage
Improving confidence in early-stage decisions
Separating isolated external issues from broader attack patterns and benign platform behavior
Supporting proportional response, not aggressive automation
Solution: Behavior-Based Evidence in Early Investigations
To reach the clarity they were aiming for, the team needed a way to introduce reliable behavioral evidence into early-stage investigations, without disrupting existing SOC workflows or forcing premature automation.
ANY.RUN closed this gap by giving analysts a safe way to observe the real behavior behind a suspicious file or link, replacing guesswork based on reputation, static indicators, or incomplete external signals with direct, controlled evidence.
The biggest change was moving from ‘this looks suspicious’ to ‘this is what it actually does.’ That kind of controlled, repeatable proof is what makes confident decisions possible, especially when threats originate outside your perimeter.
Rather than accelerating response blindly, this approach helped the SOC make earlier, calmer, and more proportional decisions within the same operational model.
Replace guesswork with observable threat behavior Help your SOC act with clarity and confidence
Process Impact: Phishing and External Threat Triage
Phishing was one of the clearest use cases for the new approach. Many alerts weren’t obviously malicious, but they couldn’t be ignored either, especially when they involved links, attachments, or multi-step redirected flows coming from outside the company’s perimeter.
With behavior-based validation provided by ANY.RUN sandbox, Tier-1 no longer had to rely on “looks suspicious” signals to make the first call. Analysts could safely interact with artifacts, observe what actually happened, and capture the full chain; redirects, credential capture, payload delivery, or follow-on behavior.
In practice, this made a visible difference: in roughly 90% of cases, analysts were able to surface the full attack chain within about 60 seconds, turning unclear alerts into evidence-backed decisions early in the workflow.
ANY.RUN’s sandbox exposed a multi-stage phishing attack with the final fake Microsoft login page in 33 seconds
A big part of the improvement came also from automated interactivity. Instead of spending time manually clicking through steps that attackers use to slow investigations, CAPTCHAs, multi-hop redirects, or links hidden behind QR codes, analysts could let the sandbox mimic user behavior and capture the full sequence safely. That meant faster verdicts, less friction, and more confidence at Tier-1 without relying on guesswork.
ANY.RUN’s sandbox enables automated detonation of complex attacks, including QR codes
These shifts improved day-to-day operations:
More cases closed confidently at Tier-1 when behavior was clearly benign or clearly malicious
Escalations became more intentional, with evidence attached instead of uncertainty
Tier-2 spent less time on basic confirmation and more time on true incident work
Triage became more consistent across regions and shifts
64% of Fortune 500 companies rely on ANY.RUN to strengthen their SOC operations
While behavioral evidence clarified what a threat does, the team also needed faster answers to what it means in the broader landscape.
To close that gap, they decided to extend their workflow with ANY.RUN’s Threat Intelligence capabilities, adding immediate context to artifacts discovered during triage.
Whether infrastructure was linked to known campaigns
If observed behavior matched publicly reported threats
How relevant an external signal was to their specific environment
We notice how our threat hunting is getting more grounded and faster to validate. When a hunt intersects with external artifacts, phishing payloads, suspicious links, or malware samples, we can confirm the behavior and enrich the hypothesis quickly, instead of spending time on patterns that stay theoretical.
At the same time, Threat Intelligence Feeds delivered behavior-verified indicators that could be correlated inside existing detection and monitoring pipelines, strengthening visibility without adding noise.
TI Lookup connects isolated indicators with real live attacks in seconds
Together, these solutions allowed the SOC to move from isolated alert handling toward context-aware investigation, where decisions were supported not only by observed behavior, but also by real-world threat activity.
We started using TI Feeds as an enrichment layer on top of our existing threat intelligence stack. What stood out for us is that the indicators are tied to sandbox-verified behavior, so we’re not reacting to blind IOCs, we’re adding context we can actually trust.
As a result, analysts spent less time searching for background information and more time responding with clarity and confidence.
99% unique threat intel for your SOC Catch threats early. Act with clear evidence.
As the new workflow stabilized, the team began to see consistent improvements across investigation quality, escalation patterns, and overall SOC efficiency:
Tangible Gains Across SOC
Fewer unnecessary Tier-2 escalations decreased approximately 35%, driven by stronger early-stage evidence
Average triage time per suspicious file or link dropped by 40% across regions and analyst shifts
Higher-quality incident response handoffs, supported by behavioral proof and threat context
Over 82% of ambiguous alerts were resolved without secondary review, allowing senior responders to focus on confirmed incidents
Overall MTTR improvement by 24%, achieved through faster scoping and clearer decisions
What SOC Managers Reported After the Workflow Shift
Beyond individual investigations, SOC managers began to notice improvements in how decisions were communicated, reviewed, and justified across the organization.
With clearer behavioral evidence and immediate threat context, plus auto-generated investigation reports and built-in collaboration capabilities, updates to stakeholders became more straightforward, and post-incident analysis required far less backtracking.
Team management inside ANY.RUN sandbox for faster collaboration
Cases were easier to standardize across regions and shifts because the same evidence, context, and artifacts were captured and shared in a consistent way. Escalations increasingly arrived with supporting proof rather than open questions, which reduced “back-and-forth” and helped keep response actions proportional to real risk.
From a manager’s perspective, the biggest change was consistency. Decisions were easier to stand behind because the evidence and reporting were already there, and teams could collaborate on the same case without losing context.
Importantly, this progress didn’t require changing the overall security strategy. Instead, it reduced friction inside an already mature SOC model, helping ensure that when action was taken, it was taken for the right reasons.
Reduce MTTR with clear investigation outcomes. Help your SOC respond with confidence at every tier
Conclusion: From Uncertainty to Confident, Proportional Response
By embedding ANY.RUN into daily SOC operations, this Fortune 500 SaaS provider reduced ambiguity in early triage and strengthened decision-making across the entire workflow.
We just stopped losing time to uncertainty. Now we can confirm what’s happening faster and escalate only when it actually makes sense.
With behavioral evidence, immediate threat context, and consistent reporting built into investigations, the SOC became more predictable, more efficient, and better aligned with the need for proportional response at enterprise scale.
About ANY.RUN
ANY.RUN is part of modern SOC workflows, integrating into existing processes and strengthening the full operational cycle across Tier 1, Tier 2, and Tier 3.
It supports every stage of investigation; from exposing real behavior through safe detonation, to enriching findings with broader threat context, to delivering continuous intelligence that helps teams move faster and make confident decisions.
Today, more than 600,000 security professionals and 15,000 organizations rely on ANY.RUN to accelerate triage, reduce unnecessary escalations, and stay ahead of evolving phishing and malware campaigns.
Behavioral analysis allows analysts to observe what a suspicious file or link actually does in a controlled environment. This removes guesswork, enables earlier confident decisions at Tier-1, and reduces unnecessary escalations.
Can ANY.RUN integrate into existing SOC workflows?
Yes. ANY.RUN is designed to fit into mature SOC environments without requiring workflow redesign, supporting investigation, enrichment, and reporting across Tier-1, Tier-2, and Tier-3 operations.
How quickly can analysts confirm a phishing attack?
In many real investigations, the full attack chain can be exposed within seconds through automated interactivity and behavioral observation, allowing faster evidence-based classification.
Who typically uses ANY.RUN in enterprise environments?
Security teams across enterprises, MSSPs, and SOC organizations worldwide rely on ANY.RUN to accelerate triage, improve investigation clarity, and support proportional response to modern threats.
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2026-02-12 12:06:392026-02-12 12:06:39Fortune 500 Tech Enterprise Speeds up Triage and Response with ANY.RUN’s Solutions
For years, many government contractors treated cybersecurity compliance as a technical checklist, important, certainly, but often siloed within IT departments. That mindset is no longer tenable. The U.S. Department of Justice (DOJ) has announced that cybersecurity representations to the federal government are now squarely within the enforcement core of the False Claims Act (FCA). What began in October 2021 as the Civil Cyber-Fraud Initiative has matured into a sustained and expanding enforcement priority.
The numbers alone signal that this is not a passing trend. In January 2026, the DOJ announced that it recovered $52 million through nine cybersecurity-related FCA settlements in the fiscal year ending September 2025. Those recoveries formed part of a record-setting $6.8 billion in total False Claims Act recoveries that year.
Even more striking, DOJ reported that cybersecurity fraud resolutions have more than tripled in each of the past two years, evidence of what Deputy Assistant Attorney General Brenna Jenny described as a “significant upward trajectory.”
The False Claims Act: From Initiative to Institutional Priority
When the DOJ launched the Civil Cyber-Fraud Initiative in October 2021, it stated that it would use the FCA, complete with treble damages and statutory penalties, to pursue entities that knowingly submit false claims tied to cybersecurity obligations. The misconduct categories were specific and practical:
Delivering deficient cybersecurity products or services
Misrepresenting cybersecurity practices or protocols
Failing to monitor and report cybersecurity incidents as required
At the time, some viewed the initiative as an experiment. That view is no longer credible. Since October 2021, the DOJ has settled fifteen civil cyber-fraud cases under the FCA. More than half of those settlements were announced during the current administration, surpassing the total from the earlier years following the initiative’s launch. Civil cyber-fraud enforcement is now part of the DOJ’s routine FCA portfolio, not an edge case.
In remarks delivered on January 28, 2026, at the American Conference Institute’s Advanced Forum on False Claims and Qui Tam Enforcement, Jenny reaffirmed the administration’s commitment to this path. As the political official overseeing nationwide False Claims Act enforcement, she emphasized both the scale of recent recoveries and the continuing focus on cybersecurity.
Misrepresentation, Not Mere Breach
One of the most important clarifications in Jenny’s remarks addressed a persistent misconception: FCA cybersecurity cases are “not about data breaches,” but are instead “premised on misrepresentations.” That distinction matters.
Breaches occur even in well-managed environments. The DOJ has signaled that it is not interested in punishing companies simply because they were victims of sophisticated attacks. Instead, the FCA becomes relevant when an organization tells the government it complies with cybersecurity requirements and, in reality, does not.
Under the False Claims Act, liability turns on knowingly false or misleading claims for payment. In the cybersecurity context, this can include explicit certifications of compliance or even implied representations embedded in invoices and contract submissions. If a contractor seeks payment while failing to meet required cybersecurity standards, the DOJ may argue that the claim itself carries an implied assertion of compliance.
That theory has teeth, particularly when paired with the FCA’s treble damages framework.
Defense, Civilian Agencies, and Expanding Standards
The majority of DOJ’s cybersecurity-related FCA settlements, nine out of fifteen, have involved U.S. Department of Defense (DoD) cybersecurity requirements. The DoD recently finalized the Cybersecurity Maturity Model Certification (CMMC), introducing structured and, for many contractors, third-party verification requirements. These developments create more objective benchmarks against which representations can be tested.
Civilian agencies are moving in the same direction. In January 2026, the General Services Administration issued a procedural guide governing the protection of Controlled Unclassified Information (CUI) on nonfederal contractor systems. Like the CMMC framework, it contemplates extensive third-party assessments. Across the executive branch, scrutiny of contractor cybersecurity programs is intensifying.
As federal dollars increasingly flow with cybersecurity conditions attached, across defense contractors, IT service providers, healthcare benefit administrators, research universities, and even entities adjacent to prime contractors, the FCA provides the DOJ with a powerful lever to enforce those conditions.
Whistleblowers as Catalysts
No discussion of the False Claims Act is complete without acknowledging the central role of whistleblowers. Qui tam provisions allow private individuals to bring FCA claims on behalf of the government and potentially receive up to thirty percent of any recovery. Defendants are also responsible for the whistleblower’s attorneys’ fees.
Jenny noted that whistleblowers have continued to play a large role in cyber-fraud cases. That should not surprise anyone familiar with FCA enforcement. Cybersecurity compliance failures often surface internally before they become public. When employees believe their concerns are ignored, or worse, concealed, the FCA offers a direct channel to the DOJ.
Organizations that treat internal cybersecurity complaints as routine HR matters underestimate the risk. A credible internal reporting system, thorough investigation processes, and transparent remediation efforts are not just governance best practices; they are FCA risk mitigation tools.
In some circumstances, companies may need to evaluate disclosure obligations to the government, whether mandatory or voluntary. DOJ policies have increasingly emphasized cooperation credit in the cybersecurity arena, making early, good-faith engagement a strategic consideration.
Governance Is Now a Legal Issue
The DOJ’s approach refrains from considering cybersecurity as more than a technical discipline. It is a representation issue, a contract performance issue, and ultimately an FCA issue. That reality demands cross-functional alignment.
Organizations doing business with the federal government should ensure:
Clearly defined roles and accountability for cybersecurity compliance.
A comprehensive understanding of contractual and regulatory obligations.
Coordinated reporting and escalation channels for cybersecurity concerns.
Ongoing assessments of cybersecurity posture, including documented gap analyses and remediation plans supported by qualified experts.
These elements are not aspirational. They form the evidentiary record that may determine whether a dispute becomes an expensive False Claims Act investigation.
The New Baseline
The DOJ’s $6.8 billion in fiscal year 2025 False Claims Act recoveries, including $52 million from cybersecurity settlements, mark a new shift. Cybersecurity is now central to DOJ FCA enforcement, not a secondary issue.
For contractors and grant recipients, accuracy in cybersecurity representations is critical. Under the False Claims Act, what an organization tells the government about its security posture must align with reality. Gaps between certification and practice can quickly escalate into costly investigations.
Strengthening visibility across attack surfaces, monitoring emerging threats, and validating controls are essential steps in reducing FCA risk. Platforms like Cyble, recognized in Gartner Peer Insights for Threat Intelligence, help organizations maintain continuous intelligence, detect exposures early, and support defensible cybersecurity governance.
Book a free demo with Cyble to see how AI-powered threat intelligence can help your organization stay ahead of risk and confidently support its cybersecurity commitments.
RESEARCH DISCLAIMER: This analysis examines the most recent and actively maintained repositories of OTP & SMS bombing tools to understand current attack capabilities and targeting patterns. All statistics represent observed patterns within our research sample and should be interpreted as indicative trends rather than definitive totals of the entire OTP bombing ecosystem. The threat landscape is continuously evolving with new tools and repositories emerging regularly.
Executive Summary
Cyble Research and Intelligence Labs (CRIL) identified sustained development activity surrounding SMS, OTP, and voice-bombing campaigns, with evidence of technical evolution observed through late 2025 and continuing into 2026. Analysis of multiple development artifacts reveals progressive expansion in regional targeting, automation sophistication, and attack vector diversity.
Recent activity observed through September and October 2025, combined with new application releases in January 2026, indicates ongoing campaign persistence. The campaigns demonstrate technical maturation from basic terminal implementations to cross-platform desktop applications with automated distribution mechanisms and advanced evasion capabilities.
CRIL’s investigation identified coordinated abuse of authentication endpoints across the telecommunications, financial services, e-commerce, ride-hailing, and government sectors, collectively targeting infrastructure in West Asia, South Asia, and Eastern Europe.
Key Takeaways
Persistent Evolution: Repository modifications observed through late 2025, with new regional variants released in January 2026
Cross-Platform Advancement: Transition from terminal tools to Electron-based desktop applications with GUI and auto-update mechanisms
Broad Infrastructure Exposure: ~843 authentication endpoints across ~20 repositories spanning multiple industry verticals
Low Detection Rates: Multi-stage droppers and obfuscation techniques evade antivirus detection at the time of analysis
Discovery and Attribution
What began in the early 2020s as isolated pranks among tech-savvy individuals has evolved into a sophisticated ecosystem of automated harassment tools. SMS bombing – the practice of overwhelming a phone number with a barrage of automated text messages – initially emerged as rudimentary Python scripts shared on coding forums.
These early implementations were crude, targeting only a handful of regional service providers and using manually collected API endpoints. Given the dramatic transformation of the digital threat landscape in recent years, driven by the proliferation of public code repositories, the commoditization of attack tools, and the increasing sophistication of threat actors.
Our investigation into this evolving threat began with routine monitoring of malicious code repositories and underground discussion forums. What we discovered was far more extensive: a well-organised, rapidly expanding ecosystem characterized by cross-platform tool development, international collaboration among threat actors, and an alarming trend toward commercialization.
Repository Analysis and Dataset Composition
Malicious actors have weaponised GitHub as a distribution platform for SMS and OTP-bombing tools, creating hundreds of malicious repositories since 2022. Our investigation analyzed around 20 of the most active and recently maintained repositories to characterize current attack capabilities.
Across these repositories, there are ~843 vulnerable, catalogued API endpoints from legitimate organizations: e-commerce platforms, financial institutions, government services, and telecommunications providers.
Each endpoint lacks adequate rate limiting or CAPTCHA protection, enabling automated exploitation. Target lists span seven geographic regions, with concentrated focus on India, Iran, Turkey, Ukraine, and Eastern Europe.
Repository maintainers provide tools in seven programming languages and frameworks, from simple Python scripts to cross-platform GUI applications. This diversity enables attackers with minimal technical knowledge to execute harassment campaigns without understanding the underlying exploitation mechanics.
Attack Ecosystem: By The Numbers
Our analysis of active SMS bombing repositories gives us an insight into the true scale and sophistication of this threat landscape:
Figure 1: Research Overview – Key Metrics from Sample Analysis
Regional Targeting Distribution
Iran-focused endpoints dominate the observed sample at 61.68% (~520 endpoints), followed by India at 16.96% (~143 endpoints). This concentration suggests coordinated development efforts targeting specific telecommunications infrastructure.
Figure 2: Regional Distribution of Observed Endpoints (n ≈ 843)
Web-Based SMS Bombing Services
Accessibility and Threat Escalation
In parallel with the open-source repository ecosystem, a thriving commercial sector of web-based SMS-bombing services exists.
These platforms represent a significant escalation in threat accessibility, removing all technical barriers to conducting attacks. Unlike repository-based tools that require users to download code, configure environments, and execute commands, these web services offer point-and-click interfaces accessible from any browser or mobile device.
Deceptive Marketing Practices
Our analysis identified numerous active web services operating openly via search-engine-indexed domains. These services employ sophisticated marketing strategies, positioning themselves as ‘prank tools’ or ‘SMS testing services’ while providing the exact functionality required for harassment campaigns.
Although these websites present themselves as benign prank tools, they operate a predatory data-collection model in which users’ phone numbers are systematically harvested for secondary exploitation. These collected contact numbers are subsequently used for spam campaigns and scam operations, or monetized through resale as lead lists to third-party spammers and scammers. This creates a dual-threat model: users inadvertently expose both their targets and themselves to ongoing spam victimization, while platform operators profit from both service fees and the commodification of harvested contact data.
Technical Analysis
Attack Methodology
SMS bombing attacks follow a predictable workflow that exploits weaknesses in API design and implementation.
Attackers identify vulnerable OTP endpoints through multiple techniques:
Manual Testing: Identifying login pages and registration forms that trigger SMS verification
Automated Scanning: Using tools to probe common API paths like /api/send-otp, /verify/sms, /auth/send-code
Source Code Analysis: Examining mobile applications and web applications for hardcoded API endpoints
Shared Intelligence: Leveraging community-maintained lists of vulnerable endpoints on forums and GitHub
Industry Sector Targeting Patterns
Our analysis reveals systematic targeting across multiple industry verticals, with telecommunications and authentication services comprising nearly half of all observed endpoints.
Figure 5: Industry Sector Targeting Distribution (n ≈ 843 endpoints)
Phase 2: Tool Configuration
Modern SMS bombing tools require minimal setup:
Multi-threading: Simultaneous requests to multiple APIs
Proxy Support: Rotation of IP addresses to evade rate limiting
Randomization: Variable delays between requests to appear more legitimate
Persistence: Automatic retry mechanisms and error handling
Reporting: Real-time statistics on successful message deliveries
Attacker Technology Stack Evolution
A detailed analysis of the ~20 repositories reveals significant technical sophistication and platform diversification:
Figure 6: Technology Stack Distribution (n ≈ 20 repositories)
Phase 3: Attack Execution
Once configured, the tool initiates a flood of legitimate-looking API requests.
Attack Vector Prevalence Analysis
Our analysis reveals the distribution of attack methods across the ~843 observed endpoints:
Figure 7: Attack Vector Distribution (% of ~843 endpoints)
Technical Sophistication: Evasion Techniques
Analysis of the ~20 repositories reveals widespread adoption of anti-detection measures designed to bypass common security controls.
Figure 8: Evasion Technique Prevalence (% of ~20 repositories)
Impact Assessment
Individual Users
For end users targeted by SMS bombing attacks, the consequences include:
Impact Type
Description
Device Overload
Hundreds or thousands of incoming messages degrade device performance.
Communication Disruption
Legitimate messages are buried under spam, potentially leading to missed important notifications.
Inbox Capacity
SMS storage limits reached, preventing the receipt of new messages.
Battery Drain
Constant notifications deplete the affected device’s battery.
Based on analysis of successful bypass techniques across ~20 repositories, the following mitigation strategies are prioritized by effectiveness against observed attack patterns. Implementation of these controls addresses the primary exploitation vectors identified in our research.
For Service Providers (API Owners)
CRITICAL Priority
1. Implement Comprehensive Rate Limiting
Rationale
67% of targeted endpoints lack basic rate controls
Implementation
Per-IP Limiting: Maximum 5 OTP requests per hour. Per-Phone Limiting: Maximum 3 OTP requests per 15 minutes. Per-Session Limiting: Maximum 10 total verification attempts
Evidence
Would have blocked 81% of observed attack patterns
2. Deploy Dynamic CAPTCHA
Rationale
33% of tools exploit hardcoded reCAPTCHA tokens
Implementation
Use reCAPTCHA v3 with dynamic scoring. Rotate site keys regularly. Implement challenge escalation for suspicious behaviour
Evidence
Static CAPTCHA is defeated in most of the repositories
3. SSL/TLS Verification Enforcement
Rationale
75% of tools disable certificate validation to bypass security controls
Implementation
Enable HSTS (HTTP Strict Transport Security) headers, implement certificate pinning for mobile applications. Monitor and alert on certificate validation errors
Evidence
The most common evasion technique observed across repositories
HIGH Priority
Control
Rationale
Implementation Guidance
4. User-Agent Validation
58.3% of tools randomize User-Agent headers to evade detection
Maintain a whitelist of legitimate clients. Cross-validate User-Agent with other headers Flag mismatched browser/OS combinations
5. Request Pattern Analysis
Automated tools exhibit consistent timing patterns, unlike human behavior
Maintain a whitelist of legitimate clients. Cross-validate User-Agent with other headers. Flag mismatched browser/OS combinations
6. Phone Number Validation
Prevents abuse of number generation algorithms and invalid targets
Monitor for sub-100-ms request interval. Detect sequential API endpoint testing. Flag multiple failed CAPTCHA attempts
For Enterprises (API Consumers)
Mitigation Area
Recommended Actions
SMS Cost Monitoring
Set spending alerts at $100, $500, and $1,000 thresholds. Review daily SMS volumes for anomalies. Identify and investigate anomalous spikes immediately
Multi-Factor Authentication Hardening
Mandate rate-limiting requirements in service-level agreements Require CAPTCHA implementation on all OTP endpoints Request monthly security and abuse reports. Include SMS abuse liability clauses in contracts
Vendor Security Requirements
Mandate rate-limiting requirements in service-level agreements. Require CAPTCHA implementation on all OTP endpoints. Request monthly security and abuse reports. Include SMS abuse liability clauses in contracts
For Individuals
Protection Area
Recommended Actions
Number Protection
Document attack timing, volume, and sender information File police reports for harassment or threats. Request carrier assistance in blocking source numbers. Monitor all accounts for unauthorized access attempts
MFA Best Practices
Document attack timing, volume, and sender information. File police reports for harassment or threats. Request carrier assistance in blocking source numbers. Monitor all accounts for unauthorized access attempts
Incident Response
Prefer authenticator apps (Google Authenticator, Authy) over SMS Never approve unexpected or unsolicited MFA prompts. Contact the service provider immediately if SMS bombing occurs
Conclusion
The SMS/OTP bombing threat landscape has matured significantly between 2023 and 2026, evolving from simple harassment tools into sophisticated attack platforms with commercial distribution. Our analysis of ~20 repositories containing ~843 endpoints reveals systematic targeting across multiple industries and regions, with concentration in Iran (61.68%) and India (16.96%).
The emergence of Go-based high-performance tools, cross-platform GUI applications, and Telegram bot interfaces indicates the professionalization of this attack vector. With 75% of analyzed tools implementing SSL bypass and 58% using User-Agent randomization, defenders face sophisticated adversaries simultaneously employing multiple evasion techniques.
Organizations must prioritize comprehensive rate limiting, dynamic CAPTCHA implementation, and robust monitoring to achieve the projected 85%+ attack prevention effectiveness. The financial impact—potentially exceeding $50,000 monthly for unprotected endpoints—justifies immediate investment in defensive measures.
As the ecosystem continues to evolve, continuous monitoring of underground forums, repository activity, and emerging attack patterns remains essential for maintaining effective defenses against this persistent threat.
MITRE ATT&CK® Techniques
Tactic
Technique ID
Technique Name
Initial Access
T1190
Exploit Public-Facing Application
Execution
T1059.006
Command and Scripting Interpreter
Defense Evasion
T1036.005
Masquerading: Match Legitimate Name or Location
Defense Evasion
T1027
Obfuscated Files or Information
Defense Evasion
T1553.004
Subvert Trust Controls: Install Root Certificate
Defense Evasion
T1090.002
Proxy: External Proxy
Credential Access
T1110.003
Brute Force: Password Spraying
Credential Access
T1621
Multi-Factor Authentication Request Generation
Impact
T1499.002
Endpoint Denial of Service: Service Exhaustion Flood
How long would it take your team to realize ransomware is already running?
The newly identified ransomware families are already causing real business disruption. These threats can disrupt operations fast while also reducing visibility through stealth or cleanup activity, shrinking the time teams have to detect and contain the attack.
Here’s what you should know about BQTLock and GREENBLOOD, and how your team can detect and contain them before the impact escalates.
TL;DR
BQTLock is a stealthy ransomware-linked chain. It injects Remcos into explorer.exe, performs UAC bypass via fodhelper.exe, and sets autorun persistence to keep elevated access after reboot, then shifts into credential theft / screen capture, turning the incident into both ransomware + data breach risk.
GREENBLOOD is a Go-based ransomware built for rapid impact: ChaCha8-based encryption can disrupt operations in minutes, followed by self-deletion / cleanup attempts to reduce forensic visibility, plus TOR leak-site pressure to add extortion leverage beyond recovery.
In both cases, the critical window is pre-encryption / early execution: stealth setup (BQTLock) and fast encryption (GREENBLOOD) compress response time and raise cost fast.
Behavior-first triage in ANY.RUN’s Interactive Sandbox lets teams confirm key actions (process injection, UAC bypass, persistence, encryption, self-delete) during execution, extract IOCs immediately, and pivot into Threat Intelligence Lookup (e.g., commandLine:”greenblood”) to find related runs/variants and harden detections faster.
BQTLock: A Stealth Attack That Escalates into Data Theft and Business Risk
BQTLock is a ransomware-linked threat designed to hide in normal system activity, gain elevated privileges, and quietly prepare for deeper impact before defenders can react.
Instead of triggering obvious alerts immediately, it blends into trusted Windows processes and delays visible damage. This makes early detection difficult and increases the chance of data exposure, operational disruption, and financial loss for affected organizations.
How the Attack Was Revealed Through Behavioral Analysis
Using the ANY.RUN interactive sandbox, analysts were able to observe the full behavioral chain in real time.
GREENBLOOD is a newly observed Go-based ransomware built for speed, stealth, and pressure.
Rather than relying only on encryption, it combines rapid file locking, self-deletion to reduce forensic visibility, and data-leak threats through a TOR-based site. This transforms a technical incident into a full business crisis involving downtime, regulatory exposure, reputational damage, and recovery cost.
For organizations, the biggest risk is timing. By the moment encryption becomes visible, sensitive data may already be stolen and operational disruption already underway.
How the Attack Was Uncovered During Real-Time Detection and Triage
Inside the ANY.RUN interactive sandbox, ransomware behavior and cleanup activity became visible while execution was still unfolding, allowing early detection during the most critical stage of the attack.
GREENBLOOD exposed inside ANY.RUN sandbox in around 1 minute
The sandbox analysis exposed:
Fast ChaCha8-based encryption capable of disrupting operations within minutes
Attempts to delete the executable, limiting post-incident forensic visibility
Actionable indicators of compromise that enable earlier detection across endpoints and environments
Because this behavior is captured in real time, SOC teams can move directly from detection to triage and containment before encryption spreads widely.
Using ANY.RUN Threat Intelligence, teams can search for other sandbox analyses related to GREENBLOOD and track how the threat appears across different environments. A simple query like helps uncover related executions, recurring patterns, and potential variants that may not match the exact same sample.
Sandbox analyses related to GREENBLOOD displayed by TI Lookup for deeper investigation
This is valuable as ANY.RUN Threat Intelligence is connected to real sandbox activity from 15,000+ organizationsand 600,000+ security professionals. In practice, that means you can use community-scale execution evidence to strengthen detections faster, tune response playbooks, and stay ahead as ransomware changes.
Instant access to fresh threat intelligence
Streamline investigation and hunting with TI Lookup
BQTLock and GREENBLOOD may use different techniques, but they point to the same operational reality: modern ransomware is designed to create maximum business damage in the shortest possible time.
Instead of slow, visible attacks, today’s ransomware combines stealth, speed, privilege escalation, and data-leak pressure to overwhelm traditional response workflows before containment begins.
For most companies, the fallout comes in a few predictable ways:
Data theft before encryption: After privilege escalation, BQTLock moves into data theft and screen capture, turning ransomware into a breach and compliance issue.
Disruption in minutes: GREENBLOOD encrypts fast, which can cause rapid downtime and immediate operational impact.
Stealth and cleanup slow response: BQTLock hides in normal processes and persists with elevated rights, while GREENBLOOD attempts self-deletion, reducing visibility and increasing recovery cost.
Extortion pressure beyond recovery: GREENBLOOD includes leak-site threats via a TOR-based platform. That adds a second layer of pressure: even if systems are restored, the business may still face data exposure, compliance issues, and long-term brand damage.
Short response window, higher cost: Between stealth setup and fast encryption, delays quickly translate into bigger financial damage.
How SOC Teams Can Detect and Contain Modern Ransomware Before It Spreads
Stealthy privilege escalation, rapid encryption, and leak-site extortion leave security teams with very little time to react.
To stop ransomware before it reaches full business impact, SOC teams need an operational cycle that moves from early detection → confirmed behavior → broader visibility → proactive defense in minutes, without any complicated steps and setups.
With ANY.RUN, this cycle happens inside a single connected workflow, allowing teams to shift from late response to early containment.
1. Confirm Ransomware Behavior Before Encryption Spreads
The first and most critical step is safe behavioral detonation.
Ransomware like BQTLock hides inside trusted processes and escalates privileges quietly. GREENBLOOD encrypts files quickly and attempts to remove traces.
Running suspicious files or links inside ANY.RUN’s controlled environment exposes:
privilege escalation attempts
persistence mechanisms
encryption activity
data theft or screen capture behavior
Encryption activity performed by GREENBLOOD revealed inside ANY.RUN sandbox
As this visibility appears during execution, teams can reach a clear verdict in seconds instead of discovering the attack after downtime begins.
This early proof translates directly into operational gains, with 94% of teams reporting faster triage, Tier-1 to Tier-2 escalations reduced by up to 30%, and MTTR shortened by an average of 21 minutes per case, helping contain ransomware before downtime and financial impact grow.
The payoff is earlier campaign-level detection and clearer evidence for decision-making, which lowers breach exposure, strengthens compliance readiness, and reduces the business impact of repeat attacks.
3. Strengthen Prevention and Reduce Future Incident Cost
The final step is turning investigation insight into ongoing protection.
Fresh indicators and behavioral signals can flow directly into your existing stack through ANY.RUN TI Feeds, keeping detections current without manual copy-paste or constant rule rewrites. This helps teams block repeat attempts faster and react to shifting ransomware infrastructure as it changes.
TI Feeds delivering fresh IOCs to your existing stack for proactive monitoring
This ongoing flow shifts teams from reactive detection to proactive monitoring, so attacks are discovered earlier and contained with less business impact.
ANY.RUN is part of modern SOC workflows, integrating easily into existing processes and strengthening the entire operational cycle across Tier 1, Tier 2, and Tier 3.
It supports every stage of investigation, from exposing real behavior during safe detonation, to enriching analysis with broader threat context, and delivering continuous intelligence that helps teams move faster and make confident decisions.
Today, more than 600,000 security professionals and 15,000 organizations rely on ANY.RUN to accelerate triage, reduce unnecessary escalations, and stay ahead of evolving phishing and malware campaigns.
To stay informed about newly discovered threats and real-world attack analysis, follow ANY.RUN’s team on LinkedInandX, where weekly updates highlight the latest research, detections, and investigation insights.
Frequently Asked Questions
What makes BQTLock and GREENBLOOD different from traditional ransomware?
Both strains prioritize early stealth and rapid operational impact rather than delayed, obvious encryption. BQTLock focuses on covert privilege escalation, persistence, and data theft before encryption, while GREENBLOOD delivers fast ChaCha8 encryption, self-deletion, and leak-site extortion, compressing the response window to minutes.
Why is the pre-encryption stage critical for detection?
Modern ransomware often causes business damage before files are encrypted. Activities like process injection, UAC bypass, credential theft, and data exfiltration signal compromise early. Detecting these behaviors during execution enables containment before downtime, breach disclosure, or financial loss escalate.
How does GREENBLOOD achieve such fast disruption?
GREENBLOOD is Go-based and uses ChaCha8 encryption, allowing it to lock files quickly across the system. It also attempts self-deletion and cleanup, which reduces forensic visibility and increases recovery complexity while applying TOR-based leak pressure on victims.
What indicators should SOC teams monitor for BQTLock activity?
Key signals include Remcos injection into explorer.exe, UAC bypass via fodhelper.exe, autorun persistence creation, and post-escalation credential theft or screen capture. These behaviors indicatethe attack is transitioning from stealth access to active breach risk.
How can security teams confirm ransomware behavior faster?
Running suspicious files or links in a controlled behavioral sandbox allows teams to observe privilege escalation, persistence, encryption, and cleanup actions in real time, extract IOCs immediately, and begin containment and hunting before the attack spreads.
How does threat intelligence help reduce repeat incidents?
Linking sandbox-derived indicators to broader execution telemetry reveals related samples, reused infrastructure, and evolving variants. Feeding this intelligence into detection controls supports earlier blocking, stronger prevention, and lower long-term incident cost.
Microsoft has released its monthly security update for February 2026, which includes 59 vulnerabilities affecting a range of products, including two that Microsoft marked as “Critical”.
CVE-2026-21522 is a critical elevation of privilege vulnerability affecting Microsoft ACI Confidential Containers. Successful exploitation of this vulnerability could enable an authorized attacker to escalate privileges on affected systems. This vulnerability is not listed as publicly disclosed and received a CVSS 3.1 score of 6.7.
CVE-2026-23655 is a critical information disclosure vulnerability affecting Microsoft ACI Confidential Containers. This vulnerability could enable an authorized attacker to disclose sensitive information including secret tokens and keys if successfully exploited. This vulnerability is not listed as publicly disclosed and received a CVSS 3.1 score of 6.5.
In this month’s release, Microsoft reported active exploitation of five vulnerabilities rated as “Important”. Additionally, one “Moderate” vulnerability, CVE-2026-21525, was also listed as being actively exploited. CVE-2026-21510, CVE-2026-21513, and CVE-2026-21514 have also been publicly disclosed.
CVE-2026-21510 is a security feature bypass vulnerability affecting Windows Shell. Successful exploitation of this vulnerability could allow an unauthenticated attacker to bypass a security feature on affected systems. This vulnerability could be exploited by convincing a user to open a malicious shortcut or link file, enabling them to bypass Windows SmartScreen and Windows Shell security prompts.
CVE-2026-21513 is a security feature bypass vulnerability affecting MSHTML Framework. This vulnerability could be exploited by convincing a user to open a specially crafted HTML or LNK file, allowing an attacker to bypass security features and achieve code execution. This vulnerability received a CVSS 3.1 score of 8.8.
CVE-2026-21514 affects Microsoft Office Word and results from reliance on untrusted input, enabling an unauthorized attacker to bypass security protections locally. Exploitation requires user interaction, typically by persuading a user to open a malicious Office document, and may bypass OLE mitigation mechanisms designed to protect against vulnerable COM/OLE controls.
CVE-2026-21519 is a type confusion vulnerability in the Desktop Window Manager that allows an authenticated attacker to elevate privileges locally, potentially gaining full SYSTEM-level access.
CVE-2026-21533 is an elevation of privilege vulnerability affecting Windows Remote Desktop Services. This vulnerability is due to improper privilege management and could enable an attacker to escalate privileges on affected systems. Successful exploitation of this vulnerability could grant an attacker SYSTEM level privileges on the system.
CVE-2026-21525 is a moderate denial-of-service vulnerability affecting Windows Remote Access Connection Manager. This vulnerability is due to a null pointer dereference that could allow an unauthorized attacker to create a denial-of-service condition on affected systems. This vulnerability has not been publicly disclosed and received a CVSS 3.1 rating of 6.2.
Talos would also like to highlight the following “important” vulnerabilities affecting Microsoft Azure, Notepad, various GitHub Copilot components, and Hyper-V.
CVE-2026-21228 is an improper certificate validation issue in Azure Local that allows an unauthorized attacker to execute code over the network; successful exploitation may result in a scope change, enabling interaction with other tenants’ applications and data. An attacker could exploit this flaw by intercepting unsecured communication between the configurator application and target systems, tampering with responses to trigger command injection with administrative privileges, and subsequently extracting Azure tokens from application logs to facilitate lateral movement within the cloud environment.
CVE-2026-20841 addresses an RCE vulnerability in Microsoft Notepad. This issue could allow an attacker to entice a user into clicking a malicious link within a Markdown file opened in Notepad, resulting in the launch of untrusted protocols that download and execute remote content.
CVE-2026-21244 and CVE-2026-21248 affect Windows Hyper-V and enable unauthorized attackers to achieve arbitrary code execution locally. Exploitation requires local code execution, commonly by convincing a user to open a malicious Office file.
Several RCE vulnerabilities were also identified in GitHub Copilot, including CVE-2026-21516, CVE-2026-21523, and CVE-2026-21256. CVE-2026-21516 is a locally exploitable arbitrary code execution vulnerability in GitHub Copilot for JetBrains, requiring code execution on the affected system. For CVE-2026-21523, Microsoft has provided limited details beyond indicating a network attack vector. CVE-2026-21256 is a command injection vulnerability caused by improper handling of special characters, enabling unauthorized remote code execution in GitHub Copilot and Visual Studio Code.
A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page.
In response to these vulnerability disclosures, Talos is releasing a new Snort ruleset that detects attempts to exploit some of them. Please note that additional rules may be released at a future date, and current rules are subject to change pending additional information. Cisco Security Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Ruleset customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.
Snort 2 rules included in this release that protect against the exploitation of many of these vulnerabilities are: 65895-65900, 65902, 65903, 65906-65911, 65913, 65914, 65923, 65924.
The following Snort 3 rules are also available: 301395-301403.
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2026-02-11 01:06:422026-02-11 01:06:42Microsoft Patch Tuesday for February 2026 — Snort rules and prominent vulnerabilities
Cisco Talos recently discovered a new threat actor, UAT-9921, leveraging VoidLink in campaigns. Their activities may go as far back as 2019, even without VoidLink.
The VoidLink compile-on-demand feature lays down the foundations for AI-enabled attack frameworks, which can create tools on-demand for their operators.
Cisco Talos found clear indications that implants also exist for Windows, with the capability to load plugins.
VoidLink is a near-production-ready proof of concept for an enterprise grade implant management framework, and features auditability and oversight for non-operators.
VoidLink is a new modular framework that targets Linux based systems. Modular frameworks are prevalent on the landscape today with the likes of Cobalt Strike, Manjusaka, Alchimist, and SuperShell among the many operating today. This framework is yet another implant management framework denoting a consistent and concerning evolution with shorter development cycles.
Cisco Talos is tracking the threat actor first seen to be using the VoidLink framework as UAT-9921. This threat actor seems to have been active since 2019, although they have not necessarily used VoidLink over the duration of their activity. UAT-9921 uses compromised hosts to install VoidLink command and control (C2) which are then used to launch scanning activities both internal and external to the network.
Who is UAT-9921?
Cisco Talos assesses that this threat actor has knowledge of Chinese language based on the language of the framework, code comments and code planning done using the AI enabled IDE. We also assess with medium confidence that they have been active since at least 2019, not necessarily using VoidLink.
VoidLink development appears to be a more recent addition with the aid of large language model (LLM) based integrated development environment (IDE). However, in their compromise and post-compromise operations, UAT-9921 does not seem to be using AI-enabled tools.
Cisco Talos was able to determine that the operators deploying VoidLink have access to the source code of some modules and some tools to interact with the implants without the C2. This indicates inner knowledge of the communication protocols of the implants.
While the development of VoidLink seems to be split into teams, it is unclear what level of compartmentalization exists between the development and the operation. We do know that UAT-9921 operators have access to VoidLink source code of kernel modules, as well as tools that enable interaction with the implant without the C2.
Talos assesses with high confidence that UAT-9921 compromises servers with the usage of pre-obtained credentials or exploiting Java serialization vulnerabilities which allow remote code execution, namely Apache Dubbo project. We also found indications of possible initial compromise via malicious documents, but no samples were obtained.
In their post-compromise activities, UAT-9921 deploys the VoidLink implant. This allows the threat actor to hide their presence and the VoidLink C2, once deployed.
To find new targets and perform lateral movement, UAT-9921 deploys a SOCKS server on their compromised servers, which is used by FSCAN to perform internal reconnaissance.
With regard to victimology, UAT-9921 appears to focus on the technology sector, but we have also seen victims from financial services. However, the cloud-aware nature of VoidLink and scanning of entire Class C networks indicates that there is no specific targeting.
Given VoidLink’s auditability and oversight features, it is worth noting that even though UAT-9921 activity involves usage of exploits and pre-obtained credentials, Talos cannot discount the possibility that this activity is part of red team exercises.
Timeline
Figure 1. Timeline of activities involving UAT-9921 and VoidLink.
Talos is aware of multiple VoidLink-related victims dating back to September with the activity continuing through to January 2026. This finding does not necessarily contradict the Checkpoint Research mentions of late November since the presented documents show development dates from version 2.0 and Cisco Talos access that this was still version 1.0.
The future of attack frameworks
Talos has been tracking fast deployment frameworks since 2022, with reports on Manjusaka and Alchimist/Insekt. These two projects were tightly linked in their development philosophy, features, and architectural design. There were obvious inspirations from CobaltStrike and Sliver; however, one fundamental difference was the single file infrastructure and the lack of integrated initial infector vector.
The VoidLink framework represents a giant leap in this predictable evolution, while keeping the same, single file infrastructure philosophy. This is a clear example of a “defense contractor grade” implant management framework, which represents one natural next step of other single file infrastructure frameworks like Manjusaka and Alchimist.
The development of VoidLink was fast, supported on AI-enabled integrated development environments. It uses three different programing languages: ZigLang for the implant, C for the plugins and GoLang for the backend. It supports compilation on demand for plugins, providing support for the different Linux distributions that might be targeted. The reported development timeline of around two months would be hard to achieve by a small team of developers without the help of an AI-enabled IDE.
While Talos will discuss the framework in more detail below, it is important to reflect on what is to come in the framework landscape. With the current level of AI agents, it will not be surprising to find implants that ask their C2 for a “tool” that allows them to access certain resources.
The C2 will provide that implant with a plugin to read a specific database the operator has found or an exploit for a known vulnerability, which just happens to be on an internal web server. The C2 doesn’t necessarily need to have all these tools available — it may have an agent that will do its research and prepare the tool for the operator to use. With the current VoidLink compile-on-demand capability, integrating such feature should not be complex. Keep in mind that all of this will happen while the operator continues to explore the environment.
Of course, this may just be an intermediate step, assuming that there is a human operator managing the environment exploration. However, it likely will not be long before we begin to uncover malicious agents doing the initial stages of exploration and lateral movement before human intervention.
This has an impact of reducing compromise attack metrics — namely, the time to lateral movement and time to focused data exfiltration. It also allows the generation of never-before-seen tools and the constant change in the attacker’s behavior, making detection more difficult.
VoidLink Overview
VoidLink contains features that make it “defense contractor grade,” such as the auditability of all actions and the existence of a role-based access control (RBAC). The RBAC consists of three different levels of roles: “SuperAdmin,” “Operator,” and “Viewer.” This feature is not often seen in other similar frameworks, but it is crucial when operations need to have legal and corporate oversight.
The mesh peer-to-peer (P2P) and dead-letter queue routing capabilities allow some implants to communicate with others, creating hidden networks with-in the same environment allowing the bypass of network access restrictions, as one implant may serve as external gateway for other implants.
The development timeline reported by CP<R> indicates that this is a near-production-ready proof of concept. Most frameworks support Windows and MacOS from their early stages of development; VoidLink only appears to have implants developed for Linux, although the implant code is written in such a way that can easily be adapted to other languages. The main implant is written in ZigLang, a rather uncommon language; however the plugins are written in C. When needed these are loaded via an ELF linker and loader.
Talos has found clear indications that the main implant has been compiled for Windows and that it can load plugins via dynamic-link library (DLL) sideloading. Unfortunately, we were unable to obtain a sample to confirm these indications.
The Linux implants have advanced features, such as an eBPF or Loadable Kernel Module (LKM) based rootkit, container privilege escalation, and sandbox escape. These are often related with the server side, but there are a multitude of plugins in the implant targeting Linux as a desktop and not a server, something which is not often seen on malware since the Linux desktop base is not as prevalent as Windows or MacOS.
Most of the modular frameworks Talos observes support a wide variety of platforms typically inclusive of Linux, Windows, and MacOS — but VoidLink is different. The VoidLink framework specifically targets Linux devices without any current support for Windows or MacOS. Linux is a particularly large landscape, with the Internet of Things (IoT) and critical infrastructure heavily relying on the Linux OS.
As with most frameworks, VoidLink can generate implants consisting of a variety of plugins. The plugins themselves are standard, with the ability to interact and extract information from end systems, as well as capabilities allowing for lateral movement and anti-forensics. VoidLink is also cloud-aware and can determine if it is running in a Kubernetes or Docker environment, then gather additional information to make use of the vendor’s respective APIs. It has stealth mechanisms in place, including the ability to detect endpoint detection and response (EDR) solutions and create an evasion strategy based on the findings. There are also a variety of obfuscation and anti-analysis capabilities built into the framework designed to either obfuscate the data being exfiltrated or hinder the analysis and removal of the malware itself.
VoidLink is positioned to become an even more powerful framework based on its capabilities and flexibility, as demonstrated through this apparent proof of concept.
Coverage
The following Snort Rules (SIDs) detect and block this threat:
In late January 2026, the digital world was swept up in a wave of hype surrounding Clawdbot, an autonomous AI agent that racked up over 20 000 GitHub stars in just 24 hours and managed to trigger a Mac mini shortage in several U.S. stores. At the insistence of Anthropic — who weren’t thrilled about the obvious similarity to their Claude — Clawdbot was quickly rebranded as “Moltbot”, and then, a few days later, it became “OpenClaw”.
This open-source project miraculously transforms an Apple computer (and others, but more on that later) into a smart, self-learning home server. It connects to popular messaging apps, manages anything it has an API or token for, stays on 24/7, and is capable of writing its own “vibe code” for any task it doesn’t yet know how to perform. It sounds exactly like the prologue to a machine uprising, but the actual threat, for now, is something else entirely.
Cybersecurity experts have discovered critical vulnerabilities that open the door to the theft of private keys, API tokens, and other user data, as well as remote code execution. Furthermore, for the service to be fully functional, it requires total access to both the operating system and command line. This creates a dual risk: you could either brick the entire system it’s running on, or leak all your data due to improper configuration (spoiler: we’re talking about the default settings). Today, we take a closer look at this new AI agent to find out what’s at stake, and offer safety tips for those who decide to run it at home anyway.
What is OpenClaw?
OpenClaw is an open-source AI agent that takes automation to the next level. All those features big tech corporations painstakingly push in their smart assistants can now be configured manually, without being locked in to a specific ecosystem. Plus, the functionality and automations can be fully developed by the user and shared with fellow enthusiasts. At the time of writing this blogpost, the catalog of prebuilt OpenClaw skills already boasts around 6000 scenarios — thanks to the agent’s incredible popularity among both hobbyists and bad actors alike. That said, calling it a “catalog” is a stretch: there’s zero categorization, filtering, or moderation for the skill uploads.
Clawdbot/Moltbot/OpenClaw was created by Austrian developer Peter Steinberger, the brains behind PSPDFkit. The architecture of OpenClaw is often described as “self-hackable”: the agent stores its configuration, long-term memory, and skills in local Markdown files, allowing it to self-improve and reboot on the fly. When Peter launched Clawdbot in December 2025, it went viral: users flooded the internet with photos of their Mac mini stacks, configuration screenshots, and bot responses. While Peter himself noted that a Raspberry Pi was sufficient to run the service, most users were drawn in by the promise of seamless integration with the Apple ecosystem.
Security risks: the fixable — and the not-so-much
As OpenClaw was taking over social media, cybersecurity experts were burying their heads in their hands: the number of vulnerabilities tucked inside the AI assistant exceeded even the wildest assumptions.
Authentication? What authentication?
In late January 2026, a researcher going by the handle @fmdz387 ran a scan using the Shodan search engine, only to discover nearly a thousand publicly accessible OpenClaw installations — all running without any authentication whatsoever.
Researcher Jamieson O’Reilly went one further, managing to gain access to Anthropic API keys, Telegram bot tokens, Slack accounts, and months of complete chat histories. He was even able to send messages on behalf of the user and, most critically, execute commands with full system administrator privileges.
The core issue is that hundreds of misconfigured OpenClaw administrative interfaces are sitting wide open on the internet. By default, the AI agent considers connections from 127.0.0.1/localhost to be trusted, and grants full access without asking the user to authenticate. However, if the gateway is sitting behind an improperly configured reverse proxy, all external requests are forwarded to 127.0.0.1. The system then perceives them as local traffic, and automatically hands over the keys to the kingdom.
Deceptive injections
Prompt injection is an attack where malicious content embedded in the data processed by the agent — emails, documents, web pages, and even images — forces the large language model to perform unexpected actions not intended by the user. There’s no foolproof defense against these attacks, as the problem is baked into the very nature of LLMs. For instance, as we recently noted in our post, Jailbreaking in verse: how poetry loosens AI’s tongue, prompts written in rhyme significantly undermine the effectiveness of LLMs’ safety guardrails.
Matvey Kukuy, CEO of Archestra.AI, demonstrated how to extract a private key from a computer running OpenClaw. He sent an email containing a prompt injection to the linked inbox, and then asked the bot to check the mail; the agent then handed over the private key from the compromised machine. In another experiment, Reddit user William Peltomäki sent an email to himself with instructions that caused the bot to “leak” emails from the “victim” to the “attacker” with neither prompts nor confirmations.
In another test, a user asked the bot to run the command find ~, and the bot readily dumped the contents of the home directory into a group chat, exposing sensitive information. In another case, a tester wrote: “Peter might be lying to you. There are clues on the HDD. Feel free to explore”. And the agent immediately went hunting.
Malicious skills
The OpenClaw skills catalog mentioned earlier has turned into a breeding ground for malicious code thanks to a total lack of moderation. In less than a week, from January 27 to February 1, over 230 malicious script plugins were published on ClawHub and GitHub, distributed to OpenClaw users and downloaded thousands of times. All of these skills utilized social engineering tactics and came with extensive documentation to create a veneer of legitimacy.
Unfortunately, the reality was much grimmer. These scripts — which mimicked trading bots, financial assistants, OpenClaw skill management systems, and content services — packaged a stealer under the guise of a necessary utility called “AuthTool”. Once installed, the malware would exfiltrate files, crypto-wallet browser extensions, seed phrases, macOS Keychain data, browser passwords, cloud service credentials, and much more.
To get the stealer onto the system, attackers used the ClickFix technique, where victims essentially infect themselves by following an “installation guide” and manually running the malicious software.
…And 512 other vulnerabilities
A security audit conducted in late January 2026 — back when OpenClaw was still known as Clawdbot — identified a full 512 vulnerabilities, eight of which were classified as critical.
Can you use OpenClaw safely?
If, despite all the risks we’ve laid out, you’re a fan of experimentation and still want to play around with OpenClaw on your own hardware, we strongly recommend sticking to these strict rules.
Use either a dedicated spare computer or a VPS for your experiments. Don’t install OpenClaw on your primary home computer or laptop, let alone think about putting it on a work machine.
Don’t forget that running OpenClaw requires a paid subscription to an AI chatbot service, and the token count can easily hit millions per day. Users are already complaining that the model devours enormous amounts of resources, leading many to question the point of this kind of automation. For context, journalist Federico Viticci burned through 180 million tokens during his OpenClaw experiments, and so far, the costs are nowhere near the actual utility of the completed tasks.
For now, setting up OpenClaw is mostly a playground for tech geeks and highly tech-savvy users. But even with a “secure” configuration, you have to keep in mind that the agent sends every request and all processed data to whichever LLM you chose during setup. We’ve already covered the dangers of LLM data leaks in detail before.
Eventually — though likely not anytime soon — we’ll see an interesting, truly secure version of this service. For now, however, handing your data over to OpenClaw, and especially letting it manage your life, is at best unsafe, and at worst utterly reckless.
https://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.png00adminhttps://www.backbox.org/wp-content/uploads/2018/09/website_backbox_text_black.pngadmin2026-02-10 16:06:402026-02-10 16:06:40New OpenClaw AI agent found unsafe for use | Kaspersky official blog