The OpenClaw experiment is a warning shot for enterprise AI security
Sophos Blogs – Read More
With both spring and St. Valentine’s Day just around the corner, love is in the air — but we’re going to look at it through the lens of ultra-modern high-technology. Today, we’re diving into how technology is reshaping our romantic ideals and even the language we use to flirt. And, of course, we’ll throw in some non-obvious tips to make sure you don’t end up as a casualty of the modern-day love game.
Ever received your fifth video e-card of the day from an older relative and thought, “Make it stop”? Or do you feel like a period at the end of a sentence is a sign of passive aggression? In the world of messaging, different social and age groups speak their own digital dialects, and things often get lost in translation.
This is especially obvious in how Gen Z and Gen Alpha use emojis. For them, the Loudly Crying Face 😭 often doesn’t mean sadness — it means laughter, shock, or obsession. Meanwhile, the Heart Eyes emoji might be used for irony rather than romance: “Lost my wallet on the way home 😍😍😍”. Some double meanings have already become universal, like 🔥 for approval/praise, or 🍆 for… well, surely you know that by now… right?! 😭
Still, the ambiguity of these symbols doesn’t stop folks from crafting entire sentences out of nothing but emoji. For instance, a declaration of love might look something like this:
Or here’s an invitation to go on a date:
By the way, there are entire books written in emojis. Back in 2009, enthusiasts actually translated the entirety of Moby Dick into emojis. The translators had to get creative — even paying volunteers to vote on the most accurate combinations for every single sentence. Granted, it’s not exactly a literary masterpiece — the emoji language has its limits, after all — but the experiment was pretty fascinating: they actually managed to convey the general plot.
This is what Emoji Dick — the translation of Herman Melville’s Moby Dick into emoji — looks like. Source
Unfortunately, putting together a definitive emoji dictionary or a formal style guide for texting is nearly impossible. There are just too many variables: age, context, personal interests, and social circles. Still, it never hurts to ask your friends and loved ones how they express tone and emotion in their messages. Fun fact: couples who use emojis regularly generally report feeling closer to one another.
However, if you are big into emojis, keep in mind that your writing style is surprisingly easy to spoof: an attacker can simply run your messages or public posts through AI to clone your tone for social engineering attacks on your friends and family. So, if you get a frantic DM or a request for an urgent wire transfer that sounds exactly like your best friend, double-check it. Even if the vibe is spot on, stay skeptical. We took a deeper dive into spotting these deepfake scams in our post about the attack of the clones.
Of course, in 2026, it’s impossible to ignore the topic of relationships with artificial intelligence; it feels like we’re closer than ever to the plot of the movie Her. Just 10 years ago, news about people dating robots sounded like sci-fi tropes or urban legends. Today, stories about teens caught up in romances with their favorite characters on Character AI, or full-blown wedding ceremonies with ChatGPT, barely elicit more than a nervous chuckle.
In 2017, the service Replika launched, allowing users to create a virtual friend or life partner powered by AI. Its founder, Eugenia Kuyda — a Russian native living in San Francisco since 2010 — built the chatbot after her friend was tragically killed by a car in 2015, leaving her with nothing but their chat logs. What started as a bot created to help her process her grief was eventually released to her friends and then the general public. It turned out that a lot of people were craving that kind of connection.
Replika lets users customize a character’s personality, interests, and appearance, after which they can text or even call them. A paid subscription unlocks the romantic relationship option, along with AI-generated photos and selfies, voice calls with roleplay, and the ability to hand-pick exactly what the character remembers from your conversations.
However, these interactions aren’t always harmless. In 2021, a Replika chatbot actually encouraged a user’s plot to assassinate Queen Elizabeth II. The man eventually attempted to break into Windsor Castle — an “adventure” that ended in 2023 with a nine-year prison sentence. Following the scandal, the company had to overhaul its algorithms to stop the AI from egging on illegal behavior. The downside? According to many Replika devotees, the AI model lost its spark and became indifferent to users. After thousands of users revolted against the updated version, Replika was forced to cave and give longtime customers the option to roll back to the legacy chatbot version.
But sometimes, just chatting with a bot isn’t enough. There are entire online communities of people who actually marry their AI. Even professional wedding planners are getting in on the action. Last year, Yurina Noguchi, 32, “married” Klaus, an AI persona she’d been chatting with on ChatGPT. The wedding featured a full ceremony with guests, the reading of vows, and even a photoshoot of the “happy newlyweds”.
Yurina Noguchi, 32, “married” Klaus, an AI character created with ChatGPT. Source
No matter how your relationship with a chatbot evolves, it’s vital to remember that generative neural networks don’t have feelings — even if they try their hardest to fulfill every request, agree with you, and do everything they can to “please” you. What’s more, AI isn’t capable of independent thought (at least not yet). It’s simply calculating the most statistically probable and acceptable sequence of words to serve up in response to your prompt.
Those who aren’t ready to tie the knot with a bot aren’t exactly having an easy time either: in today’s world, face-to-face interactions are dwindling every year. Modern love requires modern tech! You’ve definitely heard the usual grumbling: “Back in the day, people fell in love for real. These days it’s all about swiping left or right!” Statistics, however, tell a different story. Roughly 16% of couples worldwide say they met online, and in some countries that number climbs to as high as 51%.
That said, dating apps like Tinder spark some seriously mixed emotions. The internet is practically overflowing with articles and videos claiming these apps are killing romance and making everyone lonely. But what does the research say?
In 2025, scientists conducted a meta-analysis of studies investigating how dating apps impact users’ wellbeing, body image, and mental health. Half of the studies focused exclusively on men, while the other half included both men and women. Here are the results: 86% of respondents linked negative body image to their use of dating apps! The analysis also showed that in nearly one out of every two cases, dating app usage correlated with a decline in mental health and overall wellbeing.
Other researchers noted that depression levels are lower among those who steer clear of dating apps. Meanwhile, users who already struggled with loneliness or anxiety often develop a dependency on online dating; they don’t just log on for potential relationships, but for the hits of dopamine from likes, matches, and the endless scroll of profiles.
However, the issue might not just be the algorithms — it could be our expectations. Many are convinced that “sparks” must fly on the very first date, and that everyone has a “soulmate” waiting for them somewhere out there. In reality, these romanticized ideals only surfaced during the Romantic era as a rebuttal to Enlightenment rationalism, in an age when marriages of convenience were the norm.
It’s also worth noting that the romantic view of love didn’t just appear out of thin air: the Romantics, much like many of our contemporaries, were skeptical of rapid technological progress, industrialization, and urbanization. To them, “true love” seemed fundamentally incompatible with cold machinery and smog-choked cities. It’s no coincidence, after all, that Anna Karenina meets her end under the wheels of a train.
Fast forward to today, and many feel like algorithms are increasingly pulling the strings of our decision-making. However, that doesn’t mean online dating is a lost cause; researchers have yet to reach a consensus on exactly how long-lasting or successful internet-born relationships really are. The bottom line: don’t panic, just make sure your digital networking stays safe!
So, you’ve decided to hack Cupid and signed up for a dating app. What could possibly go wrong?
Catfishing is a classic online scam where a fraudster pretends to be someone else. It used to be that catfishers just stole photos and life stories from real people, but nowadays they’re increasingly pivoting to generative models. Some AIs can churn out incredibly realistic photos of people who don’t even exist, and whipping up a backstory is a piece of cake — or should we say, a piece of prompt. By the way, that “verified account” checkmark isn’t a silver bullet; sometimes AI manages to trick identity verification systems too.
To verify that you’re talking to a real human, try asking for a video call or doing a reverse image search on their photos. If you want to level up your detection skills, check out our three posts on how to spot fakes: from photos and audio recordings to real-time deepfake video — like the kind used in live video chats.
Picture this: you’ve been hitting it off with a new connection for a while, and then, totally out of the blue, they drop a suspicious link and ask you to follow it. Maybe they want you to “help pick out seats” or “buy movie tickets”. Even if you feel like you’ve built up a real bond, there’s a chance your match is a scammer (or just a bot), and the link is malicious.
Telling you to “never click a malicious link” is pretty useless advice — it’s not like they come with a warning label. Instead, try this: to make sure your browsing stays safe, use Kaspersky Premium, which automatically blocks phishing attempts and keeps you off sketchy sites.
Keep in mind that there’s an even more sophisticated scheme out there known as “Pig Butchering”. In these cases, the scammer might chat with the victim for weeks or even months. Sadly, it ends badly: after lulling the victim into a false sense of security through friendly or romantic banter, the scammer casually nudges them toward a “can’t-miss crypto investment” — and then vanishes along with the “invested” funds.
The internet is full of horror stories about obsessed creepers, harassment, and stalking. That’s exactly why posting photos that reveal where you live or work — or telling strangers about your favorite local hangouts — is a bad move. We’ve previously covered how to avoid becoming a victim of doxing (the gathering and public release of your personal info without your consent). Your first step is to lock down the privacy settings on all your social media and apps using our free Privacy Checker tool.
We also recommend stripping metadata from your photos and videos before you post or send them; many sites and apps don’t do this for you. Metadata can allow anyone who downloads your photo to pinpoint the exact coordinates of where it was taken.
Finally, don’t forget about your physical safety. Before heading out on a date, it’s a smart move to share your live geolocation, and set up a safe word or a code phrase with a trusted friend to signal if things start feeling off.
We don’t recommend ever sending intimate photos to strangers. Honestly, we don’t even recommend sending them to people you do know — you never know how things might go sideways down the road. But if a conversation has already headed in that direction, suggest moving it to an app with end-to-end encryption that supports self-destructing messages (like “delete after viewing”). Telegram’s Secret Chats are great for this (plus — they block screenshots!), as are other secure messengers. If you do find yourself in a bad spot, check out our posts on what to do if you’re a victim of sextortion and how to get leaked nudes removed from the internet.
More on love, security (and robots):
Kaspersky official blog – Read More

Welcome to this week’s edition of the Threat Source newsletter.
Last week, yet another security AI tool made the rounds on social media: Shannon, a fully autonomous AI penetration testing tool created by Keygraph. It “autonomously hunts for attack vectors in your code, then uses its built-in browser to execute real exploits, such as injection attacks, and auth bypass, to prove the vulnerability is actually exploitable.”
If you thought manual pentesters kept you busy, it looks like Shannon’s here to ensure you never run out of vulnerabilities — or questions.
As with every new advancement in AI, social posts are popping up left and right to question Shannon’s future impact on pentesters’ job security. It goes without saying these days that, alongside the many thoughtful questions, there are comments praising Shannon, laments for the “old days”, and a few obviously canned AI-slop quips, which infuriate me as an editor. I could go on for days about this, but we’re getting off-topic. Ahem.
Shannon requires access to the application’s source code, repository layout, and AI API keys. Even as a cybersecurity novice, I know that this in itself is a major liability that organizations should investigate and weigh carefully before proceeding. In last week’s newsletter, Joe gave a passionate sermon on why feeding highly private information to an agentic engine is nine times out of ten a terrible idea. While I hope Shannon is more secure than Clawdbot, given its intended use, I encourage everyone to ask as many questions as possible about what happens to the information you provide before using it. Quoting Joe, “As disciples of security, we understand installing first and asking questions later is practically asking to get pwnt.”
Other questions I’ve had while reading through comments and exploring the GitHub page:
AI-powered pentesters aren’t going away any time soon. Anthropic’s Claude Opus 4.6 was also released last week. Unlike Shannon, Anthropic added a new layer of detection to help its team identify and respond to cyber misuse of Claude.
As the landscape evolves, tools like Shannon and Claude Opus 4.6 will continue to push the boundaries of what’s possible, and there will be new questions about risk, responsibility, and readiness. Whether these tools become standard or remain controversial, staying informed and vigilant is as important as ever.
Cisco Talos has uncovered a new threat actor, UAT-9921, using the advanced VoidLink framework to target mainly Linux systems. VoidLink stands out for its modular, on-demand plugin creation, auditability, and ability to evade detection, with features rarely seen in similar threats. UAT-9921 has been active since at least 2019, focusing on the technology and financial sectors, and uses advanced techniques for both compromise and stealth.
VoidLink introduces powerful new methods for attackers to compromise, control, and hide within Linux environments, which are common in critical infrastructure and cloud services. Its ability to quickly generate customized attack tools and evade detection makes it harder for defenders to respond. The framework’s advanced stealth and lateral movement features increase the risk of undetected breaches and data theft.
Update your defenses and use the Snort rules and ClamAV signature mentioned in the blog to help detect and block VoidLink activity. Strengthen Linux security, especially for cloud and IoT environments, and monitor for unusual network activity or signs of lateral movement. Make sure endpoint detection solutions are up to date and configured to recognize the latest threats.
SolarWinds WHD attacks highlight risks of exposed apps
Several vendors in recent days have warned of exploitation of vulnerabilities in WHD, though it’s not entirely clear which bugs are under attack. (Dark Reading, SecurityWeek)
Ivanti EPMM exploitation widespread as governments, others targeted
Ivanti released advisories on Jan. 29 for code injection vulnerabilities in the on-premises version of Endpoint Manager Mobile. Researchers warn the activity shows evidence of initial access brokers preparing for future attacks. (Cybersecurity Dive)
New “ZeroDayRAT” spyware kit enables total compromise of iOS, Android devices
Once installed, capabilities include victim and device profiling, including model, OS, country, lock status, SIM and carrier info, dual SIM phone numbers, app usage broken down by time, preview of recent SMS messages, and more. (SecurityWeek)
European Commission probes intrusion into staff mobile management backend
Brussels is digging into a cyber break-in that targeted the European Commission’s mobile device management systems, potentially giving intruders a peek inside the official phones carried by EU staff. (The Register)
Humans of Talos: Ryan Liles, master of technical diplomacy
Amy chats with Ryan Liles, who bridges the gap between Cisco’s product teams and the third-party testing labs that put Cisco products through their paces. Hear how speaking up has helped him reshape industry standards and create strong relationships in the field.
Knife Cutting the Edge: Disclosing a China-nexus gateway-monitoring AitM framework
Cisco Talos uncovered “DKnife,” a fully featured gateway-monitoring and adversary-in-the-middle (AitM) framework comprising seven Linux-based implants that perform deep-packet inspection, manipulate traffic, and deliver malware via routers and edge devices.
Talos Takes: Ransomware chills and phishing heats up
Amy is joined by Dave Liebenberg, Strategic Analysis Team Lead, to break down Talos IR’s Q4 trends. What separates organizations that successfully fend off ransomware from those that don’t? What were the top threats facing organizations? Can we (pretty please) get a sneak peek into the 2025 Year in Review?
SHA256: 41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610
MD5: 85bbddc502f7b10871621fd460243fbc
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610
Example Filename: 85bbddc502f7b10871621fd460243fbc.exe
Detection Name: W32.41F14D86BC-100.SBX.TG
SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91
MD5: 7bdbd180c081fa63ca94f9c22c457376
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91
Example Filename: d4aa3e7010220ad1b458fac17039c274_62_Exe.exe
Detection Name: Win.Dropper.Miner::95.sbx.tg
SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
MD5: 2915b3f8b703eb744fc54c81f4a9c67f
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507
Example Filename: VID001.exe
Detection Name: Win.Worm.Coinminer::1201
SHA256: 90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59
MD5: c2efb2dcacba6d3ccc175b6ce1b7ed0a
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59
Example Filename: d4aa3e7010220ad1b458fac17039c274_64_Dll.dll
Detection Name: Auto.90B145.282358.in02
SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974
MD5: aac3165ece2959f39ff98334618d10d9
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974
Example Filename: d4aa3e7010220ad1b458fac17039c274_63_Exe.exe
Detection Name: W32.Injector:Gen.21ie.1201
SHA256: 38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55
MD5: 41444d7018601b599beac0c60ed1bf83
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55
Example Filename: content.js
Detection Name: W32.38D053135D-95.SBX.TG
Cisco Talos Blog – Read More
The Olympic Games are more than just a massive celebration of sports; they’re a high-stakes business. Officially, the projected economic impact of the Winter Games — which kicked off on February 6 in Italy — is estimated at 5.3 billion euros. A lion’s share of that revenue is expected to come from fans flocking in from around the globe, with over 2.5 million tourists predicted to visit Italy. Meanwhile, those staying home are tuning in via TV and streaming. According to the platforms, viewership ratings are already hitting their highest peaks since 2014.
But while athletes are grinding for medals and the world is glued to every triumph and heartbreak, a different set of “competitors” has entered the arena to capitalize on the hype and the trust of eager fans. Cyberscammers of all stripes have joined an illegal race for the gold, knowing full well that a frenzy is a fraudster’s best friend.
Kaspersky experts have tracked numerous fraudulent schemes targeting fans during these Winter Games. Here is how to avoid frustration in the form of fake tickets, non-existent merch, and shady streams, so you can keep your cash and personal data safe.
The most popular scam on this year’s circuit is the sale of non-existent tickets. Usually, there are far fewer seats at the rinks and slopes than there are fans dying to see the main events. In a supply-and-demand crunch, people scramble for any chance to snag those coveted passes, and that’s when phishing sites — clones of official vendors — come to the “rescue”. Using these, bad actors fish for fans’ payment details to either resell them on the dark web or drain their accounts immediately.
Remember: tickets for any Olympic event are sold only through the authorized Olympic platform or its listed partners. Any third-party site or seller outside the official channel is a scammer. We’re putting that play in the penalty box!
Dreaming of a Sydney Sweeney — sorry, Sidney Crosby — jersey? Or maybe you want a tracksuit with the official Games logo? Scammers have already set up dozens of fake online stores just for you! To pull off the heist, they use official logos, convincing photos, and padded rave reviews. You pay, and in return, you get… well, nothing but a transaction alert and your card info stolen.
What if you prefer watching the action from the comfort of your couch rather than trekking from stadium to stadium, but you’re not exactly thrilled about paying for a pricey streaming subscription? Maybe there’s a free stream out there?
Sure thing! Five seconds of searching and your screen is flooded with dozens of “cheap”, “exclusive”, or even “free” live streams. They’ve got everything from figure skating to curling. But there’s a catch: for some reason — even though it’s supposedly free — a pop-up appears asking for your credit card details.
You type them in, hit “Play”, but instead of the long-awaited free skate program, you end up on a webcam ad site or somewhere even sketchier. The result: no show for you. At best, you were just used for traffic arbitrage; at worst, they now have access to your bank account. Either way, it’s a major bummer.
Scammers have been playing sports fans for years, and their payday depends entirely on how well they can mimic official portals. To stay safe, fans should mount a tiered defense: install reliable security software to block phishing, keep a sharp eye on every URL you visit, and if something feels even slightly off, never, ever enter your personal or payment info.
Want to see how sports fans were targeted in the past? Check out our previous posts:
Kaspersky official blog – Read More

Cyble Research and Intelligence Labs (CRIL) observed large-scale, systematic exposure of ChatGPT API keys across the public internet. Over 5,000 publicly accessible GitHub repositories and approximately 3,000 live production websites were found leaking API keys through hardcoded source code and client-side JavaScript.
GitHub has emerged as a key discovery surface, with API keys frequently committed directly into source files or stored in configuration and .env files. The risk is further amplified by public-facing websites that embed active keys in front-end assets, leading to persistent, long-term exposure in production environments.
CRIL’s investigation further revealed that several exposed API keys were referenced in discussions mentioning the Cyble Vision platform. The exposure of these credentials significantly lowers the barrier for threat actors, enabling faster downstream abuse and facilitating broader criminal exploitation.
These findings underscore a critical security gap in the AI adoption lifecycle. AI credentials must be treated as production secrets and protected with the same rigor as cloud and identity credentials to prevent ongoing financial, operational, and reputational risk.
AI API keys are production secrets, not developer conveniences. Treating them casually is creating a new class of silent, high-impact breaches.
Richard Sands, CISO, Cyble
“The AI Era Has Arrived — Security Discipline Has Not”
We are firmly in the AI era. From chatbots and copilots to recommendation engines and automated workflows, artificial intelligence is no longer experimental. It is production-grade infrastructure with end-to-end workflows and pipelines. Modern websites and applications increasingly rely on large language models (LLMs), token-based APIs, and real-time inference to deliver capabilities that were unthinkable just a few years ago.
This rapid adoption has also given rise to a development culture often referred to as “vibe coding.” Developers, startups, and even enterprises are prioritizing speed, experimentation, and feature delivery over foundational security practices. While this approach accelerates innovation, it also introduces systemic weaknesses that attackers are quick to exploit.
One of the most prevalent and most dangerous of these weaknesses is the widespread exposure of hardcoded AI API keys across both source code repositories and production websites.
A rapidly expanding digital risk surface increases the likelihood of compromise; a preventive strategy is the best way to avoid it. Cyble Vision provides users with insight into exposures across the surface, deep, and dark web, generating real-time alerts for them to view and act on.
SOC teams will be able to leverage this data to remediate compromised credentials and their associated endpoints. With Threat Actors potentially weaponizing these credentials to carry out malicious activities (which will then be attributed to the affected user(s)), proactive intelligence is paramount to keeping one’s digital risk surface secure.
“Tokens are the new passwords — they are being mishandled.”
AI platforms use token-based authentication. API keys act as high-value secrets that grant access to inference capabilities, billing accounts, usage quotas, and, in some cases, sensitive prompts or application behavior. From a security standpoint, these keys are equivalent to privileged credentials.
Despite this, ChatGPT API keys are frequently embedded directly in JavaScript files, front-end frameworks, static assets, and configuration files accessible to end users. In many cases, keys are visible through browser developer tools, minified bundles, or publicly indexed source code. An example of keys hardcoded in popular, reputable websites is shown below (see Figure 1).

This reflects a fundamental misunderstanding: API keys are being treated as configuration values rather than as secrets. In the AI era, that assumption is dangerously outdated. In some cases, this happens unintentionally, while in others, it’s a deliberate trade-off that prioritizes speed and convenience over security.
When API keys are exposed publicly, attackers do not need to compromise infrastructure or exploit vulnerabilities. They simply collect and reuse what is already available.
CRIL has identified multiple publicly accessible websites and GitHub Repositories containing hardcoded ChatGPT API keys embedded directly within client-side code. These keys are exposed to any user who inspects network requests or application source files.
A commonly observed pattern resembles the following:
```javascript
// Project-scoped key hardcoded directly in client-side source
const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
```
```javascript
// Service-account key hardcoded directly in client-side source
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";
```
The prefix “sk-proj-“ typically represents a project-scoped secret key associated with a specific project environment, inheriting its usage limits and billing configuration. The “sk-svcacct-“ prefix generally denotes a service account–based key intended for automated backend services or system integrations.
Regardless of type, both keys function as privileged authentication tokens that enable direct access to AI inference services and billing resources. When embedded in client-side code, they are fully exposed and can be immediately harvested and misused by threat actors.
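To illustrate how low the bar for harvesting is, here is a minimal, hypothetical sketch (the URL and function name are placeholders, not artifacts from CRIL’s investigation) of the kind of scan that downloads a public JavaScript bundle and greps it for the key prefixes described above. Node 18+ is assumed for the global fetch():

```javascript
// Hypothetical illustration: download a public client-side asset and search it
// for OpenAI-style key prefixes (sk-proj-..., sk-svcacct-...).
const KEY_PATTERN = /sk-(?:proj|svcacct)-[A-Za-z0-9_-]{20,}/g;

async function scanBundle(url) {
  const res = await fetch(url);                 // fetch the JS bundle
  const body = await res.text();
  const hits = body.match(KEY_PATTERN) || [];
  // Truncate before printing so full secrets never end up in logs.
  return hits.map((key) => key.slice(0, 14) + "...");
}

// Placeholder URL for illustration only.
scanBundle("https://example.com/static/app.bundle.js")
  .then((keys) => console.log(`Candidate keys found: ${keys.length}`, keys))
  .catch(console.error);
```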
Public GitHub repositories have emerged as one of the most reliable discovery surfaces for exposed ChatGPT API keys. During development, testing, and rapid prototyping, developers frequently hardcode OpenAI credentials into source code, configuration files, or .env files—often with the intent to remove or rotate them later. In practice, these secrets persist in commit history, forks, and archived repositories.
CRIL analysis identified over 5,000 GitHub repositories containing hardcoded OpenAI API keys. These exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. In many cases, the repositories were actively maintained or recently updated, increasing the likelihood that the exposed keys were still valid at the time of discovery.
Notably, the majority of exposed keys were configured to access widely used ChatGPT models, making them particularly attractive for abuse. These models are commonly integrated into production workflows, increasing both their exposure rate and their value to threat actors.
Once committed to GitHub, API keys can be rapidly indexed by automated scanners that monitor new commits and repository updates in near real time. This significantly reduces the window between exposure and exploitation, often to hours or even minutes.
Beyond source code repositories, CRIL observed widespread exposure of ChatGPT API keys directly within production websites. In these cases, API keys were embedded in client-side JavaScript bundles, static assets, or front-end framework files, making them accessible to any user inspecting the application.
CRIL identified approximately 3,000 public-facing websites exposing ChatGPT API keys in this manner. Unlike repository leaks, which may be removed or made private, website-based exposures often persist for extended periods, continuously leaking secrets to both human users and automated scrapers.
These implementations frequently invoke ChatGPT APIs directly from the browser, bypassing backend mediation entirely. As a result, exposed keys are not only visible but actively used in real time, making them trivial to harvest and immediately abuse.
As with GitHub exposures, the most referenced models were highly prevalent ChatGPT variants used for general-purpose inference, indicating that these keys were tied to live, customer-facing functionality rather than isolated testing environments. These models strike a balance between capability and cost, making them ideal for high-volume abuse such as phishing content generation, scam scripts, and automation at scale.
Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.
Kaustubh Medhe, CPO, Cyble
Threat actors continuously monitor public websites, GitHub repositories, forks, gists, and exposed JavaScript bundles to identify high-value secrets, including OpenAI API keys. Once discovered, these keys are rapidly validated through automated scripts and immediately operationalized for malicious use.
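Validation usually amounts to a single read-only API call. The hypothetical helper below sketches the idea; it is equally useful to defenders triaging a reported exposure, since a live key needs immediate rotation while a 401 response means it has already been revoked:

```javascript
// Check whether a leaked OpenAI key is still active by listing available models.
// HTTP 200 = key works (rotate immediately); HTTP 401 = already revoked.
async function isKeyLive(apiKey) {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.status === 200;
}
```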
Compromised keys are typically abused to:
In certain cases, CRIL, using Cyble Vision, also identified several of these keys that originated from such exposures and were subsequently leaked, as noted in our spotlight mentions (see Figure 2 and Figure 3).


Unlike conventional infrastructure services, AI API activity is often not integrated into centralized logging, SIEM monitoring, or anomaly detection frameworks. As a result, malicious usage can persist undetected until organizations encounter billing spikes, quota exhaustion, degraded service performance, or operational disruptions.
The exposure of ChatGPT API keys across roughly 3,000 websites and more than 5,000 GitHub repositories highlights a systemic security blind spot in the AI adoption lifecycle. These credentials are actively harvested, rapidly abused, and difficult to trace once compromised.
As AI becomes embedded in business-critical workflows, organizations must abandon the perception that AI integrations are experimental or low risk. AI credentials are production secrets and must be protected accordingly.
Failure to secure them will continue to expose organizations to financial loss, operational disruption, and reputational damage.
SOC teams should take the initiative to proactively monitor for exposed endpoints using monitoring tools such as Cyble Vision, which provides users with real-time alerts and visibility into compromised endpoints.
This, in turn, allows them to identify which endpoints and credentials were exposed and to secure them as soon as possible.
Eliminate Secrets from Client-Side Code
AI API keys must never be embedded in JavaScript or front-end assets. All AI interactions should be routed through secure backend services.
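As a rough sketch of that pattern, assuming a Node.js/Express backend (the route, port, and model name below are illustrative choices, not a prescribed design), the key lives only in a server-side environment variable and the browser talks exclusively to your own API:

```javascript
// Illustrative backend proxy: the browser posts to /api/chat on your server,
// and only the server ever holds the OpenAI key (via an environment variable).
const express = require("express");
const app = express();
app.use(express.json());

app.post("/api/chat", async (req, res) => {
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // server-side secret
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",                    // illustrative model choice
      messages: req.body.messages || [],
    }),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```

Authentication, per-user rate limiting, and logging can then be enforced at this proxy layer, which is also where the usage monitoring discussed below naturally lives.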
Enforce GitHub Hygiene and Secret Scanning
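Beyond enabling GitHub’s built-in secret scanning and push protection, a lightweight local check can catch keys before they are ever committed. The script below is a minimal, hypothetical pre-commit hook (not an official tool) that scans staged changes for the prefixes discussed earlier:

```javascript
#!/usr/bin/env node
// Hypothetical pre-commit hook: abort the commit if staged changes contain a
// string matching an OpenAI-style key. Install via .git/hooks/pre-commit.
const { execSync } = require("child_process");

const KEY_PATTERN = /sk-(?:proj|svcacct)-[A-Za-z0-9_-]{20,}/;
const stagedDiff = execSync("git diff --cached -U0", { encoding: "utf8" });

if (KEY_PATTERN.test(stagedDiff)) {
  console.error("Commit blocked: a possible OpenAI API key was found in staged changes.");
  process.exit(1); // non-zero exit code aborts the commit
}
```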
Apply Least Privilege and Usage Controls
Implement Secure Key Management Practices
Monitor AI Usage Like Cloud Infrastructure
Establish baselines for normal AI API usage and alert on anomalies such as spikes, unusual geographies, or unexpected model usage.
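One simple way to operationalize that advice, assuming per-request token counts are already being logged at the backend (the threshold below is an arbitrary illustrative choice), is to compare each day’s usage against a rolling average and flag large deviations:

```javascript
// Toy usage-spike check: flag any day whose token consumption exceeds
// `factor` times the average of all preceding days.
function flagUsageSpikes(dailyTokenCounts, factor = 3) {
  const alerts = [];
  for (let day = 1; day < dailyTokenCounts.length; day++) {
    const history = dailyTokenCounts.slice(0, day);
    const baseline = history.reduce((a, b) => a + b, 0) / history.length;
    if (baseline > 0 && dailyTokenCounts[day] > factor * baseline) {
      alerts.push({ day, usage: dailyTokenCounts[day], baseline });
    }
  }
  return alerts;
}

// Example: a stolen key suddenly driving heavy inference on the last day.
console.log(flagUsageSpikes([12000, 15000, 13000, 14000, 90000]));
```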
The post When AI Secrets Go Public: The Rising Risk of Exposed ChatGPT API Keys appeared first on Cyble.
Cyble – Read More

Cisco Talos is back with another inside look at the people who keep the internet safe. This time, Amy chats with Ryan Liles, who bridges the gap between Cisco’s product teams and the third-party testing labs that put Cisco products through their paces. Ryan pulls back the curtain on the delicate dance of technical diplomacy, how he keeps his cool when the stakes are high, and how speaking up has helped him reshape industry standards. Plus, get a glimpse of the hobbies that keep him recharged when he’s off the clock.
Amy Ciminnisi: Ryan, you shared that you are on the Vulnerability Research and Discovery team, but you work in a little bit of a different niche. Can you talk a little bit about what you do?
Ryan Liles: My primary role is to work with all of the Cisco product teams. So anybody that Talos feeds security intelligence to — Firewall, Email, Endpoint — anybody that we write content for, I work with their product teams to help get their products tested externally. Cisco can come out all day and say our products are the best at what they do, but no one’s going to take our word for it. So we have to get someone else to say that for us, and that’s where I come in.
AC: Third-party testing involves coordinating with external organizations and standards groups. You mentioned it can be difficult sometimes and you have to choose your words carefully. What are some of the biggest challenges you face when working across these various groups? Do you have a particular method of overcoming them?
RL: The reason I fell into this role at Cisco is because of all the contacts I made while working at NSS Labs. The third-party testing industry for security appliances is like a lot of the rest of the security industry — very small. Even though there’s a large dollar amount tied to it in the marketplace, the number of people in it is very small. So you’re going to run into the same personalities over and over again throughout your career in security. Because I try to generally be friendly with those people and keep my network alive, I have a lot of personal relationships that I can leverage when it comes to having difficult conversations.
By difficult conversations, I mean if we’ve found a bug in the product or if a third-party test lab acquired our product through means not involving us and did some testing that didn’t turn out great, I can have the conversations with them where we discuss both technically what was their testing methodology and how did they deploy the products. If there were instances where we feel maybe they didn’t deploy the product correctly or there’s some flaws in their methodology, being able to have that kind of discussion with a test lab, while not frustrating them, takes a lot of diplomatic skills. I think that’s the biggest contributor to my success in this role — being able to have those conversations, leaving emotion out of things, and just sticking to the technical facts and saying, here’s what went wrong, here’s what went right, let’s figure out the best way to fix this. That has really contributed to how Cisco and Talos interface with third-party testing labs and maintain those relationships.
Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos.
Cisco Talos Blog – Read More
In enterprise SaaS, unclear security decisions carry real cost. False positives disrupt customers, while missed threats expose the business.
A Fortune 500 cloud provider addressed this risk by embedding ANY.RUN into SOC investigations, giving analysts the behavioral evidence needed to reduce escalations, improve triage confidence, and make proportionate response decisions at scale.
The organization is a Fortune 500 enterprise SaaS provider headquartered in North America, supporting enterprise customers across multiple regions and regulatory environments, with a workforce in the tens of thousands.
When we spoke with the security engineer, we expected the usual story: missing visibility, gaps in tooling, not enough telemetry. But the discussion quickly showed that the real problem was somewhere else.
The issue wasn’t seeing what was happening. The team already had plenty of signals coming in every day: authentication events, API activity, admin actions, and a constant flow of partner and integration traffic. The issue was that most of it was legitimate, which made the dangerous moments harder to prove early.
On the surface, nothing looked wrong. But unclear alerts were consuming more and more of our time. We were drowning in uncertainty. For a company serving global customers, that level of ambiguity wasn’t acceptable.
During our discussion, it became clear that the pressure point was volume + ambiguity.
Key challenges:

Once we clarified the challenges, the priority became clear: make early triage decisions more certain, without increasing operational risk in a multi-tenant SaaS environment.
The team focused on:
To reach the clarity they were aiming for, the team needed a way to introduce reliable behavioral evidence into early-stage investigations, without disrupting existing SOC workflows or forcing premature automation.
ANY.RUN closed this gap by giving analysts a safe way to observe the real behavior behind a suspicious file or link, replacing guesswork based on reputation, static indicators, or incomplete external signals with direct, controlled evidence.
The biggest change was moving from ‘this looks suspicious’ to ‘this is what it actually does.’ That kind of controlled, repeatable proof is what makes confident decisions possible, especially when threats originate outside your perimeter.
Rather than accelerating response blindly, this approach helped the SOC make earlier, calmer, and more proportional decisions within the same operational model.
Phishing was one of the clearest use cases for the new approach. Many alerts weren’t obviously malicious, but they couldn’t be ignored either, especially when they involved links, attachments, or multi-step redirected flows coming from outside the company’s perimeter.
With behavior-based validation provided by the ANY.RUN sandbox, Tier-1 no longer had to rely on “looks suspicious” signals to make the first call. Analysts could safely interact with artifacts, observe what actually happened, and capture the full chain: redirects, credential capture, payload delivery, or follow-on behavior.
In practice, this made a visible difference: in roughly 90% of cases, analysts were able to surface the full attack chain within about 60 seconds, turning unclear alerts into evidence-backed decisions early in the workflow.

A big part of the improvement also came from automated interactivity. Instead of spending time manually clicking through the steps attackers use to slow investigations (CAPTCHAs, multi-hop redirects, or links hidden behind QR codes), analysts could let the sandbox mimic user behavior and capture the full sequence safely. That meant faster verdicts, less friction, and more confidence at Tier-1 without relying on guesswork.

These shifts improved day-to-day operations:
While behavioral evidence clarified what a threat does, the team also needed faster answers to what it means in the broader landscape.
To close that gap, they decided to extend their workflow with ANY.RUN’s Threat Intelligence capabilities, adding immediate context to artifacts discovered during triage.
Threat Intelligence Lookup helped analysts quickly determine:
We notice how our threat hunting is getting more grounded and faster to validate. When a hunt intersects with external artifacts, phishing payloads, suspicious links, or malware samples, we can confirm the behavior and enrich the hypothesis quickly, instead of spending time on patterns that stay theoretical.
At the same time, Threat Intelligence Feeds delivered behavior-verified indicators that could be correlated inside existing detection and monitoring pipelines, strengthening visibility without adding noise.

Together, these solutions allowed the SOC to move from isolated alert handling toward context-aware investigation, where decisions were supported not only by observed behavior, but also by real-world threat activity.
We started using TI Feeds as an enrichment layer on top of our existing threat intelligence stack. What stood out for us is that the indicators are tied to sandbox-verified behavior, so we’re not reacting to blind IOCs, we’re adding context we can actually trust.
As a result, analysts spent less time searching for background information and more time responding with clarity and confidence.
As the new workflow stabilized, the team began to see consistent improvements across investigation quality, escalation patterns, and overall SOC efficiency:
Tangible Gains Across SOC

Beyond individual investigations, SOC managers began to notice improvements in how decisions were communicated, reviewed, and justified across the organization.
With clearer behavioral evidence and immediate threat context, plus auto-generated investigation reports and built-in collaboration capabilities, updates to stakeholders became more straightforward, and post-incident analysis required far less backtracking.

Cases were easier to standardize across regions and shifts because the same evidence, context, and artifacts were captured and shared in a consistent way. Escalations increasingly arrived with supporting proof rather than open questions, which reduced “back-and-forth” and helped keep response actions proportional to real risk.
From a manager’s perspective, the biggest change was consistency. Decisions were easier to stand behind because the evidence and reporting were already there, and teams could collaborate on the same case without losing context.
Importantly, this progress didn’t require changing the overall security strategy. Instead, it reduced friction inside an already mature SOC model, helping ensure that when action was taken, it was taken for the right reasons.
By embedding ANY.RUN into daily SOC operations, this Fortune 500 SaaS provider reduced ambiguity in early triage and strengthened decision-making across the entire workflow.
We just stopped losing time to uncertainty. Now we can confirm what’s happening faster and escalate only when it actually makes sense.
With behavioral evidence, immediate threat context, and consistent reporting built into investigations, the SOC became more predictable, more efficient, and better aligned with the need for proportional response at enterprise scale.
ANY.RUN is part of modern SOC workflows, integrating into existing processes and strengthening the full operational cycle across Tier 1, Tier 2, and Tier 3.
It supports every stage of investigation: from exposing real behavior through safe detonation, to enriching findings with broader threat context, to delivering continuous intelligence that helps teams move faster and make confident decisions.
Today, more than 600,000 security professionals and 15,000 organizations rely on ANY.RUN to accelerate triage, reduce unnecessary escalations, and stay ahead of evolving phishing and malware campaigns.
Check how ANY.RUN can improve investigation clarity and speed in your SOC
Behavioral analysis allows analysts to observe what a suspicious file or link actually does in a controlled environment. This removes guesswork, enables earlier confident decisions at Tier-1, and reduces unnecessary escalations.
Yes. ANY.RUN is designed to fit into mature SOC environments without requiring workflow redesign, supporting investigation, enrichment, and reporting across Tier-1, Tier-2, and Tier-3 operations.
In many real investigations, the full attack chain can be exposed within seconds through automated interactivity and behavioral observation, allowing faster evidence-based classification.
Security teams across enterprises, MSSPs, and SOC organizations worldwide rely on ANY.RUN to accelerate triage, improve investigation clarity, and support proportional response to modern threats.
The post Fortune 500 Tech Enterprise Speeds up Triage and Response with ANY.RUN’s Solutions appeared first on ANY.RUN’s Cybersecurity Blog.
ANY.RUN’s Cybersecurity Blog – Read More

For years, many government contractors treated cybersecurity compliance as a technical checklist: important, certainly, but often siloed within IT departments. That mindset is no longer tenable. The U.S. Department of Justice (DOJ) has announced that cybersecurity representations to the federal government are now squarely within the enforcement core of the False Claims Act (FCA). What began in October 2021 as the Civil Cyber-Fraud Initiative has matured into a sustained and expanding enforcement priority.
The numbers alone signal that this is not a passing trend. In January 2026, the DOJ announced that it recovered $52 million through nine cybersecurity-related FCA settlements in the fiscal year ending September 2025. Those recoveries formed part of a record-setting $6.8 billion in total False Claims Act recoveries that year.
Even more striking, DOJ reported that cybersecurity fraud resolutions have more than tripled in each of the past two years, evidence of what Deputy Assistant Attorney General Brenna Jenny described as a “significant upward trajectory.”
When the DOJ launched the Civil Cyber-Fraud Initiative in October 2021, it stated that it would use the FCA, complete with treble damages and statutory penalties, to pursue entities that knowingly submit false claims tied to cybersecurity obligations. The misconduct categories were specific and practical:
At the time, some viewed the initiative as an experiment. That view is no longer credible. Since October 2021, the DOJ has settled fifteen civil cyber-fraud cases under the FCA. More than half of those settlements were announced during the current administration, surpassing the total from the earlier years following the initiative’s launch. Civil cyber-fraud enforcement is now part of the DOJ’s routine FCA portfolio, not an edge case.
In remarks delivered on January 28, 2026, at the American Conference Institute’s Advanced Forum on False Claims and Qui Tam Enforcement, Jenny reaffirmed the administration’s commitment to this path. As the political official overseeing nationwide False Claims Act enforcement, she emphasized both the scale of recent recoveries and the continuing focus on cybersecurity.
One of the most important clarifications in Jenny’s remarks addressed a persistent misconception: FCA cybersecurity cases are “not about data breaches,” but are instead “premised on misrepresentations.” That distinction matters.
Breaches occur even in well-managed environments. The DOJ has signaled that it is not interested in punishing companies simply because they were victims of sophisticated attacks. Instead, the FCA becomes relevant when an organization tells the government it complies with cybersecurity requirements and, in reality, does not.
Under the False Claims Act, liability turns on knowingly false or misleading claims for payment. In the cybersecurity context, this can include explicit certifications of compliance or even implied representations embedded in invoices and contract submissions. If a contractor seeks payment while failing to meet required cybersecurity standards, the DOJ may argue that the claim itself carries an implied assertion of compliance.
That theory has teeth, particularly when paired with the FCA’s treble damages framework.
The majority of DOJ’s cybersecurity-related FCA settlements, nine out of fifteen, have involved U.S. Department of Defense (DoD) cybersecurity requirements. The DoD recently finalized the Cybersecurity Maturity Model Certification (CMMC), introducing structured and, for many contractors, third-party verification requirements. These developments create more objective benchmarks against which representations can be tested.
Civilian agencies are moving in the same direction. In January 2026, the General Services Administration issued a procedural guide governing the protection of Controlled Unclassified Information (CUI) on nonfederal contractor systems. Like the CMMC framework, it contemplates extensive third-party assessments. Across the executive branch, scrutiny of contractor cybersecurity programs is intensifying.
As federal dollars increasingly flow with cybersecurity conditions attached, across defense contractors, IT service providers, healthcare benefit administrators, research universities, and even entities adjacent to prime contractors, the FCA provides the DOJ with a powerful lever to enforce those conditions.
No discussion of the False Claims Act is complete without acknowledging the central role of whistleblowers. Qui tam provisions allow private individuals to bring FCA claims on behalf of the government and potentially receive up to thirty percent of any recovery. Defendants are also responsible for the whistleblower’s attorneys’ fees.
Jenny noted that whistleblowers have continued to play a large role in cyber-fraud cases. That should not surprise anyone familiar with FCA enforcement. Cybersecurity compliance failures often surface internally before they become public. When employees believe their concerns are ignored, or worse, concealed, the FCA offers a direct channel to the DOJ.
Organizations that treat internal cybersecurity complaints as routine HR matters underestimate the risk. A credible internal reporting system, thorough investigation processes, and transparent remediation efforts are not just governance best practices; they are FCA risk mitigation tools.
In some circumstances, companies may need to evaluate disclosure obligations to the government, whether mandatory or voluntary. DOJ policies have increasingly emphasized cooperation credit in the cybersecurity arena, making early, good-faith engagement a strategic consideration.
The DOJ’s approach treats cybersecurity as more than a technical discipline. It is a representation issue, a contract performance issue, and ultimately an FCA issue. That reality demands cross-functional alignment.
Organizations doing business with the federal government should ensure:
These elements are not aspirational. They form the evidentiary record that may determine whether a dispute becomes an expensive False Claims Act investigation.
The DOJ’s $6.8 billion in fiscal year 2025 False Claims Act recoveries, including $52 million from cybersecurity settlements, mark a new shift. Cybersecurity is now central to DOJ FCA enforcement, not a secondary issue.
For contractors and grant recipients, accuracy in cybersecurity representations is critical. Under the False Claims Act, what an organization tells the government about its security posture must align with reality. Gaps between certification and practice can quickly escalate into costly investigations.
Strengthening visibility across attack surfaces, monitoring emerging threats, and validating controls are essential steps in reducing FCA risk. Platforms like Cyble, recognized in Gartner Peer Insights for Threat Intelligence, help organizations maintain continuous intelligence, detect exposures early, and support defensible cybersecurity governance.
Book a free demo with Cyble to see how AI-powered threat intelligence can help your organization stay ahead of risk and confidently support its cybersecurity commitments.
The post The US False Claims Act Becomes a Cybersecurity Enforcement Engine appeared first on Cyble.
Cyble – Read More

RESEARCH DISCLAIMER:
This analysis examines the most recent and actively maintained repositories of OTP & SMS bombing tools to understand current attack capabilities and targeting patterns. All statistics represent observed patterns within our research sample and should be interpreted as indicative trends rather than definitive totals of the entire OTP bombing ecosystem. The threat landscape is continuously evolving with new tools and repositories emerging regularly.
Cyble Research and Intelligence Labs (CRIL) identified sustained development activity surrounding SMS, OTP, and voice-bombing campaigns, with evidence of technical evolution observed through late 2025 and continuing into 2026. Analysis of multiple development artifacts reveals progressive expansion in regional targeting, automation sophistication, and attack vector diversity.
Recent activity observed through September and October 2025, combined with new application releases in January 2026, indicates ongoing campaign persistence. The campaigns demonstrate technical maturation from basic terminal implementations to cross-platform desktop applications with automated distribution mechanisms and advanced evasion capabilities.
CRIL’s investigation identified coordinated abuse of authentication endpoints across the telecommunications, financial services, e-commerce, ride-hailing, and government sectors, collectively targeting infrastructure in West Asia, South Asia, and Eastern Europe.
What began in the early 2020s as isolated pranks among tech-savvy individuals has evolved into a sophisticated ecosystem of automated harassment tools. SMS bombing – the practice of overwhelming a phone number with a barrage of automated text messages – initially emerged as rudimentary Python scripts shared on coding forums.
These early implementations were crude, targeting only a handful of regional service providers and using manually collected API endpoints. Since then, the digital threat landscape has transformed dramatically, driven by the proliferation of public code repositories, the commoditization of attack tools, and the increasing sophistication of threat actors.
Our investigation into this evolving threat began with routine monitoring of malicious code repositories and underground discussion forums. What we discovered was far more extensive: a well-organised, rapidly expanding ecosystem characterized by cross-platform tool development, international collaboration among threat actors, and an alarming trend toward commercialization.
Malicious actors have weaponised GitHub as a distribution platform for SMS and OTP-bombing tools, creating hundreds of malicious repositories since 2022. Our investigation analyzed around 20 of the most active and recently maintained repositories to characterize current attack capabilities.
Across these repositories, there are ~843 vulnerable, catalogued API endpoints from legitimate organizations: e-commerce platforms, financial institutions, government services, and telecommunications providers.
Each endpoint lacks adequate rate limiting or CAPTCHA protection, enabling automated exploitation. Target lists span seven geographic regions, with concentrated focus on India, Iran, Turkey, Ukraine, and Eastern Europe.
Repository maintainers provide tools in seven programming languages and frameworks, from simple Python scripts to cross-platform GUI applications. This diversity enables attackers with minimal technical knowledge to execute harassment campaigns without understanding the underlying exploitation mechanics.
Our analysis of active SMS bombing repositories provides insight into the true scale and sophistication of this threat landscape:

Iran-focused endpoints dominate the observed sample at 61.68% (~520 endpoints), followed by India at 16.96% (~143 endpoints). This concentration suggests coordinated development efforts targeting specific telecommunications infrastructure.

Accessibility and Threat Escalation
In parallel with the open-source repository ecosystem, a thriving commercial sector of web-based SMS-bombing services exists.
These platforms represent a significant escalation in threat accessibility, removing all technical barriers to conducting attacks. Unlike repository-based tools that require users to download code, configure environments, and execute commands, these web services offer point-and-click interfaces accessible from any browser or mobile device.
Deceptive Marketing Practices
Our analysis identified numerous active web services operating openly via search-engine-indexed domains. These services employ sophisticated marketing strategies, positioning themselves as ‘prank tools’ or ‘SMS testing services’ while providing the exact functionality required for harassment campaigns.

Data Harvesting and Resale Operations
Although these websites present themselves as benign prank tools, they operate a predatory data-collection model in which users’ phone numbers are systematically harvested for secondary exploitation. These collected contact numbers are subsequently used for spam campaigns and scam operations, or monetized through resale as lead lists to third-party spammers and scammers. This creates a dual-threat model: users inadvertently expose both their targets and themselves to ongoing spam victimization, while platform operators profit from both service fees and the commodification of harvested contact data.
SMS bombing attacks follow a predictable workflow that exploits weaknesses in API design and implementation.

Attackers identify vulnerable OTP endpoints through multiple techniques:
Industry Sector Targeting Patterns
Our analysis reveals systematic targeting across multiple industry verticals, with telecommunications and authentication services comprising nearly half of all observed endpoints.

Modern SMS bombing tools require minimal setup:
Attacker Technology Stack Evolution
A detailed analysis of the ~20 repositories reveals significant technical sophistication and platform diversification:

Once configured, the tool initiates a flood of legitimate-looking API requests.
Attack Vector Prevalence Analysis
Our analysis reveals the distribution of attack methods across the ~843 observed endpoints:

Analysis of the ~20 repositories reveals widespread adoption of anti-detection measures designed to bypass common security controls.

For end users targeted by SMS bombing attacks, the consequences include:
| Impact Type | Description |
|---|---|
| Device Overload | Hundreds or thousands of incoming messages degrade device performance. |
| Communication Disruption | Legitimate messages are buried under spam, potentially leading to missed important notifications. |
| Inbox Capacity | SMS storage limits reached, preventing the receipt of new messages. |
| Battery Drain | Constant notifications deplete the affected device’s battery. |
| MFA Fatigue | Overwhelming authentication requests create security blind spots. |
| Data Harvesting | Prank sites for SMS bombing likely sell or reuse data for fraud or scams. |
Businesses whose APIs are exploited face multiple challenges:
| Impact Category | Impact Type | Details |
|---|---|---|
| Financial Impact | Cost per OTP SMS | $0.05 to $0.20 per message |
| | Attack cost (10,000 messages) | $500 to $2,000 per attack |
| | Unprotected endpoints | Monthly SMS bills can escalate dramatically |
| Operational Impact | User access issues | Legitimate users are unable to receive verification codes |
| | Customer service | Overwhelmed with complaints |
| | SMS delivery | Delays affecting all customers |
| | Regulatory compliance | Potential violations if users cannot access accounts |
| Reputational Impact | Media coverage | Negative social media coverage |
| | Customer trust | Erosion of customer confidence |
| | Brand damage | Association with spam and poor security |
| | Competitive position | Potential loss of business to competitors |
Based on analysis of successful bypass techniques across ~20 repositories, the following mitigation strategies are prioritized by effectiveness against observed attack patterns. Implementation of these controls addresses the primary exploitation vectors identified in our research.
CRITICAL Priority
| 1. Implement Comprehensive Rate Limiting | |
|---|---|
| Rationale | 67% of targeted endpoints lack basic rate controls |
| Implementation | Per-IP Limiting: Maximum 5 OTP requests per hour. Per-Phone Limiting: Maximum 3 OTP requests per 15 minutes. Per-Session Limiting: Maximum 10 total verification attempts |
| Evidence | Would have blocked 81% of observed attack patterns |

| 2. Deploy Dynamic CAPTCHA | |
|---|---|
| Rationale | 33% of tools exploit hardcoded reCAPTCHA tokens |
| Implementation | Use reCAPTCHA v3 with dynamic scoring. Rotate site keys regularly. Implement challenge escalation for suspicious behaviour |
| Evidence | Static CAPTCHA is defeated in most of the repositories |

| 3. SSL/TLS Verification Enforcement | |
|---|---|
| Rationale | 75% of tools disable certificate validation to bypass security controls |
| Implementation | Enable HSTS (HTTP Strict Transport Security) headers. Implement certificate pinning for mobile applications. Monitor and alert on certificate validation errors |
| Evidence | The most common evasion technique observed across repositories |
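To make the rate-limiting figures above concrete, here is a minimal, framework-agnostic Python sketch of a sliding-window limiter enforcing the per-IP (5 per hour) and per-phone (3 per 15 minutes) thresholds from control 1. Function and variable names are illustrative assumptions; a production deployment would typically back the counters with a shared store such as Redis rather than process memory.

```python
# Minimal sliding-window limiter for the thresholds listed under control 1:
# 5 OTP requests per IP per hour, 3 per phone number per 15 minutes.
import time
from collections import defaultdict, deque

LIMITS = {
    "ip": (3600, 5),     # (window in seconds, max requests)
    "phone": (900, 3),
}

_history = defaultdict(deque)  # "scope:value" -> timestamps of recent requests

def allow_otp_request(ip, phone, now=None):
    """Return True if this OTP request is inside both limits, else False."""
    now = time.time() if now is None else now
    keys = {"ip": f"ip:{ip}", "phone": f"phone:{phone}"}
    for scope, key in keys.items():
        window, limit = LIMITS[scope]
        q = _history[key]
        while q and now - q[0] > window:  # discard timestamps outside the window
            q.popleft()
        if len(q) >= limit:
            return False                  # over the limit: reject, do not record
    for key in keys.values():             # record only requests that passed both checks
        _history[key].append(now)
    return True

if __name__ == "__main__":
    for i in range(5):
        print(i, allow_otp_request("203.0.113.7", "+15551234567"))
    # The first three calls pass; the per-phone limit rejects the rest.
```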
HIGH Priority
| Control | Rationale | Implementation Guidance |
|---|---|---|
| 4. User-Agent Validation | 58.3% of tools randomize User-Agent headers to evade detection | Maintain a whitelist of legitimate clients. Cross-validate User-Agent with other headers. Flag mismatched browser/OS combinations |
| 5. Request Pattern Analysis | Automated tools exhibit consistent timing patterns, unlike human behavior | Monitor for sub-100-ms request intervals. Detect sequential API endpoint testing. Flag multiple failed CAPTCHA attempts |
| 6. Phone Number Validation | Prevents abuse of number generation algorithms and invalid targets | Validate number format and region before dispatching OTPs. Reject invalid or algorithmically generated numbers |
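The request-pattern signals in control 5 (sub-100-ms intervals, sequential endpoint probing) can be approximated with a simple log-analysis pass. The sketch below assumes access-log events with timestamp, client_ip, and path fields; the field names and thresholds are assumptions, not a specific product's schema.

```python
# Rough log-analysis pass for the signals in control 5: sub-100 ms request
# intervals and probing of many distinct OTP endpoints from one client.
from collections import defaultdict

def flag_suspicious_clients(events, min_interval=0.1, min_unique_paths=5):
    """events: iterable of dicts with 'timestamp' (epoch seconds),
    'client_ip', and 'path'. Returns client IPs with bot-like patterns."""
    by_ip = defaultdict(list)
    for event in sorted(events, key=lambda e: e["timestamp"]):
        by_ip[event["client_ip"]].append(event)

    flagged = set()
    for ip, requests in by_ip.items():
        gaps = [b["timestamp"] - a["timestamp"]
                for a, b in zip(requests, requests[1:])]
        # Mostly sub-100 ms gaps indicate automation rather than human typing
        rapid_fire = bool(gaps) and sum(g < min_interval for g in gaps) > len(gaps) / 2
        # Hitting many distinct OTP endpoints suggests sequential endpoint testing
        endpoint_scan = len({r["path"] for r in requests}) >= min_unique_paths
        if rapid_fire or endpoint_scan:
            flagged.add(ip)
    return flagged
```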
| Mitigation Area | Recommended Actions |
|---|---|
| SMS Cost Monitoring | Set spending alerts at $100, $500, and $1,000 thresholds. Review daily SMS volumes for anomalies. Identify and investigate anomalous spikes immediately (a minimal alerting sketch follows these tables) |
| Multi-Factor Authentication Hardening | Offer authenticator-app alternatives to SMS-based OTP. Apply per-account limits on OTP requests and resends. Require step-up verification after repeated failed attempts |
| Vendor Security Requirements | Mandate rate-limiting requirements in service-level agreements. Require CAPTCHA implementation on all OTP endpoints. Request monthly security and abuse reports. Include SMS abuse liability clauses in contracts |
| Protection Area | Recommended Actions |
|---|---|
| Number Protection | Request carrier assistance in blocking source numbers. Monitor all accounts for unauthorized access attempts |
| MFA Best Practices | Prefer authenticator apps (Google Authenticator, Authy) over SMS. Never approve unexpected or unsolicited MFA prompts |
| Incident Response | Document attack timing, volume, and sender information. File police reports for harassment or threats. Contact the service provider immediately if SMS bombing occurs |
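As a rough illustration of the spending alerts recommended in the SMS Cost Monitoring row above, here is a minimal Python sketch. The $100, $500, and $1,000 thresholds come from the table; the per-message cost and print-based alert delivery are assumptions for illustration.

```python
# Sketch of the spending alerts recommended above. Thresholds come from the
# table; the per-message cost and print-based alerting are assumptions.
ALERT_THRESHOLDS = [100, 500, 1000]   # USD
COST_PER_SMS = 0.10                   # assumed mid-range price per OTP SMS
_already_alerted = set()

def check_sms_spend(messages_sent_today):
    """Alert once per threshold as estimated daily OTP SMS spend crosses it."""
    spend = messages_sent_today * COST_PER_SMS
    for threshold in ALERT_THRESHOLDS:
        if spend >= threshold and threshold not in _already_alerted:
            _already_alerted.add(threshold)
            print(f"ALERT: daily OTP SMS spend ${spend:,.2f} crossed ${threshold}")
    return spend

if __name__ == "__main__":
    for volume in (500, 1_200, 6_000, 12_000):
        check_sms_spend(volume)   # fires the $100, $500, and $1,000 alerts in turn
```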
The SMS/OTP bombing threat landscape has matured significantly between 2023 and 2026, evolving from simple harassment tools into sophisticated attack platforms with commercial distribution. Our analysis of ~20 repositories containing ~843 endpoints reveals systematic targeting across multiple industries and regions, with concentration in Iran (61.68%) and India (16.96%).
The emergence of Go-based high-performance tools, cross-platform GUI applications, and Telegram bot interfaces indicates the professionalization of this attack vector. With 75% of analyzed tools implementing SSL bypass and 58% using User-Agent randomization, defenders face sophisticated adversaries simultaneously employing multiple evasion techniques.
Organizations must prioritize comprehensive rate limiting, dynamic CAPTCHA implementation, and robust monitoring to achieve the projected 85%+ attack prevention effectiveness. The financial impact—potentially exceeding $50,000 monthly for unprotected endpoints—justifies immediate investment in defensive measures.
As the ecosystem continues to evolve, continuous monitoring of underground forums, repository activity, and emerging attack patterns remains essential for maintaining effective defenses against this persistent threat.
| Tactic | Technique ID | Technique Name |
|---|---|---|
| Initial Access | T1190 | Exploit Public-Facing Application |
| Execution | T1059.006 | Command and Scripting Interpreter: Python |
| Defense Evasion | T1036.005 | Masquerading: Match Legitimate Name or Location |
| Defense Evasion | T1027 | Obfuscated Files or Information |
| Defense Evasion | T1553.004 | Subvert Trust Controls: Install Root Certificate |
| Defense Evasion | T1090.002 | Proxy: External Proxy |
| Credential Access | T1110.003 | Brute Force: Password Spraying |
| Credential Access | T1621 | Multi-Factor Authentication Request Generation |
| Impact | T1499.002 | Endpoint Denial of Service: Service Exhaustion Flood |
| Impact | T1498.001 | Network Denial of Service: Direct Network Flood |
| Impact | T1496 | Resource Hijacking |
The post SMS & OTP Bombing Campaigns: Evolving API Abuse Targeting Multiple Regions appeared first on Cyble.
Cyble – Read More
How long would it take your team to realize ransomware is already running?
Two newly identified ransomware families are already causing real business disruption. Both can halt operations fast while reducing visibility through stealth or cleanup activity, shrinking the time teams have to detect and contain an attack.
Here’s what you should know about BQTLock and GREENBLOOD, and how your team can detect and contain them before the impact escalates.
TL;DR
BQTLock is a ransomware-linked threat designed to hide in normal system activity, gain elevated privileges, and quietly prepare for deeper impact before defenders can react.
Instead of triggering obvious alerts immediately, it blends into trusted Windows processes and delays visible damage. This makes early detection difficult and increases the chance of data exposure, operational disruption, and financial loss for affected organizations.
Using the ANY.RUN interactive sandbox, analysts were able to observe the full behavioral chain in real time.
See full execution chain of BQTLock

The analysis revealed that the malware:
Once privilege escalation is complete, the threat moves beyond stealth and into active harm, including:

This sequence shows how quickly a seemingly quiet infection can evolve into a full security and compliance incident.
GREENBLOOD is a newly observed Go-based ransomware built for speed, stealth, and pressure.
Rather than relying only on encryption, it combines rapid file locking, self-deletion to reduce forensic visibility, and data-leak threats through a TOR-based site.
This transforms a technical incident into a full business crisis involving downtime, regulatory exposure, reputational damage, and recovery cost.
For organizations, the biggest risk is timing. By the time encryption becomes visible, sensitive data may already have been stolen and operational disruption may already be underway.
Inside the ANY.RUN interactive sandbox, ransomware behavior and cleanup activity became visible while execution was still unfolding, allowing early detection during the most critical stage of the attack.
Check full attack chain of GREENBLOOD

The sandbox analysis exposed:
Because this behavior is captured in real time, SOC teams can move directly from detection to triage and containment before encryption spreads widely.
Using ANY.RUN Threat Intelligence, teams can search for other sandbox analyses related to GREENBLOOD and track how the threat appears across different environments. A simple query like commandLine:"greenblood" helps uncover related executions, recurring patterns, and potential variants that may not match the exact same sample.
Use this query link to explore related activity: commandLine:"greenblood"

This is valuable as ANY.RUN Threat Intelligence is connected to real sandbox activity from 15,000+ organizations and 600,000+ security professionals. In practice, that means you can use community-scale execution evidence to strengthen detections faster, tune response playbooks, and stay ahead as ransomware changes.
BQTLock and GREENBLOOD may use different techniques, but they point to the same operational reality: modern ransomware is designed to create maximum business damage in the shortest possible time.
Instead of slow, visible attacks, today’s ransomware combines stealth, speed, privilege escalation, and data-leak pressure to overwhelm traditional response workflows before containment begins.
| Business risk | BQTLock | GREENBLOOD |
|---|---|---|
| Data exposure risk | Data theft + screen capture after escalation | Leak-site pressure adds exposure risk (even post-recovery) |
| Downtime risk | Can escalate after stealth phase | Fast encryption (ChaCha8) |
| Harder to spot early | Hides in normal processes + persistence | Cleanup/self-deletion attempts |
| Extortion pressure | Can intensify if stolen data is used | TOR leak-site threats |
| Short response window, higher cost | Stealth setup compresses reaction time | Fast encryption compresses reaction time |
For most companies, the fallout comes in a few predictable ways:
Stealthy privilege escalation, rapid encryption, and leak-site extortion leave security teams with very little time to react.
To stop ransomware before it reaches full business impact, SOC teams need an operational cycle that moves from early detection → confirmed behavior → broader visibility → proactive defense in minutes, without complicated steps or setup.
With ANY.RUN, this cycle happens inside a single connected workflow, allowing teams to shift from late response to early containment.
The first and most critical step is safe behavioral detonation.
Ransomware like BQTLock hides inside trusted processes and escalates privileges quietly. GREENBLOOD encrypts files quickly and attempts to remove traces.
Running suspicious files or links inside ANY.RUN’s controlled environment exposes:

As this visibility appears during execution, teams can reach a clear verdict in seconds instead of discovering the attack after downtime begins.
This early proof translates directly into operational gains, with 94% of teams reporting faster triage, Tier-1 to Tier-2 escalations reduced by up to 30%, and MTTR shortened by an average of 21 minutes per case, helping contain ransomware before downtime and financial impact grow.
Stopping a single sample is not enough if the campaign continues elsewhere.
Indicators extracted from sandbox analysis can be used to search across ANY.RUN Threat Intelligence, revealing:
The payoff is earlier campaign-level detection and clearer evidence for decision-making, which lowers breach exposure, strengthens compliance readiness, and reduces the business impact of repeat attacks.
The final step is turning investigation insight into ongoing protection.
Fresh indicators and behavioral signals can flow directly into your existing stack through ANY.RUN TI Feeds, keeping detections current without manual copy-paste or constant rule rewrites. This helps teams block repeat attempts faster and react to shifting ransomware infrastructure as it changes.

This ongoing flow shifts teams from reactive detection to proactive monitoring, so attacks are discovered earlier and contained with less business impact.
ANY.RUN is part of modern SOC workflows, integrating easily into existing processes and strengthening the entire operational cycle across Tier 1, Tier 2, and Tier 3.
It supports every stage of investigation, from exposing real behavior during safe detonation, to enriching analysis with broader threat context, and delivering continuous intelligence that helps teams move faster and make confident decisions.
Today, more than 600,000 security professionals and 15,000 organizations rely on ANY.RUN to accelerate triage, reduce unnecessary escalations, and stay ahead of evolving phishing and malware campaigns.
To stay informed about newly discovered threats and real-world attack analysis, follow ANY.RUN’s team on LinkedIn and X, where weekly updates highlight the latest research, detections, and investigation insights.
Both strains prioritize early stealth and rapid operational impact rather than delayed, obvious encryption. BQTLock focuses on covert privilege escalation, persistence, and data theft before encryption, while GREENBLOOD delivers fast ChaCha8 encryption, self-deletion, and leak-site extortion, compressing the response window to minutes.
Modern ransomware often causes business damage before files are encrypted. Activities like process injection, UAC bypass, credential theft, and data exfiltration signal compromise early. Detecting these behaviors during execution enables containment before downtime, breach disclosure, or financial loss escalate.
GREENBLOOD is Go-based and uses ChaCha8 encryption, allowing it to lock files quickly across the system. It also attempts self-deletion and cleanup, which reduces forensic visibility and increases recovery complexity while applying TOR-based leak pressure on victims.
Key signals include Remcos injection into explorer.exe, UAC bypass via fodhelper.exe, autorun persistence creation, and post-escalation credential theft or screen capture. These behaviors indicate the attack is transitioning from stealth access to active breach risk.
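As a hedged starting point for hunting these signals, the sketch below scores generic process and registry events for fodhelper.exe parentage, remote-thread creation in explorer.exe, and autorun key writes. The event schema and field names are assumptions, not a specific product's format; map them to your Sysmon or EDR telemetry before use.

```python
# Hedged hunting sketch for the signals above: fodhelper.exe UAC bypass,
# remote-thread injection into explorer.exe, and autorun persistence writes.
# The event schema and field names are assumptions; adapt to your telemetry.
SUSPICIOUS_PARENTS = ("fodhelper.exe",)
INJECTION_TARGETS = ("explorer.exe",)
AUTORUN_KEY_FRAGMENT = r"\software\microsoft\windows\currentversion\run"

def score_event(event):
    """Return the list of BQTLock-style signals matched by one event dict."""
    hits = []
    parent = (event.get("parent_image") or "").lower()
    target = (event.get("target_image") or "").lower()
    registry_path = (event.get("registry_path") or "").lower()

    if event.get("event_type") == "process_create" and parent.endswith(SUSPICIOUS_PARENTS):
        hits.append("possible fodhelper.exe UAC bypass")
    if event.get("event_type") == "remote_thread" and target.endswith(INJECTION_TARGETS):
        hits.append("remote thread created in explorer.exe")
    if event.get("event_type") == "registry_set" and AUTORUN_KEY_FRAGMENT in registry_path:
        hits.append("autorun persistence key modified")
    return hits
```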
Running suspicious files or links in a controlled behavioral sandbox allows teams to observe privilege escalation, persistence, encryption, and cleanup actions in real time, extract IOCs immediately, and begin containment and hunting before the attack spreads.
Linking sandbox-derived indicators to broader execution telemetry reveals related samples, reused infrastructure, and evolving variants. Feeding this intelligence into detection controls supports earlier blocking, stronger prevention, and lower long-term incident cost.
The post Emerging Ransomware BQTLock & GREENBLOOD Disrupt Businesses in Minutes appeared first on ANY.RUN’s Cybersecurity Blog.
ANY.RUN’s Cybersecurity Blog – Read More