Key OpenClaw risks, Clawdbot, Moltbot | Kaspersky official blog

Everyone has likely heard of OpenClaw, previously known as “Clawdbot” or “Moltbot”, the open-source AI assistant that can be deployed on a machine locally. It plugs into popular chat platforms like WhatsApp, Telegram, Signal, Discord, and Slack, which allows it to accept commands from its owner and go to town on the local file system. It has access to the owner’s calendar, email, and browser, and can even execute OS commands via the shell.

From a security perspective, that description alone should be enough to give anyone a nervous twitch. But when people start trying to use it for work within a corporate environment, anxiety quickly hardens into the conviction of imminent chaos. Some experts have already dubbed OpenClaw the biggest insider threat of 2026. The issues with OpenClaw cover the full spectrum of risks highlighted in the recent OWASP Top 10 for Agentic Applications.

OpenClaw permits plugging in any local or cloud-based LLM, and the use of a wide range of integrations with additional services. At its core is a gateway that accepts commands via chat apps or a web UI, and routes them to the appropriate AI agents. The first iteration, dubbed Clawdbot, dropped in November 2025; by January 2026, it had gone viral — and brought a heap of security headaches with it. In a single week, several critical vulnerabilities were disclosed, malicious skills cropped up in the skill directory, and secrets were leaked from Moltbook (essentially “Reddit for bots”). To top it off, Anthropic issued a trademark demand to rename the project to avoid infringing on “Claude”, and the project’s X account name was hijacked to shill crypto scams.

Known OpenClaw issues

The project’s developer appears to acknowledge that security matters, but since this is a hobbyist project, there are no dedicated resources for vulnerability management or other product security essentials.

OpenClaw vulnerabilities

Among the known vulnerabilities in OpenClaw, the most dangerous is CVE-2026-25253 (CVSS 8.8). Exploiting it leads to a total compromise of the gateway, allowing an attacker to run arbitrary commands. To make matters worse, it’s alarmingly easy to pull off: if the agent visits an attacker’s site or the user clicks a malicious link, the primary authentication token is leaked. With that token in hand, the attacker has full administrative control over the gateway. This vulnerability was patched in version 2026.1.29.

Two dangerous command injection vulnerabilities have also been discovered: CVE-2026-24763 and CVE-2026-25157.

Insecure defaults and features

A variety of default settings and implementation quirks make attacking the gateway a walk in the park (a quick self-check sketch follows this list):

  • Authentication is disabled by default, so the gateway is accessible from the internet.
  • The server accepts WebSocket connections without verifying their origin.
  • Localhost connections are implicitly trusted, which is a disaster waiting to happen if the host is running a reverse proxy.
  • Several tools — including some dangerous ones — are accessible in Guest Mode.
  • Critical configuration parameters leak across the local network via mDNS broadcast messages.
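
One way to check whether a local deployment is running with these dangerous defaults is to simply request the gateway without any credentials and see whether it answers. The sketch below is a minimal, hedged example: it assumes Node 18+ (for built-in fetch) and the commonly reported ports listed in the detection section later in this article; an actual deployment may listen elsewhere or behave differently.

```javascript
// Minimal self-check sketch: probe localhost for a gateway that answers
// without authentication. The ports come from the detection indicators
// later in this article and are assumptions, not a documented contract.
const CANDIDATE_PORTS = [3000, 18789];

async function probe(port) {
  try {
    // No cookies, tokens, or Origin header: a fully unauthenticated request.
    const res = await fetch(`http://127.0.0.1:${port}/`, { redirect: "manual" });
    if (res.status === 200) {
      console.warn(`Port ${port}: HTTP 200 without credentials. Auth may be disabled.`);
    } else {
      console.log(`Port ${port}: status ${res.status}. Some access control is in place.`);
    }
  } catch {
    console.log(`Port ${port}: nothing listening.`);
  }
}

Promise.all(CANDIDATE_PORTS.map(probe));
```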

Secrets in plaintext

OpenClaw’s configuration, “memory”, and chat logs store API keys, passwords, and other credentials for LLMs and integration services in plain text. This is a critical threat — to the extent that versions of the RedLine and Lumma infostealers have already been spotted with OpenClaw file paths added to their must-steal lists.
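
Given that everything sits on disk unencrypted, triage is as simple as a recursive pattern scan, which is exactly what the infostealers do. Below is a hypothetical sketch: the ~/.openclaw path comes from the detection section of this article, and the credential patterns are illustrative examples of common key formats, not an exhaustive list.

```javascript
// Hypothetical triage sketch: recursively scan an agent's config/memory
// directory for strings that look like plaintext credentials.
const fs = require("node:fs");
const path = require("node:path");
const os = require("node:os");

// Illustrative patterns for common API key formats (not exhaustive).
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9_-]{20,}/g,   // OpenAI-style secret keys
  /xoxb-[A-Za-z0-9-]{20,}/g,  // Slack bot tokens
  /ghp_[A-Za-z0-9]{36}/g,     // GitHub personal access tokens
];

function scanDir(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) { scanDir(full); continue; }
    const text = fs.readFileSync(full, "utf8");
    for (const pattern of SECRET_PATTERNS) {
      for (const match of text.match(pattern) ?? []) {
        // Report a redacted prefix only, to avoid re-leaking the secret.
        console.warn(`${full}: possible plaintext secret ${match.slice(0, 12)}...`);
      }
    }
  }
}

const target = path.join(os.homedir(), ".openclaw"); // path per the detection section
if (fs.existsSync(target)) scanDir(target);
```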

Malicious skills

OpenClaw’s functionality can be extended with “skills” available in the ClawHub repository. Since anyone can upload a skill, it didn’t take long for threat actors to start “bundling” the AMOS macOS infostealer into their uploads. Within a short time, the number of malicious skills reached the hundreds. This prompted the developers to quickly ink a deal with VirusTotal to ensure all uploaded skills are not only checked against malware databases, but also undergo code and content analysis via LLMs. That said, the authors are very clear: it’s no silver bullet.

Structural flaws in the OpenClaw AI agent

Vulnerabilities can be patched and settings can be hardened, but some of OpenClaw’s issues are fundamental to its design. The product combines several critical features that, when bundled together, are downright dangerous:

  • OpenClaw has privileged access to sensitive data on the host machine and the owner’s personal accounts.
  • The assistant is wide open to untrusted data: the agent receives messages via chat apps and email, autonomously browses web pages, etc.
  • It suffers from the inherent inability of LLMs to reliably separate commands from data, making prompt injection a possibility.
  • The agent saves key takeaways and artifacts from its tasks to inform future actions. This means a single successful injection can poison the agent’s memory, influencing its behavior long-term.
  • OpenClaw has the power to talk to the outside world — sending emails, making API calls, and utilizing other methods to exfiltrate internal data.

It’s worth noting that while OpenClaw is a particularly extreme example, this “Terrifying Five” list is actually characteristic of almost all multi-purpose AI agents.

OpenClaw risks for organizations

If an employee installs an agent like this on a corporate device and hooks it into even a basic suite of services (think Slack and SharePoint), the combination of autonomous command execution, broad file system access, and excessive OAuth permissions creates fertile ground for a deep network compromise. In fact, the bot’s habit of hoarding unencrypted secrets and tokens in one place is a disaster waiting to happen — even if the AI agent itself is never compromised.

On top of that, these configurations violate regulatory requirements across multiple countries and industries, leading to potential fines and audit failures. Current regulations and frameworks, such as the EU AI Act and the NIST AI Risk Management Framework, call for strict access control for AI agents. OpenClaw’s configuration approach clearly falls short of those standards.

But the real kicker is that even if employees are banned from installing this software on work machines, OpenClaw can still end up on their personal devices. This also creates specific risks for the organization as a whole:

  • Personal devices frequently store credentials for work systems, like corporate VPN configs or browser tokens for email and internal tools. These can be hijacked to gain a foothold in the company’s infrastructure.
  • Controlling the agent via chat apps means that it’s not just the employee who becomes a target for social engineering, but their AI agent too: account takeovers and impersonation of the user in chats with colleagues (among other scams) become a reality. Even if work is only occasionally discussed in personal chats, the info in them is ripe for the picking.
  • If an AI agent on a personal device is hooked into any corporate services (email, messaging, file storage), attackers can manipulate the agent to siphon off data, and this activity would be extremely difficult for corporate monitoring systems to spot.

How to detect OpenClaw

Depending on its monitoring and response capabilities, a SOC team can track OpenClaw gateway connection attempts from personal devices or in the cloud. Additionally, a specific combination of red flags can indicate OpenClaw’s presence on a corporate device (a minimal host-audit sketch follows this list):

  • Look for ~/.openclaw/, ~/clawd/, or ~/.clawdbot directories on host machines.
  • Scan the network with internal tools, or public ones like Shodan, to identify the HTML fingerprints of Clawdbot control panels.
  • Monitor for WebSocket traffic on ports 3000 and 18789.
  • Keep an eye out for mDNS broadcast messages on port 5353 (specifically openclaw-gw.tcp).
  • Watch for unusual authentication attempts in corporate services, such as new App ID registrations, OAuth Consent events, or User-Agent strings typical of Node.js and other non-standard user agents.
  • Look for access patterns typical of automated data harvesting: reading massive chunks of data (scraping all files or all emails) or scanning directories at fixed intervals during off-hours.
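
The filesystem indicators are the easiest of these to automate across a fleet. A minimal sketch, assuming the directory names from the list above (actual install paths can vary between versions):

```javascript
// Minimal fleet-triage sketch: flag hosts carrying OpenClaw-style artifacts.
// Directory names come from the indicator list above; treat them as
// heuristics, since install paths can differ between versions.
const fs = require("node:fs");
const path = require("node:path");
const os = require("node:os");

const INDICATOR_DIRS = [".openclaw", "clawd", ".clawdbot"];

const hits = INDICATOR_DIRS
  .map((name) => path.join(os.homedir(), name))
  .filter((dir) => fs.existsSync(dir));

if (hits.length > 0) {
  console.warn(`Possible OpenClaw installation, review required: ${hits.join(", ")}`);
} else {
  console.log("No OpenClaw filesystem indicators found.");
}
```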

Controlling shadow AI

A set of security hygiene practices can effectively shrink the footprint of both shadow IT and shadow AI, making it much harder to deploy OpenClaw in an organization:

  • Use host-level allowlisting to ensure only approved applications and cloud integrations are installed. For products that support extensibility (like Chrome extensions, VS Code plugins, or OpenClaw skills), implement a closed list of vetted add-ons.
  • Conduct a full security assessment of any product or service, AI agents included, before allowing them to hook into corporate resources.
  • Treat AI agents with the same rigorous security requirements applied to public-facing servers that process sensitive corporate data.
  • Implement the principle of least privilege for all users and other identities.
  • Don’t grant administrative privileges without a critical business need. Require all users with elevated permissions to use them only when performing specific tasks rather than working from privileged accounts all the time.
  • Configure corporate services so that technical integrations (like apps requesting OAuth access) are granted only the bare minimum permissions.
  • Periodically audit integrations, OAuth tokens, and permissions granted to third-party apps. Review the need for these with business owners, proactively revoke excessive permissions, and kill off stale integrations.

Secure deployment of agentic AI

If an organization allows AI agents in an experimental capacity — say, for development testing or efficiency pilots — or if specific AI use cases have been greenlit for general staff, robust monitoring, logging, and access control measures should be implemented:

  • Deploy agents in an isolated subnet with strict ingress and egress rules, limiting communication only to trusted hosts required for the task.
  • Use short-lived access tokens with a strictly limited scope of privileges. Never hand an agent tokens that grant access to core company servers or services. Ideally, create dedicated service accounts for every individual test.
  • Wall off the agent from dangerous tools and data sets that aren’t relevant to its specific job. For experimental rollouts, it’s best practice to test the agent using purely synthetic data that mimics the structure of real production data.
  • Configure detailed logging of the agent’s actions. This should include event logs, command-line parameters, and chain-of-thought artifacts associated with every command it executes.
  • Set up SIEM to flag abnormal agent activity. The same techniques and rules used to detect LotL attacks are applicable here, though additional efforts to define what normal activity looks like for a specific agent are required.
  • If MCP servers and additional agent skills are used, scan them with the security tools emerging for these tasks, such as skill-scanner, mcp-scanner, or mcp-scan. Specifically for OpenClaw testing, several companies have already released open-source tools to audit the security of its configurations.

Corporate policies and employee training

A flat-out ban on all AI tools is a simple but rarely productive path. Employees usually find workarounds — driving the problem into the shadows where it’s even harder to control. Instead, it’s better to find a sensible balance between productivity and security.

Implement transparent policies on using agentic AI. Define which data categories are okay for external AI services to process, and which are strictly off-limits. Employees need to understand why something is forbidden. A policy of “yes, but with guardrails” is always received better than a blanket “no”.

Train with real-world examples. Abstract warnings about “leakage risks” tend to be futile. It’s better to demonstrate how an agent with email access can forward confidential messages just because a random incoming email asked it to. When the threat feels real, motivation to follow the rules grows too. Ideally, employees should complete a brief crash course on AI security.

Offer secure alternatives. If employees need an AI assistant, provide an approved tool that features centralized management, logging, and OAuth access control.

Kaspersky official blog – ​Read More

How the Protective Security Policy Framework Shapes Australia’s Commonwealth Cyber Security Strategy 

The Australian government has intensified efforts to protect digital infrastructure across all Commonwealth entities. Two recent publications, the 2024–25 Protective Security Policy Framework (PSPF) Assessment Report and the 2025 Commonwealth Cyber Security Posture Report, offer a comprehensive snapshot of current achievements, challenges, and future priorities in government cyber resilience. 

The PSPF Assessment Report highlights that 92% of non-corporate Commonwealth entities (NCEs) achieved an overall rating of “Effective” compliance under the updated evidence-based reporting model. This framework moves beyond traditional checklists, focusing on measurable outcomes, tangible risk reduction, and demonstrable assurance. While information security across agencies continues to perform well, technology security, including cyber security, remains a key area for ongoing improvement, with 79% of entities reporting effective compliance in this domain. 

PSPF policies 13 and 14 form the backbone of this effort. Policy 13: Technology Lifecycle Management emphasizes protecting ICT systems to ensure secure and continuous service delivery, integrating principles from the Australian Signals Directorate (ASD) Information Security Manual (ISM). Policy 14: Cyber Security Strategies mandates the adoption of the Essential Eight mitigation strategies to Maturity Level 2, encouraging entities to consider higher levels where threat environments warrant. 

The report also shows high engagement in proactive security measures: 90% of entities maintain incident response plans, 82% have formal cybersecurity strategies, and 87% conduct annual staff cybersecurity training. 

The Essential Eight and Technical Cyber Hardening 

At the core of the 2025 Commonwealth Cyber Security Posture Report is the implementation of ASD’s Essential Eight mitigation strategies. These technical controls, ranging from patching applications and operating systems to multi-factor authentication, administrative privilege restriction, and secure backups, are designed to reduce the likelihood of ICT systems being compromised. 

In 2025, 22% of entities achieved Maturity Level 2 across all eight strategies, an improvement from 15% in 2024, though slightly below 2023’s 25%. This minor drop reflects the November 2023 update to the Essential Eight, which hardened controls in response to evolving threat tactics.  

Notably, strategies like multi-factor authentication and application control saw temporary reductions in compliance as agencies adjusted to higher technical standards, such as phishing-resistant MFA and updated application rules targeting “living off the land” exploits. 

Legacy IT systems remain a challenge, with 59% of entities reporting that these older systems impede achieving full maturity. Funding constraints and lack of replacement options are primary obstacles.  

Cyber Hygiene, Incident Preparedness, and Reporting 

Data-driven programs like ASD’s Cyber Hygiene Improvement Programs (CHIPs) track the security of internet-facing systems, assessing email protocols, encryption, and website maintenance. Between May 2024 and May 2025, improvements were noted across email domain security and active website maintenance, though effective web server encryption showed a minor dip due to better identification of previously untracked servers. 

Despite strong internal preparedness, reporting of incidents remains relatively low, with only 35% of entities reporting at least half of observed incidents to ASD. In the 2024–25 financial year, ASD responded to 408 reported incidents, representing a third of all events addressed nationally.  

Leadership, Governance, and Strategic Planning 

Effective cyber resilience extends beyond technical controls. Leadership and governance play a decisive role in embedding security into everyday operations. Chief Information Security Officers (CISOs) guide strategy, advise senior management, and ensure compliance with legislative and policy requirements.  

Survey results indicate substantial progress: 82% of entities have formal cyber strategies, 92% integrate cyber disruptions into business continuity planning, and 91% have defined improvement programs with allocated funding. 

Supply chain security is another priority. Seventy percent of entities now conduct risk assessments for ICT products and services, ensuring secure lifecycle management. Agencies are also beginning to prepare for post-quantum cryptography, aligning with ASD guidance to transition encryption to quantum-resistant standards by 2030. 

Recommendations and the Road Ahead 

Both the 2024–25 PSPF Assessment Report and the 2025 Commonwealth Cyber Security Posture Report reinforce that cyber resilience is a continuous, iterative process. Key recommended actions include: 

  • Fully implementing the Essential Eight to at least Maturity Level 2.
  • Strengthening incident detection, logging, and reporting.
  • Addressing risks associated with legacy IT systems.
  • Integrating cyber risk assessments into supply chain decisions.
  • Preparing for post-quantum encryption transitions.
  • Maintaining ongoing staff and privileged user training programs.

Stephanie Crowe, Head of ASD’s Australian Cyber Security Centre, observed that “cyber security uplift is not a one-off exercise, it’s a continuous process.” Similarly, Brendan Dowling, Deputy Secretary of Critical Infrastructure and Protective Security, emphasized the government’s commitment to positioning itself as an exemplar in secure digital operations. 

Conclusion 

Australia has improved its cyber posture, but significant gaps remain. The 2024–25 PSPF Assessment and the 2025 Commonwealth Cyber Security Posture Report show stronger Essential Eight adoption, better incident planning, and improved governance.  

However, inconsistent Maturity Level 2 implementation, legacy IT constraints, and underreporting of incidents continue to limit overall resilience. Advancing Australian government cybersecurity now requires closing control gaps, modernizing aging systems, strengthening logging and detection, and preparing for post-quantum encryption. 

Cyble supports this effort with AI-driven threat intelligence, attack surface management, and dark web monitoring to help organizations detect and mitigate risks earlier. Schedule a demo to see how Cyble can help strengthen your organization’s cyber resilience with intelligence-led, proactive defense. 

The post How the Protective Security Policy Framework Shapes Australia’s Commonwealth Cyber Security Strategy  appeared first on Cyble.

Cyble – ​Read More

February’s Patch Tuesday assumes battle stations

Just 58 CVEs to spar with in February, but plenty are already under attack

Categories: Threat Research, X-ops

Tags: Patch Tuesday, Microsoft, Windows

Sophos Blogs – ​Read More

The OpenClaw experiment is a warning shot for enterprise AI security

Agentic AI promises a lot – but it also introduces more risk. Sophos’ CISO explores the challenges and how to address them

Categories: Threat Research

Tags: AI, LLM, OpenClaw, CISO, risk, Sophos X-Ops

Sophos Blogs – ​Read More

How tech is rewiring romance: dating apps, AI relationships, and emoji | Kaspersky official blog

With both spring and St. Valentine’s Day just around the corner, love is in the air — but we’re going to look at it through the lens of cutting-edge technology. Today, we’re diving into how technology is reshaping our romantic ideals and even the language we use to flirt. And, of course, we’ll throw in some non-obvious tips to make sure you don’t end up as a casualty of the modern-day love game.

New languages of love

Ever received your fifth video e-card of the day from an older relative and thought, “Make it stop”? Or do you feel like a period at the end of a sentence is a sign of passive aggression? In the world of messaging, different social and age groups speak their own digital dialects, and things often get lost in translation.

This is especially obvious in how Gen Z and Gen Alpha use emojis. For them, the Loudly Crying Face 😭 often doesn’t mean sadness — it means laughter, shock, or obsession. Meanwhile, the Heart Eyes emoji might be used for irony rather than romance: “Lost my wallet on the way home 😍😍😍”. Some double meanings have already become universal, like 🔥 for approval/praise, or 🍆 for… well, surely you know that by now… right?! 😭

Still, the ambiguity of these symbols doesn’t stop folks from crafting entire sentences out of nothing but emoji. For instance, a declaration of love might look something like this:

🤫❤️🫵

Or here’s an invitation to go on a date:

🫵🚶➡️💋🌹🍝🍷❓

By the way, there are entire books written in emojis. Back in 2009, enthusiasts actually translated the entirety of Moby Dick into emojis. The translators had to get creative — even paying volunteers to vote on the most accurate combinations for every single sentence. Granted, it’s not exactly a literary masterpiece — the emoji language has its limits, after all — but the experiment was pretty fascinating: they actually managed to convey the general plot.

This is what Emoji Dick — the translation of Herman Melville’s Moby Dick into emoji — looks like. Source

Unfortunately, putting together a definitive emoji dictionary or a formal style guide for texting is nearly impossible. There are just too many variables: age, context, personal interests, and social circles. Still, it never hurts to ask your friends and loved ones how they express tone and emotion in their messages. Fun fact: couples who use emojis regularly generally report feeling closer to one another.

However, if you are big into emojis, keep in mind that your writing style is surprisingly easy to spoof. It’s easy for an attacker to run your messages or public posts through AI to clone your tone for social engineering attacks on your friends and family. So, if you get a frantic DM or a request for an urgent wire transfer that sounds exactly like your best friend, double-check it. Even if the vibe is spot on, stay skeptical. We took a deeper dive into spotting these deepfake scams in our post about the attack of the clones.

Dating an AI

Of course, in 2026, it’s impossible to ignore the topic of relationships with artificial intelligence; it feels like we’re closer than ever to the plot of the movie Her. Just 10 years ago, news about people dating robots sounded like sci-fi tropes or urban legends. Today, stories about teens caught up in romances with their favorite characters on Character AI, or full-blown wedding ceremonies with ChatGPT, barely elicit more than a nervous chuckle.

In 2017, the service Replika launched, allowing users to create a virtual friend or life partner powered by AI. Its founder, Eugenia Kuyda — a Russian native living in San Francisco since 2010 — built the chatbot after her friend was tragically killed by a car in 2015, leaving her with nothing but their chat logs. What started as a bot created to help her process her grief was eventually released to her friends and then the general public. It turned out that a lot of people were craving that kind of connection.

Replika lets users customize a character’s personality, interests, and appearance, after which they can text or even call them. A paid subscription unlocks the romantic relationship option, along with AI-generated photos and selfies, voice calls with roleplay, and the ability to hand-pick exactly what the character remembers from your conversations.

However, these interactions aren’t always harmless. In 2021, a Replika chatbot actually encouraged a user in his plot to assassinate Queen Elizabeth II. The man eventually attempted to break into Windsor Castle — an “adventure” that ended in 2023 with a nine-year prison sentence. Following the scandal, the company had to overhaul its algorithms to stop the AI from egging on illegal behavior. The downside? According to many Replika devotees, the AI model lost its spark and became indifferent to users. After thousands of users revolted against the updated version, Replika was forced to cave and give longtime customers the option to roll back to the legacy chatbot version.

But sometimes, just chatting with a bot isn’t enough. There are entire online communities of people who actually marry their AI. Even professional wedding planners are getting in on the action. Last year, Yurina Noguchi, 32, “married” Klaus, an AI persona she’d been chatting with on ChatGPT. The wedding featured a full ceremony with guests, the reading of vows, and even a photoshoot of the “happy newlyweds”.

Yurina Noguchi, 32, “married” Klaus, an AI character created by ChatGPT. Source

No matter how your relationship with a chatbot evolves, it’s vital to remember that generative neural networks don’t have feelings — even if they try their hardest to fulfill every request, agree with you, and do everything they can to “please” you. What’s more, AI isn’t capable of independent thought (at least not yet). It’s simply calculating the most statistically probable and acceptable sequence of words to serve up in response to your prompt.

Love by design: dating algorithms

Those who aren’t ready to tie the knot with a bot aren’t exactly having an easy time either: in today’s world, face-to-face interactions are dwindling every year. Modern love requires modern tech! And while you’ve definitely heard the usual grumbling (“Back in the day, people fell in love for real. These days it’s all about swiping left or right!”), statistics tell a different story. Roughly 16% of couples worldwide say they met online, and in some countries that number climbs as high as 51%.

That said, dating apps like Tinder spark some seriously mixed emotions. The internet is practically overflowing with articles and videos claiming these apps are killing romance and making everyone lonely. But what does the research say?

In 2025, scientists conducted a meta-analysis of studies investigating how dating apps impact users’ wellbeing, body image, and mental health. Half of the studies focused exclusively on men, while the other half included both men and women. Here are the results: 86% of respondents linked negative body image to their use of dating apps! The analysis also showed that in nearly one out of every two cases, dating app usage correlated with a decline in mental health and overall wellbeing.

Other researchers noted that depression levels are lower among those who steer clear of dating apps. Meanwhile, users who already struggled with loneliness or anxiety often develop a dependency on online dating; they don’t just log on for potential relationships, but for the hits of dopamine from likes, matches, and the endless scroll of profiles.

However, the issue might not just be the algorithms — it could be our expectations. Many are convinced that “sparks” must fly on the very first date, and that everyone has a “soulmate” waiting for them somewhere out there. In reality, these romanticized ideals only surfaced during the Romantic era as a rebuttal to Enlightenment rationalism, where marriages of convenience were the norm.

It’s also worth noting that the romantic view of love didn’t just appear out of thin air: the Romantics, much like many of our contemporaries, were skeptical of rapid technological progress, industrialization, and urbanization. To them, “true love” seemed fundamentally incompatible with cold machinery and smog-choked cities. It’s no coincidence, after all, that Anna Karenina meets her end under the wheels of a train.

Fast forward to today, and many feel like algorithms are increasingly pulling the strings of our decision-making. However, that doesn’t mean online dating is a lost cause; researchers have yet to reach a consensus on exactly how long-lasting or successful internet-born relationships really are. The bottom line: don’t panic, just make sure your digital networking stays safe!

How to stay safe while dating online

So, you’ve decided to hack Cupid and signed up for a dating app. What could possibly go wrong?

Deepfakes and catfishing

Catfishing is a classic online scam where a fraudster pretends to be someone else. It used to be that catfishers just stole photos and life stories from real people, but nowadays they’re increasingly pivoting to generative models. Some AIs can churn out incredibly realistic photos of people who don’t even exist, and whipping up a backstory is a piece of cake — or should we say, a piece of prompt. By the way, that “verified account” checkmark isn’t a silver bullet; sometimes AI manages to trick identity verification systems too.

To verify that you’re talking to a real human, try asking for a video call or doing a reverse image search on their photos. If you want to level up your detection skills, check out our three posts on how to spot fakes: from photos and audio recordings to real-time deepfake video — like the kind used in live video chats.

Phishing and scams

Picture this: you’ve been hitting it off with a new connection for a while, and then, totally out of the blue, they drop a suspicious link and ask you to follow it. Maybe they want you to “help pick out seats” or “buy movie tickets”. Even if you feel like you’ve built up a real bond, there’s a chance your match is a scammer (or just a bot), and the link is malicious.

Telling you to “never click a malicious link” is pretty useless advice — it’s not like they come with a warning label. Instead, try this: to make sure your browsing stays safe, use Kaspersky Premium, which automatically blocks phishing attempts and keeps you off sketchy sites.

Keep in mind that there’s an even more sophisticated scheme out there known as “Pig Butchering”. In these cases, the scammer might chat with the victim for weeks or even months. Sadly, it ends badly: after lulling the victim into a false sense of security through friendly or romantic banter, the scammer casually nudges them toward a “can’t-miss crypto investment” — and then vanishes along with the “invested” funds.

Stalking and doxing

The internet is full of horror stories about obsessed creepers, harassment, and stalking. That’s exactly why posting photos that reveal where you live or work — or telling strangers about your favorite local hangouts — is a bad move. We’ve previously covered how to avoid becoming a victim of doxing (the gathering and public release of your personal info without your consent). Your first step is to lock down the privacy settings on all your social media and apps using our free Privacy Checker tool.

We also recommend stripping metadata from your photos and videos before you post or send them; many sites and apps don’t do this for you. Metadata can allow anyone who downloads your photo to pinpoint the exact coordinates of where it was taken.

Finally, don’t forget about your physical safety. Before heading out on a date, it’s a smart move to share your live geolocation, and set up a safe word or a code phrase with a trusted friend to signal if things start feeling off.

Sextortion and nudes

We don’t recommend ever sending intimate photos to strangers. Honestly, we don’t even recommend sending them to people you do know — you never know how things might go sideways down the road. But if a conversation has already headed in that direction, suggest moving it to an app with end-to-end encryption that supports self-destructing messages (like “delete after viewing”). Telegram’s Secret Chats are great for this (plus — they block screenshots!), as are other secure messengers. If you do find yourself in a bad spot, check out our posts on what to do if you’re a victim of sextortion and how to get leaked nudes removed from the internet.

More on love, security (and robots):

Kaspersky official blog – ​Read More

Hand over the keys for Shannon’s shenanigans

Hand over the keys for Shannon’s shenanigans

Welcome to this week’s edition of the Threat Source newsletter.  

Last week, yet another security AI tool made the rounds on social media: Shannon, a fully autonomous AI penetration testing tool created by Keygraph. It “autonomously hunts for attack vectors in your code, then uses its built-in browser to execute real exploits, such as injection attacks, and auth bypass, to prove the vulnerability is actually exploitable.” 

If you thought manual pentesters kept you busy, it looks like Shannon’s here to ensure you never run out of vulnerabilities — or questions. 

As with every new advancement in AI, social posts are popping up left and right to question Shannon’s future impact on pentesters’ job security. It goes without saying these days that among the many thoughtful questions are comments praising Shannon and bemoaning the “old days” with a few obviously canned AI slop quips, which infuriates me as an editor — I could go on for days about this, but we’re getting off-topic. Ahem. 

Shannon requires access to the application’s source code, repository layout, and AI API keys. Even as a cybersecurity novice, I know that this in itself is a major liability that organizations should investigate and weigh carefully before proceeding. In last week’s newsletter, Joe gave a passionate sermon on why feeding highly private information to an agentic engine is nine times out of ten a terrible idea. While I hope Shannon is more secure than Clawdbot, given its intended use, I encourage everyone to ask as many questions as possible about what happens to the information you provide before using it. Quoting Joe, “As disciples of security, we understand installing first and asking questions later is practically asking to get pwnt.” 

Other questions I’ve had while reading through comments and exploring the GitHub page: 

  • Can you set scoping guidelines? If not, you might end up with a lot of issues that’ll take a lot of time to fix. 
  • No penetration test is truly representative of attackers’ situations (e.g., attackers don’t work within billable hours or two-week schedules, and only have to find one or a set of vulnerabilities). Relying on access to source code widens the gap between simulated and real-world attacks… I guess this wasn’t a question, huh? 
  • For the companies who choose to use Shannon, how are you using the report it produces to improve not only your product, but also your secure development lifecycle and your developers’ skills? Make a conscious decision: Are you going to rely on Shannon as a quick fix, or integrate it and secure development into your coding practices? 

AI-powered pentesters aren’t going away any time soon. Anthropic’s Claude Opus 4.6 was also released last week. Unlike Shannon’s makers, Anthropic added a new layer of detection to support its team in identifying and responding to cyber misuse of Claude. 

As the landscape evolves, tools like Shannon and Claude Opus 4.6 will continue to push the boundaries of what’s possible, and there will be new questions about risk, responsibility, and readiness. Whether these tools become standard or remain controversial, staying informed and vigilant is as important as ever. 

The one big thing 

Cisco Talos has uncovered a new threat actor, UAT-9921, using the advanced VoidLink framework to target mainly Linux systems. VoidLink stands out for its modular, on-demand plugin creation, auditability, and ability to evade detection, with features rarely seen in similar threats. UAT-9921 has been active since at least 2019, focusing on the technology and financial sectors, and uses advanced techniques for both compromise and stealth. 

Why do I care? 

VoidLink introduces powerful new methods for attackers to compromise, control, and hide within Linux environments, which are common in critical infrastructure and cloud services. Its ability to quickly generate customized attack tools and evade detection makes it harder for defenders to respond. The framework’s advanced stealth and lateral movement features increase the risk of undetected breaches and data theft. 

So now what? 

Update your defenses and use the Snort rules and ClamAV signature mentioned in the blog to help detect and block VoidLink activity. Strengthen Linux security, especially for cloud and IoT environments, and monitor for unusual network activity or signs of lateral movement. Make sure endpoint detection solutions are up to date and configured to recognize the latest threats. 

Top security headlines of the week 

SolarWinds WHD attacks highlight risks of exposed apps 
Several vendors in recent days have warned of exploitation of vulnerabilities in WHD, though it’s not entirely clear which bugs are under attack. (Dark Reading, SecurityWeek)

Ivanti EPMM exploitation widespread as governments, others targeted 
Ivanti released advisories on Jan. 29 for code injection vulnerabilities in the on-premises version of Endpoint Manager Mobile. Researchers warn the activity shows evidence of initial access brokers preparing for future attacks. (Cybersecurity Dive)

New “ZeroDayRAT” spyware kit enables total compromise of iOS, Android devices 
Once installed, capabilities include victim and device profiling, including model, OS, country, lock status, SIM and carrier info, dual SIM phone numbers, app usage broken down by time, preview of recent SMS messages, and more. (SecurityWeek)

European Commission probes intrusion into staff mobile management backend 
Brussels is digging into a cyber break-in that targeted the European Commission’s mobile device management systems, potentially giving intruders a peek inside the official phones carried by EU staff. (The Register)

Can’t get enough Talos? 

Humans of Talos: Ryan Liles, master of technical diplomacy  
Amy chats with Ryan Liles, who bridges the gap between Cisco’s product teams and the third-party testing labs that put Cisco products through their paces. Hear how speaking up has helped him reshape industry standards and create strong relationships in the field. 

Knife Cutting the Edge: Disclosing a China-nexus gateway-monitoring AitM framework 
Cisco Talos uncovered “DKnife,” a fully featured gateway-monitoring and adversary-in-the-middle (AitM) framework comprising seven Linux-based implants that perform deep-packet inspection, manipulate traffic, and deliver malware via routers and edge devices. 

Talos Takes: Ransomware chills and phishing heats up 
Amy is joined by Dave Liebenberg, Strategic Analysis Team Lead, to break down Talos IR’s Q4 trends. What separates organizations that successfully fend off ransomware from those that don’t? What were the top threats facing organizations? Can we (pretty please) get a sneak peek into the 2025 Year in Review? 

Most prevalent malware files from Talos telemetry over the past week 

SHA256: 41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610 
MD5: 85bbddc502f7b10871621fd460243fbc  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=41f14d86bcaf8e949160ee2731802523e0c76fea87adf00ee7fe9567c3cec610   
Example Filename: 85bbddc502f7b10871621fd460243fbc.exe 
Detection Name: W32.41F14D86BC-100.SBX.TG 

SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91  
MD5: 7bdbd180c081fa63ca94f9c22c457376  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91 
Example Filename: d4aa3e7010220ad1b458fac17039c274_62_Exe.exe  
Detection Name: Win.Dropper.Miner::95.sbx.tg 

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507  
MD5: 2915b3f8b703eb744fc54c81f4a9c67f  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
Example Filename: VID001.exe  
Detection Name: Win.Worm.Coinminer::1201 

SHA256: 90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59 
MD5: c2efb2dcacba6d3ccc175b6ce1b7ed0a  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59 
Example Filename: d4aa3e7010220ad1b458fac17039c274_64_Dll.dll  
Detection Name: Auto.90B145.282358.in02 

SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974  
MD5: aac3165ece2959f39ff98334618d10d9  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974 
Example Filename: d4aa3e7010220ad1b458fac17039c274_63_Exe.exe  
Detection Name: W32.Injector:Gen.21ie.1201 

SHA256: 38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55  
MD5: 41444d7018601b599beac0c60ed1bf83  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=38d053135ddceaef0abb8296f3b0bf6114b25e10e6fa1bb8050aeecec4ba8f55 
Example Filename: content.js 
Detection Name: W32.38D053135D-95.SBX.TG

Cisco Talos Blog – ​Read More

I bought, I saw, I attended: a quick guide to staying scam-free at the Olympics | Kaspersky official blog

The Olympic Games are more than just a massive celebration of sports; they’re a high-stakes business. Officially, the projected economic impact of the Winter Games — which kicked off on February 6 in Italy — is estimated at 5.3 billion euros. A lion’s share of that revenue is expected to come from fans flocking in from around the globe, with over 2.5 million tourists predicted to visit Italy. Meanwhile, those staying home are tuning in via TV and streaming. According to the platforms, viewership ratings are already hitting their highest peaks since 2014.

But while athletes are grinding for medals and the world is glued to every triumph and heartbreak, a different set of “competitors” has entered the arena to capitalize on the hype and the trust of eager fans. Cyberscammers of all stripes have joined an illegal race for the gold, knowing full well that a frenzy is a fraudster’s best friend.

Kaspersky experts have tracked numerous fraudulent schemes targeting fans during these Winter Games. Here is how to avoid frustration in the form of fake tickets, non-existent merch, and shady streams, so you can keep your cash and personal data safe.

Tickets to nowhere

The most popular scam on this year’s circuit is the sale of non-existent tickets. Usually, there are far fewer seats at the rinks and slopes than there are fans dying to see the main events. In a supply-and-demand crunch, people scramble for any chance to snag those coveted passes, and that’s when phishing sites — clones of official vendors — come to the “rescue”. Using these, bad actors fish for fans’ payment details to either resell them on the dark web or drain their accounts immediately.

This is what a fraudulent site selling fake Olympic tickets looks like

Remember: tickets for any Olympic event are sold only through the authorized Olympic platform or its listed partners. Any third-party site or seller outside the official channel is a scammer. We’re putting that play in the penalty box!

A fake goalie mitt, a counterfeit stick…

Dreaming of a Sydney Sweeney — sorry, Sidney Crosby — jersey? Or maybe you want a tracksuit with the official Games logo? Scammers have already set up dozens of fake online stores just for you! To pull off the heist, they use official logos, convincing photos, and padded rave reviews. You pay, and in return, you get… well, nothing but a transaction alert and your card info stolen.

I want my Olympic TV!

What if you prefer watching the action from the comfort of your couch rather than trekking from stadium to stadium, but you’re not exactly thrilled about paying for a pricey streaming subscription? Maybe there’s a free stream out there?

Sure thing! Five seconds of searching and your screen is flooded with dozens of “cheap”, “exclusive”, or even “free” live streams. They’ve got everything from figure skating to curling. But there’s a catch: for some reason — even though it’s supposedly free — a pop-up appears asking for your credit card details.

You type them in, hit “Play”, but instead of the long-awaited free skate program, you end up on a webcam ad site or somewhere even sketchier. The result: no show for you. At best, you were just used for traffic arbitrage; at worst, they now have access to your bank account. Either way, it’s a major bummer.

Defensive tactics

Scammers have been playing sports fans for years, and their payday depends entirely on how well they can mimic official portals. To stay safe, fans should mount a tiered defense: install reliable security software to block phishing, keep a sharp eye on every URL you visit, and if something feels even slightly off, never, ever enter your personal or payment info.

  • Stick to authorized channels for tickets. Steer clear of third-party resellers and always double-check info on the official Olympic website.
  • Use legitimate streaming services. Read the reviews and don’t hand over your credit card details to unverified sites.
  • Be wary of Olympic merch and gift vendors. Don’t get baited by “exclusive” offers or massive discounts from unknown stores. Only buy from official retail partners.
  • Avoid links in emails, direct messages, texts, or ads offering free tickets, streams, promo codes, or prize giveaways.
  • Deploy a robust security solution. For instance, Kaspersky Premium automatically shuts down phishing attempts and blocks dangerous websites, malicious ads, and credit card skimmers in real time.

Want to see how sports fans were targeted in the past? Check out our previous posts:

Kaspersky official blog – ​Read More

When AI Secrets Go Public: The Rising Risk of Exposed ChatGPT API Keys

Exposed API Keys

Executive Summary

Cyble Research and Intelligence Labs (CRIL) observed large-scale, systematic exposure of ChatGPT API keys across the public internet. Over 5,000 publicly accessible GitHub repositories and approximately 3,000 live production websites were found leaking API keys through hardcoded source code and client-side JavaScript.

GitHub has emerged as a key discovery surface, with API keys frequently committed directly into source files or stored in configuration and .env files. The risk is further amplified by public-facing websites that embed active keys in front-end assets, leading to persistent, long-term exposure in production environments.

CRIL’s investigation further revealed that several exposed API keys were referenced in discussions mentioning the Cyble Vision platform. The exposure of these credentials significantly lowers the barrier for threat actors, enabling faster downstream abuse and facilitating broader criminal exploitation.

These findings underscore a critical security gap in the AI adoption lifecycle. AI credentials must be treated as production secrets and protected with the same rigor as cloud and identity credentials to prevent ongoing financial, operational, and reputational risk.

Key Takeaways

  • GitHub is a primary vector for the discovery of exposed ChatGPT API keys.
  • Public websites and repositories form a continuous exposure loop for AI secrets.
  • Attackers can use automated scanners and GitHub search operators to harvest keys at scale.
  • Exposed AI keys are monetized through inference abuse, resale, and downstream criminal activity.
  • Most organizations lack monitoring for AI credential misuse.

AI API keys are production secrets, not developer conveniences. Treating them casually is creating a new class of silent, high-impact breaches.

Richard Sands, CISO, Cyble

Overview, Analysis, and Insights

“The AI Era Has Arrived — Security Discipline Has Not”

We are firmly in the AI era. From chatbots and copilots to recommendation engines and automated workflows, artificial intelligence is no longer experimental. It is production-grade infrastructure with end-to-end workflows and pipelines. Modern websites and applications increasingly rely on large language models (LLMs), token-based APIs, and real-time inference to deliver capabilities that were unthinkable just a few years ago.

This rapid adoption has also given rise to a development culture often referred to as “vibe coding.” Developers, startups, and even enterprises are prioritizing speed, experimentation, and feature delivery over foundational security practices. While this approach accelerates innovation, it also introduces systemic weaknesses that attackers are quick to exploit.

One of the most prevalent and most dangerous of these weaknesses is the widespread exposure of hardcoded AI API keys across both source code repositories and production websites.

A rapidly expanding digital risk surface increases the likelihood of compromise, and a preventive strategy is the best way to reduce it. Cyble Vision provides users with insight into exposures across the surface, deep, and dark web, generating real-time alerts for them to view and act on.

SOC teams can leverage this data to remediate compromised credentials and their associated endpoints. With threat actors potentially weaponizing these credentials to carry out malicious activities (which will then be attributed to the affected users), proactive intelligence is paramount to keeping one’s digital risk surface secure.

“Tokens are the new passwords — they are being mishandled.”

AI platforms use token-based authentication. API keys act as high-value secrets that grant access to inference capabilities, billing accounts, usage quotas, and, in some cases, sensitive prompts or application behavior. From a security standpoint, these keys are equivalent to privileged credentials.

Despite this, ChatGPT API keys are frequently embedded directly in JavaScript files, front-end frameworks, static assets, and configuration files accessible to end users. In many cases, keys are visible through browser developer tools, minified bundles, or publicly indexed source code. An example of keys hardcoded on popular, reputable websites is shown below (see Figure 1).

Figure 1 – Public Websites exposing API keys

This reflects a fundamental misunderstanding: API keys are being treated as configuration values rather than as secrets. In the AI era, that assumption is dangerously outdated. In some cases, this happens unintentionally, while in others, it’s a deliberate trade-off that prioritizes speed and convenience over security.

When API keys are exposed publicly, attackers do not need to compromise infrastructure or exploit vulnerabilities. They simply collect and reuse what is already available.

CRIL has identified multiple publicly accessible websites and GitHub repositories containing hardcoded ChatGPT API keys embedded directly within client-side code. These keys are exposed to any user who inspects network requests or application source files.

A commonly observed pattern resembles the following:

```javascript
const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
```

```javascript
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";
```



The prefix “sk-proj-” typically represents a project-scoped secret key associated with a specific project environment, inheriting its usage limits and billing configuration. The “sk-svcacct-” prefix generally denotes a service account–based key intended for automated backend services or system integrations.

Regardless of type, both keys function as privileged authentication tokens that enable direct access to AI inference services and billing resources. When embedded in client-side code, they are fully exposed and can be immediately harvested and misused by threat actors.

GitHub as a High-Fidelity Source of AI Secrets

Public GitHub repositories have emerged as one of the most reliable discovery surfaces for exposed ChatGPT API keys. During development, testing, and rapid prototyping, developers frequently hardcode OpenAI credentials into source code, configuration files, or .env files—often with the intent to remove or rotate them later. In practice, these secrets persist in commit history, forks, and archived repositories.

CRIL analysis identified over 5,000 GitHub repositories containing hardcoded OpenAI API keys. These exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. In many cases, the repositories were actively maintained or recently updated, increasing the likelihood that the exposed keys were still valid at the time of discovery.

Notably, the majority of exposed keys were configured to access widely used ChatGPT models, making them particularly attractive for abuse. These models are commonly integrated into production workflows, increasing both their exposure rate and their value to threat actors.

Once committed to GitHub, API keys can be rapidly indexed by automated scanners that monitor new commits and repository updates in near real time. This significantly reduces the window between exposure and exploitation, often to hours or even minutes.
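
Reproducing that scanning is trivial. As a hedged illustration (the character classes and lengths below are assumptions; OpenAI publishes no formal grammar for its key formats), a commit-scanning bot needs little more than a regular expression:

```javascript
// Illustrative sketch of what commit-scanning bots do: match strings that
// look like OpenAI keys in freshly pushed file contents. The character
// classes and lengths are assumptions, not an official key grammar.
const OPENAI_KEY_PATTERN = /sk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}/g;

function findCandidateKeys(fileText) {
  // Deduplicate, since minified bundles often repeat the same constant.
  return [...new Set(fileText.match(OPENAI_KEY_PATTERN) ?? [])];
}

// Example input resembling the hardcoded pattern shown in Figure 1.
const sample = 'const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";';
console.log(findCandidateKeys(sample)); // -> [ 'sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX' ]
```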

Public Websites: Persistent Exposure in Production Environments

Beyond source code repositories, CRIL observed widespread exposure of ChatGPT API keys directly within production websites. In these cases, API keys were embedded in client-side JavaScript bundles, static assets, or front-end framework files, making them accessible to any user inspecting the application.

CRIL identified approximately 3,000 public-facing websites exposing ChatGPT API keys in this manner. Unlike repository leaks, which may be removed or made private, website-based exposures often persist for extended periods, continuously leaking secrets to both human users and automated scrapers.

These implementations frequently invoke ChatGPT APIs directly from the browser, bypassing backend mediation entirely. As a result, exposed keys are not only visible but actively used in real time, making them trivial to harvest and immediately abuse.

As with GitHub exposures, the most referenced models were highly prevalent ChatGPT variants used for general-purpose inference, indicating that these keys were tied to live, customer-facing functionality rather than isolated testing environments. These models strike a balance between capability and cost, making them ideal for high-volume abuse such as phishing content generation, scam scripts, and automation at scale.

Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.  

Kaustubh Medhe, CPO, Cyble

From Exposure to Exploitation: How Attackers Monetize AI Keys

Threat actors continuously monitor public websites, GitHub repositories, forks, gists, and exposed JavaScript bundles to identify high-value secrets, including OpenAI API keys. Once discovered, these keys are rapidly validated through automated scripts and immediately operationalized for malicious use.

Compromised keys are typically abused to:

  • Execute high-volume inference workloads
  • Generate phishing emails, scam scripts, and social engineering content
  • Support malware development and lure creation
  • Circumvent usage quotas and service restrictions
  • Drain victim billing accounts and exhaust API credits

In certain cases, CRIL also used Cyble Vision to identify several of these keys that originated from exposures and were subsequently leaked, as noted in our spotlight mentions (see Figure 2 and Figure 3).

Figure 2 – Cyble Vision indicates API key exposure leak

Figure 3 – API key leak content

Unlike traditional infrastructure activity, AI API usage is often not integrated into centralized logging, SIEM monitoring, or anomaly detection frameworks. As a result, malicious usage can persist undetected until organizations encounter billing spikes, quota exhaustion, degraded service performance, or operational disruptions.

Conclusion

The exposure of ChatGPT API keys across thousands of websites and GitHub repositories highlights a systemic security blind spot in the AI adoption lifecycle. These credentials are actively harvested, rapidly abused, and difficult to trace once compromised.

As AI becomes embedded in business-critical workflows, organizations must abandon the perception that AI integrations are experimental or low risk. AI credentials are production secrets and must be protected accordingly.

Failure to secure them will continue to expose organizations to financial loss, operational disruption, and reputational damage.

SOC teams should proactively monitor for exposed endpoints using tools such as Cyble Vision, which provides real-time alerts and visibility into compromised endpoints.

This, in turn, allows them to identify which endpoints and credentials were compromised and to secure them as soon as possible.

Our Recommendations

Eliminate Secrets from Client-Side Code

AI API keys must never be embedded in JavaScript or front-end assets. All AI interactions should be routed through secure backend services.
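
A minimal sketch of that backend-mediation pattern, assuming Express and the standard OpenAI chat completions endpoint (the route and model name are illustrative choices, not requirements):

```javascript
// Sketch of backend mediation: the browser calls /api/chat on your server;
// only the server holds the OpenAI key (read from the environment, never
// shipped in front-end bundles). Route and model name are illustrative.
const express = require("express");

const app = express();
app.use(express.json());

app.post("/api/chat", async (req, res) => {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // server-side secret
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      // Forward only the fields you expect; don't proxy arbitrary payloads.
      messages: [{ role: "user", content: String(req.body.message ?? "") }],
    }),
  });
  res.status(response.status).json(await response.json());
});

app.listen(3000);
```

Routing through a server also creates a single choke point for rate limiting, logging, and input validation, none of which exist when the browser calls the API directly.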

Enforce GitHub Hygiene and Secret Scanning

  • Prevent commits containing secrets through pre-commit hooks and CI/CD enforcement (see the hook sketch after this list)
  • Continuously scan repositories, forks, and gists for leaked keys
  • Assume exposure once a key appears in a public repository and rotate immediately
  • Maintain a complete inventory of all repositories associated with the organization, including shadow IT projects, archived repositories, personal developer forks, test environments, and proof-of-concept code
  • Enable automated secret scanning and push protection at the organization level
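
As a concrete example of the pre-commit enforcement above, here is a minimal hook sketch. The file name and wiring are hypothetical, and production setups would more typically rely on a dedicated scanner such as gitleaks or trufflehog:

```javascript
// Hypothetical pre-commit hook sketch (e.g., invoked from .git/hooks/pre-commit
// via `node check-secrets.js`): block the commit if staged files contain
// OpenAI-style key material. The pattern is illustrative, not exhaustive.
const { execSync } = require("node:child_process");
const fs = require("node:fs");

const KEY_PATTERN = /sk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}/;

const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter((f) => f && fs.existsSync(f));

const offenders = staged.filter((f) => KEY_PATTERN.test(fs.readFileSync(f, "utf8")));

if (offenders.length > 0) {
  console.error(`Refusing to commit, possible API keys in: ${offenders.join(", ")}`);
  process.exit(1); // non-zero exit aborts the commit
}
```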

Apply Least Privilege and Usage Controls

  • Restrict API keys by project scope and environment (separate dev, test, prod)
  • Apply IP allowlisting where possible
  • Enforce usage quotas and hard spending limits
  • Rotate keys frequently and revoke any exposed credentials immediately
  • Avoid sharing keys across teams or applications

Implement Secure Key Management Practices

  • Store API keys in secure secret management systems
  • Avoid storing keys in plaintext configuration files
  • Use environment variables securely and restrict access permissions
  • Do not log API keys in application logs, error messages, or debugging outputs
  • Ensure keys are excluded from backups, crash dumps, and telemetry exports

Monitor AI Usage Like Cloud Infrastructure

Establish baselines for normal AI API usage and alert on anomalies such as spikes, unusual geographies, or unexpected model usage.
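
The baseline does not need to be sophisticated to catch the most common abuse pattern, a sudden spike in volume. A minimal sketch, assuming daily token counts are already exported from a billing or logging pipeline (the numbers and the 3x threshold are illustrative):

```javascript
// Minimal anomaly sketch: alert when today's AI API usage far exceeds the
// recent baseline. Input data and the 3x threshold are illustrative; real
// deployments would feed this from billing exports or gateway logs.
function checkUsage(dailyTokens, todayTokens, spikeFactor = 3) {
  const baseline = dailyTokens.reduce((sum, n) => sum + n, 0) / dailyTokens.length;
  if (todayTokens > baseline * spikeFactor) {
    return `ALERT: today's usage (${todayTokens}) is ${(todayTokens / baseline).toFixed(1)}x the ${baseline.toFixed(0)}-token daily baseline`;
  }
  return "usage within normal range";
}

// Example: a quiet week followed by a harvested-key spike.
const history = [12000, 9000, 14000, 11000, 10000, 13000, 12500];
console.log(checkUsage(history, 250000)); // -> ALERT: ...
```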

The post When AI Secrets Go Public: The Rising Risk of Exposed ChatGPT API Keys appeared first on Cyble.

Cyble – ​Read More

Ryan Liles, master of technical diplomacy

Ryan Liles, master of technical diplomacy

Cisco Talos is back with another inside look at the people who keep the internet safe. This time, Amy chats with Ryan Liles, who bridges the gap between Cisco’s product teams and the third-party testing labs that put Cisco products through their paces. Ryan pulls back the curtain on the delicate dance of technical diplomacy, how he keeps his cool when the stakes are high, and how speaking up has helped him reshape industry standards. Plus, get a glimpse of the hobbies that keep him recharged when he’s off the clock.

Amy Ciminnisi: Ryan, you shared that you are on the Vulnerability Research and Discovery team, but you work in a little bit of a different niche. Can you talk a little bit about what you do?

Ryan Liles: My primary role is to work with all of the Cisco product teams. So anybody that Talos feeds security intelligence to — Firewall, Email, Endpoint — anybody that we write content for, I work with their product teams to help get their products tested externally. Cisco can come out all day and say our products are the best at what they do, but no one’s going to take our word for it. So we have to get someone else to say that for us, and that’s where I come in.

AC: Third-party testing involves coordinating with external organizations and standards groups. You mentioned it can be difficult sometimes and you have to choose your words carefully. What are some of the biggest challenges you face when working across these various groups? Do you have a particular method of overcoming them?

RL: The reason I fell into this role at Cisco is because of all the contacts I made while working at NSS Labs. The third-party testing industry for security appliances is like a lot of the rest of the security industry — very small. Even though there’s a large dollar amount tied to it in the marketplace, the number of people in it is very small. So you’re going to run into the same personalities over and over again throughout your career in security. Because I try to generally be friendly with those people and keep my network alive, I have a lot of personal relationships that I can leverage when it comes to having difficult conversations.

By difficult conversations, I mean if we’ve found a bug in the product or if a third-party test lab acquired our product through means not involving us and did some testing that didn’t turn out great, I can have the conversations with them where we discuss both technically what was their testing methodology and how did they deploy the products. If there were instances where we feel maybe they didn’t deploy the product correctly or there’s some flaws in their methodology, being able to have that kind of discussion with a test lab, while not frustrating them, takes a lot of diplomatic skills. I think that’s the biggest contributor to my success in this role — being able to have those conversations, leaving emotion out of things, and just sticking to the technical facts and saying, here’s what went wrong, here’s what went right, let’s figure out the best way to fix this. That has really contributed to how Cisco and Talos interface with third-party testing labs and maintain those relationships.


Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos.

Cisco Talos Blog – ​Read More