How businesses should respond to employees using personal AI apps

A recent MIT report, The GenAI Divide: State of AI in Business 2025, brought on a significant cooling of tech stocks. While the report offers interesting observations on the economics and organization of AI implementation in business, it also contains valuable insights for cybersecurity teams. The authors weren’t concerned with security issues: the words “security”, “cybersecurity”, or “safety” don’t even appear in the report. However, its findings can and should be considered when planning new corporate AI security policies.

The key observation is that while only 40% of surveyed organizations have purchased an LLM subscription, 90% of employees regularly use personal AI-powered tools for work tasks. And this “shadow AI economy” — the term used in the report — is said to be more effective than the official one. A mere 5% of corporations see economic benefit from their AI implementations, whereas employees are successfully boosting their personal productivity.

The top-down approach to AI implementation is often unsuccessful. Therefore, the authors recommend “learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives”. So how does this advice align with cybersecurity rules?

A complete ban on shadow AI

A policy favored by many CISOs is to test and implement — or better yet, build one’s own — AI tools and then simply ban all others. This approach can be economically inefficient, potentially causing the company to fall behind its competitors. It’s also difficult to enforce, as ensuring compliance can be both challenging and expensive. Nevertheless, for some highly regulated industries or for business units that handle extremely sensitive data, a prohibitive policy might be the only option. The following methods can be used to implement it:

  • Block access to all popular AI tools at the network level using a network filtering tool (a minimal sketch of such a check follows this list).
  • Configure a DLP system to monitor and block data from being transferred to AI applications and services; this includes preventing the copying and pasting of large text blocks via the clipboard.
  • Use an application allowlist policy on corporate devices to prevent employees from running third-party applications that could be used for direct AI access or to bypass other security measures.
  • Prohibit the use of personal devices for work-related tasks.
  • Use additional tools, such as video analytics, to detect and limit employees’ ability to take pictures of their computer screens with personal smartphones.
  • Establish a company-wide policy that prohibits the use of any AI tools except those on a management-approved list and deployed by corporate security teams. This policy should be formally documented, and employees should receive appropriate training.
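As a rough illustration of the first item on the list above, here’s a minimal sketch of the kind of host check a network filtering tool or forwarding proxy could apply. The domain list is illustrative only; a real deployment would rely on the filtering product’s own, regularly updated category feeds.

```python
# Minimal sketch: check whether an outbound request targets a known AI service.
# The domain list below is illustrative, not exhaustive.
from urllib.parse import urlparse

BLOCKED_AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def is_blocked(url: str) -> bool:
    """True if the URL's host is a blocked AI domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

if __name__ == "__main__":
    for u in ("https://chatgpt.com/c/123", "https://example.com/docs"):
        print(u, "->", "BLOCK" if is_blocked(u) else "ALLOW")
```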

Unrestricted use of AI

If the company considers the risks of using AI tools to be insignificant, or has departments that don’t handle personal or other sensitive data, the use of AI by these teams can be all but unrestricted. By setting a short list of hygiene measures and restrictions, the company can observe LLM usage habits, identify popular services, and use this data to plan future actions and refine their security measures. Even with this democratic approach, it’s still necessary to:

  • Provide regular employee training on the risks of using AI.
  • Survey employees about which AI tools they use and for what tasks.
  • Maintain robust security on both work and personal devices.

Balanced restrictions on AI use

When it comes to company-wide AI usage, neither extreme — a total ban or total freedom — is likely to fit. More versatile would be a policy that allows for different levels of AI access based on the type of data being used. Full implementation of such a policy requires:

  • A specialized AI proxy that both cleans queries on the fly by removing specific types of sensitive data (such as names or customer IDs), and uses role-based access control to block inappropriate use cases (a query-scrubbing sketch follows this list).
  • An IT self-service portal for employees to declare their use of AI tools — from basic models and services to specialized applications and browser extensions.
  • A solution (NGFW, CASB, DLP, or other) for detailed monitoring and control of AI usage at the level of specific requests for each service.
  • Only for companies that build software: modified CI/CD pipelines and SAST/DAST tools to automatically identify AI-generated code, and flag it for additional verification steps.
  • As with the unrestricted scenario, regular employee training, surveys, and robust security for both work and personal devices.
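To make the first requirement more concrete, here’s a minimal sketch of the query-scrubbing step such an AI proxy could perform before forwarding a prompt to an external service. The patterns are hypothetical placeholders (a “CUST-” customer-ID format is assumed for illustration); a production proxy would use your organization’s real identifier formats and a proper DLP engine.

```python
import re

# Hypothetical patterns for illustration; replace with your organization's real formats.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CUSTOMER_ID": re.compile(r"\bCUST-\d{6,}\b"),        # assumed format, e.g. CUST-004211
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace sensitive fragments with placeholders before the prompt leaves the proxy."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REMOVED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the complaint from jane.doe@example.com about order CUST-004211."
    print(scrub_prompt(raw))
    # -> Summarize the complaint from [EMAIL REMOVED] about order [CUSTOMER_ID REMOVED].
```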

With these requirements in place, you can develop a policy that covers different departments and various types of information. It might look something like this:

| Data type | Public-facing AI (from personal devices and accounts) | External AI service (via a corporate AI proxy) | On-premise or trusted cloud AI tools |
|---|---|---|---|
| Public data (such as ad copy) | Permitted (declared via the company portal) | Permitted (logged) | Permitted (logged) |
| General internal data (such as email content) | Discouraged but not blocked; requires declaration | Permitted (logged) | Permitted (logged) |
| Confidential data (such as application source code, legal or HR communications) | Blocked by DLP/CASB/NGFW | Permitted for specific, manager-approved scenarios (personal data must be removed; code requires both automated and manual checks) | Permitted (logged, with personal data removed as needed) |
| High-impact regulated data (financial, medical, and so on) | Prohibited | Prohibited | Permitted with CISO approval, subject to regulatory storage requirements |
| Highly critical and classified data | Prohibited | Prohibited | Prohibited (exceptions possible only with board of directors approval) |

 

To enforce the policy, a multi-layered organizational approach is necessary in addition to technical tools. First and foremost, employees need to be trained on the risks associated with AI — from data leaks and hallucinations to prompt injections. This training should be mandatory for everyone in the organization.

After the initial training, it’s essential to develop more detailed policies and provide advanced training for department heads. This will empower them to make informed decisions about whether to approve or deny requests to use specific data with public AI tools.

Initial policies, criteria, and measures are just the beginning; they need to be regularly updated. This involves analyzing data, refining real-world AI use cases, and monitoring popular tools. A self-service portal is needed as a stress-free environment where employees can explain what AI tools they’re using and for what purposes. This valuable feedback enriches your analytics, helps build a business case for AI adoption, and provides a role-based model for applying the right security policies.

Finally, a multi-tiered system for responding to violations is a must. Possible steps:

  • An automated warning, and a mandatory micro-training course on the given violation.
  • A private meeting between the employee and their department head and an information security officer.
  • A temporary ban on AI-powered tools.
  • Strict disciplinary action through HR.

A comprehensive approach to AI security

The policies discussed here cover a relatively narrow range of risks associated with the use of SaaS solutions for generative AI. To create a full-fledged policy that addresses the whole spectrum of relevant risks, see our guidelines for securely implementing AI systems, developed by Kaspersky in collaboration with other trusted experts.

Kaspersky official blog – ​Read More

How to protect your cookies and session ID | Kaspersky official blog

Open any website, and the first thing you’ll likely see is a pop-up notification about the use of cookies. You’re usually given the option to accept all cookies, accept only necessary ones, or flatly reject them. Regardless of your choice, you probably won’t notice a difference, and the notification disappears from the screen anyway.

Today, we dive a little deeper into the cookie jar: what cookies are for, what types exist, how attackers can intercept them, what the risks are, and how to stay safe.

What are cookies?

When you visit a website, it sends a cookie to your browser. This is a small text file that contains data about you, your system, and the actions you’ve taken on the site. Your browser stores this data on your device and sends it back to the server every time you return to that site. This simplifies your interaction with the site: you don’t have to log in on every single page; sites remember your display settings; online stores keep items in your cart; streaming services know at which episode you stopped watching — the benefits are limitless.

Cookies can store your login, password, security tokens, phone number, residential address, bank details, and session ID. Let’s take a closer look at the session identifier.

A session ID is a unique code assigned to each user when they sign in to a website. If a third party manages to intercept this code, the web server will see them as a legitimate user. Here’s a simple analogy: imagine you can enter your office by means of an electronic pass with a unique code. If your pass is stolen, the thief — whether they look like you or not — can open any door you have access to without any trouble. Meanwhile, the security system will believe that it’s you entering. Sounds like a scene from a crime TV show, doesn’t it? The same thing happens online: if a hacker steals a cookie with your session ID, they can sign in to a website you were already signed in to, under your name, without needing to enter a username and password; sometimes they can even bypass two-factor authentication. In 2023, this is exactly how hackers hijacked three YouTube channels belonging to the famous tech blogger Linus Sebastian: “Linus Tech Tips” and two other Linus Media Group channels with tens of millions of subscribers. We’ve already covered that case in detail.

What types of cookies are there?

Now let’s sort through the different cookie varieties. All cookies can be classified according to a number of characteristics.

By storage time

  • Temporary, or session cookies. These are only used while you’re on the website. They’re deleted as soon as you leave. They’re required for things like keeping you signed in as you navigate from page to page, or remembering your selected language and region.
  • Persistent cookies. These remain on your device after you leave the site. They spare you the need to accept or decline cookie policies every time you visit. They typically last for about a year.

It’s possible for session cookies to become persistent. For example, if you check a box like “Remember me”, “Save settings”, or some such on a website, the data will be saved in a persistent cookie.

By source

  • First-party cookies. These are generated by the website itself. They allow the website to function properly and visitors to get a proper experience. They may also be used for analytics and marketing purposes.
  • Third-party cookies. These are collected by external services. They’re used to display ads and collect advertising statistics, among other things. This category also includes cookies from analytics services like Google Analytics and social media platforms. These cookies store your sign-in credentials, allowing you to like a page or share content on social media with a single click.

By importance

  • Required, or essential cookies. These support core website features, such as selling products on an e-commerce platform. In this case, each user has a personal account, and essential cookies store their login, password, and session ID.
  • Optional cookies. These are used to track user behavior and help tailor ads more precisely. Most optional cookies belong to external parties and don’t affect your ability to use all of the site’s features.

By storage technology

  • Regular cookies. These are stored in text files in the browser’s standard storage. When you clear your browser data, they’re deleted, and after that, the websites that sent them will no longer recognize you.
  • There are two special subtypes: supercookies and evercookies, which store data in a non-standard way. Supercookies are embedded in website headers and stored in non-standard locations, which allows them to avoid being deleted by the browser’s cleanup function. Evercookies can be restored using JavaScript even after being deleted. This means they can be used for persistent and difficult-to-control user tracking.

The same cookie can fall into multiple categories: for example, most optional cookies are third-party, while required cookies include temporary ones responsible for the security of a specific browsing session. For more details on how and when all these types of cookies are used, read the full report on Securelist.

How session IDs are stolen through session hijacking

Cookies that contain a session ID are the most tempting targets for hackers. Theft of a session ID is also known as session hijacking. Let’s examine some of the most interesting and widespread methods.

Session sniffing

Session hijacking is possible by monitoring or “sniffing” the internet traffic between the user and the website. This type of attack happens on websites that use the less secure HTTP protocol instead of HTTPS. With HTTP, cookie files are transmitted in plain text within the headers of HTTP requests, meaning they’re not encrypted. A malicious actor can easily intercept the traffic between you and the website you’re on, and extract cookies.

These attacks often occur on public Wi-Fi networks, especially if they’re not protected by WPA2 or WPA3. For this reason, we recommend exercising extreme caution with public hotspots. It’s much safer to use mobile data. If you’re traveling abroad, it’s a good idea to use the Kaspersky eSIM Store.

Cross-site scripting (XSS)

Cross-site scripting consistently ranks among the top web-security vulnerabilities, and with good reason. This type of attack allows malicious actors to gain access to a site’s data — including the cookie files that contain the coveted session IDs.

Here’s how it works: the attacker finds a vulnerability in the website’s source code and injects a malicious script; that done, all that remains is for you to visit the infected page and you can kiss your cookies goodbye. The script gains full access to your cookies and sends them to the attacker.
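For readers who also build or maintain websites, the standard server-side counter to cookie theft is to set the right cookie attributes: HttpOnly keeps page scripts from reading the session cookie at all (blunting the XSS scenario above), Secure keeps it off plaintext HTTP, and SameSite limits how it travels with cross-site requests, which also helps against the forgery attacks described next. Below is a minimal sketch using Flask purely as an example framework; the route and cookie names are assumptions for illustration.

```python
# Minimal Flask sketch: the session cookie is HttpOnly (JavaScript can't read it),
# Secure (never sent over plain HTTP), and SameSite=Lax (not attached to most
# cross-site requests).
import secrets
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    session_id = secrets.token_urlsafe(32)  # unpredictable session ID
    resp = make_response("Signed in")
    resp.set_cookie(
        "session_id",
        session_id,
        httponly=True,
        secure=True,
        samesite="Lax",
        max_age=3600,
    )
    return resp
```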

Cross-site request forgery (CSRF/XSRF)

Unlike other types of attacks, cross-site request forgery exploits the trust relationship between a website and your browser. An attacker tricks an authenticated user’s browser into performing an unintended action without their knowledge, such as changing a password or deleting data like uploaded videos.

For this type of attack, the threat actor creates a web page or email containing a malicious link, HTML code, or a script with a request to the vulnerable website. Simply opening the page or email, or clicking the link, is enough for the browser to automatically send the malicious request to the target site. All of your cookies for that site will be attached to the request. Believing that it was you who requested, say, the password change or channel deletion, the site will carry out the attackers’ request on your behalf.

That’s why we recommend not opening links received from strangers, and installing Kaspersky Password Manager, which can alert you to malicious links or scripts.

Predictable session IDs

Sometimes, attackers don’t need to use complex schemes — they can simply guess the session ID. On some websites, session IDs are generated by predictable algorithms, and might contain information like your IP address plus an easily reproducible sequence of characters.

To pull off this kind of attack, hackers need to collect enough sample IDs, analyze them, and then figure out the generating algorithm to predict session IDs on their own.
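To see why this matters, compare a session ID derived from guessable inputs with one drawn from a cryptographic random generator. The sketch below is illustrative: the “weak” scheme is a made-up example of the kind of predictable construction described above, while the “strong” one uses Python’s secrets module.

```python
import hashlib
import secrets
import time

def weak_session_id(ip: str) -> str:
    """Predictable: anyone who knows the victim's IP and rough login time can reproduce it."""
    seed = f"{ip}:{int(time.time())}"
    return hashlib.md5(seed.encode()).hexdigest()

def strong_session_id() -> str:
    """Unpredictable: 256 bits from the operating system's cryptographic RNG."""
    return secrets.token_urlsafe(32)

if __name__ == "__main__":
    print("weak  :", weak_session_id("203.0.113.7"))
    print("strong:", strong_session_id())
```

Against the weak scheme, an attacker only needs to try a few thousand timestamps around the login time; against the strong one, brute force is hopeless.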

There are other ways to steal a session ID, such as session fixation, cookie tossing, and man-in-the-middle (MitM) attacks. These methods are covered in our dedicated Securelist post.

How to protect yourself from cookie thieves

A large part of the responsibility for cookie security lies with website developers. We provide tips for them in our full report on Securelist.

But there are some things we can all do to stay safe online.

  • Only enter personal data on websites that use the HTTPS protocol. If you see “HTTP” in the address bar, don’t accept cookies or submit any sensitive information like logins, passwords, or credit card details.
  • Pay attention to browser alerts. If you see a warning about an invalid or suspicious security certificate when you visit a site, close the page immediately.
  • Update your browsers regularly or enable automatic updates. This helps protect you from known vulnerabilities.
  • Regularly clear browser cookies and cache. This prevents old, potentially leaked cookie files and session IDs from being exploited. Most browsers have a setting to automatically delete this data when you close them.
  • Don’t follow suspicious links. This is especially true of links received from strangers in a messaging app or by email. If you have a hard time telling the difference between a legitimate link and a phishing one, install Kaspersky Premium, which can alert you before you visit a malicious site.
  • Enable two-factor authentication (2FA) wherever possible. Kaspersky Password Manager is a convenient way to store 2FA tokens and generate one-time codes. It syncs them across all your devices, which makes it much harder for an attacker to access your account after a session has ended — even if they steal your session ID.
  • Refuse to accept all cookies on all websites. Accepting every cookie from every site isn’t the best strategy. Many websites now offer a choice between accepting all and accepting only essential cookies. Whenever possible, choose the “required/essential cookies only” option, as these are the ones the site needs to function properly.
  • Connect to public Wi-Fi networks only as a last resort. They are often poorly secured, which attackers take advantage of. If you have to connect, avoid signing in to social media or messaging accounts, using online banking, or accessing any other services that require authentication.

Want to know even more about cookies? Read these articles:

Kaspersky official blog – ​Read More

Streamline Your SOC: All-in-One Threat Detection with ANY.RUN 

Running a SOC means living in a world of alerts. Every day, thousands of signals pour in: some urgent, many irrelevant. Analysts need to separate noise from real threats, investigate quickly, and keep the organization safe without letting cases pile up.

The challenge isn’t only about detecting threats but doing it fast enough to reduce escalations, avoid burnout, and keep operations efficient. 

That’s where an all-in-one detection workflow changes everything. ANY.RUN brings together the tools analysts rely on most (live threat feeds, interactive sandboxing, and instant lookups) into a single, streamlined process. The result: faster answers, fewer escalations, and more confidence in every decision.

Why Fragmented Workflows Slow SOCs Down 

It’s not the flood of alerts alone that puts SOCs under pressure but the fractured way they’re handled. One tool for threat feeds, another for detonation, a third for enrichment. Every time an analyst switches context, minutes are lost. Multiply that across hundreds of alerts, and the delays add up fast. 

The bigger problem is what those delays cause: escalations that didn’t need to happen, senior staff tied up with routine checks, and threats that linger longer than they should. Instead of building momentum, investigations stall. 

This is the hidden cost of disconnected tools. They don’t only slow analysts down but also create more work for everyone and open the door to mistakes. 

From Chaos to Clarity: The Power of Unified Workflow 

When detection runs as one continuous workflow, every step strengthens the next. Instead of losing time hopping between tools, analysts work with a steady flow: 

  • Noise gets filtered early: Live feeds rule out known threats, reducing case load by up to 20% and cutting unnecessary escalations by 30%.
  • Investigations move faster: The sandbox reveals hidden behavior in real time, lowering MTTR by as much as 21 minutes per case.
  • Decisions are backed by context: Lookups provide history from millions of past analyses contributed by 15,000+ organizations, giving analysts 24× more IOCs to work with and ensuring every case is backed by evidence.

The result is measurable:

  • +62.7% more threats detected overall
  • 94% of surveyed users report faster triage
  • 63% year-over-year user growth, driven by analyst efficiency
  • 30% fewer alerts require escalation to senior analysts

The outcome of this unified workflow is speed, clarity and confidence. Analysts know what to act on, what to ignore, and when a case can be closed without doubt. 

Threat Feed: Cut Through the Noise 

The first challenge in any SOC is deciding which alerts deserve attention. With live IOC streams collected from thousands of users worldwide, ANY.RUN’s TI Feeds works as your early filter. Analysts see instantly whether an IP, domain, or hash has already been confirmed as malicious and can rule out duplicates on the spot. That means less time wasted on “non-issues” and more focus on real threats that matter. 

ANY.RUN’s TI Feed providing actionable IOCs to SOC teams 

Every IOC in the feed is actionable and connected to sandbox analyses, giving analysts not just a red flag but the full context behind it. This means faster triage, more confident decisions, and the ability to trace threats back to their behavior in real-world samples. 

The numbers speak for themselves: with Threat Feed and Lookup combined, analysts gain access to 24× more IOCs than from typical isolated sources. And because the feed captures real-world attacks, from targeted phishing campaigns to large-scale malware hitting banks and enterprises, your SOC works with threat data that reflects the real distribution of risks. 

ANY.RUN’s Threat Intelligence Feeds, with a variety of format options and easy integration

ANY.RUN’s Threat Intelligence Feeds come in multiple formats with simple integration options, making it easy to plug into your existing SIEM, TIP, or SOAR setup. 

Expand threat coverage in your SOC: check out ANY.RUN’s plans.


Interactive Sandbox: See the Whole Picture 

When an alert passes the filter, it needs proof. This is where ANY.RUN’s interactive sandbox becomes the proving ground, turning suspicious files, scripts, and URLs into full investigations in real time. Instead of waiting for static reports or snapshots, analysts can detonate samples and watch the behavior unfold step by step, just like a real user would. 

This approach uncovers what traditional sandboxes often miss: 

  • Hidden payloads that require clicks or triggers to activate. 
  • Staged downloads that reveal themselves only over time. 
  • Evasive tactics designed to bypass automated detection. 

But visibility doesn’t depend solely on manual clicks. With automated interactivity, ANY.RUN simulates user actions to expose threats faster, reducing the need for analysts to intervene at every step. Junior analysts gain confidence because the system highlights behaviors for them, while senior staff can focus on advanced investigations instead of routine triage. 

The user-friendly interface and AI assistance add another layer of efficiency. Complex behaviors are explained clearly, reports are well-structured, and the entire attack chain is mapped from start to finish.  

For example, in the case of Lumma Stealer, ANY.RUN captured the full infection chain, from initial dropper to persistence mechanisms, all preserved in a detailed report ready for escalation, rule writing, or sharing. 

View Lumma Stealer exposed in 30 seconds 

Lumma Stealer’s full attack chain detected inside ANY.RUN sandbox in 30 seconds 

The outcome is a process where analysts of all skill levels can act faster, escalate less, and make decisions with confidence, while SOC leaders gain time back from their most experienced staff. 

Threat Lookup: Context at Your Fingertips 

Even with full sandbox results, one question always remains: Has this threat been seen before? Knowing whether an IOC belongs to a fresh campaign or something already circulating across industries changes how analysts respond. 

Sandbox analyses of recent Tycoon attacks for faster decision making 

ANY.RUN’s Threat Lookup delivers that answer in seconds. With access to millions of past analyses contributed by more than 15,000 organizations worldwide, analysts can instantly check whether an IP, domain, or hash has been observed elsewhere. This turns isolated alerts into patterns, helping teams connect the dots and react with confidence. 

  • Early warning from others’ incidents: What hits one enterprise today could reach yours tomorrow. Lookup lets you learn from global telemetry before the threat arrives at your doorstep. 
  • Deeper reporting without heavy lifting: Instead of manually searching across multiple feeds and databases, analysts enrich findings with one query. 
  • Reduced unnecessary escalations: Confirmation from millions of past cases means analysts can validate faster and close tickets sooner. 

The result is a smoother close to every investigation: sandbox analysis provides the behavior, Threat Lookup adds the history, and reports go out with stronger evidence. Analysts save time, senior experts get fewer escalations, and the SOC becomes more resilient with every case resolved.

Detect threats faster with ANY.RUN’s Interactive Sandbox: see the full attack chain in seconds for immediate response. Get started now.


Turn Detection Into One Continuous Workflow 

The real power of ANY.RUN is in how the solutions work together, seamlessly feeding into one another to create a single, continuous process. 

Instead of bouncing between disconnected tools, analysts move through one streamlined workflow: alerts are filtered at the start, suspicious activity is detonated, the entire attack chain is exposed in real time, and findings are instantly validated against global threat history.  

The outcome is faster resolutions, fewer unnecessary escalations, and reports enriched with both behavioral detail and historical context: the kind of evidence leaders and clients can trust.

Sign up today to see how ANY.RUN’s all-in-one suite can turn your SOC into a faster, more confident detection machine. 

The post Streamline Your SOC: All-in-One Threat Detection with ANY.RUN  appeared first on ANY.RUN’s Cybersecurity Blog.

ANY.RUN’s Cybersecurity Blog – ​Read More

What are money mules, and how to avoid accidentally becoming one | Kaspersky official blog

Picture this: you’re on a train chatting with a nice lady and her young child — visitors to your home city. As the train approaches the station, she reaches for her wallet, pulls out a bank card, and her face falls. “Oh no! I accidentally snapped my card!” she exclaims. “What am I going to do now? I needed to withdraw cash… Could I transfer you some money, so you can then withdraw it for me at an ATM?”

Most people would agree to help. She’s a young lady with a child in a new city after all, and she’s in a tight spot. What could go wrong? And she’s not asking for money — she’s sending it to you. It seems completely harmless. The money is quickly transferred to your account, you withdraw the cash at an ATM, and the woman thanks you enthusiastically before disappearing into the crowd. But a couple of weeks later the police show up at your door…

You thought you were doing a good deed, but you’ve just become an unwitting participant in a money laundering scheme. People who help criminals move stolen money through their bank accounts are called “money mules”. Today we explain how you can accidentally become a money mule, and the serious consequences you could face.

How people become money mules

A money mule is anyone whose bank account is used to move or withdraw money as part of a scam. Mules are considered expendable in any fraudulent scheme, and anyone can become one — even someone who’s never heard the term before. There are many ways people get roped into these schemes, and here are just a few of them.

The “easy-money online job” scam. Job-search chats are often filled with tempting offers: “Looking for a few people, paying $50 an hour, easy work, all you need is internet access”. The “job” involves accepting transfers from certain people, and then making payments to others. Another variation involves withdrawing cash after funds are sent to you and giving it to a random courier. They might actually pay you for this “service”, but trust us, even $50 an hour isn’t worth the potential consequences, which we’ll get into later.

“I left my card at home. Do you mind helping me out?” The young-lady-in-a-tight-spot role is easy to recast in other narratives. Instead of a young lady, there could be a young man telling you a sob story about a card he’s left somewhere and needing help to pay for a smartphone, a TV, perfume, or some other expensive item. He’ll offer to transfer you funds so you can pay for the item with your own card. You may agree to help out — especially if you get cashback from using your card. But notice the difference: if this stranger messaged you online, you’d probably just tell them to get lost. However, when you’re standing next to them at the checkout counter, the likelihood of your “helping out” is much higher.

“We’ll pay you in cash under the table”. Even employees of small, shady companies can unknowingly become money mules. These companies don’t officially hire their workers, and pay only in cash under the table. Note that if the employer has obtained money illegally, all employees working without a contract may be considered money mules and could face serious legal consequences.

There are other schemes too, which primarily target teenagers. Youngsters are asked to open a bank account and pass the account details to strangers online who’ll pay them, say, $20 or $30 for the service. Opening a new bank account takes only a few seconds, and the promised sum is a real help for any hard-up student. Unfortunately, these young victims most likely have no idea who could use their accounts or how.

What happens if you become a money mule?

Nothing good. At a minimum, a money mule is considered an active participant in a criminal scheme — even if they’re unaware of their involvement. Fraudsters constantly steal large sums of digital money from both companies and ordinary people, employing hundreds of social engineering tactics. But they need a way to cash out. And that’s where schemes to create entire networks of unsuspecting money mules come in — and they’re the ones who’ll have the police knocking on their door.

Many countries have laws against money muling. Money mules get prosecuted regardless of whether they knew where the funds came from, or that they were pawns in a grand scheme. Proving the absence of criminal intent in court can be difficult, so, despite being unaware of the third party’s illicit intentions when transferring the money, they may be slapped with fines or other penalties.

Actual punishment varies by country: for example, in the United States, if criminal intent is proven, a money mule can face up to 20 years in prison. In Germany, to avoid punishment, it’s enough to turn yourself in to the police and report the scam you’ve become involved in. In Singapore, inadvertent money laundering can lead to fines of up to $150 000, or a prison sentence of up to three years if there were clear “red flags” pointing to a scam.

How to avoid becoming a money mule

Regardless of the penalties in your country for cashing out criminal money, you need to be extremely careful to avoid unwittingly becoming a money mule. Here’s a list of rules to help you avoid unwanted problems:

  • Don’t trust everyone unquestioningly. If a stranger offers to send you any amount of money for you to withdraw for a small fee, refuse.
  • Always work on the books, with a formal contract. Don’t agree to off-the-books, cash-in-hand arrangements, and always sign a contract for any job you do.
  • Keep your bank details private. Don’t open bank accounts at the request of someone else, or sell details of your existing accounts or bank cards.

Most importantly, remember that nothing’s truly free. Learn how to spot scammers with the help of our Telegram channel — subscribe to stay up to date on all the new trends in cybersecurity.

What else to read on fraudulent schemes:

Kaspersky official blog – ​Read More

WordPress: vulnerabilities in plugins and themes | Kaspersky official blog

The WordPress content management system (CMS) has been popping up frequently on cybersecurity news sites lately. Most of this coverage was driven by vulnerabilities in plugins and themes. However, our colleagues have also observed a case where attackers used poorly secured WordPress sites to distribute trojans. This in itself isn’t surprising — WordPress remains one of the most popular CMS platforms in the business. But the sheer number of discovered plugin vulnerabilities and related incidents shows that attackers are watching the WordPress ecosystem just as closely as its defenders.

WordPress incidents

Just this summer, several serious WordPress-related security incidents have come to light.

Gravity Forms plugin: site compromise and code infection

In early July, attackers gained access to a site running Gravity Forms — a popular form-building plugin — and injected malicious code into versions 2.9.11.1 and 2.9.12. Sites where these plugin versions were installed manually by administrators, or via the PHP dependency manager, Composer, were infected between July 9 and 10.

The malware blocked further updates, downloaded and installed additional malicious code, and created new administrator accounts. This gave the attackers full control of the site, which they then used for malicious purposes.

The Gravity Forms team urges all users to check if they’re running a potentially vulnerable version. Instructions on how to do this are available in the incident notice on the official plugin website. The notice also explains how to remove the malware. And of course, the plugin should be updated to version 2.9.13.

Alone theme: active exploitation of CVE-2025-5394

Also in July, researchers reported that attackers were actively exploiting a critical vulnerability in the unauthenticated file upload validation process (CVE-2025-5394) affecting all versions of the Alone theme for WordPress — up to and including 7.8.3. The flaw enables remote code execution (RCE), giving attackers full control over affected sites.

Notably, attacks began several days before the vulnerability was officially disclosed. According to Wordfence, already by June 12 over 120 000 attempts to exploit CVE-2025-5394 had been made. Threat actors used the flaw to upload ZIP archives containing webshells, install password-protected PHP backdoors for remote HTTP access, and create hidden administrator accounts. In some cases, they even installed full-featured file managers on the compromised WordPress site, giving them complete control over the site’s database.

The developers of the Alone theme have since released version 7.8.5, which patches the vulnerability. All users are strongly advised to update to this version immediately. Additional guidance on how to protect against this bug can be found in the Wordfence report.

Motors theme: exploitation of CVE-2025-4322

In June, attackers also targeted WordPress sites using another premium theme called Motors. In this case, attackers exploited CVE-2025-4322 — a weakness in the user validation process affecting all versions up to 5.6.67. Exploiting it allowed attackers to hijack administrator accounts.

The theme creators, StylemixThemes, released a patched version (5.6.68) on May 14, 2025. That was followed by a Wordfence statement five days later urging users to update without delay. However, not all users updated in time — attacks began the very next day, May 20, and by June 7 Wordfence had recorded 23 100 exploitation attempts.

Successful exploitation of CVE-2025-4322 grants attackers administrator rights, enabling them to create new accounts and reset passwords.

Efimer malware: spread through compromised WordPress sites

And finally, a case in which cybercriminals have not exploited vulnerabilities in plugins and themes, but that nevertheless demonstrates the interest of attackers in WordPress-based sites. In early August, our colleagues investigated an attack involving the Efimer malware — designed primarily to steal cryptocurrency. Attackers spread it via email and malicious torrents, but some infections also originated from compromised WordPress sites.

Careful analysis revealed that Efimer also included a WordPress password cracker. Essentially, each time the malware ran, it launched a brute-force attack on the WordPress admin panel using a set of standard passwords hard-coded in the script. Any successfully cracked passwords were sent back to the attackers’ command server.

Potentially dangerous vulnerabilities

Beyond the above incidents, several other vulnerabilities have been reported — though they’ve not yet been observed in real-world attacks. However, as the Motors case demonstrates, attackers could start exploiting them real soon, so they should be monitored closely.

GiveWP: a vulnerability in WordPress donation plugin

In late July, the team behind the open-source Pi-hole project discovered a vulnerability in the GiveWP plugin, which they were using on their own WordPress site. This plugin allows websites to accept online donations, manage fundraising campaigns, and more.

The developers found that the plugin inadvertently exposed donor data by displaying it in the page source, making names and email addresses accessible without authentication.

GiveWP’s developers released a patch just hours after the issue was reported on GitHub. However, since the data had already been exposed, the Have I Been Pwned service added the incident to its leak database, estimating that nearly 30 000 people’s data had been compromised.

Administrators of sites using GiveWP are advised to update the plugin to version 4.6.1 or later.

Post SMTP: vulnerability CVE-2025-24000 enables administrator account takeover

The CVE-2025-24000 vulnerability — rated 8.8 on the CVSS scale — was recently discovered in the Post SMTP plugin. This extension provides more reliable and user-friendly delivery of outgoing emails from a WordPress site than the built-in wp_mail function.

CVE-2025-24000, which affects all Post SMTP versions up to and including 3.2.0, stems from a broken access control mechanism in the plugin’s REST API. The issue is that this API checks only whether a user is authenticated — not their access level. As a result, even a low-privileged user can view logs containing sent emails along with their full contents.

This makes it possible to hijack an administrator account. An attacker only needs to initiate a password reset for the admin account, then inspect the email logs to retrieve the reset message and follow the link inside, thereby gaining administrator access.

The developer released a patched version — Post SMTP 3.3.0 — on June 11. However, download statistics on WordPress.org at the time of writing show that only about half of the plugin’s users (51.2%) have updated to the fixed version. That leaves more than 200 000 sites still exposed. Moreover, nearly a quarter of all sites (23.4%, or around 100 000) are still running the outdated 2.x branch, which contains this and other unpatched vulnerabilities.

To make matters worse, proof-of-concept (PoC) exploit code for CVE-2025-24000 has already been published online, though we haven’t verified its functionality.

How to protect your WordPress site

Plugins and themes make WordPress highly flexible and user-friendly, but they also significantly expand the attack surface. While avoiding them entirely isn’t realistic, you can greatly reduce the risk to your site by following these best practices:

  • Minimize the number of plugins and themes. Install only those that are truly necessary. The fewer you use, the lower the risk that one of them will contain a vulnerability.
  • Thoroughly test plugins in an isolated environment and analyze their code for backdoors before installing.
  • Give preference to widely used plugins. They’re not immune to flaws, but issues in popular projects are typically discovered and patched more quickly.
  • Avoid abandoned components — vulnerabilities in them may never be patched.
  • Monitor for anomalies. Regularly review the list of administrator accounts for unknown users, and monitor existing accounts for sudden password failures.
  • Strengthen password policies. Require users to set strong passwords, and make two-factor authentication mandatory.
  • Respond properly to incidents. If you suspect your site has been hacked, react to the incident immediately and restore the site’s security. If you lack the expertise, contact external specialists — swift action can greatly reduce the attack’s impact.

Kaspersky official blog – ​Read More

This month in security with Tony Anscombe – August 2025 edition

From Meta shutting down millions of WhatsApp accounts linked to scam centers all the way to attacks at water facilities in Europe, August 2025 saw no shortage of impactful cybersecurity news

WeLiveSecurity – ​Read More

Link up, lift up, level up

Welcome to this week’s edition of the Threat Source newsletter. 

As summer retreats into the rear-view mirror, I’d like to take a moment to reflect on one of my favorite things about the cybersecurity profession: the community. Earlier this month, I attended Black Hat USA 2025 and DEF CON 33 in scalding hot Las Vegas, NV. We often refer to it as “hacker summer camp,” where all the security nerds of various stripes congregate to eat, drink, party, hack and reforge or make new bonds of fellowship with other awesome hackers. Hacker summer camp is, simply put, a whirlwind of activity, from the talks to see, villages to visit, parties to attend, and knowledge to gain. In 5 days, I think I walked almost 30 miles. By the end I was exhausted, but happy to have learned so much and see many of my hacker friends. 

For all the fun and learning you can have at summer camp, it’s a very privileged position to be able to attend. Las Vegas is not a cheap town. Hotels, flights and food — everything, really — is more expensive than average. A Black Hat badge is $1,000+, and DEF CON $500+. If you’re new to this space and early in your career, or your company doesn’t have the money to send you, the FOMO can be real. Earlier in my career, getting the opportunity to visit hacker summer camp — either with my company covering my costs or me paying out of pocket — wasn’t going to happen.  

I bring this up not to flex that I went to BH/DEF CON, but to tell you that as good as those conferences are, there is so much more. Do not be daunted by what is inaccessible but know that there are other conferences out there for like-minded hackers who want to learn and share knowledge with you, wherever you are in the world. Are you in high school? I promise you there are clubs and organizations there to help you. College? There are student clubs and organizations there that will welcome you. And if you’re looking for projects and contests, there are quite a few out there. And hackathons? I got you covered, fam. 

It’s also important to know that there are smaller information security conferences around the world. Perhaps the most popular, and usually super local, is BSides. Check them out — their website has a calendar that might have one local to you.

Infosec is as much a calling as it is a career. You were drawn to this space for a reason — and finding friends and colleagues who match your vibe is important both to grow as a human and to maintain a healthy relationship with this industry, especially one that’s notoriously capable of burning you out. We as humans are social creatures, and we need social interaction, even if it’s in limited doses (I see you, introverts). Our professions are a natural magnet that pulls others into our orbit. I can tell you that so many of the things I consider personal career milestones happened because I talked with fellow security practitioners over drinks or a meal, and something truly wonderful came of it.

So go find your people, lean into the things you are a total security nerd about, and enjoy the fellowship and growth. You’ll be all the better for it.

The one big thing 

Last week, Talos shared that ransomware attacks in Japan surged by about 1.4 times in the first half of 2025, with small and medium-sized companies (especially manufacturing) being the hardest hit. The Qilin group was the most active, and a new player, “Kawa4096,” also began targeting Japanese businesses. Even though some major ransomware groups were shut down, new threats are quickly taking their place. 

Why do I care? 

The ransomware landscape is always changing, and it often highlights vulnerabilities in small and mid-sized businesses in critical industries like manufacturing. With new ransomware groups like Kawa4096 emerging and techniques evolving, the risks are growing, and attackers are finding new ways to target organizations that may not have strong defenses.

So now what? 

While small- to mid-size manufacturing companies are the most targeted in Japan, it’s important for all businesses to stay updated on threats, invest in cybersecurity, and train their teams to spot suspicious activity. ClamAV detections are also available in the blog.

Top security headlines of the week 

Organizations warned of exploited Git vulnerability 
The US cybersecurity agency CISA on Monday warned that the flaw, tracked as CVE-2025-48384 (CVSS score of 8.1), is an arbitrary file write during the cloning of repositories with submodules that use a ‘recursive’ flag. (SecurityWeek)

CISA updates SBOM recommendations 
The document is primarily meant for federal agencies, but CISA hopes businesses will also use it to push vendors for software bills of materials. (Cybersecurity Dive)

AI-powered ransomware: “PromptLock” 
Although it has not yet been observed in active cyberattacks, the researchers said the PromptLock ransomware appears to be under development and nearly ready to be unleashed onto the threat landscape. (Dark Reading)

Credential harvesting campaign targets ScreenConnect cloud administrators 
The campaign uses compromised Amazon Simple Email Service accounts to spear-phish senior IT administrators who have elevated privileges in ScreenConnect environments. (Cybersecurity Dive)

Security researcher maps hundreds of TeslaMate servers spilling Tesla vehicle data 
A security researcher has found over a thousand publicly exposed hobby servers run by Tesla vehicle owners that are spilling sensitive data about their vehicles, including their granular location histories. (TechCrunch)

Can’t get enough Talos? 

  • State of Identity Security Report 
    Cisco Duo’s global survey of 650 Security & Data Ops leaders shows where orgs succeed, and where they’re exposed. Download the full report now. 
  • Static Tundra exposed 
    A Russian state-sponsored group, Static Tundra, is exploiting an old Cisco IOS vulnerability to compromise unpatched network devices worldwide.

Upcoming events where you can find Talos 

  • BlueTeamCon (Sept. 4 – 7) Chicago, IL 
  • LABScon (Sept. 17 – 20) Scottsdale, AZ 
  • VB2025 (Sept. 24 – 26) Berlin, Germany 

Most prevalent malware files from Talos telemetry over the past week 

SHA 256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507  
MD5: 2915b3f8b703eb744fc54c81f4a9c67f  
VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507  
Typical Filename: VID001.exe  
Claimed Product: N/A  
Detection Name: Win.Worm.Coinminer::1201 

SHA 256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91    
MD5: 7bdbd180c081fa63ca94f9c22c457376    
VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91/details  
Typical Filename: IMG001.exe   
Claimed Product: N/A  
Detection Name: Simple_Custom_Detection 

SHA256: 47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca   
MD5: 71fea034b422e4a17ebb06022532fdde    
VirusTotal: https://www.virustotal.com/gui/file/47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca/details 
Typical Filename: VID001.exe    
Claimed Product: N/A    
Detection Name: Coinminer:MBT.26mw.in14.Talos 

Cisco Talos Blog – ​Read More

NX build compromise detection and response | Kaspersky official blog

Packages of the popular build platform and CI/CD optimization system, Nx, were compromised on the night of August 26-27. A malicious script was added to the system’s packages, which, according to npm repository statistics, have more than five million weekly downloads. Thousands of developers who use Nx to accelerate and optimize application development had their sensitive data stolen: npm and GitHub tokens, SSH keys, cryptocurrency wallets, and API keys were uploaded to public GitHub repositories. The massive leak of secrets poses a long-term threat of supply chain attacks: even when malicious packages are removed from affected systems, attackers may still have the ability to compromise applications created by these thousands of developers.

Attack and response chronology

The attackers used a compromised token issued for one of the Nx package maintainers to publish multiple malicious versions of the Nx package and its plugins in the two hours between 22:32 UTC, August 26 and 0:37 UTC, August 27. Another two hours later, the npm platform removed all compromised versions of the packages, and another hour later, the Nx owners revoked the stolen token — so attackers lost access to the Nx repository. Meanwhile, thousands of public repositories containing data stolen by the malicious script began appearing on GitHub.

At 9:05 UTC on August 27, GitHub responded by making all leaked repositories private and unsearchable. Nevertheless, the stolen data was publicly available for more than nine hours, and was downloaded multiple times by groups of cybercriminals and researchers. A total of 19 compromised versions of Nx and plugins were released:

  • @nx, 20.9.0, 20.10.0, 20.11.0, 20.12.0, 21.5.0, 21.6.0, 21.7.0, 21.8.0
  • @nx/devkit, 20.9.0, 21.5.0
  • @nx/enterprise-cloud, 3.2.0
  • @nx/eslint, 21.5.0
  • @nx/js, 20.9.0, 21.5.0
  • @nx/key, 3.2.0
  • @nx/node, 20.9.0, 21.5.0
  • @nx/workspace, 20.9.0, 21.5.0.

How this attack was possible

For a short time starting from August 21, the Nx code used a vulnerable GitHub Actions workflow that allowed arbitrary code injection. Attackers took advantage of this to modify publish.yml on August 24 and extract an npm token that grants the right to publish packages. Although the vulnerability was patched, the token leak went unnoticed until it was exploited in the attack on August 26.

Malware features and targets

The Nx packages were trojanized using a malicious post-installation script called telemetry.js. It targets development environments on macOS and *nix (when run on Windows, the malicious script immediately terminates).

Once run, the script searches the affected system for the following types of data:

  • MetaMask, Electrum, Ledger, Trezor, Exodus, Phantom, and Solflare key storages and crypto wallets
  • Other key storage files (*key, *keystore.json, UTC--, IndexedDB)
  • GitHub personal authentication tokens
  • Access tokens for npm
  • .env files
  • RSA private keys (id_rsa).

It’s noteworthy that popular LLMs running via command-line tools were used for the searches. If the Anthropic Claude, Amazon Q, or Google Gemini CLI tools were installed on the system, the script made an insecure query instructing them to find all the listed data types.

All found data was uploaded to GitHub in obfuscated form — encoded in base64 multiple times. To do this, the malware used stolen GitHub tokens, created a publicly accessible s1ngularity-repository, s1ngularity-repository-0, or s1ngularity-repository-1 on behalf of the victim, and uploaded a single results.b64 file there.

The attackers probably aim to exploit the stolen data quickly; after all, the malicious script made no attempt to be stealthy, and instead tried to aggressively cut the victim off from their working systems. To do this, it added the sudo shutdown command to ~/.bashrc and ~/.zshrc, so that new terminal sessions immediately initiated a system shutdown.

How to test your systems

Organizations using Nx should check their package versions, and audit their GitHub accounts and logs.

  1. Check the Nx package versions in use with the npm ls nx command
  2. Check for any Nx packages in package-lock.json (a helper sketch follows this list)
  3. Check for security events in the GitHub logs.
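To speed up steps 1 and 2, here’s a small helper sketch that scans an npm v7+ package-lock.json for the compromised versions listed earlier in this article (note that the main package appears in lockfiles as nx rather than @nx). Treat it as an illustration, and verify any findings against the official Nx advisory.

```python
# Scan package-lock.json for Nx package versions reported as compromised.
import json
import sys

COMPROMISED = {
    "nx": {"20.9.0", "20.10.0", "20.11.0", "20.12.0",
           "21.5.0", "21.6.0", "21.7.0", "21.8.0"},
    "@nx/devkit": {"20.9.0", "21.5.0"},
    "@nx/enterprise-cloud": {"3.2.0"},
    "@nx/eslint": {"21.5.0"},
    "@nx/js": {"20.9.0", "21.5.0"},
    "@nx/key": {"3.2.0"},
    "@nx/node": {"20.9.0", "21.5.0"},
    "@nx/workspace": {"20.9.0", "21.5.0"},
}

def scan(lockfile_path):
    with open(lockfile_path, encoding="utf-8") as f:
        lock = json.load(f)
    hits = []
    # npm v7+ lockfiles keep entries under "packages", keyed by node_modules path.
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        version = meta.get("version")
        if version in COMPROMISED.get(name, set()):
            hits.append((name, version))
    return hits

if __name__ == "__main__":
    findings = scan(sys.argv[1] if len(sys.argv) > 1 else "package-lock.json")
    for name, version in findings:
        print(f"COMPROMISED: {name}@{version}")
    if not findings:
        print("No known-compromised Nx versions found in this lockfile.")
```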

If repositories named s1ngularity-repository* are found, download the results.b64 files from them for further investigation, and remove them from GitHub.
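Since the stolen data in results.b64 was base64-encoded multiple times, a small sketch like the one below can help unwrap a downloaded copy during the investigation: it simply keeps decoding until the content no longer parses as base64. Run it on an isolated analysis machine and treat the output as sensitive.

```python
# Repeatedly base64-decode a results.b64 dump until the data stops being valid base64.
import base64
import binascii
import sys

def peel_base64(data: bytes, max_rounds: int = 10) -> bytes:
    for _ in range(max_rounds):
        candidate = b"".join(data.split())  # drop whitespace and newlines
        try:
            decoded = base64.b64decode(candidate, validate=True)
        except (binascii.Error, ValueError):
            break
        data = decoded
    return data

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "results.b64"
    with open(path, "rb") as f:
        print(peel_base64(f.read()).decode("utf-8", errors="replace"))
```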

When malicious repositories are detected:

  1. Remove node_modules completely: rm -rf node_modules
  2. Clean the npm cache: npm cache clean --force
  3. Check and clean out extraneous commands from ~/.bashrc and ~/.zshrc
  4. Make an archive copy for investigation and delete the /tmp/inventory.txt and /tmp/inventory.txt.bak files from the system
  5. Remove malicious package versions from package-lock.json
  6. Reinstall the safe versions of the packages.

The most critical and urgent action for compromised systems is to update all secrets the malware may have accessed (GitHub PATs, npm tokens, SSH keys, API keys in .env files, and Claude, Gemini, and Q keys).

You should also continue to monitor your GitHub repositories. First, even after all these steps, trojanized versions of Nx may still remain on compromised systems and continue to collect and upload stolen information. Second, if attackers managed to use the stolen tokens before you rotated them, this will most likely show up as unauthorized commits or malicious changes to GitHub Actions.

Kaspersky official blog – ​Read More

Libbiosig, Tenda, SAIL, PDF XChange, Foxit vulnerabilities

Cisco Talos’ Vulnerability Discovery & Research team recently disclosed ten vulnerabilities in BioSig Libbiosig, nine in the Tenda AC6 router, eight in SAIL, two in PDF-XChange Editor, and one in Foxit PDF Reader.

The vulnerabilities mentioned in this blog post have been patched by their respective vendors, all in adherence to Cisco’s third-party vulnerability disclosure policy.    

For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.     

Libbiosig vulnerabilities

Discovered by Mark Bereza and Lilith >_> of Cisco Talos.

BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. BioSig for C/C++ provides command line tools for data conversion, a library to access a number of data formats (libbiosig), and some experimental code for network transfer of biosignal data.

Talos discovered ten vulnerabilities in libbiosig, affecting both version 3.9.0 of the stable release and the latest commit on the Master Branch at the time of disclosure to the vendor, grouped here by vulnerability type:

Integer overflow:

  • TALOS-2025-2231 (CVE-2025-53518) exists in the ABF parsing functionality. A specially crafted ABF file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger this vulnerability.
  • TALOS-2025-2233 (CVE-2025-52581) exists in the GDF parsing functionality. A specially crafted GDF file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger this vulnerability.

Stack-based buffer overflow:

  • TALOS-2025-2234 (CVE-2025-54480 through CVE-2025-54494) and TALOS-2025-2236 (CVE-2025-46411) exist in the MFER parsing functionality. A specially crafted MFER file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger these vulnerabilities.

Heap-based buffer overflow:

  • TALOS-2025-2232 (CVE-2025-53853) exists in the ISHNE parsing functionality. A specially crafted ISHNE ECG annotations file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger this vulnerability.
  • TALOS-2025-2235 (CVE-2025-53557) and TALOS-2025-2237 (CVE-2025-53511) exist in the MFER parsing functionality. A specially crafted MFER file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger these vulnerabilities.
  • TALOS-2025-2239 (CVE-2025-54462) exists in the Nex parsing functionality. A specially crafted .nex file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger this vulnerability.
  • TALOS-2025-2240 (CVE-2025-48005) exists in the RHS2000 parsing functionality. A specially crafted RHS2000 file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger this vulnerability.

Out-of-bounds read:

  • TALOS-2025-2238 (CVE-2025-52461) exists in the Nex parsing functionality. A specially crafted .nex file can lead to an information leak. An attacker can provide a malicious file to trigger this vulnerability.

Tenda vulnerabilities

Discovered by Lilith >_> of Cisco Talos.

The Tenda AC6 is a popular and affordable dual-band gigabit WiFi Router available online, especially on Amazon. All vulnerabilities were found in Tenda AC6 V5.0 V02.03.01.110.

TALOS-2025-2161 (CVE-2025-31355) is a firmware update vulnerability in the Firmware Signature Validation functionality of Tenda. A specially crafted malicious file can lead to arbitrary code execution. An attacker can provide a malicious file to trigger this vulnerability.

Two vulnerabilities involving the unencrypted transmission of credentials were found: TALOS-2025-2162 (CVE-2025-27564) exists in the web portal authentication functionality, while TALOS-2025-2167 (CVE-2025-31646) is in the Session Authentication Cookie functionality. Specially crafted network packets can lead to arbitrary authentication or authentication bypass, respectively. An attacker can sniff network traffic to trigger these vulnerabilities.

TALOS-2025-2163 (CVE-2025-24322) is an unsafe default authentication vulnerability in the Initial Setup Authentication functionality of Tenda. A specially crafted network request can lead to arbitrary code execution. An attacker can browse to the device to trigger this vulnerability.

TALOS-2025-2164 (CVE-2025-24496) is an information disclosure vulnerability in the /goform/getproductInfo functionality of Tenda. Specially crafted network packets can lead to a disclosure of sensitive information. An attacker can send packets to trigger this vulnerability.

TALOS-2025-2165 (CVE-2025-27129) is an authentication bypass vulnerability in the HTTP authentication functionality of Tenda. A specially crafted HTTP request can lead to arbitrary code execution. An attacker can send packets to trigger this vulnerability.

TALOS-2025-2166 (CVE-2025-30256) is a denial of service vulnerability in the HTTP Header Parsing functionality of Tenda. A specially crafted series of HTTP requests can lead to a reboot. An attacker can send multiple network packets to trigger this vulnerability.

TALOS-2025-2168 (CVE-2025-32010) is a stack-based buffer overflow vulnerability in the Cloud API functionality of Tenda. A specially crafted HTTP response can lead to arbitrary code execution. An attacker can send an HTTP response to trigger this vulnerability.

TALOS-2025-2178 (CVE-2025-31143) is a cleartext transmission vulnerability in the Tenda App Router Authentication functionality. An attacker can send information gleaned from sniffing network traffic to trigger this vulnerability, which can lead to arbitrary authentication.

SAIL vulnerabilities

Discovered by a member of Cisco Talos.

SAIL is a format-agnostic image decoding library supporting all popular image formats. It provides a C/C++ API for end-users and works on Windows, macOS, and Linux platforms.

Talos found eight memory corruption vulnerabilities in SAIL Image Decoding Library v0.9.8.

TALOS-2025-2215 (CVE-2025-46407) exists in the BMPv3 Palette Decoding functionality. When loading a specially crafted .bmp file, an integer overflow can be made to occur which will cause a heap-based buffer to overflow when reading the palette from the image. These conditions can allow for remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2216 (CVE-2025-32468) exists in the BMPv3 Image Decoding functionality. When loading a specially crafted .bmp file, an integer overflow can be made to occur when calculating the stride for decoding. Afterwards, this will cause a heap-based buffer to overflow when decoding the image which can lead to remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2217 (CVE-2025-35984) exists in the PCX Image Decoding functionality. When decoding the image data from a specially crafted .pcx file, a heap-based buffer overflow can occur which allows for remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2218 (CVE-2025-53510) exists in the PSD Image Decoding functionality. When loading a specially crafted .psd file, an integer overflow can be made to occur when calculating the stride for decoding. Afterwards, this will cause a heap-based buffer to overflow when decoding the image which can lead to remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2219 (CVE-2025-53085) exists in the PSD RLE Decoding functionality. When decompressing the image data from a specially crafted .psd file, a heap-based buffer overflow can occur which allows for remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2220 (CVE-2025-50129) exists in the PCX Image Decoding functionality. When decoding the image data from a specially crafted .tga file, a heap-based buffer overflow can occur which allows for remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2221 (CVE-2025-52930) exists in the BMPv3 RLE Decoding functionality. When decompressing the image data from a specially crafted .bmp file, a heap-based buffer overflow can occur which allows for remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

TALOS-2025-2224 (CVE-2025-52456) exists in the WebP Image Decoding functionality. When loading a specially crafted .webp animation an integer overflow can be made to occur when calculating the stride for decoding. Afterwards, this will cause a heap-based buffer to overflow when decoding the image which can lead to remote code execution. An attacker will need to convince the library to read a file to trigger this vulnerability.

PDF-XChange out-of-bounds read vulnerabilities

Discovered by KPC of Cisco Talos.

PDF-XChange Editor allows the creation, editing, manipulation, and conversion of PDF files, conforming to international ISO specifications for PDF files.

TALOS-2025-2171 (CVE-2025-27931) and TALOS-2025-2203 (CVE-2025-47152) are out-of-bounds read vulnerabilities in the EMF functionality of PDF-XChange Editor version 10.5.2.395. By using a specially crafted EMF file, an attacker could exploit these vulnerabilities to perform an out-of-bounds read, potentially leading to the disclosure of sensitive information.

Foxit memory corruption vulnerability

Discovered by KPC of Cisco Talos.

Foxit PDF Reader is a popular free program for viewing, creating, and editing PDF documents. It is commonly used as an alternative to Adobe Acrobat Reader and has a widely used browser plugin available.

TALOS-2025-2202 (CVE-2025-32451) is a memory corruption vulnerability in Foxit Reader 2025.1.0.27937. Specially crafted JavaScript code inside a malicious PDF document can trigger this vulnerability, leading to memory corruption and potentially arbitrary code execution. An attacker needs to trick the user into opening the malicious file to trigger this vulnerability. Exploitation is also possible if a user visits a specially crafted, malicious site while the browser plugin extension is enabled.

Cisco Talos Blog – Read More

BadCam attack: malicious firmware in “clean” webcams

Computer webcams have long been suspected of peeping on folks; nothing unusual about that. But now they’ve found a new role in conventional cyberattacks. At the recent Black Hat conference in Las Vegas, researchers presented the BadCam attack, which allows an attacker to reflash a webcam and execute malicious actions on the computer it’s connected to. Essentially, it’s a variation of the well-known BadUSB attack; the key difference is that with BadCam attackers don’t need to prepare a malicious device in advance: they can use a “clean” webcam already connected to the computer. Another unwelcome novelty is that the attack can be carried out completely remotely. Although the research was conducted by ethical hackers, and BadCam hasn’t yet been observed in real-world attacks, it won’t be difficult for criminals to figure it out and reproduce the necessary steps. That’s why organizations should understand how BadCam works and implement protective measures.

The return of BadUSB

It was also at Black Hat that BadUSB was unveiled to the world, back in 2014. It works by taking a seemingly harmless device (say, a USB stick) and reprogramming its firmware. When it connects to a computer, the malicious gadget presents itself as a composite USB device with multiple components, such as a flash drive, keyboard, or network adapter. Its storage functions work normally, so the user interacts with the flash drive as usual. Meanwhile, a hidden firmware component impersonating a keyboard sends commands to the computer: for example, a key combination to launch PowerShell and enter commands to download malware from the internet, or to open a tunnel to the attackers’ server. BadUSB techniques are still widely used in red team exercises, often implemented via specialized hacker multitools like Hak5 Rubber Ducky or Flipper Zero.

From BadUSB to BadCam

Researchers at Eclypsium managed to replicate this firmware-rewriting trick on Lenovo 510 FHD and Lenovo Performance FHD webcams. Both use a SigmaStar SoC, which has two interesting features. First, the webcam software is Linux-based and supports USB Gadget extensions. This Linux kernel feature allows the device to present itself as a USB peripheral such as a keyboard or network adapter. Second, the webcam’s firmware update process lacks cryptographic protection — it’s enough to send a couple of commands and a new memory image over the USB interface. Reflashing can be carried out by running software on the computer with standard user privileges. With this altered firmware, Lenovo webcams turn into a keyboard-camera hybrid capable of sending predefined commands to the computer.

Although the researchers tested only Lenovo webcams, they note that other Linux-based USB devices may be similarly vulnerable.
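
To illustrate the mechanism (this is the standard kernel interface, not the malicious firmware itself), here is a sketch of how a Linux-based USB device with gadget support can declare a keyboard function through configfs; the vendor and product IDs are placeholders:

  # Run on the gadget-side device (the Linux-based peripheral), as root
  modprobe libcomposite
  cd /sys/kernel/config/usb_gadget && mkdir -p g1 && cd g1
  echo 0x1d6b > idVendor                      # placeholder vendor ID
  echo 0x0104 > idProduct                     # placeholder product ID
  mkdir -p configs/c.1 functions/hid.usb0
  echo 1 > functions/hid.usb0/protocol        # 1 = keyboard
  echo 1 > functions/hid.usb0/subclass        # boot interface subclass
  echo 8 > functions/hid.usb0/report_length   # standard 8-byte keyboard report
  # (a standard boot-keyboard report descriptor must also be written to report_desc; omitted here)
  ln -s functions/hid.usb0 configs/c.1/
  ls /sys/class/udc > UDC                     # bind to the hardware USB device controller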

Cyber-risks of the BadCam attack

Potential attack vectors for BadCam against an organization include:

  • A new camera sent by the attacker
  • A camera temporarily disconnected from a corporate computer and connected to the attacker’s laptop for reflashing
  • A camera that was never disconnected from the organization’s computer, and compromised remotely via malware

Detecting this malware through behavior analysis can be tricky, since it doesn’t need to make suspicious changes to the registry, files, or network — it only has to communicate with the webcam. If the first phase of the attack succeeds, the malicious firmware can then send keyboard commands to:

  • disable security tools;
  • download and execute additional malware;
  • launch legitimate tools for a Living Off the Land (LotL) attack;
  • respond to system prompts, for example for elevating privileges;
  • exfiltrate data from the computer over the network.

At the same time, standard software scans won’t detect the threat, and even a full system reinstall won’t remove the implant. System logs will show that the malicious actions were performed from the logged-in user’s keyboard. For this reason, such attacks will most likely be deployed for persistence in the compromised system — although in the MITRE ATT&CK matrix, BadUSB techniques are listed under T1200 (Hardware Additions) and assigned to the Initial Access phase.
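
On Linux hosts, a quick manual spot check is still possible with standard tools (a sketch assuming the usbutils package; bus and device numbers are placeholders):

  lsusb -t          # shows the interface classes each connected USB device exposes
  # A webcam that reports a Human Interface Device (keyboard) interface alongside
  # its video interfaces deserves a closer look:
  lsusb -v -s <bus>:<device> 2>/dev/null | grep -E "idVendor|idProduct|bInterfaceClass"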

How to defend against BadCam attacks

The attack can be stopped at several stages using standard security tools that block trojanized peripherals and make LotL attacks more difficult. We recommend that you:

  • Configure your EDR/EPP solution to monitor connected HID devices. In Kaspersky Next, this feature is called BadUSB Attack Prevention. When a device with keyboard functionality is connected, the user must enter a numeric code displayed on the screen, without which the new keyboard can’t control the system.
  • Configure your SIEM and XDR solutions to collect and analyze detailed telemetry for HID device connections and disconnections.
  • Set up USB port control in your MDM/EMM solution. Depending on its capabilities, you can disable USB ports altogether or create an allowlist of devices (by VID/PID identifiers) permitted to connect to the computer; a rough Linux-only sketch of such an allowlist follows this list.
  • Where possible, enforce an application allowlist on employee computers so that only approved software can run and all other applications are blocked.
  • Regularly update not only software but also the firmware of everyday peripherals. For example, Lenovo has released patches for the two camera models used in the research, making malicious firmware updates more difficult.
  • Apply the Principle of Least Privilege, ensuring each employee has only the access rights strictly necessary for their role.
  • Include BadUSB and BadCam in employee security-awareness training, with simple guidance on what to do if a USB device behaves unexpectedly — for example, if it starts typing commands on its own.
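
For the USB allowlisting item above, a rough Linux-only equivalent can be built on the kernel's USB authorization interface (a sketch; the VID/PID pair is a placeholder, and dedicated tools such as USBGuard or your MDM/EMM product are more robust):

  # Deauthorize newly connected USB devices by default (run as root)
  for ctrl in /sys/bus/usb/devices/usb*/authorized_default; do
    echo 0 > "$ctrl"
  done
  # Re-authorize only approved devices; the VID/PID values below are placeholders
  printf '%s\n' 'ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="abcd", ATTR{idProduct}=="1234", ATTR{authorized}="1"' \
    > /etc/udev/rules.d/80-usb-allowlist.rules
  udevadm control --reload-rules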

Kaspersky official blog – Read More