Microsoft patched a Copilot Studio prompt injection. The data exfiltrated anyway.

Microsoft assigned CVE-2026-21520, a CVSS 7.5 indirect prompt injection vulnerability, to Copilot Studio. Capsule Security discovered the flaw, coordinated disclosure with Microsoft, and the patch was deployed on January 15. Public disclosure went live on Wednesday.

That CVE matters less for what it fixes and more for what it signals. Capsule’s research calls Microsoft’s decision to assign a CVE to a prompt injection vulnerability in an agentic platform “highly unusual.” Microsoft previously assigned CVE-2025-32711 (CVSS 9.3) to EchoLeak, a prompt injection in M365 Copilot patched in June 2025, but that targeted a productivity assistant, not an agent-building platform. If the precedent extends to agentic systems broadly, every enterprise running agents inherits a new vulnerability class to track. Except that this class cannot be fully eliminated by patches alone.

Capsule also discovered what they call PipeLeak, a parallel indirect prompt injection vulnerability in Salesforce Agentforce. Microsoft patched and assigned a CVE. Salesforce has not assigned a CVE or issued a public advisory for PipeLeak as of publication, according to Capsule’s research.

What ShareLeak actually does

The vulnerability that the researchers named ShareLeak exploits the gap between a SharePoint form submission and the Copilot Studio agent’s context window. An attacker fills a public-facing comment field with a crafted payload that injects a fake system role message. In Capsule’s testing, Copilot Studio concatenated the malicious input directly with the agent’s system instructions with no input sanitization between the form and the model.

The injected payload overrode the agent’s original instructions in Capsule’s proof-of-concept, directing it to query connected SharePoint Lists for customer data and send that data via Outlook to an attacker-controlled email address. NVD classifies the attack as low complexity and requires no privileges.

Microsoft’s own safety mechanisms flagged the request as suspicious during Capsule’s testing. The data was exfiltrated anyway. The DLP never fired because the email was routed through a legitimate Outlook action that the system treated as an authorized operation.

Carter Rees, VP of Artificial Intelligence at Reputation, described the architectural failure in an exclusive VentureBeat interview. The LLM cannot inherently distinguish between trusted instructions and untrusted retrieved data, Rees said. It becomes a confused deputy acting on behalf of the attacker. OWASP classifies this pattern as ASI01: Agent Goal Hijack.

The research team behind both discoveries, Capsule Security, found the Copilot Studio vulnerability on November 24, 2025. Microsoft confirmed it on December 5 and patched it on January 15, 2026. Every security director running Copilot Studio agents triggered by SharePoint forms should audit that window for indicators of compromise.

PipeLeak and the Salesforce split

PipeLeak hits the same vulnerability class through a different front door. In Capsule’s testing, a public lead form payload hijacked an Agentforce agent with no authentication required. Capsule found no volume cap on the exfiltrated CRM data, and the employee who triggered the agent received no indication that data had left the building. Salesforce has not assigned a CVE or issued a public advisory specific to PipeLeak as of publication.

Capsule is not the first research team to hit Agentforce with indirect prompt injection. Noma Labs disclosed ForcedLeak (CVSS 9.4) in September 2025, and Salesforce patched that vector by enforcing Trusted URL allowlists. According to Capsule’s research, PipeLeak survives that patch through a different channel: email via the agent’s authorized tool actions.

Naor Paz, CEO of Capsule Security, told VentureBeat the testing hit no exfiltration limit. “We did not get to any limitation,” Paz said. “The agent would just continue to leak all the CRM.”

Salesforce recommended human-in-the-loop as a mitigation. Paz pushed back. “If the human should approve every single operation, it’s not really an agent,” he told VentureBeat. “It’s just a human clicking through the agent’s actions.”

Microsoft patched ShareLeak and assigned a CVE. According to Capsule’s research, Salesforce patched ForcedLeak’s URL path but not the email channel.

Kayne McGladrey, IEEE Senior Member, put it differently in a separate VentureBeat interview. Organizations are cloning human user accounts to agentic systems, McGladrey said, except agents use far more permissions than humans would because of the speed, the scale, and the intent.

The lethal trifecta and why posture management fails

Paz named the structural condition that makes any agent exploitable: access to private data, exposure to untrusted content, and the ability to communicate externally. ShareLeak hits all three. PipeLeak hits all three. Most production agents hit all three because that combination is what makes agents useful.

Rees validated the diagnosis independently. Defense-in-depth predicated on deterministic rules is fundamentally insufficient for agentic systems, Rees told VentureBeat.

Elia Zaitsev, CrowdStrike’s CTO, called the patching mindset itself the vulnerability in a separate VentureBeat exclusive. “People are forgetting about runtime security,” he said. “Let’s patch all the vulnerabilities. Impossible. Somehow always seem to miss something.” Observing actual kinetic actions is a structured, solvable problem, Zaitsev told VentureBeat. Intent is not. CrowdStrike’s Falcon sensor walks the process tree and tracks what agents did, not what they appeared to intend.

Multi-turn crescendo and the coding agent blind spot

Single-shot prompt injections are the entry-level threat. Capsule’s research documented multi-turn crescendo attacks where adversaries distribute payloads across multiple benign-looking turns. Each turn passes inspection. The attack becomes visible only when analyzed as a sequence.

Rees explained why current monitoring misses this. A stateless WAF views each turn in a vacuum and detects no threat, Rees told VentureBeat. It sees requests, not a semantic trajectory.

Capsule also found undisclosed vulnerabilities in coding agent platforms it declined to name, including memory poisoning that persists across sessions and malicious code execution through MCP servers. In one case, a file-level guardrail designed to restrict which files the agent could access was reasoned around by the agent itself, which found an alternate path to the same data. Rees identified the human vector: employees paste proprietary code into public LLMs and view security as friction.

McGladrey cut to the governance failure. “If crime was a technology problem, we would have solved crime a fairly long time ago,” he told VentureBeat. “Cybersecurity risk as a standalone category is a complete fiction.”

The runtime enforcement model

Capsule hooks into vendor-provided agentic execution paths — including Copilot Studio’s security hooks and Claude Code’s pre-tool-use checkpoints — with no proxies, gateways, or SDKs. The company exited stealth on Wednesday, timing its $7 million seed round, led by Lama Partners alongside Forgepoint Capital International, to its coordinated disclosure.

Chris Krebs, the first Director of CISA and a Capsule advisor, put the gap in operational terms. “Legacy tools weren’t built to monitor what happens between prompt and action,” Krebs said. “That’s the runtime gap.”

Capsule’s architecture deploys fine-tuned small language models that evaluate every tool call before execution, an approach Gartner’s market guide calls a “guardian agent.”

Not everyone agrees that intent analysis is the right layer. Zaitsev told VentureBeat during an exclusive interview that intent-based detection is non-deterministic. “Intent analysis will sometimes work. Intent analysis cannot always work,” he said. CrowdStrike bets on observing what the agent actually did rather than what it appeared to intend. Microsoft’s own Copilot Studio documentation provides external security-provider webhooks that can approve or block tool execution, offering a vendor-native control plane alongside third-party options. No single layer closes the gap. Runtime intent analysis, kinetic action monitoring, and foundational controls (least privilege, input sanitization, outbound restrictions, targeted human-in-the-loop) all belong in the stack. SOC teams should map telemetry now: Copilot Studio activity logs plus webhook decisions, CRM audit logs for Agentforce, and EDR process-tree data for coding agents.

Paz described the broader shift. “Intent is the new perimeter,” he told VentureBeat. “The agent in runtime can decide to go rogue on you.”

VentureBeat Prescriptive Matrix

The following matrix maps five vulnerability classes against the controls that miss them, and the specific actions security directors should take this week.

Vulnerability Class

Why Current Controls Miss It

What Runtime Enforcement Does

Suggested actions for security leaders

ShareLeak — Copilot Studio, CVE-2026-21520, CVSS 7.5, patched Jan 15 2026

Capsule’s testing found no input sanitization between the SharePoint form and the agent context. Safety mechanisms flagged, but data still exfiltrated. DLP did not fire because the email used a legitimate Outlook action. OWASP ASI01: Agent Goal Hijack.

Guardian agent hooks into Copilot Studio pre-tool-use security hooks. Vets every tool call before execution. Blocks exfiltration at the action layer.

Audit every Copilot Studio agent triggered by SharePoint forms. Restrict outbound email to org-only domains. Inventory all SharePoint Lists accessible to agents. Review the Nov 24–Jan 15 window for indicators of compromise.

PipeLeak — Agentforce, no CVE assigned

In Capsule’s testing, public form input flowed directly into the agent context. No auth required. No volume cap observed on exfiltrated CRM data. The employee received no indication that data was leaving.

Runtime interception via platform agentic hooks. Pre-invocation checkpoint on every tool call. Detects outbound data transfer to non-approved destinations.

Review all Agentforce automations triggered by public-facing forms. Enable human-in-the-loop for external comms as interim control. Audit CRM data access scope per agent. Pressure Salesforce for CVE assignment.

Multi-Turn Crescendo — distributed payload, each turn looks benign

Stateless monitoring inspects each turn in isolation. WAFs, DLP, and activity logs see individual requests, not semantic trajectory.

Stateful runtime analysis tracks full conversation history across turns. Fine-tuned SLMs evaluate aggregated context. Detects when a cumulative sequence constitutes a policy violation.

Require stateful monitoring for all production agents. Add crescendo attack scenarios to red team exercises.

Coding Agents — unnamed platforms, memory poisoning + code execution

MCP servers inject code and instructions into the agent context. Memory poisoning persists across sessions. Guardrails reasoned around by the agent itself. Shadow AI insiders paste proprietary code into public LLMs.

Pre-invocation checkpoint on every tool call. Fine-tuned SLMs detect anomalous tool usage at runtime.

Inventory all coding agent deployments across engineering. Audit MCP server configs. Restrict code execution permissions. Monitor for shadow installations.

Structural Gap — any agent with private data + untrusted input + external comms

Posture management tells you what should happen. It does not stop what does happen. Agents use far more permissions than humans at far greater speed.

Runtime guardian agent watches every action in real time. Intent-based enforcement replaces signature detection. Leverages vendor agentic hooks, not proxies or gateways.

Classify every agent by lethal trifecta exposure. Treat prompt injection as class-based SaaS risk. Require runtime security for any agent moving to production. Brief the board on agent risk as business risk.

What this means for 2026 security planning

Microsoft’s CVE assignment will either accelerate or fragment how the industry handles agent vulnerabilities. If vendors call them configuration issues, CISOs carry the risk alone.

Treat prompt injection as a class-level SaaS risk rather than individual CVEs. Classify every agent deployment against the lethal trifecta. Require runtime enforcement for anything moving to production. Brief the board on agent risk the way McGladrey framed it: as business risk, because cybersecurity risk as a standalone category stopped being useful the moment agents started operating at machine speed.

Security | VentureBeat – ​Read More

Fake Claude AI Installer Targets Windows Users with PlugX Malware

Fake Claude AI installer mimicking Anthropic spreads PlugX malware on Windows, using DLL sideloading to gain persistent remote access to infected systems.

Hackread – Cybersecurity News, Data Breaches, AI and More – ​Read More

Microsoft’s Windows 11 laptop deal for students comes with a $500 bonus – what’s included

This Microsoft back-to-school offer gives college students over $500 in subscriptions and perks totally free. Here’s what to know.

Latest news – ​Read More

Comcast’s $117.5M Breach Settlement: Up to 30M People May Qualify

Comcast customers affected by the 2023 breach may qualify for cash, reimbursement, and identity protection under a proposed $117.5 million settlement.

The post Comcast’s $117.5M Breach Settlement: Up to 30M People May Qualify appeared first on TechRepublic.

Security Archives – TechRepublic – ​Read More

I tried Google’s new desktop app, and I’ll never search the old way again

Now available to all, the app delivers a faster way to access tools like Gemini, Lens, and Search. See why it’s totally worth a download.

Latest news – ​Read More

Spotting cyberthreats: a guide for blind and low-vision users | Kaspersky official blog

In 2023, Tim Utzig, a blind student from Baltimore, lost a thousand dollars to a laptop scam on X. Tim had been a long-time follower of a well-known sports journalist. When that journalist’s account started posting about a “charity sale” of brand-new MacBook Pros, Tim jumped at the chance to get a deal on a laptop he needed for his studies. After a few quick messages, he sent over the money.

Unfortunately, the journalist’s account had been hacked, and Tim’s cash went straight to scammers. The red flags were strictly visual: the page had been flagged as “temporarily restricted”, and both the bio and the Following list had changed. However, Tim’s screen reader — the software that converts on-screen text and graphics into speech — didn’t announce any of those warnings.

Screen readers allow blind users to navigate the digital world like everyone else. However, this community remains uniquely vulnerable. Even for sighted users, spotting a fake website is a challenge; for someone with a visual impairment, it’s an even steeper uphill battle.

Beyond screen readers, there are specialized mobile apps and services designed to assist the blind and low-vision community, with Be My Eyes being one of the most popular. The app connects users with sighted volunteers via a live video call to tackle everyday tasks — like setting an oven dial or locating an object on a desk. Be My Eyes also features integrated AI that can scan and narrate text or identify objects in the user’s environment.

But can these tools go beyond daily chores? Can they actually flag a phishing attempt or catch the hidden fine print when someone is opening a bank account?

Today we explore the specific online hurdles visually impaired users face, when it makes sense to lean on human or virtual assistants, and how to stay secure when using these types of services.

Common cyberthreats facing the blind and low-vision community

To start, let’s clarify the difference between these two groups. Low-vision users still rely on their remaining sight, even though their visual function is significantly reduced. To navigate digital interfaces, they often use screen magnifiers, extra-large fonts, and high-contrast settings. For them, phishing sites and emails are particularly dangerous. It’s easy to miss intentional typos — known as typosquatting — in a domain name or email address, such as the recent example of rnicrosoft{.}com.

Blind users navigate primarily by sound, using screen readers and specific touch gestures. Interestingly, though, unlike those with low vision, blind users are more likely to spot a phishing site using a screen reader: as the software reads the URL aloud, the user will hear that something is off. However, if a service — whether legitimate or malicious — isn’t fully compatible with screen readers, the risk of falling victim to a scam increases. This is exactly what happened to Tim Utzig.

It’s important to remember that screen magnifiers and readers are basic accessibility tools. They’re designed to enlarge or narrate an interface — not act as a security suite. They can’t warn the user of a threat on their own. That’s where more advanced software — tools that can analyze images and files, flag suspicious language, and describe the broader context of what’s happening on-screen — comes into play.

When to lean on an assistant

Be My Eyes is a major player in the accessibility space, boasting around 900 000 users and over nine million volunteers. Available on Windows, Android, and iOS, it bridges the gap by connecting blind and low-vision users with sighted volunteers via video calls for help with everyday tasks. For example, if someone wants to run a Synthetics cycle on their washing machine but can’t find the right button, they can hop into the app. It connects them with the first available volunteer speaking their language, who then uses the smartphone’s camera to guide them. The service is currently available in 32 languages.

In 2023, the app expanded its capabilities with the release of Be My AI — a virtual assistant powered by OpenAI’s GPT-4. Users take a photo, and the AI analyzes the image to provide a detailed text description, which it also reads aloud. Users can even open a chat window to ask follow-up questions. This got us thinking: could this AI actually spot a phishing site?

As an experiment, we uploaded a screenshot of a fake social media sign-in page to Be My Eyes. On a phone, you can do this by selecting a photo in your gallery or files, hitting Share, and choosing Describe with Be My Eyes. In Windows, you can upload a screenshot directly.

Fake social media sign-in page

An example of a phishing page that mimics the Facebook sign-in form. Note the incorrect domain in the address bar

At first, the AI gave us a detailed description of the page. We then followed up in the chat: “Can I trust this page?” The AI flagged the domain name error immediately, advised us to close the fake login page, and suggested typing the official URL directly into the browser, or to use the official Facebook app.

Be My AI response when checking a suspicious site

Be My AI explains why the page looks sketchy: the domain doesn’t match the official site. The app suggests typing the official URL directly into the browser, or using the official Facebook app

We saw the same positive results when testing a phishing email. In fact, the AI flagged the scam during its initial description of the message. It wrapped up with a warning: “This looks like a suspicious email. It’s best not to open any attachments or click any links. Instead, navigate to the official website or app manually, or call the number listed on their official site”.

Beyond just spotting cyberthreats, Be My AI is a solid sidekick for navigating online stores, banking apps, and digital services. For instance, the AI can help you to:

  • Read descriptions, names, and prices when a store’s website or app doesn’t support screen readers or large fonts
  • Scan those tricky terms and conditions — often buried in tiny text or otherwise inaccessible to a screen reader — when you’re signing up for a subscription or opening a bank account
  • Pull key info directly from product cards or instruction manuals

The risks of relying on Be My AI

The most common hiccup with AI is hallucinations, where the language model distorts text, skips crucial details, or invents words out of thin air. When it comes to cyberthreats, an AI’s misplaced confidence in a malicious site or email can be dangerous. Furthermore, AI isn’t immune to prompt injection attacks, which scammers use to trick AI agents beyond just Be My AI.

Even though the AI passed our test, you shouldn’t rely on it unquestioningly. There’s no guarantee it’ll get it right every time. This is a vital point for the blind and low-vision community, as a neural network can often feel like the only eyes available.

At the end of every response, Be My AI suggests checking in with a volunteer if you’re still unsure. However, when you’re trying to spot a fake webpage, we advise against this. You have no way of knowing how tech-savvy or trustworthy a random volunteer might be. Besides, you risk accidentally exposing sensitive data like your email address or password. Before connecting with a stranger, make sure they won’t see anything confidential on your screen. Better yet, use the app’s dedicated feature to create a private group of family, friends, or trusted contacts. This ensures your video call goes to people you actually know, rather than a random volunteer.

To stay safe, we recommend installing a trusted security tool on all your devices. These programs are designed to block phishing attempts and prevent you from landing on malicious sites. Another practical recommendation for visually impaired users is to use a password manager. These apps will only auto-fill credentials on the legitimate, saved website; they won’t be fooled by a clever domain spoof.

How Be My AI handles and stores your data

According to the Be My Eyes privacy policy, video calls with volunteers may be recorded and stored to provide the service, ensure safety, enforce the terms of service, and improve the products. When you use Be My AI, your images and text prompts are sent to OpenAI to generate a response. This data is processed on servers located in the U.S., and OpenAI uses it only to fulfill your specific request. The policy explicitly states that user images and queries aren’t used to train AI models.

Photos and videos are encrypted both in transit and at rest, and the company takes steps to strip away sensitive information. It’s worth noting that video call recordings can be retained indefinitely unless you request their deletion — in which case they’re typically wiped within 30 days. Data from Be My AI interactions is stored for up to 30 days unless you delete it manually within the app. If you decide to close your account, your personal data may be held for up to 90 days. At any time, you can opt out of data sharing, or request the deletion of your existing data by contacting the Be My Eyes support team.

How to use Be My Eyes safely

Despite Be My Eyes’ claims regarding privacy, you should still follow a few ground rules when using the service:

  • Use Be My AI for a first-pass on suspicious emails or pages, but don’t treat it as the only source of truth. Specialized security software is better at identifying and neutralizing threats.
  • If a site, email, or message feels off, don’t touch any links or attachments. Instead, manually type the official website address into your browser, or open the official app to verify the info.
  • Remember: a volunteer sees exactly what your camera sees. Make sure it isn’t capturing things it shouldn’t, like a safe code or an open passport. Avoid sharing your name, showing your face, or revealing too much of your surroundings. Be extra careful about reflections that might show you or your personal details. Only show what is absolutely necessary for the task at hand.
  • Stick to your inner circle. Create a group in the app and add your friends and family. This ensures your video calls go to people you know — not a random volunteer.
  • Don’t use Be My AI to read documents that contain confidential info. Remember, your images and text prompts are sent to OpenAI for processing and generating a response.
  • Remember to delete chats you no longer need. Otherwise, they’ll hang around for 30 days.
  • If you need to read something personal or confidential, consider apps with real-time reading features like Envision, Seeing AI, or Lookout. These apps process data locally on your device rather than sending it to the cloud.

Kaspersky official blog – ​Read More

Fake Ledger Live App on Apple Store Linked to $9.5M Crypto Theft

Apple approved a fake Ledger Live app on its App Store, allowing scammers to steal $9.5 million from more than 50 users. Did you install this app?

Hackread – Cybersecurity News, Data Breaches, AI and More – ​Read More

Why Netgear just got the first FCC router ban exemption in the US

You can keep buying Netgear routers in the US for now. Here’s why – and for how long.

Latest news – ​Read More

Why Zorin OS 18.1 is simply the best Linux distro – for anyone

Released today, the latest Zorin OS manages to improve upon previous versions – and that’s quite an achievement.

Latest news – ​Read More

Exploited Vulnerability Exposes Nginx Servers to Hacking

Hackers are exploiting CVE-2026-33032, a critical remote takeover vulnerability in the Nginx UI management tool. 

The post Exploited Vulnerability Exposes Nginx Servers to Hacking appeared first on SecurityWeek.

SecurityWeek – ​Read More