Enterprise MCP adoption is outpacing security controls

AI agents now carry more access and more connections to enterprise systems than any other software in the environment. That makes them a bigger attack surface than anything security teams have had to govern before, and the industry doesn’t yet have a framework for it. “If that attack vector gets utilized, it can result in a data breach, or even worse,” said Spiros Xanthos, founder and CEO of Resolve AI, speaking at a recent VentureBeat AI Impact Series event.

Traditional security frameworks are built around human interactions. There’s not yet an agreed-upon construct for AI agents that have personas and can work autonomously, noted Jon Aniano, SVP of product and CRM applications at Zendesk, at the same event.

Agentic AI is moving faster than enterprises can build guardrails around it, according to Aniano and other enterprise leaders. And Model Context Protocol (MCP), while decreasing integration complexity, doesn’t help.

“Right now it’s an unsolved problem because it’s the wild, wild West,” Aniano said. “We don’t even have a defined technical agent-to-agent protocol that all companies agree on. How do you balance user expectations versus what keeps your platform safe?”

MCP still “extremely permissive”

Enterprises are increasingly hooking into MCP servers because they simplify integration between agents, tools and data. However, MCP servers tend to be “extremely permissive,” Aniano said.

They are “actually probably worse than an API,” he contended, because APIs at least come with controls that can be imposed on agents.
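One practical response to that permissiveness is a deny-by-default policy layer sitting between the agent and whatever tools an MCP server exposes. A minimal sketch, assuming a hypothetical tool catalog (MCP itself does not mandate this layer, and the tool names here are illustrative):

```python
# Deny-by-default gate between an agent and the tools an MCP server exposes.
# The catalog below is hypothetical; the point is that every tool and every
# argument must be explicitly sanctioned before a call is forwarded.

ALLOWED_TOOLS = {
    "search_docs": {"max_results"},        # read-only lookup
    "create_ticket": {"title", "body"},    # narrowly scoped write action
}

def gate_tool_call(tool: str, args: dict) -> dict:
    """Reject any tool, or any argument, that was not explicitly sanctioned."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not sanctioned: {tool}")
    extra = set(args) - ALLOWED_TOOLS[tool]
    if extra:
        raise PermissionError(f"unsanctioned arguments: {sorted(extra)}")
    return args  # safe to forward to the MCP server
```

The inversion matters: instead of an agent inheriting everything a permissive server offers, it starts with nothing and each capability is granted deliberately.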

Today’s agents are acting on behalf of humans based on explicit permissions, thus establishing human accountability. “But you might have tens, hundreds of agents in the future with their own identity, their own access,” said Xanthos. “It becomes a very complex matrix.”

Even as his startup is developing autonomous AI agents for site reliability engineering (SRE) and system management, he acknowledged that the industry “completely lacks the framework” for autonomous agents.

“It’s completely on us, and on anybody who builds agents, to figure out what restrictions to give them,” he said. And customers must be able to trust those decisions.

Some existing security tools do offer fine-grained access — Splunk, for instance, developed a method to provide access to certain indexes in underlying data stores, he noted — but most are broader and human-oriented.
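The index-level scoping Xanthos points to can be generalized to any agent identity. A hedged sketch of that idea (this is not Splunk’s actual API; the agent IDs and index names are invented for illustration):

```python
# Fine-grained read scoping per agent identity, in the spirit of the
# index-level controls described above. All names are hypothetical.

AGENT_INDEX_GRANTS = {
    "sre-agent": {"app_logs", "infra_metrics"},
    "support-agent": {"ticket_history"},
}

def authorize_query(agent_id: str, index: str) -> bool:
    """Allow a query only against indexes granted to this specific agent."""
    return index in AGENT_INDEX_GRANTS.get(agent_id, set())
```

An unknown agent identity gets an empty grant set, so it fails closed rather than inheriting a broad, human-oriented role.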

“We’re trying to figure this out with existing tools,” he said. “But I don’t think they’re sufficient for the era of agents.”

Who’s accountable when an AI mis-authenticates a user?

At Zendesk and other customer relationship management (CRM) platform providers, AI is involved in a number of user interactions, Aniano noted — in fact, now it’s at a “volume and a scale that we haven’t contemplated as businesses and as a society.”

It can get tricky when AI is helping out human agents; the audit trail can become a labyrinth.

“So now you’ve got a human talking to a human that’s talking to an AI,” Aniano noted. “The human tells the AI to take action. Who’s at fault if it’s the wrong action?” This becomes even more complicated when there are “multiple pieces of AI and multiple humans” in the mix.

To prevent agents from going off the rails, Zendesk tends to be “very strict” about access and scope; however, customers can define their own guardrails based on their needs. In most cases, AI agents can access knowledge sources, but they’re not writing code or running commands on servers, Aniano said. If an AI does call an API, the call is “declaratively designed” and sanctioned, and the actions it can take are specifically called out.
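One way to read “declaratively designed” is that every API call an agent may make is described as data and validated against that description before execution. A sketch under that reading (the action catalog, field names and paths are hypothetical, not a Zendesk feature):

```python
# Declarative action catalog: each sanctioned API call is described as data,
# and a planned call is validated against its declaration before execution.
# Actions, fields and paths here are hypothetical.

ACTION_CATALOG = {
    "update_ticket_status": {
        "method": "PATCH",
        "path": "/tickets/{id}",
        "allowed_fields": {"status"},
        "writes": True,
    },
    "read_knowledge_article": {
        "method": "GET",
        "path": "/kb/{id}",
        "allowed_fields": set(),
        "writes": False,
    },
}

def plan_call(action: str, fields: dict, allow_writes: bool = False) -> dict:
    """Validate a planned call against its declaration, or refuse it."""
    spec = ACTION_CATALOG.get(action)
    if spec is None:
        raise PermissionError(f"action not in catalog: {action}")
    if spec["writes"] and not allow_writes:
        raise PermissionError(f"write action blocked by scope: {action}")
    if set(fields) - spec["allowed_fields"]:
        raise PermissionError("fields outside declared schema")
    return {"method": spec["method"], "path": spec["path"], "body": fields}
```

Because the catalog is data rather than code, widening an agent’s scope becomes an auditable change to a declaration, not a code change.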

However, customer demand for these scenarios is surging, and “we’re kind of holding the gates right now,” he said.

The industry must develop concrete standards for agent interactions. “We’re entering a world where, with things like MCP that can auto-discover tools, we’re going to have to create new methods of safety for deciding what tools these bots can interact with,” said Aniano.

When it comes to security, enterprises are rightly concerned when AI takes over authentication tasks, such as sending out and processing one-time passwords (OTP), SMS codes, or other two-step verification methods, he said. What happens if an AI mis-authenticates or misidentifies someone? This can lead to sensitive data leakage or open the door for attackers.

“There’s a spectrum now, and the end of that spectrum today is a human,” Aniano said. However, “the end of that spectrum tomorrow might be a specialized agent designed to do the same kind of gut feeling or human-level interaction.”

Customers themselves are on a spectrum of adoption and comfort. In certain companies — particularly financial services or other highly regulated environments — humans still must be involved in authentication, Aniano noted. In other cases, legacy or old-guard companies simply only trust humans to authenticate other humans.
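That spectrum can be expressed as a routing rule: authentication-sensitive steps escalate to a human reviewer when the tenant’s policy demands it, and everything else stays with the agent. A hypothetical sketch (the step names and policy flag are illustrative, not any vendor’s actual feature):

```python
# Human-in-the-loop routing for authentication-sensitive workflow steps.
# Step names and the policy flag are hypothetical.

SENSITIVE_STEPS = {"send_otp", "verify_otp", "reset_password"}

def route_step(step: str, human_in_loop_required: bool) -> str:
    """Return who should perform this workflow step: 'human' or 'agent'."""
    if step in SENSITIVE_STEPS and human_in_loop_required:
        return "human"  # e.g. a regulated financial-services tenant
    return "agent"
```

The same workflow definition then serves both ends of the spectrum; only the per-tenant policy flag changes.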

He noted that Zendesk is experimenting with new AI agents that are “a little more connected to systems,” and working with a select group of customers around guardrailing.

Standing authorization is coming

At some point, agents may actually be more trusted than humans for certain tasks, and granted permissions “way beyond” what humans have today, Xanthos said. But we’re a long way from that, and, for the most part, the fear of something going wrong is what’s holding enterprises back.

“Which is a good fear, right? I’m not saying that it is a bad thing,” he said. Many enterprises simply aren’t yet comfortable with an agent doing all steps of a workflow or fully closing the loop by itself. They still want human review.

Resolve AI is on the cusp of giving agents standing authorization in a few cases that are “generally safe,” such as in coding; from there they’ll move to more open-ended scenarios that are not all that risky, Xanthos explained. But he acknowledged that there will always be very risky situations where AI mistakes could “mutate the state of the production system,” as he put it.
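The progression Xanthos describes — standing authorization for “generally safe” categories first, human review retained for anything that could mutate production state — amounts to risk tiering. A hedged sketch of that policy (the tiers and task names are hypothetical, not Resolve AI’s implementation):

```python
# Risk tiers for standing authorization: "safe" tasks run without review,
# anything else — including unknown tasks — fails closed to human review.
# Tier assignments and task names are hypothetical.

RISK_TIER = {
    "open_pull_request": "safe",        # coding tasks: standing authorization
    "restart_service": "moderate",      # open-ended but bounded risk
    "drop_database_table": "critical",  # mutates production state
}

def needs_human_review(task: str) -> bool:
    """Only explicitly 'safe' tasks skip review; unknown tasks fail closed."""
    tier = RISK_TIER.get(task, "critical")
    return tier != "safe"
```

Failing closed on unknown tasks is the key property: expanding standing authorization means deliberately moving a task into the “safe” tier, never discovering after the fact that it defaulted there.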

Ultimately, though: “There’s no going back, obviously; this is moving faster than maybe even mobile did. So the question is what do we do about it?”

What security teams can do now

Both speakers pointed to interim measures available within existing tooling. Xanthos noted that some tools — Splunk among them — already offer fine-grained index-level access controls that can be applied to agents. Aniano described Zendesk’s approach as a practical starting point: declaratively designed API calls with explicitly sanctioned actions, strict access and scope limits, and human review before expanding agent permissions.

The underlying principle, as Aniano put it: “We’re always checking those gates and seeing how we can widen the aperture” — meaning don’t grant standing authorization until you’ve validated each expansion.

Security | VentureBeat

You can share your real-time location via Google Messages now – here’s how

You can choose how long you want to share your location or turn it off at any time.

Latest news

ShinyHunters Leak 2M Records From Dutch Telecom Odido, Claim 21M Stolen

ShinyHunters hackers leak 2 million records from Dutch telecom Odido after ransom refusal, claiming up to 21 million customer records were stolen in the breach.

Hackread – Cybersecurity News, Data Breaches, AI and More

7 ways Nano Banana 2 just got better and faster – how to try Google’s latest image model

Google’s new default model for generating images, Nano Banana 2 offers faster speeds, better text rendering, and higher resolutions than its predecessor.

Latest news

900+ Sangoma FreePBX Instances Compromised in Ongoing Web Shell Attacks

The Shadowserver Foundation has revealed that over 900 Sangoma FreePBX instances still remain infected with web shells as part of attacks that exploited a command injection vulnerability starting in December 2025.
Of these, 401 instances are located in the U.S., followed by 51 in Brazil, 43 in Canada, 40 in Germany, and 36 in France.
The non-profit entity said the compromises are likely

The Hacker News

Ultrahuman takes aim at Oura with new ring’s 15-day battery – but not everyone can buy it

The Ultrahuman Ring Pro also packs a new safety feature that makes the smart ring easier to cut off – just in case.

Latest news

The Case for Why Better Breach Transparency Matters

It’s become a standard practice for organizations to disclose the bare minimum about a data breach, or worse — not disclose the incident at all.

darkreading

ClawJacked Vulnerability in OpenClaw Lets Websites Hijack AI Agents

Is your AI assistant safe? Oasis Security researchers have found a critical ClawJacked vulnerability in OpenClaw that allows hackers to hijack AI agents through a simple browser tab.

Hackread – Cybersecurity News, Data Breaches, AI and More

5 Nations Alert: Critical Cisco Bug Used in Global Espionage Campaign

Hackers exploited a critical Cisco SD-WAN flaw, prompting a rare joint warning from the US, UK, Australia, Canada, and New Zealand.

Security Archives – TechRepublic

Destroyed servers and DoS attacks: What can happen when OpenClaw AI agents interact

By testing agent-to-agent interactions, researchers observed catastrophic system failures. Here’s why that’s bad news for everyone.

Latest news