Kingdom Market administrator given 16-year sentence

Slovakian national Alan Bill, 33, pleaded guilty in January to a charge of conspiracy to distribute controlled substances after admitting his role in running Kingdom Market — a platform used by drug dealers and cybercriminals between March 2021 and December 2023.

The Record from Recorded Future News – Read More

Don’t connect your smart plug to these 5 household devices, an expert warns

Smart plugs are very convenient, but there are some things you should never plug into them.

Latest news – Read More

Fake macOS Troubleshooting Sites Used to Steal iCloud Data in ClickFix Scam

Microsoft researchers warn of a new ClickFix campaign targeting macOS with fake guides on Medium and Craft to deploy AMOS and SHub Stealer via Terminal commands.

Hackread – Cybersecurity News, Data Breaches, AI and More – Read More

ShinyHunters Claims Second Attack Against Instructure

The edtech company is struggling to wrest control from its hackers. PII belonging to hundreds of millions of people is on the line.

darkreading – Read More

5,000 vibe-coded apps just proved shadow AI is the new S3 bucket crisis

Most enterprise security programs were built to protect servers, endpoints, and cloud accounts. None of them was built to find a customer intake form that a product manager vibe-coded on Lovable over a weekend, connected to a live Supabase database, and deployed on a public URL indexed by Google. That gap now has a price tag.

New research from Israeli cybersecurity firm RedAccess quantifies the scale. The firm discovered 380,000 publicly accessible assets, including applications, databases, and related infrastructure, built with vibe coding tools from Lovable, Base44, and Replit, as well as deployment platform Netlify. Roughly 5,000 of those assets, about 1.3%, contained sensitive corporate information. CEO Dor Zvi said his team found the exposure while researching shadow AI for customers. Axios independently verified multiple exposed apps, and Wired confirmed the findings separately.

Among the verified exposures: a shipping company app detailed which vessels were expected at which ports. An internal health company application listed active clinical trials across the U.K. Full, unredacted customer service conversations for a British cabinet supplier sat on the open web. Internal financial information for a Brazilian bank was accessible to anyone who found the URL.

The exposed data also included patient conversations at a children’s long-term care facility, hospital doctor-patient summaries, incident response records at a security company, and ad purchasing strategies. Depending on jurisdiction and the data involved, the healthcare and financial exposures may trigger regulatory obligations under HIPAA, UK GDPR, or Brazil’s LGPD.

RedAccess found phishing sites built on Lovable that impersonated Bank of America, FedEx, Trader Joe’s, and McDonald’s. Lovable said it had begun investigating and removing the phishing sites.

The defaults are the problem

Privacy settings on several vibe coding platforms make apps publicly accessible unless users manually switch them to private. Many of these applications get indexed by Google and other search engines. Anyone can stumble across them. Zvi put it plainly: “I don’t think it’s feasible to educate the whole world around security. My mother is [vibe coding] with Lovable, and no offense, but I don’t think she will think about role-based access.”

This is not an isolated finding

In October 2025, Escape.tech scanned 5,600 publicly available vibe-coded applications and found more than 2,000 high-impact vulnerabilities, over 400 exposed secrets including API keys and access tokens, and 175 instances of personal data exposure, including medical records and bank account numbers. Every vulnerability Escape found was in a live production system, discoverable within hours. The full report documents the methodology. Escape separately raised an $18 million Series A led by Balderton in March 2026, citing the security gap opened by AI-generated code as a core market thesis.

Gartner’s “Predicts 2026” report forecasts that by 2028, prompt-to-app approaches adopted by citizen developers will increase software defects by 2,500%. Gartner identifies a new class of defect where AI generates code that is syntactically correct but lacks awareness of broader system architecture and nuanced business rules. The remediation costs for these deep contextual bugs will consume budgets previously allocated to innovation.

Shadow AI is the multiplier

IBM’s 2025 Cost of a Data Breach Report found that 20% of organizations experienced breaches linked to shadow AI. Those incidents added $670,000 to the average breach cost, pushing the shadow AI breach average to $4.63 million. Among organizations that reported AI-related breaches, 97% lacked proper access controls. And 63% of breached organizations had no AI governance policy in place.

Shadow AI breaches disproportionately exposed customer personally identifiable information at 65%, compared to 53% across all breaches, and affected data distributed across multiple environments 62% of the time. Only 34% of organizations with AI governance policies performed regular audits for unsanctioned AI tools. VentureBeat’s shadow AI research estimated that actively used shadow apps could more than double by mid-2026. Cyberhaven data found 73.8% of ChatGPT workplace accounts in enterprise environments were unauthorized.

What to do first

The audit framework below gives CISOs a starting point for triaging vibe-coded app risk across five domains.

| Domain | Current State (Most Orgs) | Target State | First Action |
| --- | --- | --- | --- |
| Discovery | No visibility into vibe-coded apps | Automated scanning of vibe coding platform domains | Run a DNS + certificate transparency scan for Lovable, Replit, Base44, and Netlify subdomains tied to corporate assets |
| Authentication | Platform defaults (public by default) | SSO/SAML integration required before deployment | Block unauthenticated apps from accessing internal data sources |
| Code scanning | Zero coverage for citizen-built apps | Mandatory SAST/DAST before production | Extend the existing AppSec pipeline to cover vibe-coded deployments |
| Data loss prevention | No DLP coverage for vibe coding domains | DLP policies covering Lovable, Replit, Base44, and Netlify | Add vibe coding platform domains to existing DLP rules |
| Governance | No AI usage policy or shadow AI detection | AI governance policy with regular audits for unsanctioned tools | Publish an acceptable-use policy for AI coding tools with a pre-deployment review gate |
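As a concrete starting point for the discovery row, the sketch below queries certificate transparency logs through crt.sh’s public JSON interface for company-branded subdomains across the four platform domains. The brand keyword, and the assumption that each platform serves apps from these apex domains, are illustrative placeholders; adapt both to your environment.

```python
"""Minimal discovery sketch: search certificate transparency logs (crt.sh)
for company-branded subdomains on vibe coding platform domains."""
import requests

# Apex domains used by the major vibe coding / deployment platforms (assumed).
PLATFORM_DOMAINS = ["lovable.app", "replit.app", "base44.app", "netlify.app"]
BRAND_KEYWORD = "acme"  # hypothetical: substitute your company's naming patterns


def ct_search(pattern: str) -> set[str]:
    """Query crt.sh for hostnames matching a SQL-style wildcard pattern."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": pattern, "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names: set[str] = set()
    for entry in resp.json():
        # name_value may hold several newline-separated hostnames per cert.
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return names


if __name__ == "__main__":
    for domain in PLATFORM_DOMAINS:
        # e.g. matches acme-intake.lovable.app, acme-crm.netlify.app, ...
        for host in sorted(ct_search(f"%{BRAND_KEYWORD}%.{domain}")):
            print(host)
```

Pairing this with DNS telemetry covers deployments whose hostnames carry no brand hint.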

The CISO who treats this as a policy problem will write a memo. The CISO who treats this as an architecture problem will deploy discovery scanning across the four largest vibe coding domains, require pre-deployment security review, extend the existing AppSec pipeline to citizen-built apps, and add those domains to DLP rules before the next board meeting. One of those CISOs avoids the next headline.

The vibe coding exposure RedAccess documented is not a separate problem from shadow AI. It is shadow AI’s production layer. Employees build internal tools on platforms that default to public, skip authentication, and never appear on any asset inventory, which means the applications stay invisible to security teams until a breach surfaces or a reporter finds them first. Traditional asset discovery tools were designed to find servers, containers, and cloud instances. They have no way to find a marketing configurator that a product manager built on Lovable over a weekend, connected to a Supabase database holding live customer records, and shared with three external contractors through a public URL that Google indexed within hours.

The detection challenge runs deeper than most security teams realize. Vibe-coded apps deploy on platform subdomains that rotate frequently and often sit behind CDN layers that mask origin infrastructure. Organizations with mature secure web gateways, CASBs, or DNS logging can detect employee access to these domains. But detecting access is not the same as inventorying what was deployed, what data it holds, or whether it requires authentication. Without explicit monitoring of the major vibe coding platforms, the apps themselves generate little signal in conventional SIEM or endpoint telemetry. They exist in a gap between network visibility and application inventory that most security stacks were never architected to cover.
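To make that distinction concrete, here is a minimal sketch of the access-detection half, assuming a plain-text export of queried hostnames, one per line, from a resolver or SIEM (the file name and format are hypothetical). It confirms that someone inside the network reached a vibe coding domain; it says nothing about what the app holds or whether it requires authentication, which is exactly the gap described above.

```python
"""Sketch: flag outbound DNS lookups to vibe coding platform domains,
given a hypothetical export with one queried hostname per line."""

PLATFORM_SUFFIXES = (".lovable.app", ".replit.app", ".base44.app", ".netlify.app")


def flag_vibe_lookups(log_path: str) -> dict[str, int]:
    """Count lookups per matching hostname: a census of access, not of content."""
    counts: dict[str, int] = {}
    with open(log_path) as fh:
        for line in fh:
            host = line.strip().lower()
            if host.endswith(PLATFORM_SUFFIXES):
                counts[host] = counts.get(host, 0) + 1
    return counts


if __name__ == "__main__":
    for host, n in sorted(flag_vibe_lookups("dns_queries.txt").items()):
        print(f"{n:6d}  {host}")
```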

The platform responses tell the story

Replit CEO Amjad Masad said RedAccess gave his company only 24 hours before going to the press. Base44 (via Wix) and Lovable both said RedAccess did not include the URLs or technical specifics needed to verify the findings. None of the platforms denied that the exposed applications existed.

Wiz Research separately discovered in July 2025 that Base44 contained a platform-wide authentication bypass. Exposed API endpoints allowed anyone to create a verified account on private apps using nothing more than a publicly visible app_id. The flaw was the equivalent of a locked building that opens its doors to anyone who shouts the right room number. Wix fixed the vulnerability within 24 hours of Wiz’s report, but the incident exposed how thin the authentication layer is on platforms where millions of apps are being built by users who assume the platform handles security for them.

The pattern is consistent across the vibe coding ecosystem. CVE-2025-48757 documented insufficient or missing Row-Level Security policies in Lovable-generated Supabase projects. Certain queries skipped access checks entirely, exposing data across more than 170 production applications. The AI generated the database layer. It did not generate the security policies that should have restricted who could read the data. Lovable disputes the CVE classification, stating that individual customers accept responsibility for protecting their application data. That dispute itself illustrates the core tension: platforms that market to nontechnical builders are shifting security responsibility to users who do not know it exists.
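A hedged sketch of that exposure class, not Lovable’s actual generated code: the project URL, anon key, and table name below are hypothetical placeholders. When RLS is never enabled, the publishable anon key that ships to every browser client can read the whole table; the commented SQL shows the kind of policy that was missing.

```python
"""Illustrative sketch of the CVE-2025-48757 exposure class, assuming a
hypothetical Supabase project with Row-Level Security never enabled."""
from supabase import create_client  # pip install supabase

SUPABASE_URL = "https://example-project.supabase.co"  # hypothetical project
SUPABASE_ANON_KEY = "public-anon-key"  # the key every browser client receives

client = create_client(SUPABASE_URL, SUPABASE_ANON_KEY)

# With RLS disabled on the table, this anonymous query returns every row --
# the class of exposure documented across 170+ production applications.
rows = client.table("customers").select("*").execute()
print(len(rows.data), "rows readable with the public key")

# The missing server-side control is a Row-Level Security policy, e.g.:
#   ALTER TABLE customers ENABLE ROW LEVEL SECURITY;
#   CREATE POLICY "owners only" ON customers
#       FOR SELECT USING (auth.uid() = owner_id);
```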

What this means for security teams

The RedAccess findings complete the picture. Professional agents face credential theft on one layer. Citizen platforms face data exposure on the other. The structural failure is the same. Security review happens after deployment or not at all. Identity and access management systems track human users and service accounts. They do not track the Lovable app a sales operations analyst deployed last Tuesday, connected to a live CRM database, and shared with three external contractors via a public URL.

Nobody asks whether the database policies restrict who can read the data or whether the API endpoints require authentication. When those questions go unasked at AI-generation speed, the exposure scales faster than any human review process can match. The question for security leaders is not whether vibe-coded apps are inside their perimeter. The question is how many, holding what data, visible to whom. The RedAccess findings suggest the answer, for most organizations, is worse than anyone in the C-suite currently knows. The organizations that start scanning this week will find them. The ones that wait will read about themselves next.

Security | VentureBeat – Read More

Poland says hackers breached water treatment plants, and the US is facing the same threat

A report by Poland’s top intelligence agency accused Russia of sabotage and hacking activities against the country’s military and civilian infrastructure.

Security News | TechCrunch – Read More

Worried about the nationwide Canvas data breach? Take these 6 steps now

A ransomware group behind the attacks claims to have stolen 275 million records connected to students, teachers, and staff. Here’s how to deal with it.

Latest news – Read More

TCLBANKER Banking Trojan Targets Financial Platforms via WhatsApp and Outlook Worms

Threat hunters have flagged a previously undocumented Brazilian banking trojan dubbed TCLBANKER that’s capable of targeting 59 banking, fintech, and cryptocurrency platforms.
The activity is being tracked by Elastic Security Labs under the moniker REF3076. The malware family is assessed to be a major update of Maverick, which is known to leverage a worm called SORVEPOTEL to spread via

The Hacker News – Read More

An AI agent rewrote a Fortune 50 security policy. Here’s how to govern AI agents before one does the same.

A CEO’s AI agent rewrote the company’s security policy. It was not compromised; it wanted to fix a problem, lacked the permissions to do so, and removed the restriction itself. Every identity check passed. CrowdStrike CEO George Kurtz disclosed the incident, and a second like it, at his RSAC 2026 keynote; both occurred at Fortune 50 companies.

The credential was valid. The access was authorized. The action was catastrophic.

That sequence breaks the core assumption underneath the IAM systems most enterprises run in production today: that a valid credential plus authorized access equals a safe outcome. Identity systems were built for one user, one session, one set of hands on a keyboard. Agents break all three assumptions at once.

In an exclusive interview with VentureBeat at RSAC 2026, Matt Caulfield, VP of Identity and Duo at Cisco, walked through the architecture his team is building to close that gap and outlined a six-stage identity maturity model for governing agentic AI. The urgency is measurable: Cisco President Jeetu Patel told VentureBeat at the same conference that 85% of enterprises are running agent pilots while only 5% have reached production, an 80-point gap that the identity work is designed to close.

The identity stack was built for a workforce that has fingerprints

“Most of the existing IAM tools that we have at our disposal are just entirely built for a different era,” Caulfield told VentureBeat. “They were built for human scale, not really for agents.”

The default enterprise instinct is to shove agents into existing identity categories: human user; machine identity; pick one. “Agents are a third kind of new type of identity,” Caulfield said. “They’re neither human. They’re neither machine. They’re somewhere in the middle where they have broad access to resources like humans, but they operate at machine scale and speed like machines, and they entirely lack any form of judgment.”

Etay Maor, VP of Threat Intelligence at Cato Networks, put a number on the exposure. He ran a live Censys scan and counted nearly 500,000 internet-facing OpenClaw instances. The week before, he had found 230,000; the count doubled in seven days.

Kayne McGladrey, an IEEE senior member who advises enterprises on identity risk, made the same diagnosis independently. Organizations are cloning human user accounts to agentic systems, McGladrey told VentureBeat, except agents consume far more permissions than humans would because of the speed, the scale, and the intent.

A human employee goes through a background check, an interview, and an onboarding process. Agents skip all three. The onboarding assumptions baked into modern IAM do not apply. Scale compounds the failure. Caulfield pointed to projections of a trillion agents operating globally. “We barely know how many people are in an average organization,” he said, “let alone the number of agents.”

Access control verifies the badge. It does not watch what happens next.

Zero trust still applies to agentic AI, Caulfield argued. But only if security teams push it past access and into action-level enforcement. “We really need to shift our thinking to more action-level control,” he told VentureBeat. “What action is that agent taking?”

A human employee with authorized access to a system will not execute 500 API calls in three seconds. An agent will. Traditional zero trust verifies that an identity can reach an application. It doesn’t scrutinize what that identity does once inside.

Carter Rees, VP of Artificial Intelligence at Reputation, identified the structural reason. The flat authorization plane of an LLM fails to respect user permissions, Rees told VentureBeat. An agent operating on that flat plane does not need to escalate privileges. It already has them. That is why access control alone cannot contain what agents do after authentication.

CrowdStrike CTO Elia Zaitsev described the detection gap to VentureBeat. In most default logging configurations, an agent’s activity is indistinguishable from a human. Distinguishing the two requires walking the process tree, tracing whether a browser session was launched by a human or spawned by an agent in the background. Most enterprise logging cannot make that distinction.
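A minimal sketch of that process-tree walk using psutil; the agent runtime names are hypothetical stand-ins rather than a vetted detection list, and production telemetry would come from EDR process-lineage events rather than a point-in-time scan.

```python
"""Sketch: walk a browser process's ancestry to see whether an agent
runtime spawned it, rather than a human launching it directly."""
import psutil

AGENT_PROCESS_NAMES = {"openclaw", "agent-runtime", "autogpt"}  # hypothetical


def spawned_by_agent(pid: int) -> bool:
    """Return True if any ancestor of `pid` looks like an agent runtime."""
    try:
        for parent in psutil.Process(pid).parents():
            if parent.name().lower() in AGENT_PROCESS_NAMES:
                return True
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # process exited or is shielded; treat lineage as unknown
    return False


if __name__ == "__main__":
    for proc in psutil.process_iter(["pid", "name"]):
        if "chrome" in (proc.info["name"] or "").lower():
            origin = "agent-spawned" if spawned_by_agent(proc.info["pid"]) else "human/unknown"
            print(proc.info["pid"], proc.info["name"], origin)
```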

Caulfield’s identity layer and Zaitsev’s telemetry layer are solving two halves of the same problem. No single vendor closes both gaps.

“At any moment in time, that agent can go rogue and can lose its mind,” Caulfield said. “Agents read the wrong website or email, and their intentions can just change overnight.”

How the request lifecycle works when agents have their own identity

Five vendors shipped agent identity frameworks at RSAC 2026: Cisco, CrowdStrike, Palo Alto Networks, Microsoft, and Cato Networks. Caulfield walked through how Cisco’s identity-layer approach works in practice.

The Duo agent identity platform registers agents as first-class identity objects, with their own policies, authentication requirements, and lifecycle management. Enforcement routes all agent traffic through an AI gateway that supports both MCP and traditional REST or GraphQL protocols. When an agent makes a request, the gateway authenticates the user, verifies that the agent is permitted, encodes the authorization into an OAuth token, and then inspects the specific action to determine in real time whether it should proceed.
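A minimal sketch of those four checkpoints under stated assumptions: every name, stub, and policy below is hypothetical rather than Cisco’s implementation, and a real gateway would back the stubs with a directory, an OAuth token issuer, and a policy engine speaking MCP or REST.

```python
"""Hypothetical sketch of the four-checkpoint request lifecycle: authenticate
the user, authorize the agent, inspect the action, inspect the response."""
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                          # the accountable human
    permitted_actions: set[str] = field(default_factory=set)


# Stage-2 onboarding output: agents as first-class identity objects.
REGISTRY = {
    "sales-ops-helper": AgentIdentity(
        "sales-ops-helper", "j.doe", {"crm.read_account", "crm.update_note"}
    ),
}


def verify_user_token(token: str) -> str:
    """Stub for an OIDC/OAuth introspection call; returns the user id."""
    if token != "demo-token":
        raise PermissionError("invalid user token")
    return "j.doe"


def forward_to_resource(action: str, payload: dict) -> dict:
    """Stub for the protected backend call (REST, GraphQL, or MCP tool)."""
    return {"action": action, "status": "ok", "ssn": "123-45-6789"}


def redact_sensitive_fields(response: dict) -> dict:
    """Stub DLP hook: strip fields a policy marks as sensitive."""
    return {k: v for k, v in response.items() if k != "ssn"}


def handle(user_token: str, agent_id: str, action: str, payload: dict) -> dict:
    user = verify_user_token(user_token)             # 1. authenticate the user
    agent = REGISTRY.get(agent_id)                   # 2. authorize the agent
    if agent is None or agent.owner != user:
        raise PermissionError("unregistered agent or wrong owner")
    if action not in agent.permitted_actions:        # 3. inspect the action
        raise PermissionError(f"action {action!r} not permitted")
    response = forward_to_resource(action, payload)
    return redact_sensitive_fields(response)         # 4. inspect the response


if __name__ == "__main__":
    print(handle("demo-token", "sales-ops-helper", "crm.read_account", {}))
```

The point of the structure is checkpoint 3: whether the agent can reach the CRM never decides the outcome; the specific action does.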

“No solution to agent AI is really complete unless you have both pieces,” Caulfield told VentureBeat. “The identity piece, the access gateway piece. And then the third piece would be observability.”

Cisco announced its intent to acquire Astrix Security on May 4, signaling that agent identity discovery is now a board-level investment thesis. The deal also suggests that even vendors building identity platforms recognize that the discovery problem is harder than expected.

Six-stage identity maturity model for agentic AI

When a company shows up claiming 500 agents in production, Caulfield doesn’t accept the number. “How do you know it’s 500 and not 5,000?”

Most organizations don’t have a source of truth for agents. Caulfield outlined a six-stage engagement model.

  1. Discovery: identify every agent, where it runs, and who deployed it.

  2. Onboarding: register agents in the identity directory, tie each one to an accountable human, and define permitted actions.

  3. Control and enforcement: place a gateway between agents and resources, and inspect every request and response.

  4. Behavioral monitoring: record all agent activity, flag anomalies, and build the audit trail.

  5. Runtime isolation: contain agents on endpoints when they go rogue.

  6. Compliance mapping: tie agent controls to audit frameworks before the auditor shows up.

The six stages are not proprietary to any single vendor. They describe the sequence every enterprise will follow regardless of which platform delivers each stage.

Maor’s Censys data complicates step one before it even starts. Organizations beginning discovery should assume their agent exposure is already visible to adversaries. Step four has its own problem. Zaitsev’s process-tree work shows that even organizations logging agent activity may not be capturing the right data. And step three depends on something Rees found most enterprises lack: a gateway that inspects actions, not just access, because the LLM does not respect the permission boundaries the identity layer sets.

Agentic identity prescriptive matrix

What to audit at each maturity stage, what operational readiness looks like, and the red flag that means the stage is failing. Use this to evaluate any platform or combination of platforms.

| Stage | What to audit | Operational readiness looks like | Red flag if missing |
| --- | --- | --- | --- |
| 1. Discovery | Complete inventory of every agent, every MCP server it connects to, and every human accountable for it. | A queryable registry that returns agent count, owner, and connection map within 60 seconds of an auditor asking. | No registry exists. Agent count is an estimate. No human is accountable for any specific agent. Adversaries can see your agent infrastructure from the public internet before you can. |
| 2. Onboarding | Agents are registered as a distinct identity type with their own policies, separate from human and machine identities. | Each agent has a unique identity object in the directory, tied to an accountable human, with defined permitted actions and a documented purpose. | Agents use cloned human accounts or shared service accounts. Permission sprawl starts at creation. No audit trail ties agent actions to a responsible human. |
| 3. Control | A gateway between every agent and every resource it accesses, enforcing action-level policy on every request and every response. | Four checkpoints per request: authenticate the user, authorize the agent, inspect the action, inspect the response. No direct agent-to-resource connections exist. | Agents connect directly to tools and APIs. The gateway (if it exists) checks access but not actions. The flat authorization plane of the LLM does not respect the permission boundaries the identity layer set. |
| 4. Monitoring | Logging that can distinguish agent-initiated actions from human-initiated actions at the process-tree level. | SIEM can answer: was this browser session started by a human or spawned by an agent? Behavioral baselines exist for each agent. Anomalies trigger alerts. | Default logging treats agent and human activity as identical. Process-tree lineage is not captured. Agent actions are invisible in the audit trail. Behavioral monitoring is incomplete before it starts. |
| 5. Isolation | Runtime containment that limits the blast radius if an agent goes rogue, separate from human endpoint protection. | A rogue agent can be contained in its sandbox without taking down the endpoint, the user session, or other agents on the same machine. | No containment boundary exists between agents and the host. A single compromised agent can access everything the user can. Blast radius is the entire endpoint. |
| 6. Compliance | Documentation that maps agent identities, controls, and audit trails to the compliance framework the auditor will use. | When the auditor asks about agents, the security team produces a control catalog, an audit trail, and a governance policy written specifically for agent identities. | Emerging AI-risk frameworks (CSA Agentic Profile) exist, but mainstream audit catalogs (SOC 2, ISO 27001, PCI DSS) have not operationalized agent identities. No control catalog maps to agents. The auditor improvises which human-identity controls apply, and the security team answers with improvisation rather than documentation. |

Source: VentureBeat analysis of RSAC 2026 interviews (Caulfield, Zaitsev, Maor) and independent practitioner validation (McGladrey, Rees). May 2026.

Compliance frameworks have not caught up

“If you were to go through an audit today as a chief security officer, the auditor’s probably gonna have to figure out, hey, there are agents here,” Caulfield told VentureBeat. “Which one of your controls is actually supposed to be applied to it? I don’t see the word agents anywhere in your policies.”

McGladrey’s practitioner experience confirms the gap. The Cloud Security Alliance published an NIST AI RMF Agentic Profile in April 2026, proposing autonomy-tier classification and runtime behavioral metrics. But SOC 2, ISO 27001, and PCI DSS have not operationalized agent identities. The compliance frameworks McGladrey works with inside enterprises were written for humans. Agent identities do not appear in any control catalog he has encountered. The gap is a lagging indicator; the risk is not.

Security director action plan

VentureBeat identified five actions from the combined findings of Caulfield, Zaitsev, Maor, McGladrey, and Rees.

  1. Run an agent census and assume adversaries already did.

    Every agent, every MCP server those agents touch, every human accountable. Maor’s Censys data confirms agent infrastructure is already visible from the public internet. NIST’s NCCoE reached the same conclusion in its February 2026 concept paper on AI agent identity and authorization.

  2. Stop cloning human accounts for agents.

    McGladrey found that enterprises default to copying human user profiles, and permission sprawl starts on day one. Agents need to be a distinct identity type with scope limits that reflect what they actually do.

  3. Audit every MCP and API access path.

    Five vendors shipped MCP gateways at RSAC 2026. The capability exists. What matters is whether agents route through one or connect directly to tools with no action-level inspection.

  4. Fix logging so it distinguishes agents from humans.

    Zaitsev’s process-tree method reveals that agent-initiated actions are invisible in most default configurations. Rees found authorization planes so flat that access logs alone miss the actual behavior. Logging has to capture what agents did, not just what they were allowed to reach.

  5. Build the compliance case before the auditor shows up.

    The CSA published a NIST AI RMF Agentic Profile proposing agent governance extensions. Most audit catalogs have not caught up. Caulfield told VentureBeat that auditors will see agents in production and find no controls mapped to them. The documentation needs to exist before that conversation starts.

Security | VentureBeat – Read More

Virginia man found guilty of deleting 96 government databases

A Virginia man was convicted on federal charges Thursday after a jury found him guilty of deleting 96 government databases and stealing an individual’s password, which led to their email account being accessed without permission.

The Record from Recorded Future News – Read More