Your job search is getting riskier, says LinkedIn – 9 ways to tell real listings from scams

One in three job recruiters has been impersonated by scammers, according to a new LinkedIn survey. Here’s what to look out for and how to stay safe in your search.

Latest news – ​Read More

Palo Alto Networks to Patch Zero-Day Exploited to Hack Firewalls

CVE-2026-0300 affects the Captive Portal service of PAN-OS software on PA and VM series firewalls.

The post Palo Alto Networks to Patch Zero-Day Exploited to Hack Firewalls appeared first on SecurityWeek.

SecurityWeek – ​Read More

All Linux gamers should take the latest Bazzite release seriously – here’s why

Want the best possible out-of-the-box gaming experience on Linux? The latest Bazzite distro delivers.

Latest news – ​Read More

Fedora 44 made me forget I was using Linux – in the best way

The latest release from the Fedora Project is now available, and it includes a long list of refinements that make this one of the best versions yet.

Latest news – ​Read More

North Korean hackers targeted ethnic Koreans in China with Android ‘BirdCall’ malware

Researchers at cybersecurity firm ESET attributed the campaign to APT37 and said the hackers used a backdoor attached to a suite of card games from a company called Sqgame.

The Record from Recorded Future News – ​Read More

One command turns any open-source repo into an AI agent backdoor. OpenClaw proved no supply-chain scanner has a detection category for it

Just two months ago, researchers at the Data Intelligence Lab at the University of Hong Kong introduced CLI-Anything, a new state-of-the-art tool that analyzes any repo’s source code and generates a structured command line interface (CLI) that AI coding agents can operate with a single command.

Claude Code, Codex, OpenClaw, Cursor, and GitHub Copilot CLI are all supported, and since its launch in March, CLI‑Anything has climbed to more than 30,000 GitHub stars.

But the same mechanism that makes software agent-native opens the door to agent-level poisoning. The attack community is already discussing the implications on X and security forums, translating CLI-Anything’s architecture into offensive playbooks.

The security problem is not what CLI-Anything does. It is what CLI-Anything represents.

CLI-Anything generates SKILL.md files, the same instruction-layer artifacts that Snyk’s ToxicSkills research found laced with 76 confirmed malicious payloads across ClawHub and skills.sh in February 2026. A poisoned skill definition does not trigger a CVE and never appears in a software bill of materials (SBOM). No mainstream security scanner has a detection category for malicious instructions embedded in agent skill definitions, because the category simply did not exist eighteen months ago.

Cisco confirmed the gap in April. “Traditional application security tools were not designed for this,” Cisco’s engineering team wrote in a blog post announcing its AI Agent Security Scanner for IDEs. “SAST [static application security testing] scanners analyze source code syntax. SCA [software composition analysis] tools check dependency versions. Neither understands the semantic layer where MCP [Model Context Protocol] tool descriptions, agent prompts, and skill definitions operate.”

Merritt Baer, CSO of Enkrypt AI and former Deputy CISO at Amazon Web Services (AWS), told VentureBeat in an exclusive interview: “SAST and SCA were built for code and dependencies. They don’t inspect instructions.”

This is not a single-vendor vulnerability. It is a structural gap in how the entire security industry monitors software supply chains. This is the pre-exploitation window. CLI-Anything is live, the attack community is discussing it, and security directors who act now get ahead of the first incident report.

The integration layer no stack can see

Traditional supply-chain security operates on two layers. The code layer is where SAST works, scanning source files for insecure patterns, injection flaws, and hardcoded secrets. The dependency layer is where SCA works, checking package versions against known vulnerabilities, generating SBOMs, and flagging outdated libraries.

Agent bridge tools like CLI-Anything, MCP connectors, Cursor rules files, and Claude Code skills operate on a third layer between the other two. Call it the agent integration layer: configuration files, skill definitions, and natural-language instruction sets tell an AI agent what software can do and how to operate it. None of it looks like code. All of it executes like code.

Carter Rees, VP of AI at Reputation, told VentureBeat in an exclusive interview: “Modern LLMs [large language models] rely on third-party plugins, introducing supply chain vulnerabilities where compromised tools can inject malicious data into the conversation flow, bypassing internal safety training.”

Researchers at Griffith University, Nanyang Technological University, the University of New South Wales, and the University of Tokyo documented the attack chain in an April paper, “Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems.” The team introduced Document-Driven Implicit Payload Execution (DDIPE), a technique that embeds malicious logic inside code examples within skill documentation.

Across four agent frameworks and five large language models, DDIPE achieved bypass rates between 11.6% and 33.5%. Static analysis caught most samples, but 2.5% evaded all four detection layers. Responsible disclosure led to four confirmed vulnerabilities and two vendor fixes.

The kill chain security leaders need to audit

Here’s the anatomy of the kill chain: An attacker submits a SKILL.md file to an open-source project containing setup instructions, code examples, and configuration templates. It looks like standard documentation. A code reviewer would wave it through because none of it is executable. But the code examples contain embedded instructions that an agent will parse as operational directives.

A developer uses an agent bridge tool to connect their coding agent to the repository. The agent ingests the skill definition and trusts it, because no verification layer exists to distinguish benign from malicious intent at the instruction level.

The agent executes the embedded instruction using its own legitimate credentials. Endpoint detection and response (EDR) sees an approved API call from an authorized process and passes it. Data exfiltration, configuration changes, and credential harvesting are all moving through channels that the monitoring stack considers normal traffic.

Rees identified the structural flaw that makes this chain lethal. “A significant vulnerability in enterprise AI is broken access control, where the flat authorization plane of an LLM fails to respect user permissions,” he told VentureBeat. A compromised skill definition riding that flat authorization plane does not need to escalate privileges. It already has them. Every link in that chain is invisible to the current security stack.

Pillar Security demonstrated a variant of this chain against Cursor in January 2026 (CVE-2026-22708). Implicitly trusted shell built-in commands could be poisoned through indirect prompt injection, converting benign developer commands into arbitrary code execution vectors. Users saw only the final command. The poisoning happened through other commands the IDE never surfaced for approval.

The evidence is already in production

In a documented attack chain from April 2026, a crafted GitHub issue title triggered an AI triage bot wired into Cline. The bot exfiltrated a GITHUB_TOKEN, which the attacker used to publish a compromised npm dependency that installed a second agent on roughly 4,000 developer machines for eight hours. There was just one issue title. Attackers had eight hours of access. No human approved the action.

Snyk’s ToxicSkills audit scanned 3,984 agent skills from ClawHub, the public marketplace for the OpenClaw agent framework, and skills.sh in February 2026. The results: 13.4% of all skills contained at least one critical security issue. Daily skill submissions jumped from less than 50 in mid-January to more than 500 by early February. The barrier to publishing was a SKILL.md markdown file and a GitHub account one week old. No code signing. No security review. No sandbox.

OpenClaw is not an outlier. It is the pattern. “The bar to entry is extremely low,” Baer said. “Adding a skill can be as simple as uploading a Word doc or lightweight config file. That’s a radically different risk profile than compiled code.” She pointed to projects like ClawPatrol that have started cataloging and scanning for malicious skills, evidence the ecosystem is moving faster than enterprise defenses.

The ClawHavoc campaign, first reported by Koi Security in late January 2026, initially identified 341 malicious skills on ClawHub. A follow-up analysis by Antiy CERT expanded the count to 1,184 compromised packages across the platform. The campaign delivered Atomic Stealer (AMOS) through skill definitions with professional documentation. Skills named solana-wallet-tracker and polymarket-trader matched what developers actively searched for.

The MCP protocol layer carries similar exposure. OX Security reported in April that researchers poisoned nine out of 11 MCP marketplaces using proof-of-concept servers. Trend Micro initially found 492 MCP servers exposed to the internet with zero authentication; by April, that number had grown to 1,467. As The Register reported, the root issue lies in Anthropic’s MCP software development kit (SDK) transport mechanism. Any developer using the official SDK inherits the vulnerability class.

VentureBeat Prescriptive Matrix: Three-layer agent supply-chain audit

VentureBeat developed a Prescriptive Matrix by mapping the three attack layers documented in the research and incident reports above against the detection capabilities of current SAST, SCA, and agent-layer tools. Each row identifies what security teams should verify and where no scanner has coverage today.

Layer

Threat

Current detection

Why it misses

Recommended action

1. Code

Prompt injection in AI-generated code

SAST scanners

Most SAST tools have no detection category for prompt injection in AI-generated code

Confirm that SAST scans AI-generated code for prompt injection. If not, have an open vendor conversation this quarter.

2. Dependencies

Malicious MCP servers, agent skills, plugin registries

SCA tools

SCA generates no AI-specific bill of materials. Agent-layer dependencies are invisible.

Confirm SCA includes MCP servers, agent skills, and plugin registries in the dependency inventory.

3. Agent integration

Poisoned SKILL.md files, malicious instruction sets, adversarial rules files

None until April 2026

No tool inspects the semantic meaning of agent instruction files. Baer: “We’re not inspecting intent.”

Deploy Cisco Skill Scanner or Snyk mcp-scan. Assign a team to own this layer.

Baer’s diagnosis of Layer 3 applies across the entire matrix: “Current scanners look for known bad artifacts, not adversarial instructions embedded in otherwise valid skills.” Cisco’s open-source Skill Scanner and Snyk’s mcp-scan represent the first tools purpose-built for this layer.

Security director action plan

Here’s how security leaders can get ahead of the problem.

Inventory every agent bridge tool in the environment. This includes CLI-Anything, MCP connectors, Cursor rules files, Claude Code skills, GitHub Copilot extensions. If the development team is using agent bridge tools that have not been inventoried, the risk cannot be assessed.

Audit agent skill sources the same way package registries get audited. Baer’s framing is precise: “A skill is effectively untrusted executable intent, even if it’s just text.” Shut off ungoverned ingestion paths until controls are in place. Stand up a review and allowlisting process for skills. The OWASP Agentic Skills Top 10 (AST01: Malicious Skills) provides the procurement framework to align controls against.

Deploy agent-layer scanning. Evaluate Cisco’s open-source Skill Scanner and Snyk’s mcp-scan for behavioral analysis of agent instruction files. If dedicated tooling is unavailable, require a second engineer to read every SKILL.md before installation.

Restrict agent execution privileges and instrument runtime. AI coding agents should not run with the same credential scope as the developer who invoked them. Rees confirmed the structural flaw: The flat authorization plane means a compromised skill does not need to escalate privileges. Baer’s prescription: “Instrument runtime observability. What data is the agent accessing, what actions is it taking, and are those aligned with expected behavior?”

Assign ownership for the gap between layers. The most dangerous attacks succeed because they fall between detection categories. Assign a team to own the agent integration layer. Review every SKILL.md, MCP config, and rules file before it enters the environment.

The gap that already has a name

Baer underscored the dangers of this new attack vector. “This feels very similar to early container security, but we’re still in the ‘we’ll get to it’ phase across most orgs,” she said. She added that, at AWS, it took a few high-profile wake-up calls before container security became table stakes. The difference this time is speed. “There’s no build pipeline, no compilation barrier. Just content,” she said.

CLI-Anything is not the threat. It is the proof case that the agent integration layer exists, that it is growing fast, and that the attacker community has already found it. The 33,000 developers who starred the repository are telling security teams where software development is heading. Eighteen months ago, the detection category for agent-integration-layer poisoning did not exist. Cisco and Snyk shipped the first tools for it in April. The window between those two facts is closing. Security directors who have not begun inventory are already behind.

Security | VentureBeat – ​Read More

This weird Pixel feature is one of my favorite tools – too bad Google may remove it soon

Leaks hint that the next Pixel lineup will lose the thermometer for “Pixel Glow” LEDs

Latest news – ​Read More

Trellix Source Code Breach Highlights Growing Supply Chain Threats

Info is scant, but such breaches can reveal where a security product’s controls are located and how detections are designed, giving attackers a leg up.

darkreading – ​Read More

Identity Is the New Perimeter: Access, Authentication, and Control That Actually Hold Up

Identity Is the New Perimeter: Access, Authentication, and Control That Actually Hold Up

Part 3 of a series on creating information security policies.

Attackers don’t break in…they log in.

That’s a bit of a dramatic exaggeration, and it seems cliché, but it’s not really too far off.

Consider the 2022 Uber breach. The attacker didn’t exploit a sophisticated vulnerability; they obtained a contractor’s credentials and then bombarded the user with MFA push requests until one was approved. That single moment of fatigue opened the door to internal systems and broader access.

Or look at the 2021 Colonial Pipeline attack. The initial entry point was a compromised VPN account that did not have multi-factor authentication enabled. One valid login was enough to trigger widespread operational disruption across critical infrastructure.

These major incidents aren’t outliers, but are case studies in how identity failures cascade. Attackers are no longer forced to break through hardened perimeters. They rather take advantage of weak access governance and overprivileged accounts.

That’s why Identity, Authentication, and Access Control remain under intense scrutiny in frameworks like ISO 27001 and SOC 2. Auditors and attackers understand the same thing: if identity controls fail, everything else becomes secondary.

To build a defensible environment (and walk into an audit with confidence – auditors know when you’re not confident), organizations need three pillars working in concert: strong Access Control, disciplined Privileged Access Management, and clearly enforced Authentication Standards.

Access Control Policy: Structure Over Sprawl

An Access Control Policy is where security becomes operational. It defines how access is granted, how it evolves, how it is validated over time, and ultimately how it is removed.

In practice, this policy is less about documentation and more about discipline. Without structure, access tends to sprawl—users accumulate permissions over time, roles blur, and “temporary” access quietly becomes permanent.

A well-constructed policy anchors itself in two principles: least privilege and role-based access control. Least privilege ensures that users have only what they need to do their job, nothing more. Role-based access control builds on that by assigning permissions to roles rather than individuals, which reduces inconsistency and simplifies oversight.

From an audit perspective, the real focus is lifecycle management. Auditors are not just asking whether access is appropriate today; they are asking whether there is a repeatable, enforceable process behind it.

That lifecycle typically includes four stages:

  • Provisioning: Access is granted based on a documented request, approved by the appropriate system or data owner, and tied to a clear business need.
  • Modification: Changes in role or responsibility kick off corresponding updates to access, rather than simply layering new permissions on top of old ones.
  • Review: Access is periodically revalidated – at least twice a year is often good enough.
  • Revocation: Access is promptly removed when no longer needed, especially with employee terminations.
Identity Is the New Perimeter: Access, Authentication, and Control That Actually Hold Up

Where organizations struggle most is not in defining these stages, but in proving they happen consistently. Reviews are performed but not documented; approvals occur informally; deprovisioning is a manual process that can be (is often?) delayed or missed; security awareness is hit-and-miss.

From a GRC (governance, risk, compliance) standpoint, these gaps are exactly the conditions attackers look for: dormant accounts, excessive privileges, and unclear ownership.

If you’re preparing for an audit, one of the most effective exercises is also one of the simplest: export access lists from your critical systems and map users to roles. The discrepancies you find will often tell you more than any policy document.

Privileged Access Management: Controlling the Blast Radius

If Access Control defines the structure, Privileged Access Management (PAM) defines the stakes.

Privileged accounts operate with elevated authority. They change configurations, access sensitive data at scale, and bypass the very controls designed to protect the system. When compromised, they don’t just provide access – they provide control.

PAM is about containment, not convenience.

A mature approach to privileged access starts with separation (aka, segregation). Administrative access should be distinct from standard user access, ensuring that elevated privileges are used only when necessary. A common approach is to use something like “username” and “username-admin” to denote the different accounts. That keeps the name obvious, especially in logs for forensics and alerts. This separation reduces both accidental misuse and the impact of credential compromise.

From that point, organizations move toward time-bound access. Rather than assigning standing admin rights, privileged access is granted temporarily. often through a just-in-time model, and expiring automatically. This greatly reduces the window of opportunity for misuse.

Equally important is oversight (the governance type, not the “oops, I didn’t notice that!” type). Privileged actions shouldn’t happen in the dark. Logging, monitoring, and session recording provide visibility – the lighting, if you will – into what is being done and by whom. This visibility is decisive  evidence during audits.

On the practical side, strong PAM programs share a few traits:

  • Privileged access requires explicit approval and is tied to a defined task or timeframe (make sure it’s documented)
  • Privileged activity is logged and regularly reviewed (don’t miss regular account reviews)
  • Administrative accounts are separate from day-to-day user accounts (“username” vs. “username-a”)

Auditors evaluating PAM controls are not necessarily expecting a fully mature, tool-driven implementation. What they are looking for is control and accountability: a clear inventory of privileged accounts, defined approval processes, and evidence that activity is monitored.

Without these controls, privilege escalation becomes the natural next step in an attack. A compromised user account is often just the beginning; the real damage occurs when that foothold turns into administrative control.

For organizations early in their quest (I like “quest” instead of something like “journey” because there’s more activity, introspection, and going down an unknown road in a quest) even a simplified “PAM-lite” approach can dramatically improve both security posture and audit readiness. This Lite approach involves eliminating shared admin accounts, enforcing MFA, and logging privileged actions.

Authentication Standards: Strengthening the Front Door

If access control governs who should have access, and PAM governs the most sensitive accounts, authentication determines how easily those controls can be bypassed.

Weak authentication undermines everything else.

An Authentication Standards policy (sample here: https://www.securitystudio.com/resources/identity-and-access-management-policy) defines the baseline expectations for verifying identity. This includes password practices, multi-factor authentication requirements, and the processes used to validate users during account recovery or support interactions.

Historically, password complexity rules dominated. Today, the emphasis is on length, uniqueness, and resistance to known breaches. Long passphrases are generally more effective and usable than short, complex strings, which make people more likely to reuse or write down.

Multi-factor authentication remains one of the most useful controls available. When implemented consistently, it significantly reduces the likelihood that a stolen password alone can lead to account compromise. MFA must be implemented thoughtfully. Push fatigue attacks (aka, MFA bombing)and phishing proxies have demonstrated that not all MFA methods offer equal protection.

Stronger approaches include authenticator apps with number matching, hardware security keys, and other phishing-resistant mechanisms. At minimum, MFA should be enforced for remote access, privileged accounts, and any system containing sensitive data.

Authentication standards also extend beyond login prompts. Account recovery processes, particularly help desk interactions, are a frequent target for social engineering. A well-defined policy ensures that identity is verified before credentials are reset or access is granted.

From an audit standpoint, the key aspect is consistent. It’s not enough to require MFA in policy; it has to be demonstrably enforced across systems (don’t forget – you have to LOVE screenshots to do an audit, so get used to taking them – I use Greenshot a lot!). Password standards must go beyond documentation to being applied in practice.

A quick internal assessment can reveal the gaps:

  • Where is MFA not enforced?
  • Are there justified exceptions?
  • How resistant is the password reset process to manipulation?

The answers to those questions align closely with real-world risk.

Why This Matters: Beyond Compliance

It’s tempting to view these controls as audit requirements, as boxes to check in preparation for ISO 27001 or SOC 2. But the reality is that these are controls that prevent breaches. They can be extremely annoying, but it’s very much a case where you don’t want to have to tell the company, “I told you so.”

When access is poorly governed, users gain unnecessary permissions.
When privileged accounts are loosely controlled, attackers escalate quickly.
When authentication is weak/inconsistent, credentials become the easiest way in.

These are compounding failures.

Identity has become a focal point of both security frameworks AND real-world attacks. Almost everything else in your environment is built on identity being trustworthy. When that ID foundation fails, defenses become reactive instead of preventive.

Final Thought: From Policy to Practice

There are 2 traps that orgs can fall into – lack of policies, and policies that don’t translate into consistent action: access reviews are scheduled but not completed, privileged activity is logged but not examined, MFA is required in theory/writing but bypassed in actual practice.

Audit readiness and real security come from closing that gap. (hint: do a gap analysis. Unsure what that is or means for your org? there’s lots out there; maybe I’ll write about it) Document your controls clearly, implement them consistently, and retain evidence that proves they’re working.

Attackers aren’t interested in your policies; they’re counting on the times when policies aren’t followed.

Stay vigilant!

Further Reading

Uber breach

https://blog.gitguardian.com/uber-breach-2022/

https://www.manageengine.com/products/desktop-central/blog/uber-data-breach-2022-how-the-hacker-annoyed-his-way-into-the-network

https://www.infoq.com/news/2022/09/Uber-breach-mfa-fatigue/

Colonial Pipeline incident

https://www.energy.gov/ceser/colonial-pipeline-cyber-incident

Colonial Pipeline lessons learned PDF: https://cyberdefensereview.army.mil/Portals/6/Documents/2021_summer_cdr/02_ReederHall_CDR_V6N3_2021.pdf

Secjuice – ​Read More

Conti, Akira ransomware affiliate given 8-year sentence

Deniss Zolotarjovs pleaded guilty in July 2025 to money laundering and wire fraud charges after being arrested in the country of Georgia.

The Record from Recorded Future News – ​Read More