Best DeleteMe Alternatives (2026): Competitors and Comparisons

Best DeleteMe alternatives for 2026 compared, including Incogni, Optery, Aura, Kanary, and Privacy Bee for data broker removal and privacy protection.

Hackread – Cybersecurity News, Data Breaches, AI and More – ​Read More

Google Pixel 10a review: Should Android users consider anything else at this price?

The newest Pixel phone packs minor upgrades that make a big difference.

Latest news – ​Read More

Google’s Biggest Android Security Update in Years Fixes 129 Bugs, Including an Actively Exploited Zero-Day

Google’s March 2026 Android update patches 129 flaws, including an actively exploited Qualcomm zero-day, and urges users to install the 2026-03-05 patch level.

The post Google’s Biggest Android Security Update in Years Fixes 129 Bugs, Including an Actively Exploited Zero-Day appeared first on TechRepublic.

Security Archives – TechRepublic – ​Read More

I turned my Android phone into the perfect bedside clock – here’s how

Don’t want a traditional alarm clock? Your Android phone can show the time while it charges. Here’s how to set it up.

Latest news – ​Read More

Apple’s $599 MacBook Neo first look: The budget Mac we’ve been waiting for?

The new affordable MacBook Neo uses the A18 chip found in the iPhone 16 Pro and comes in playful colors.

Latest news – ​Read More

What a browser-in-the-browser attack is, and how to spot a fake login window | Kaspersky official blog

In 2022, we dived deep into an attack method called browser-in-the-browser — originally developed by the cybersecurity researcher known as mr.d0x. Back then, no actual examples existed of this model being used in the wild. Fast-forward four years, and browser-in-the-browser attacks have graduated from the theoretical to the real: attackers are now using them in the field. In this post, we revisit what exactly a browser-in-the-browser attack is, show how hackers are deploying it, and, most importantly, explain how to keep yourself from becoming its next victim.

What is a browser-in-the-browser (BitB) attack?

For starters, let’s refresh our memories on what mr.d0x actually cooked up. The core of the attack stems from his observation of just how advanced modern web development tools — HTML, CSS, JavaScript, and the like — have become. It’s this realization that inspired the researcher to come up with a particularly elaborate phishing model.

A browser-in-the-browser attack is a sophisticated form of phishing that uses standard web technologies to render fraudulent login windows for well-known services like Microsoft, Google, Facebook, or Apple that are indistinguishable from the real thing. The researcher’s concept involves an attacker building a legitimate-looking site to lure in victims. Once there, users can’t leave comments or make purchases unless they “sign in” first.

Signing in seems easy enough: just click the Sign in with {popular service name} button. And this is where things get interesting: instead of a genuine authentication page provided by the legitimate service, the user gets a fake form rendered inside the malicious site, looking exactly like… a browser pop-up. Furthermore, the address bar in the pop-up, also rendered by the attackers, displays a perfectly legitimate URL. Even a close inspection won’t reveal the trick.

From there, the unsuspecting user enters their credentials for Microsoft, Google, Facebook, or Apple into this rendered window, and those details go straight to the cybercriminals. For a while, the scheme remained the researcher’s theoretical experiment. Now real-world attackers have added it to their arsenals.

Facebook credential theft

Attackers have put their own spin on mr.d0x’s original concept: recent browser-in-the-browser hits have been kicking off with emails designed to alarm recipients. For instance, one phishing campaign posed as a law firm informing the user they’d committed a copyright violation by posting something on Facebook. The message included a credible-looking link allegedly to the offending post.

Phishing email masquerading as a legal notice

Attackers sent messages on behalf of a fake law firm alleging copyright infringement — complete with a link supposedly to the problematic Facebook post. Source

Interestingly, to lower the victim’s guard, clicking the link didn’t immediately open a fake Facebook login page. Instead, the victim was first greeted by a bogus Meta CAPTCHA, and only after passing it was the fake authentication pop-up presented.

Fake login window rendered directly inside the webpage

This isn’t a real browser pop-up; it’s a website element mimicking a Facebook login page — a ruse that allows attackers to display a perfectly convincing address. Source

Naturally, the fake Facebook login page followed mr.d0x’s blueprint: it was built entirely with web design tools to harvest the victim’s credentials. Meanwhile, the URL displayed in the forged address bar pointed to the real Facebook site — www.facebook.com.

How to avoid becoming a victim

The fact that scammers are now deploying browser-in-the-browser attacks just goes to show that their bag of tricks is constantly evolving. But don’t despair: there’s a way to tell if a login window is legit. A password manager is your friend here; among other things, it acts as a reliable security litmus test for any website.

That’s because when it comes to auto-filling credentials, a password manager looks at the actual URL, not what the address bar appears to show, or what the page itself looks like. Unlike a human user, a password manager can’t be fooled with browser-in-the-browser tactics, or any other tricks, like domains having a slightly different address (typosquatting) or phishing forms buried in ads and pop-ups. There’s a simple rule: if your password manager offers to auto-fill your login and password, you’re on a website you’ve previously saved credentials for. If it stays silent, something’s fishy.

Beyond that, following our time-tested advice will help you defend against various phishing methods, or at least minimize the fallout if an attack succeeds:

  • Enable two-factor authentication (2FA) for every account that supports it. Ideally, use one-time codes generated by a dedicated authenticator app as your second factor. This helps you dodge phishing schemes designed to intercept confirmation codes sent via SMS, messaging apps, or email. You can read more about one-time-code 2FA in our dedicated post.
  • Use passkeys. The option to sign in with this method can also serve as a signal that you’re on a legitimate site. You can learn all about what passkeys are and how to start using them in our deep dive into the technology.
  • Set unique, complex passwords for all your accounts. Whatever you do, never reuse the same password across different accounts. We recently covered what makes a password truly strong on our blog. To generate unique combinations — without needing to remember them — Kaspersky Password Manager is your best bet. As an added bonus, it can also generate one-time codes for two-factor authentication, store your passkeys, and synchronize your passwords and files across your various devices.

Finally, this post serves as yet another reminder that theoretical attacks described by cybersecurity researchers often find their way out into the wild. So, keep an eye on our blog, and subscribe to our Telegram channel to stay up to speed on the latest threats to your digital security and how to shut them down.

Read about other inventive phishing techniques scammers are using day in, day out.

Kaspersky official blog – ​Read More

The best kids’ tablets of 2026: Expert tested and parent-reviewed

We tested the best kids’ tablets to find the most durable, fun-filled picks for travel, learning, and winter downtime.

Latest news – ​Read More

Pentagon vendor cutoff exposes the AI dependency map most enterprises never built

The federal directive ordering all U.S. government agencies to cease using Anthropic technology comes with a six-month phaseout window. That timeline assumes agencies already know where Anthropic’s models sit inside their workflows. Most don’t today.

Most enterprises wouldn’t, either. The gap between what enterprises think they’ve approved and what’s actually running in production is wider than most security leaders realize.

AI vendor dependencies don’t stop at the contract you signed; they cascade through your vendors, your vendors’ vendors, and the SaaS platforms your teams adopted without a procurement review. Most enterprises have never mapped that chain.
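Mapping that chain is essentially a graph traversal over vendor disclosures. A minimal sketch, with an entirely hypothetical vendor graph, might look like this:

```python
from collections import deque

# Illustrative vendor graph (all names hypothetical): each key lists the
# sub-processors and embedded AI providers that vendor is known to call.
VENDOR_DEPS = {
    "crm-platform":      ["analytics-engine", "email-service"],
    "analytics-engine":  ["claude-api"],
    "helpdesk-tool":     ["ticket-summarizer"],
    "ticket-summarizer": ["claude-api"],
    "email-service":     [],
    "claude-api":        [],
}

def paths_touching(provider: str, direct_vendors: list[str]) -> list[list[str]]:
    """BFS from each first-party vendor; collect every chain that
    reaches the named provider, however many tiers deep."""
    hits = []
    for root in direct_vendors:
        queue = deque([[root]])
        while queue:
            path = queue.popleft()
            if path[-1] == provider:
                hits.append(path)
                continue
            for dep in VENDOR_DEPS.get(path[-1], []):
                if dep not in path:          # guard against cycles
                    queue.append(path + [dep])
    return hits

for chain in paths_touching("claude-api", ["crm-platform", "helpdesk-tool"]):
    print(" -> ".join(chain))
# crm-platform -> analytics-engine -> claude-api
# helpdesk-tool -> ticket-summarizer -> claude-api
```

The hard part in practice isn’t the traversal; it’s populating the edges, since second- and third-tier dependencies only surface through sub-processor disclosures or traffic analysis.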

The inventory nobody has run

A January 2026 Panorays survey of 200 U.S. CISOs put a number on the problem: only 15% said they have full visibility into their software supply chains, up from just 3% a year ago. And 49% of workers had adopted AI tools without employer approval, according to a BlackFog survey of 2,000 workers at companies with more than 500 employees; 69% of C-suite members said they were fine with it.

That’s where undocumented AI vendor dependencies accumulate, invisible to the security team until a forced migration makes them everyone’s problem.

“If you asked a typical enterprise to produce a dependency graph that includes second- and third-order AI calls, they’d be building it from scratch under pressure,” said Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, in an exclusive interview with VentureBeat. “Most security programs were built for static assets. AI is dynamic, compositional, and increasingly indirect.”

When a vendor relationship ends overnight

The directive creates a forced migration unlike anything the federal government has attempted with an AI provider. Any enterprise running critical workflows on a single AI vendor faces the same math if that vendor disappears.

Shadow AI incidents now account for 20% of all breaches, adding as much as $670,000 to average breach costs, IBM’s 2025 Cost of a Data Breach Report found. You can’t execute a transition plan for infrastructure you haven’t inventoried.

Your contract with Anthropic may not exist, but your vendors’ contracts might. A CRM platform could have Claude embedded in its analytics engine. A customer service tool might call it on every ticket you process. You didn’t sign for that exposure, but you inherited it, and when a vendor cutoff hits upstream, it cascades downstream fast. The enterprise at the end of that chain doesn’t know the dependency exists until something breaks or the compliance letter shows up.

Anthropic has said eight of the 10 largest U.S. companies use Claude. Any organization in those companies’ supply chains has indirect Anthropic exposure, whether they contracted for it or not. AWS and Palantir, which hold billions in military contracts, may need to reassess their commercial relationships with Anthropic to maintain Pentagon business.

The supply chain risk designation means any company doing business with the Pentagon now has to prove its workflows don’t touch Anthropic.

“Models are not interchangeable,” Baer told VentureBeat. “Switching vendors changes output formats, latency characteristics, safety filters, and hallucination profiles. That means revalidating controls, not just functionality.”

She outlined a sequence that starts with triage and blast radius assessment, moves to behavioral drift analysis, and ends with credential and integration churn. “Rotating keys is the easy part,” Baer said. “Untangling hardcoded dependencies, vendor SDK assumptions, and agent workflows is where things break.”

The dependencies your logs don’t show

A senior defense official described disentangling from Claude as an “enormous pain in the ass,” according to Axios. If that’s the assessment inside the most well-resourced security apparatus on the planet, the question for enterprise CISOs is straightforward. How long would yours take?

The shadow IT wave that followed SaaS adoption taught security teams about unsanctioned technology risk. Most caught up. They deployed CASBs, tightened SSO, and ran spend analysis. The tools worked because the threat was visible. A new application meant a new login, a new data store, a new entry in the logs.

AI vendor dependencies don’t leave those traces.

“Shadow IT with SaaS was visible at the edges,” Baer said. “AI dependencies are embedded inside other vendors’ features, invoked dynamically rather than persistently installed, non-deterministic in behavior, and opaque. You often don’t know which model or provider is actually being used.”

Four moves for Monday morning

The federal directive didn’t create the AI supply chain visibility problem. It exposed it.

“Not ‘inventory your AI,’ because that’s too abstract and too slow,” Baer told VentureBeat. She recommended four concrete moves that a security leader can execute in 30 days.

  1. Map execution paths, not vendors. Instrument at the gateway, proxy, or application layer to log which services are making model calls, to which endpoints, with what data classifications. You’re building a live map of usage, not a static vendor list.

  2. Identify control points you actually own. If your only control is at the vendor boundary, you’ve already lost. You want enforcement at ingress (what data goes into models), egress (what outputs are allowed downstream), and orchestration layers where agents and pipelines operate.

  3. Run a kill test on your top AI dependency. Pick your most critical AI vendor and simulate its removal in a staging environment. Kill the API key, monitor for 48 hours, and document what breaks, what silently degrades, and what throws errors your incident response playbook doesn’t cover. This exercise will surface dependencies you didn’t know existed.

  4. Force vendor disclosure on sub-processors and models. Your AI vendors should be able to answer which models they rely on, where those models are hosted, and what fallback paths exist. If they can’t, that’s your fourth-party blind spot. Ask the questions now, while the relationship is stable. Once a cutoff hits, the leverage shifts, and the answers come too late.
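The first move, logging model calls at a layer you control, can be sketched as a thin in-process wrapper that builds the live usage map as calls happen. All names, endpoints, and classifications below are illustrative, not a specific product’s API:

```python
import time
from functools import wraps

# Live usage map: one record per model call observed at the gateway layer.
CALL_LOG: list[dict] = []

def logged_model_call(endpoint: str, data_classification: str):
    """Decorator recording which service hit which model endpoint,
    with what data classification."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(service: str, payload: str):
            CALL_LOG.append({
                "ts": time.time(),
                "service": service,
                "endpoint": endpoint,
                "classification": data_classification,
            })
            return fn(service, payload)
        return wrapper
    return decorate

@logged_model_call("https://api.example-llm.com/v1/messages", "internal")
def summarize_ticket(service: str, payload: str) -> str:
    # Stand-in for a real model call.
    return f"[stub summary of {len(payload)} chars]"

summarize_ticket("helpdesk-tool", "customer reported login failure ...")
print(CALL_LOG[0]["service"], "->", CALL_LOG[0]["endpoint"])
```

The same log doubles as the input to the kill test in move 3: disable the key for one endpoint, and the records tell you exactly which services should start failing, so anything that breaks without appearing in the log is an unmapped dependency.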

The control illusion

“Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system,” Baer told VentureBeat. “The real dependencies are one or two layers deeper, and those are the ones that fail under stress.”

The federal directive against Anthropic is one organization’s weather event. Every enterprise will eventually face its own version, whether the trigger is regulatory, contractual, operational, or geopolitical. The organizations that mapped their AI supply chain before the storm will recover. The ones that didn’t will scramble.

Map your AI vendor dependencies to the sub-tier level. Run the kill test. Force the disclosure. Give yourself 30 days. The next forced migration won’t come with a six-month warning.

Security | VentureBeat – ​Read More

How Pirated Software Turns Helpful Employees Into Malware Delivery Agents

Employees seeking free versions of paid software may unknowingly install malware-laced “cracked” apps that can steal credentials, deploy cryptominers, or open the door to ransomware.

The post How Pirated Software Turns Helpful Employees Into Malware Delivery Agents appeared first on SecurityWeek.

SecurityWeek – ​Read More

Phishing in 2026: 3 Attack Tactics That Beat Most Enterprise Defenses

Phishing drives about 90% of cyberattacks in 2026, using tactics like encrypted flows, QR code scams, and trusted cloud platforms to steal credentials.

Hackread – Cybersecurity News, Data Breaches, AI and More – ​Read More