ShinyHunters Hackers Threaten 400 Firms Over Stolen Salesforce Data

ShinyHunters claims to have stolen data from 400 firms via Salesforce portals and is threatening to leak the information unless ransom demands are paid.

Hackread – Cybersecurity News, Data Breaches, AI and More – ​Read More

Mental health apps are leaking your private thoughts. How do you protect yourself? | Kaspersky official blog

In February 2026, the cybersecurity firm Oversecured published a report that makes you want to factory reset your phone and move into a remote cabin in the woods. Researchers audited 10 popular Android mental health apps — ranging from mood trackers and AI therapists to tools for managing depression and anxiety — and uncovered… 1575 vulnerabilities! Fifty-four of those flaws were classified as critical. Given the download stats on Google Play, as many as 15 million people could be affected. The real kicker? Six out of the ten apps tested explicitly promised users that their data was “fully encrypted and securely protected”.

We’re breaking down this scandalous “brain drain”: what exactly could leak, how it’s happening, and why “anonymity” in these services is usually just a marketing myth.

What was found in the apps

Oversecured is a mobile app security firm that uses a specialized scanner to analyze APK files for known vulnerability patterns across dozens of categories. In January 2026, researchers ran ten mental health monitoring apps from Google Play through the scanner — and the results were, shall we say, “spectacular”.

| App type | Installs | High-severity | Medium-severity | Low-severity | Total |
|---|---|---|---|---|---|
| Mood & habit tracker | 10M+ | 1 | 147 | 189 | 337 |
| AI therapy chatbot | 1M+ | 23 | 63 | 169 | 255 |
| AI emotional health platform | 1M+ | 13 | 124 | 78 | 215 |
| Health & symptom tracker | 500k+ | 7 | 31 | 173 | 211 |
| Depression management tool | 100k+ | 0 | 66 | 91 | 157 |
| CBT-based anxiety app | 500k+ | 3 | 45 | 62 | 110 |
| Online therapy & support community | 1M+ | 7 | 20 | 71 | 98 |
| Anxiety & phobia self-help | 50k+ | 0 | 15 | 54 | 69 |
| Military stress management | 50k+ | 0 | 12 | 50 | 62 |
| AI CBT chatbot | 500k+ | 0 | 15 | 46 | 61 |
| Total | 14.7M+ | 54 | 538 | 983 | 1575 |

Vulnerabilities found in the 10 tested mental health apps. Source

The anatomy of the flaws

The discovered vulnerabilities are diverse, but they all boil down to one thing: giving attackers access to data that should be under lock and key.

For starters, one of the vulnerabilities allows an attacker to launch any internal activity (screen) of the app, including ones never meant to be reachable from outside. This opens the door to hijacking authentication tokens and user session data. With those in hand, an attacker could essentially gain access to a user's therapy records.

Another issue is insecure local data storage with read permissions granted to any other app on the device. In other words, that random flashlight app or calculator on your smartphone could potentially read your cognitive behavioral therapy (CBT) logs, personal notes, and mood assessments.
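
At bottom, this flaw is nothing more than POSIX permission bits set too loosely on a file. Here is a minimal desktop Python sketch of the same mistake and its fix; the file and mode values are illustrative, and on Android the real fix is keeping files in the app-private internal storage directory.

```python
import os
import stat
import tempfile

def is_world_readable(path: str) -> bool:
    """Return True if any other user (or app) on the device can read the file."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IROTH)

# Simulate the flaw: a "private" log written with permissive mode bits.
fd, leaky_log = tempfile.mkstemp()
os.close(fd)
os.chmod(leaky_log, 0o644)   # readable by every other user/process
assert is_world_readable(leaky_log)

# The fix: restrict the file to its owner.
os.chmod(leaky_log, 0o600)
assert not is_world_readable(leaky_log)
os.remove(leaky_log)
```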

The researchers also found unencrypted configuration data baked right into the APK installation files. This included backend API endpoints and hardcoded URLs for Firebase databases.
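
Spotting such leftovers is mechanical: auditors scan decompiled APK resources for URL-shaped strings. A rough sketch of that idea follows; the sample blob and domains are invented for illustration.

```python
import re

# Toy stand-in for a decompiled resource file. The content is invented;
# a real audit scans every file extracted from the APK.
decompiled_blob = """
api_base=https://api.example-mentalhealth.app/v2
firebase=https://example-project.firebaseio.com
"""

URL_PATTERN = re.compile(r"https?://[^\s\"']+")

hardcoded_urls = URL_PATTERN.findall(decompiled_blob)
print(hardcoded_urls)
# Anything found here ships in cleartext to everyone who downloads the APK.
```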

Furthermore, several apps were caught using the cryptographically weak java.util.Random class to generate session tokens and encryption keys.
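
Python's `random` module shares the core weakness of `java.util.Random`: both are fast, seedable, non-cryptographic PRNGs whose entire output is determined by the seed. A minimal sketch of why that matters for session tokens, using `secrets` as Python's analogue of Java's `SecureRandom` (the seed value below is illustrative):

```python
import random
import secrets

def weak_token(seed: int) -> str:
    """Token from a non-cryptographic PRNG: fully determined by its seed."""
    rng = random.Random(seed)
    return "".join(f"{rng.randrange(256):02x}" for _ in range(16))

def strong_token() -> str:
    """Token from the OS CSPRNG: not reproducible even with full source access."""
    return secrets.token_hex(16)

# An attacker who guesses the seed (often just a timestamp) regenerates
# the exact same "random" session token.
assert weak_token(1700000000) == weak_token(1700000000)

# CSPRNG output cannot be replayed this way.
assert strong_token() != strong_token()
```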

Finally, most of the tested apps lacked root/jailbreak detection. On a rooted device, any third-party app with root privileges could gain total access to every bit of locally stored medical data.
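
Even a naive root check takes only a few lines. The sketch below illustrates the simplest heuristic; the `su` paths are common defaults, not an exhaustive list, and production apps typically rely on platform attestation rather than anything this basic.

```python
import os

# Common locations of the `su` binary on rooted Android devices.
# Illustrative only; real apps combine several signals.
SU_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/data/local/bin/su",
]

def looks_rooted() -> bool:
    """Naive root heuristic: does a su binary exist at a known path?"""
    return any(os.path.exists(p) for p in SU_PATHS)

# On detecting root, an app could refuse to cache sensitive records locally.
print("rooted:", looks_rooted())
```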

Shockingly, of the 10 apps analyzed, only four received updates in February 2026. The rest haven’t seen a patch since November 2025, and one hasn’t been touched since September 2024. Going 18 months without a security patch is a lifetime in this industry — especially for an app housing mood journals, therapy transcripts, and medication schedules.

Here’s a quick reminder of just how dangerous the misuse of this type of data gets. In 2024, the tech world was rocked by a sophisticated attack on XZ Utils, a critical component found in virtually every operating system based on the Linux kernel. The attacker successfully pressured the maintainer into handing over code commit permissions by exploiting the developer’s public admission of burnout and a lack of motivation to carry on with the project. Had the attack been completed, the damage would have been mind-boggling given that roughly 80% of the world’s servers run on Linux.

What could leak?

What do these apps collect and store? It’s the kind of stuff you’d likely only share with a trusted clinician: therapy session transcripts, mood logs, medication schedules, self-harm indicators, CBT notes, and various clinical assessment scales.

As far back as 2021, complete medical records were selling on the dark web for US$1000 each. For comparison, a stolen credit card number goes for anywhere between US$5 and US$30. Medical records contain a full identity package: name, address, insurance details, and diagnostic history. Unlike a credit card, you can’t exactly “reissue” your medical history. Furthermore, medical fraud is notoriously difficult to spot. While a bank might flag a suspicious transaction in hours, a fraudulent insurance claim for a phantom treatment can go unnoticed for years.

We’ve seen this movie before

The Oversecured study isn’t just an isolated horror story.

Back in 2020, Julius Kivimäki hacked the database of the Finnish psychotherapy clinic Vastaamo, making off with the records of 33 000 patients. When the clinic refused to cough up a €400 000 ransom, Kivimäki began sending direct threats to patients: “Pay €200 in Bitcoin within 24 hours, or else your records go public”. Ultimately, he leaked the entire database onto the dark web anyway. At least two people died by suicide, and the clinic was forced into bankruptcy. Kivimäki was eventually sentenced to six years and three months in prison, marking a record-breaking trial in Finland for the sheer number of victims involved.

In 2023, the U.S. Federal Trade Commission (FTC) slapped the online therapy giant BetterHelp with a US$7.8 million fine. Despite stating on their sign-up page that your data was strictly confidential, the company was caught funneling user info — including mental health questionnaire responses, emails, and IP addresses — to Facebook, Snapchat, Criteo, and Pinterest for targeted advertising. After the dust settled, 800 000 affected users received a grand total of… US$10 each in compensation.

By 2024, the FTC set its sights on the telehealth firm Cerebral, tagging them with a US$7 million fine. Through tracking pixels, Cerebral leaked the data of 3.2 million users to LinkedIn, Snapchat, and TikTok. The haul included names, medical histories, prescriptions, appointment dates, and insurance info. And the cherry on top? The company sent promotional postcards (sans envelopes) to 6000 patients, which effectively broadcasted that the recipients were undergoing psychiatric treatment.

In September 2024, security researcher Jeremiah Fowler stumbled upon an exposed database belonging to Confidant Health, a provider specializing in addiction recovery and mental health services. The database contained audio and video recordings of therapy sessions, transcripts, psychiatric notes, drug test results, and even copies of driver’s licenses. In total, 5.3 terabytes of data, 126 000 files, or 1.7 million records were sitting there without a password.

Why anonymity is an illusion

Developers love to drop the line: “We never share your personal data with anyone.” Technically, that might be true — instead, they share “anonymized profiles”. The catch? De-anonymizing that data isn’t exactly rocket science anymore. Recent research highlights that using LLMs to strip away anonymity has become a routine reality.
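
The standard technique here is a linkage attack: join the "anonymized" dataset with a public one on quasi-identifiers such as ZIP code, birth year, and gender. A toy sketch with entirely invented records:

```python
# All records below are fabricated for illustration.
anonymized = [
    {"zip": "90210", "birth_year": 1984, "gender": "F", "diagnosis": "anxiety"},
    {"zip": "10001", "birth_year": 1991, "gender": "M", "diagnosis": "depression"},
]
public_records = [
    {"name": "Jane Doe", "zip": "90210", "birth_year": 1984, "gender": "F"},
]

def reidentify(anon, public):
    """Match rows that agree on every quasi-identifier."""
    keys = ("zip", "birth_year", "gender")
    hits = []
    for a in anon:
        for p in public:
            if all(a[k] == p[k] for k in keys):
                hits.append({"name": p["name"], "diagnosis": a["diagnosis"]})
    return hits

print(reidentify(anonymized, public_records))
# One match: the "anonymous" anxiety record now has a name attached.
```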

Even the “anonymization” process itself is often a mess. A study by Duke University revealed that data brokers are openly hawking the mental health data of Americans. Out of 37 brokers surveyed, 11 agreed to sell data linked to specific diagnoses (like depression, anxiety, and bipolar disorder), demographic parameters, and in some cases, even names and home addresses. Prices started as low as US$275 for 5000 aggregated records.

According to the Mozilla Foundation, by 2023, 59% of popular mental health apps failed to meet even the most basic privacy standards, and 40% had actually become less secure than the previous year. These apps allowed account creation via third-party services (like Google, Apple, and Facebook), featured suspiciously brief privacy policies that glossed over data collection details, and employed a clever little loophole: some privacy policies applied strictly to the company’s website, but not the app itself. In short, your clicks on the site were “protected”, but your actions within the app were fair game.

How to protect yourself

Cutting these apps out of your life entirely is, of course, the most foolproof option — but it’s not the most realistic one. Besides, there’s no guarantee you can actually nuke the data already collected — even if you delete your account. We previously covered the grueling process of scrubbing your info from data broker databases; it’s possible, but prepare for a headache. So, how can you stay safe?

  • Check permissions before you hit “Install”. In Google Play, navigate to App description → About this app → Permissions. A mood tracker has no business asking for access to your camera, microphone, contacts, or precise GPS location. If it does, it’s not looking out for your well-being — it’s harvesting data.
  • Actually read the privacy policy. We get it — nobody reads these multi-page manifestos. But when a service is vacuuming up your most intimate thoughts, it’s worth a skim. Look for the red flags: does the company share data with third parties? Can you manually delete your records? Does the policy explicitly cover the app itself, or just the website? You can always feed the policy text into an AI and ask it to flag any privacy deal-breakers.
  • Check the last updated date. An app that hasn’t seen an update in over six months is likely a playground for unpatched vulnerabilities. Remember: six out of the 10 apps Oversecured tested hadn’t been touched in months.
  • Disable everything non-essential in your phone’s privacy settings. Whenever prompted, always select “ask not to track”. When an app pleads with you to enable a specific type of tracking — claiming it’s for “internal optimization” — it’s almost always a marketing ploy rather than a functional necessity. After all, if the app truly won’t work without a certain permission, you can always go back and toggle it on later.
  • Don’t use “Sign in with…” services. Authenticating via Facebook, Apple, Google, or Microsoft creates additional identifiers and gives companies a golden opportunity to link your data across different platforms.
  • Treat everything you type like a public social media post. If you wouldn’t want a random stranger on the internet reading it, you probably shouldn’t be typing it into an app with over 150 vulnerabilities that hasn’t seen a patch since the year before last.
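
Before the full read of a privacy policy, you can mechanically flag phrases worth extra scrutiny. A rough sketch of that first pass; the red-flag list and sample policy text are illustrative, not exhaustive.

```python
import re

# Hypothetical phrases that warrant a closer read in any privacy policy.
RED_FLAGS = [
    r"third[- ]part(y|ies)",
    r"advertis",
    r"retain .* indefinitely",
    r"affiliates",
    r"sell",
]

def flag_policy(text: str) -> list[str]:
    """Return the red-flag patterns that appear in the policy text."""
    return [p for p in RED_FLAGS if re.search(p, text, re.IGNORECASE)]

policy = ("We may share aggregated data with third parties and our "
          "affiliates for advertising purposes.")
print(flag_policy(policy))
```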

What else you should know about privacy settings and controlling your personal data online:

Kaspersky official blog – ​Read More

Google’s $32B Wiz Acquisition Set to Become Israel’s Largest Tech Deal Ever

Google’s $32 billion Wiz acquisition is nearing completion, marking a record Israeli tech exit and a major bet on cloud security.

The post Google’s $32B Wiz Acquisition Set to Become Israel’s Largest Tech Deal Ever appeared first on TechRepublic.

Security Archives – TechRepublic – ​Read More

Jazz Emerges From Stealth With $61M in Funding for AI-Powered DLP

The startup brings AI to data loss prevention to provide visibility into intent, context, and risk.

The post Jazz Emerges From Stealth With $61M in Funding for AI-Powered DLP appeared first on SecurityWeek.

SecurityWeek – ​Read More

How I’m getting better sleep this year thanks to these quirky gadgets

I regularly clock 7 hours a night, and I wake up without the help of an alarm. Here’s why my sleep is so good.

Latest news – ​Read More

How to turn on repair mode on your Android phone – and why it’s critical to do so

Repair Mode lets technicians fix your Android phone without seeing your personal files or apps. Here’s how to enable it.

Latest news – ​Read More

Anthropic and OpenAI just exposed SAST’s structural blind spot with free tools

OpenAI launched Codex Security on March 6, entering the application security market that Anthropic had disrupted 14 days earlier with Claude Code Security. Both scanners use LLM reasoning instead of pattern matching. Both proved that traditional static application security testing (SAST) tools are structurally blind to entire vulnerability classes. The enterprise security stack is caught in the middle.

Anthropic and OpenAI independently released reasoning-based vulnerability scanners, and both found bug classes that pattern-matching SAST was never designed to detect. The competitive pressure between two labs with a combined private-market valuation exceeding $1.1 trillion means detection quality will improve faster than any single vendor can deliver alone.

Neither Claude Code Security nor Codex Security replaces your existing stack. Both tools change procurement math permanently. Right now, both are free to enterprise customers. The head-to-head comparison and seven actions below are what you need before the board of directors asks which scanner you are piloting and why.

How Anthropic and OpenAI reached the same conclusion from different architectures

Anthropic published its zero-day research on February 5 alongside the release of Claude Opus 4.6. Anthropic said Claude Opus 4.6 found more than 500 previously unknown high-severity vulnerabilities in production open-source codebases that had survived decades of expert review and millions of hours of fuzzing.

In the CGIF library, Claude discovered a heap buffer overflow by reasoning about the LZW compression algorithm, a flaw that coverage-guided fuzzing could not catch even with 100% code coverage. Anthropic shipped Claude Code Security as a limited research preview on February 20, available to Enterprise and Team customers, with free expedited access for open-source maintainers. Gabby Curtis, Anthropic’s communications lead, told VentureBeat in an exclusive interview that Anthropic built Claude Code Security to make defensive capabilities more widely available.

OpenAI’s numbers come from a different architecture and a wider scanning surface. Codex Security evolved from Aardvark, an internal tool powered by GPT-5 that entered private beta in 2025. During the Codex Security beta period, OpenAI’s agent scanned more than 1.2 million commits across external repositories, surfacing what OpenAI said were 792 critical findings and 10,561 high-severity findings. OpenAI reported vulnerabilities in OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium, resulting in 14 assigned CVEs. Codex Security’s false positive rates fell more than 50% across all repositories during beta, according to OpenAI. Over-reported severity dropped more than 90%.

Checkmarx Zero researchers demonstrated that moderately complicated vulnerabilities sometimes escaped Claude Code Security’s detection. Developers could trick the agent into ignoring vulnerable code. In a full production-grade codebase scan, Checkmarx Zero found that Claude identified eight vulnerabilities, but only two were true positives. If moderately complex obfuscation defeats the scanner, the detection ceiling is lower than the headline numbers suggest. Neither Anthropic nor OpenAI has submitted detection claims to an independent third-party audit. Security leaders should treat the reported numbers as indicative, not audited.

Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat that the competitive scanner race compresses the window for everyone. Baer advised security teams to prioritize patches based on exploitability in their runtime context rather than CVSS scores alone, shorten the window between discovery, triage, and patch, and maintain software bill of materials visibility so they know instantly where a vulnerable component runs.

Different methods, almost no overlap in the codebases they scanned, yet the same conclusion. Pattern-matching SAST has a ceiling, and LLM reasoning extends detection past it. When two competing labs distribute that capability at the same time, the dual-use math gets uncomfortable. Any financial institution or fintech running a commercial codebase should assume that if Claude Code Security and Codex Security can find these bugs, adversaries with API access can find them, too.

Baer put it bluntly: open-source vulnerabilities surfaced by reasoning models should be treated closer to zero-day class discoveries, not backlog items. The window between discovery and exploitation just compressed, and most vulnerability management programs are still triaging on CVSS alone.

What the vendor responses prove

Snyk, the developer security platform used by engineering teams to find and fix vulnerabilities in code and open-source dependencies, acknowledged the technical breakthrough but argued that finding vulnerabilities has never been the hard part. Fixing them at scale, across hundreds of repositories, without breaking anything. That is the bottleneck. Snyk pointed to research showing AI-generated code is 2.74 times more likely to introduce security vulnerabilities compared to human-written code, according to Veracode’s 2025 GenAI Code Security Report. The same models finding hundreds of zero-days also introduce new vulnerability classes when they write code.

Cycode CTO Ronen Slavin wrote that Claude Code Security represents a genuine technical advancement in static analysis, but that AI models are probabilistic by nature. Slavin argued that security teams need consistent, reproducible, audit-grade results, and that a scanning capability embedded in an IDE is useful but does not constitute infrastructure. Slavin’s position: SAST is one discipline within a much broader scope, and free scanning does not displace platforms that handle governance, pipeline integrity, and runtime behavior at enterprise scale.

“If code reasoning scanners from major AI labs are effectively free to enterprise customers, then static code scanning commoditizes overnight,” Baer told VentureBeat. Over the next 12 months, Baer expects the budget to move toward three areas.

  1. Runtime and exploitability layers, including runtime protection and attack path analysis.

  2. AI governance and model security, including guardrails, prompt injection defenses, and agent oversight.

  3. Remediation automation. “The net effect is that AppSec spending probably doesn’t shrink, but the center of gravity shifts away from traditional SAST licenses and toward tooling that shortens remediation cycles,” Baer said.

Seven things to do before your next board meeting

  1. Run both scanners against a representative codebase subset. Compare Claude Code Security and Codex Security findings against your existing SAST output. Start with a single representative repository, not your entire codebase. Both tools are in research preview with access constraints that make full-estate scanning premature. The delta is your blind spot inventory.

  2. Build the governance framework before the pilot, not after. Baer told VentureBeat to treat either tool like a new data processor for the crown jewels, which is your source code. Baer’s governance model includes a formal data-processing agreement with clear statements on training exclusion, data retention, and subprocessor use, a segmented submission pipeline so only the repos you intend to scan are transmitted, and an internal classification policy that distinguishes code that can leave your boundary from code that cannot. In interviews with more than 40 CISOs, VentureBeat found that formal governance frameworks for reasoning-based scanning tools barely exist yet. Baer flagged derived IP as the blind spot most teams have not addressed. Can model providers retain embeddings or reasoning traces, and are those artifacts considered your intellectual property? The other gap is data residency for code, which historically was not regulated like customer data but increasingly falls under export control and national security review.

  3. Map what neither tool covers. Software composition analysis. Container scanning. Infrastructure-as-code. DAST. Runtime detection and response. Claude Code Security and Codex Security operate at the code-reasoning layer. Your existing stack handles everything else. That stack’s pricing power is what shifted.

  4. Quantify the dual-use exposure. Every zero-day Anthropic and OpenAI surfaced lives in an open-source project that enterprise applications depend on. Both labs are disclosing and patching responsibly, but the window between their discovery and your adoption of those patches is exactly where attackers operate. AI security startup AISLE independently discovered all 12 zero-day vulnerabilities in OpenSSL’s January 2026 security patch, including a stack buffer overflow (CVE-2025-15467) that is potentially remotely exploitable without valid key material. Fuzzers ran against OpenSSL for years and missed every one. Assume adversaries are running the same models against the same codebases.

  5. Prepare the board comparison before they ask. Claude Code Security reasons about code contextually, traces data flows, and uses multi-stage self-verification. Codex Security builds a project-specific threat model before scanning and validates findings in sandboxed environments. Each tool is in research preview and requires human approval before any patch is applied. The board needs side-by-side analysis, not a single-vendor pitch. When the conversation turns to why your existing suite missed what Anthropic found, Baer offered framing that works at the board level. Pattern-matching SAST solved a different generation of problems, Baer told VentureBeat. It was designed to detect known anti-patterns. That capability still matters and still reduces risk. But reasoning models can evaluate multi-file logic, state transitions, and developer intent, which is where many modern bugs live. Baer’s board-ready summary: “We bought the right tools for the threats of the last decade; the technology just advanced.”

  6. Track the competitive cycle. Both companies are heading toward IPOs, and enterprise security wins drive the growth narrative. When one scanner misses a blind spot, it lands on the other lab’s feature roadmap within weeks. Both labs ship model updates on monthly cycles. That cadence will outrun any single vendor’s release calendar. Baer said that running both is the right move: “Different models reason differently, and the delta between them can reveal bugs neither tool alone would consistently catch. In the short term, using both isn’t redundancy. It’s defense through diversity of reasoning systems.”

  7. Set a 30-day pilot window. Before February 20, this test did not exist. Run Claude Code Security and Codex Security against the same codebase and let the delta drive the procurement conversation with empirical data instead of vendor marketing. Thirty days gives you that data.
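
The "delta" in step 1 reduces to set arithmetic once each tool's findings are normalized to a common key. A sketch with hypothetical findings keyed by (file, rule):

```python
# Hypothetical normalized findings from three tools, keyed by (file, rule).
claude_findings = {("auth.py", "token-leak"), ("db.py", "sqli"), ("lzw.c", "heap-overflow")}
codex_findings = {("auth.py", "token-leak"), ("lzw.c", "heap-overflow"), ("api.py", "ssrf")}
existing_sast = {("db.py", "sqli")}

reasoning_total = claude_findings | codex_findings
blind_spots = reasoning_total - existing_sast    # missed by the incumbent SAST
disagreement = claude_findings ^ codex_findings  # caught by only one lab's scanner

print(f"{len(blind_spots)} findings your current SAST never surfaced")
print(f"{len(disagreement)} findings only one of the two scanners caught")
```

The symmetric difference is the payoff of running both scanners: it surfaces the bugs that only one model's reasoning style catches.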

Fourteen days separated Anthropic and OpenAI. The gap between the next releases will be shorter. Attackers are watching the same calendar.

Security | VentureBeat – ​Read More

AI is getting scary good at finding hidden software bugs – even in decades-old code

But AI also creates bugs – about 1.7 times as many as humans, including critical and major issues.

Latest news – ​Read More

Fake Gemini AI Chatbot Promotes ‘Google Coin’ in New Crypto Scam

A fake Gemini-style chatbot is pushing a bogus Google Coin presale, using Google branding and scripted AI replies to lure victims into crypto payments.

The post Fake Gemini AI Chatbot Promotes ‘Google Coin’ in New Crypto Scam appeared first on TechRepublic.

Security Archives – TechRepublic – ​Read More

This free Linux app lets you make memes in seconds – no GIMP required

If you want to make memes using your own images – without AI or complicated editors – Linux has a free tool that’s fun to try.

Latest news – ​Read More