Kaspersky Thin Client 2.0 update | Kaspersky official blog

Many companies have long since moved from the traditional workstation model to the virtual desktop infrastructure (VDI). VDI provides a number of advantages — one being better cybersecurity (not least because work data doesn’t leave corporate servers; it always lives in a virtual machine). However, despite a popular misconception, VDI alone doesn’t mean guaranteed security. It always matters how secure the endpoint device is that connects to the virtual workplace.

By and large, there are two options for using VDI. The first is to employ traditional workstations; the second is to use thin clients. Common advantages of a thin client include the following:

no moving parts: they don’t have active cooling systems or mechanical hard drives, which significantly increases the service life of the thin client (up to 7-10 years);
low energy consumption, which leads to direct savings;
lower price and cost of ownership (even in comparison with desktops and laptops for office work);
ease of maintenance and operation.

However, from our point of view, these aren’t the main advantages of using a thin client. Any workstation, be it a desktop PC or a laptop, must be provided with additional layers of protection. And a thin client can be made secure as-is if its operating system is based on the secure-by-design principle. It’s precisely such an operating system — Kaspersky Thin Client 2.0 — that we propose to use in thin clients connected to virtual desktop infrastructure.

What is Kaspersky Thin Client, and what’s new in version 2.0?

Essentially, Kaspersky Thin Client 2.0 is an updated operating system for thin clients, created in accordance with our Cyber Immune approach; as such, it doesn’t require additional security measures. Kaspersky Thin Client is based on our KasperskyOS system, which minimizes the risk of its compromise even in the event of complex targeted attacks.

The updated Kaspersky Thin Client version 2.0 can connect to remote environments deployed on the Citrix Workspace platform and VMware Horizon infrastructure using HTML5 technology. Kaspersky Thin Client 2.0 also supports connection to individual business applications deployed on the Microsoft Remote Desktop Services infrastructure, Windows Server, and terminal servers running Windows 10/11.

Another key change in KTC 2.0 is the increase in performance. We managed to increase both the speed of application delivery and the speed of system updates (due to the compact size of the OS image). Deployment of a thin client running KTC 2.0 via automatic connection now takes about two minutes.

You can learn more about the updated operating system for thin clients on the Kaspersky Thin Client page.

Kaspersky official blog – Read More

How to read encrypted messages from ChatGPT and other AI chatbots | Kaspersky official blog

Israeli researchers from Offensive AI Lab have published a paper describing a method for restoring the text of intercepted AI chatbot messages. Today we take a look at how this attack works, and how dangerous it is in reality.

What information can be extracted from intercepted AI chatbot messages?

Naturally, chatbots send messages in encrypted form. All the same, the implementation of large language models (LLMs) and the chatbots built on them harbors a number of features that seriously weaken the encryption. Combined, these features make it possible to carry out a side-channel attack when the content of a message is restored from fragments of leaked information.

To understand what happens during this attack, we need to dive a little into the details of LLM and chatbot mechanics. The first thing to know is that LLMs operate not on individual characters or words as such, but on tokens, which can be described as semantic units of text. The Tokenizer page on the OpenAI website offers a glimpse into the inner workings.

This example demonstrates how message tokenization works with the GPT-3.5 and GPT-4 models. Source

The second feature that facilitates this attack you’ll already know about if you’ve interacted with AI chatbots yourself: they don’t send responses in large chunks but gradually — almost as if a person were typing them. But unlike a person, LLMs write in tokens — not individual characters. As such, chatbots send generated tokens in real time, one after another; or, rather, most chatbots do: the exception is Google Gemini, which makes it invulnerable to this attack.

The third peculiarity is the following: at the time of publication of the paper, the majority of chatbots didn’t use compression, encoding or padding (appending garbage data to meaningful text to reduce predictability and increase cryptographic strength) before encrypting a message.

Side-channel attacks exploit all three of these peculiarities. Although intercepted chatbot messages can’t be decrypted, attackers can extract useful data from them — specifically, the length of each token sent by the chatbot. The result is similar to a Wheel of Fortune puzzle: you can’t see what exactly is encrypted, but the length of the individual tokens is revealed.

While it’s impossible to decrypt the message, the attackers can extract the length of the tokens sent by the chatbot; the resulting sequence is similar to a hidden phrase in the Wheel of Fortune show. Source
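The mechanics are easy to sketch in Python (a toy model under assumed conditions — a stream cipher with no padding — not the researchers’ actual tooling): each ciphertext chunk is exactly as long as the token it encrypts, so an eavesdropper recovers the token-length sequence without breaking the encryption.

```python
import os

def xor_stream_encrypt(token: bytes) -> bytes:
    """Encrypt one token with a fresh keystream (stream-cipher style)."""
    keystream = os.urandom(len(token))
    return bytes(a ^ b for a, b in zip(token, keystream))

# Hypothetical chatbot reply, pre-split into tokens for the example.
tokens = ["You", " should", " consult", " a", " doctor", "."]

# The attacker cannot read the bytes, but sees one chunk per token...
intercepted = [xor_stream_encrypt(t.encode()) for t in tokens]

# ...and each chunk's size equals the hidden token's length.
leaked_lengths = [len(chunk) for chunk in intercepted]
print(leaked_lengths)  # [3, 7, 8, 2, 7, 1]
```

This length sequence is exactly the “Wheel of Fortune” layout described above; the token-guessing models take it from there.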

Using extracted information to restore message text

All that remains is to guess what words are hiding behind the tokens. And you’ll never believe who’s good at guessing games: that’s right — LLMs. In fact, this is their primary purpose in life: to guess the right words in the given context. So, to restore the text of the original message from the resulting sequence of token lengths, the researchers turned to an LLM…

Two LLMs, to be precise, since the researchers observed that the opening exchanges in conversations with chatbots are almost always formulaic, and thus readily guessable by a model specially trained on an array of introductory messages generated by popular language models. Thus, the first model is used to restore the introductory messages and pass them to the second model, which handles the rest of the conversation.

General scheme of the attack. Source

This produces a text in which the token lengths correspond to those in the original message. But specific words are brute-forced with varying degrees of success. Note that a perfect match between the restored message and the original is rare — it usually happens that a part of the text is guessed wrong. Sometimes the result is satisfactory:

In this example, the text was restored quite close to the original. Source

But in an unsuccessful case, the reconstructed text may have little, or even nothing, in common with the original. For example, the result might be this:

Here the guesswork leaves much to be desired. Source

Or even this:

As Alice once said, “those are not the right words.” Source

In total, the researchers examined over a dozen AI chatbots, and found most of them vulnerable to this attack — the exceptions being Google Gemini (née Bard) and GitHub Copilot (not to be confused with Microsoft Copilot).

At the time of publication of the paper, many chatbots were vulnerable to the attack. Source

Should I be worried?

It should be noted that this attack is retrospective. Suppose someone took the trouble to intercept and save your conversations with ChatGPT (not that easy, but possible), in which you revealed some awful secrets. In this case, using the above-described method, that someone would theoretically be able to read the messages.

Thankfully, the interceptor’s chances are not too high: as the researchers note, even the general topic of the conversation was determined only 55% of the time. As for successful reconstruction, the figure was a mere 29%. It’s worth mentioning that the researchers’ criteria for a fully successful reconstruction were satisfied, for example, by the following:

Example of a text reconstruction that the researchers considered fully successful. Source

How important such semantic nuances are — decide for yourself. Note, however, that this method will most likely not extract any actual specifics (names, numerical values, dates, addresses, contact details, other vital information) with any degree of reliability.

And the attack has one other limitation that the researchers fail to mention: the success of text restoration depends greatly on the language the intercepted messages are written in: the success of tokenization varies greatly from language to language. This paper was focused on English, which is characterized by very long tokens that are generally equivalent to an entire word. Hence, tokenized English text shows distinct patterns that make reconstruction relatively straightforward.

No other language comes close. Even for those languages in the Germanic and Romance groups, which are the most akin to English, the average token length is 1.5–2 times shorter; and for Russian, 2.5 times: a typical Russian token is only a couple of characters long, which will likely reduce the effectiveness of this attack to zero.

At least two AI chatbot developers — Cloudflare and OpenAI — have already reacted to the paper by adding the padding method mentioned above, which was designed specifically with this type of threat in mind. Other AI chatbot developers are set to follow suit, and future communication with chatbots will, fingers crossed, be safeguarded against this attack.
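The padding countermeasure itself is simple to illustrate (a minimal sketch; the block size and padding scheme below are assumptions for illustration, since the vendors’ real implementations aren’t detailed here): if every token is padded to a fixed size before encryption, all ciphertext chunks look identical in length and the side channel disappears.

```python
PAD_TO = 32  # assumed block size, larger than any realistic token

def pad_token(token: bytes, size: int = PAD_TO) -> bytes:
    """Pad with a 0x80 marker byte, then zeros (ISO/IEC 7816-4 style)."""
    if len(token) >= size:
        raise ValueError("token longer than padding block")
    return token + b"\x80" + b"\x00" * (size - len(token) - 1)

def unpad_token(padded: bytes) -> bytes:
    """Strip trailing zeros, then the 0x80 marker."""
    return padded.rstrip(b"\x00")[:-1]

chunks = [pad_token(t.encode()) for t in ["You", " should", " consult"]]
print({len(c) for c in chunks})  # {32}: uniform, nothing to infer
assert [unpad_token(c).decode() for c in chunks] == ["You", " should", " consult"]
```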

Content filtering in KSMG 2.1 | Kaspersky official blog

When it comes to spam, we usually think of a bunch of absolutely irrelevant advertising letters, which antispam engines filter out with no trouble at all. However, this is far from the most unpleasant thing that can fall into your mailbox. Sometimes spam is used to carry out a DDoS attack on corporate email addresses, and the victim gets bombarded with completely legitimate emails that don’t raise any suspicion of a standard antispam engine.

Registration confirmations attack

In order to perform a mail bomb attack, attackers can exploit the registration mechanisms on the web resources of totally unrelated companies. Using automation tools, they register on thousands of services from different countries using the victim’s email address. As a result, a huge number of confirmations, account activation links, and similar letters end up in the victim’s mailbox. Moreover, since they’re sent by legitimate mail servers with a good reputation, the antispam engine considers them legitimate and doesn’t block them.

Examples of registration confirmation emails used for DDoS attacks on corporate email addresses

As a target the attackers usually choose an address that’s crucial for the company’s work — something that’s used to communicate with clients or partners; for example, a mailbox of the sales department, technical support, or a bank’s address to which applications for mortgage loans are sent. An attack can last for days, and the plethora of emails simply overloads the victim’s mail server and paralyzes the work of the attacked department.

To successfully protect a mailbox from such an attack, a more sophisticated tool is required. As one of the approaches to protection against mail bombs, we propose using the personalized content filtering module built into our updated Kaspersky Secure Mail Gateway. In particular, in the above example of an attack through registration mechanisms, the operator can block letters based on the presence of the word “registration” in various languages in the Subject field (Registrace | Registracija | Registration | Registrierung | Regisztráció). As a result, emails will be automatically sent to quarantine without reaching the inbox and overloading the mail server.
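The operator’s rule from this example can be sketched as a simple keyword filter (illustrative Python only; in the real product such rules are configured in the KSMG console, not written in code):

```python
import re

# Multilingual "registration" keywords from the example above.
REGISTRATION_RE = re.compile(
    r"\b(Registrace|Registracija|Registration|Registrierung|Regisztráció)\b",
    re.IGNORECASE,
)

def route_message(subject: str) -> str:
    """Quarantine mail-bomb confirmations, deliver everything else."""
    return "quarantine" if REGISTRATION_RE.search(subject) else "inbox"

print(route_message("Registration confirmation for example-shop.com"))  # quarantine
print(route_message("Q3 sales report"))                                 # inbox
```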

Personalized mail filter settings

In Kaspersky Secure Mail Gateway version 2.1 we’ve added the following options for filtering incoming and outgoing mail:

by letter size;
by attachment types and names;
by sender — you can specify a specific sender address or a regular expression;
by recipients (including hidden ones);
by the presence of certain text in the body of the letter (keywords and regular expressions can be added to the dictionary);
by the presence of text in the subject of the letter (keywords, masks, and regular expressions, optionally tied to specific senders);
by X-headers.

Flexible filtering of business mailings

The new capabilities of our solution can be used not only to protect against email bomb attacks. They can be used, for example, for flexible configuration of B2B-mailout filtering. Not all employees perceive all kinds of business mailings in the same way: for some it makes sense to delve into offers to purchase electronic components; for others such advertisements just clog up the inbox, even as they consider various invitations to participate in conferences or conduct seminars extremely valuable.

Therefore, completely blocking legitimate business mailouts isn’t an option. But on the other hand, it’s also not worth allowing their uncontrolled delivery: someone will always be dissatisfied. Therefore, Kaspersky Secure Mail Gateway doesn’t categorize such letters as spam, but allows you to configure their flexible filtering by senders, recipients, text in the subject or body of the letter, and so on.

You can learn more about Kaspersky Secure Mail Gateway, part of the Kaspersky Security for Mail Servers solution, on our corporate website.

Cryptocurrency fraud with Toncoin on Telegram | Kaspersky official blog

Making money with cryptocurrency is imagined by many to be a sinecure: one lucky trade and you’re set for life. While theoretically possible, just like winning the lottery, it only happens to an incredibly small number of people. “Getting rich with crypto” is more of a meme than reality. Yet self-proclaimed crypto-millionaires flaunt their Lamborghinis, stacks of cash, and watches worth the price of an apartment — fueling the dream. However, those cars are often rented, the “money” comes from a prank store, and the watches are cheap knock-offs.

These “crypto gurus” or “insiders” claim anyone can strike it rich with crypto; however, we all know there’s no such thing as a free lunch. Today, we expose the fraudulent scheme of “earning with Toncoin”, which revolves around a cryptocurrency based on Telegram technologies.

How the Toncoin “earning” scheme works

Scammers promote a “super-secret awesome bot” and referral links as the key to earning Toncoin. In short: you invest your money, buy “booster” tariffs, invite friends, and earn commission from every coin invested. The pyramid scheme incentivizes larger investments with the promise of higher returns.

According to our data, this scam has been active since at least November 2023 — targeting users both in Russia and in other countries. To make it easier to lure in “potential partners”, the scammers have recorded instructional videos in both Russian and English, along with detailed manuals and a large number of explanatory screenshots.

Let’s break this scam down step by step. Get your protection ready, and let’s dive in!

Stage one: preparation

First, the scammers instruct you to register a crypto wallet using an unofficial Telegram bot for storing crypto. Next, you provide your new wallet address to the bot for “earnings” through purchasing boosters. What these bots are really needed for, the scammers explain to victims later; initially, their main interest is ensuring you register without asking too many questions.

Window of the bot for purchasing boosters; registration requires you to enter the address of the wallet previously created in the crypto wallet bot

Next, you’re instructed to buy 5.5 to 501 Toncoin (TON), with one TON equivalent to about six U.S. dollars at the time of writing. They suggest using legitimate tools like P2P markets, crypto exchanges, or the official Telegram bot for this purchase. The freshly purchased TON must be immediately transferred to the crypto wallet bot — supposedly acting as your personal account within the “earning system”, which the scammers can control.

Stage two: take action

With accounts registered and coins purchased and transferred to the bot, it’s time to start “earning”. The scammers then ask you to “activate the second bot” — by choosing a booster tariff: “bike”, “car”, “train”, “plane”, or “rocket”. The fancier the tariff, the higher the commission percentage — “bike” costs 5 TON and offers 30% commission, while “rocket” is 500 TON for 70%. However, the choice is irrelevant, because whatever tariff the victim chooses, the money will be irretrievably lost.

Window with tariff selection in the booster bot

Following the scammers’ instructions, you create a private Telegram group and post several instructional videos about the “earning” scheme, along with your generated referral link. The abundance of these videos online indicates a significant number of victims have fallen for this scam.

Stage three: earn!

So, how do you actually earn something? With the help of your friends and acquaintances, of course! They will also need to buy TON, transfer it to the crypto wallet, and “activate the booster bot”. The scammers strongly advise inviting at least five friends to your private group. “The number of invitations is unlimited, and the more people you attract, the better for you. Remember: you won’t earn until at least five people activate the booster bot!”. All very tempting. They even recommend calling each friend to personally explain this “incredible earning scheme”.

The scammers promise earnings from two sources:

A fixed payment of 25 TON for each invited friend.
Commission based on the booster tariff purchased by your referrals.

It turns out to be a classic pyramid scheme, where each participant is “a partner rather than a freeloader”. Sadly, nobody profits except the scammers, and all “partners” lose their investments.

How to avoid crypto scams

Don’t fall for get-rich-quick schemes — even if promoted by friends or family. They might be victims themselves, unaware of the scam.
Never transfer cryptocurrency to unknown or obscure wallets. This scam uses a confusing sequence of instructions, making it easy to overlook the suspicious transfer of money from the official @wallet bot to a third-party one.
Use maximum protection for your crypto assets. This will securely store your wallet data, warn you about suspicious websites, block crypto-phishing links and scams, and protect you from miners and other threats.
Read our posts about crypto scammers to stay informed about all the latest fraudulent schemes, and don’t forget to share them with friends and family — especially those who still aren’t all that internet-savvy.

Is it safe to message other apps from WhatsApp? | Kaspersky official blog

The EU’s Digital Markets Act (DMA) requires major tech companies to make their products more open and interoperable in order to increase competition. Thanks to the DMA, iOS will soon permit third-party app stores to be installed on it, and major messaging platforms will need to allow communication with other similar apps — creating cross-platform compatibility. Meta (Facebook) engineers recently detailed how this compatibility will be implemented in its WhatsApp and Messenger. The benefits of interoperability are clear to anyone who’s ever texted or emailed. You’ll be able to send or receive messages without worrying about what phone, computer, or app the other person is using, or what country they’re in. However, there are downsides: first, third parties (from intelligence agencies to hackers) often have access to your correspondence; second, such messages are prime targets for spam and phishing. So, will the DMA be able to ensure provision of interoperability and its benefits, while eliminating its drawbacks?

It’s important to note that while the DMA’s impact on the iOS App Store will only affect EU users, cross-platform messaging will likely impact everyone — even if it will be only EU partners that connect to the WhatsApp infrastructure.

Can you chat on WhatsApp with users of other platforms?

Theoretically, yes, but not yet in practice. Meta has published specifications and technical requirements for partners who want their apps to be interoperable with WhatsApp or Messenger. It’s now up to these partners to climb aboard and develop a working bridge between their service and WhatsApp. To date, no such partnerships have been announced.

Owners and developers of other messaging services may be reluctant to implement such functionality. Some consider it insecure; others are unwilling to invest resources into rather complex integration. Meta requires potential partners to implement end-to-end encryption (E2EE) no weaker than in WhatsApp, which is a significant challenge for many platforms.

Even when (or if) third-party services show up, only those WhatsApp users who explicitly opt in will be able to message across platforms. It won’t be enabled by default.

What will such messaging look like?

Based on WhatsApp beta versions, messages with users on other platforms will be housed in a separate section of the app to distinguish them from chats with WhatsApp users.

Initially, only one-on-one messaging and file/image/video sharing will be supported. Calls and group chats won’t be available for at least a year.

User identification remains an open question. In WhatsApp, users find each other by phone number, while on Facebook, they do it by name, workplace, school, friends of friends, or other similar identifiers (and ultimately by a unique ID). Other platforms might use incompatible identifiers, like short usernames in Discord, or alphanumeric IDs in Threema. This is likely to impede automatic search and user matching, and at the same time facilitate impersonation attacks by scammers.

Encryption challenges

One of the key challenges with integrating different messaging platforms is implementing reliable encryption. Even if two platforms use the same encryption protocol, technical issues arise regarding storage and agreement of keys, user authentication, and more.

If the encryption method differs significantly, a bridge — an intermediary server that decrypts messages from one protocol and re-encrypts them into another — will likely be needed. If it seems to you that this is a man-in-the-middle (MITM) attack waiting to happen, where hacking this server would allow eavesdropping, your misgivings would be on the money. The failed Nothing Chats app, which used a similar scheme to enable iMessage on Android, recently demonstrated this vulnerability. Even Meta’s own efforts are illustrative: encrypted messaging between Messenger and Instagram was announced over five years ago, but full-scale encryption in Messenger only arrived last December, and seamless E2EE in Instagram remains not fully functional to this day. As this in-depth article explains, it’s not a matter of laziness or lack of time, but rather the significant technical complexity of the project.
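Why a bridge is a MITM waiting to happen can be shown with a toy model (XOR here stands in for each platform’s real E2EE; the keys and the whole design are invented for illustration): to translate between two incompatible protocols, the bridge must hold the plaintext, however briefly.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (XOR) standing in for each platform's E2EE.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY_WHATSAPP = b"wa-session-key"   # hypothetical per-platform keys
KEY_PARTNER = b"partner-session"

def bridge(ciphertext_from_whatsapp: bytes) -> bytes:
    # The bridge cannot translate ciphertext directly: it must first
    # recover the plaintext...
    plaintext = xor_crypt(ciphertext_from_whatsapp, KEY_WHATSAPP)
    # ...at this point the message is readable on the bridge server,
    # which is exactly the weak spot described above.
    return xor_crypt(plaintext, KEY_PARTNER)

msg = b"see you at 6pm"
delivered = bridge(xor_crypt(msg, KEY_WHATSAPP))
assert xor_crypt(delivered, KEY_PARTNER) == msg  # recipient reads it fine
```

End-to-end encryption, strictly speaking, ends at the bridge; whoever controls (or compromises) that server reads everything.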

Cryptographers are generally highly skeptical about the idea of cross-platform E2EE. Some experts believe the problem can be solved — for example, by placing the bridge directly on the user’s computer or by having all platforms adopt a single, decentralized messaging protocol. However, the big fish in the messaging market aren’t swimming in that direction at all. It’s hard to accuse them of idleness or inertia — all practical experience demonstrates that reliable and user-friendly message encryption within open ecosystems is difficult to implement. Just look at the saga of PGP encryption in email, and the confessions of top cryptography experts.

We’ve compiled information on the WhatsApp/Messenger integration plans of major communication platforms, and assessed the technical feasibility of cross-platform functionality:

Service
Statement on WhatsApp compatibility
Encryption compatibility

Discord
None
No E2EE support, integration unlikely

iMessage
None
Uses own encryption — comparable in strength to WhatsApp

Matrix
Interested in technical integration with WhatsApp, and supports the DMA in general
Uses own encryption — comparable in strength to WhatsApp

Signal
None
Uses the Signal protocol, as does WhatsApp

Skype
None
Uses the Signal protocol, as does WhatsApp, but for private conversations only

Telegram
None
Most chats are unencrypted, and private conversations are encrypted with an unreliable algorithm

Threema
Concerned about privacy risks associated with WhatsApp integration. Integration unlikely
Uses own encryption — comparable in strength to WhatsApp

Viber
None
Uses own encryption — comparable in strength to WhatsApp

Security concerns

Beyond encryption issues, integrating various services introduces additional challenges in protecting against spam, phishing, and other cyberthreats. Should you receive spam on WhatsApp, you can block the offender there and then. After being blocked by several users, the spammer will have limited ability to message strangers. To what extent such anti-spam techniques will work with third-party services remains to be seen.

Another issue is the moderation of unwanted content — from pornography to fake giveaways. When algorithms and experts from not one but two companies are involved, response speed and quality are bound to suffer.

Privacy concerns will also become more complex. Say you install the Skype app — in doing so, you share data with Microsoft, which will store it. However, as soon as you message someone on WhatsApp from Skype, certain information about you and your activity will land on Meta’s servers. Incidentally, WhatsApp already has a so-called guest agreement in place for this case. It’s this issue that the Swiss team behind Threema finds unsettling, for fear that messaging with WhatsApp users could lead to the de-anonymization of Threema users.

And let’s not forget that the news of cross-platform support is music to the ears of malware authors — it will be much easier to lure victims with “WhatsApp mods for messaging with Telegram” or other fictitious offerings. Of all the issues, however, this one is the easiest to solve: just install apps only from official stores and use reliable protection on your smartphones and computers.

What to do?

If you use WhatsApp and want to message users of other services

Count up roughly how many non-WhatsAppers there are in your circle who use other platforms that have announced interoperability with WhatsApp. If there aren’t many, it’s better not to enable support for any and all third-party messengers: the risks of spam and unwanted messages outweigh the potential benefits.

If there are many such people, consider whether you discuss confidential topics. Even with Meta’s encryption requirements, cross-platform messaging through a bridge should be considered vulnerable to interception and unauthorized modification. Therefore, it’s best to use the same secure messenger (such as Signal) for confidential communication.

If you decide that WhatsApp + third-party messenger is the winning formula, be sure to max out the privacy settings in WhatsApp, and be wary of odd messages, especially from strangers, but also from friends on unusual topics. Try to double-check it’s who they claim to be, and not some scammer messaging you through a third-party service.

If you use another messenger that has announced interoperability with WhatsApp

While gaining access to all WhatsApp users within your favorite messenger is appealing, if you use a different messenger for increased privacy, connecting to WhatsApp will likely diminish it. Meta services will collect certain metadata during conversations, potentially leading to account de-anonymization, and the encryption bridge may be vulnerable to eavesdropping. In general, we don’t recommend activating this feature in secure messengers, should it ever become available.

Tips for everyone

Beware of “mods” and little-known apps that promise cross-platform messaging and other wonders. Lurking behind the seductive interface is probably malware. Be sure to install protection on your computer and smartphone to prevent attackers from stealing your correspondence right inside legitimate messengers.

Transatlantic Cable podcast episode 343 | Kaspersky official blog

Episode 343 of the Transatlantic Cable podcast begins with news that Instagram is testing a tool to help tackle ‘sextortion’, or intimate image abuse. Following that, the team discuss how criminals are increasingly using AI to defraud consumers out of their money.

The last two stories look at X and ransomware. The first focuses on how X is automatically removing “twitter” from URLs, providing scammers with a real opportunity. Finally, the last story looks at how some ransomware gangs are trying their luck at calling the front desks of businesses to leverage payment out of them. However, it doesn’t always go to plan.

If you like what you heard, please consider subscribing.

Instagram to test new tools to fight so-called sextortion
Criminals ramp up social engineering and AI tactics to steal consumer details
X automatically changed ‘Twitter’ to ‘X’ in users’ posts, breaking legit URLs
Ransomware gang’s new extortion trick? Calling the front desk

How to prevent surveillance through banner ads | Kaspersky official blog

The industrial scale of surveillance of internet users is a topic we keep returning to. Every click on a website, every scroll in a mobile app, and every word you type into a search bar is tracked by dozens of tech companies and advertising firms. And it affects not only phones and computers, but also smart watches, smart TVs and speakers — even cars. As it turns out, these motherlodes of information are used not only by advertisers offering vacuum cleaners or travel insurance. Through various intermediary companies, this data is snapped up by security agencies of all stripes: police, intelligence, you name it. See here for the latest investigation into such practices, focusing on the Patternz platform and the “advertising” firm Nuviad. Previously, similar investigations probed Rayzone, Near Intelligence, and others. These companies, their jurisdictions of incorporation, and their client lists vary, but the general formula is always the same: collect and save proprietary information generated by advertising, then resell it to law enforcement agencies worldwide.

Behind the scenes of contextual advertising

We’ve already described in detail how data is collected on web pages and in apps — but not how it gets put to use. In overly simplified terms, behind every banner display or advertising link in today’s online world, there is some lightning-fast, super-complex trading.

Advertisers upload their ads and audience requirements to a demand-side platform (DSP), which finds suitable sites or apps to display such advertising. The DSP then takes part in an auction for the types of advertising (banner, video, and so on) to be displayed on these sites and apps. Depending on who views the ads and how well they match the advertiser’s requirements, a particular type of ad may win the auction. This process is known as real-time bidding (RTB).

During the bidding, participants receive information about the potential ad consumer: previously collected data on the individual is condensed into a brief description card. Depending on the platform, the composition of this data may vary, but a fairly typical set would be the consumer’s approximate or precise location, the device in use, the OS version, as well as “demographic and psychographic attributes” — that is, gender, age, family members, hobbies, and other topics of interest to the user.
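The “brief description card” can be pictured as something like the following simplified bid request (field names loosely follow the public OpenRTB specification; every value here is invented):

```python
import json

# Invented example of the data a DSP may receive during an RTB auction.
bid_request = {
    "id": "auction-8472",
    "device": {
        "os": "Android", "osv": "14", "model": "Pixel 8",
        "geo": {"lat": 52.52, "lon": 13.40, "type": 1},  # precise location
    },
    "user": {
        "id": "ad-id-1f3c9a",            # persistent advertising ID
        "yob": 1987, "gender": "F",
        "keywords": "travel,parenting,mortgages",
    },
    "app": {"bundle": "com.example.weather"},
}

print(json.dumps(bid_request, indent=2))
```

Every auction participant receives this card whether or not it wins the bid, which is precisely what makes RTB data so attractive to the data brokers described above.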

How RTB data is used for surveillance

A 404 Media investigation found that the Patternz platform advertised to clients that it processed 90 terabytes of data daily, covering the actions of around five billion user IDs. Note that there are far fewer real users than IDs since each person can have several IDs. Because advertising is global, so too is the scope of data collection.

Collecting and analyzing the above data allows precision tracking of:

potential consumers’ movements
times when they leave or visit certain places
times when they are located close to certain people
their interests and search queries
history of changing interests
membership in certain segments, for example, “recently had a baby” or “just went on vacation”

This information makes it possible to discover lots of curious things: where the person is during the day and at night, who they like to spend time with, who they travel with by car and where, and masses of other personal information. As stated by the U.S. Office of the Director of National Intelligence (ODNI), such depth of data collection was previously only possible through physical surveillance or targeted wiretapping.

Is such data collection legal? Although laws vary greatly from country to country, in most cases mass surveillance by intelligence agencies, especially surveillance built on commercially acquired data, falls into a legal gray area.

Bonus game: surveillance through push notifications

There’s another unrelated but no less unpleasant method of centralized surveillance of users. In this case, the role of treasure trove falls to Apple and Google, which send centralized push notifications to all iOS and Android devices, respectively. To save power on smartphones, almost all app notifications are delivered through Apple or Google servers; and depending on the app’s architecture, a notification may contain information that’s easy to see and of interest to third parties. It turns out that some intelligence agencies have tried to gain access to notification data. What’s more, a recent study found that a significant number of apps abuse notifications to collect data about the device (and the user) at the time the notification is received, even if the user isn’t in the relevant app at that moment, or isn’t using their phone at all.

How to guard against surveillance through advertising

Since all of the above-mentioned companies collect data using central hubs in the shape of large ad exchanges, no amount of denylisting apps and sites will protect you from being tracked. Any banner ad, video insert, or social network advertising generates events for trackers.

The only way to achieve any meaningful reduction in the scale of surveillance is with quite radical anti-advertising measures. Not all of them are convenient or suitable for everyone, but the more tips from the list you can apply, the fewer “events” involving you will end up on the servers of Rayzone or other such companies. In a nutshell:

Use apps that don’t display ads. This doesn’t guarantee the absence of web beacons and tracking, but will at least reduce the intensity.
Block ads and tracking in web browsers. Mozilla Firefox and Safari have built-in anti-surveillance protection, while anti-spyware and anti-advertising add-ons are available for all popular browsers in the official add-on stores.
For maximum protection, turn on Private Browsing in Kaspersky Standard, Kaspersky Plus, or Kaspersky Premium.
Disable auto-downloading of images in emails.
Configure secure DNS on your smartphone, computer, and home router by specifying an ad-blocking server, say, BlahDNS.
Check your smartphone’s privacy settings. Make it a habit to reset your advertising ID at least once a month. Prevent apps from collecting data for personalized ads and showing location-based ads (Apple, Google).
Revoke permissions to access location and other sensitive data from all apps that do not require it for their primary function.
Completely disable push notifications in your smartphone settings for all apps that can do without them.
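To illustrate the secure-DNS tip above: an ad-blocking DNS server works by simply refusing to resolve known tracker domains. Here's a toy Python sketch of the idea; the blocklist entries and addresses are made up, while real filtering resolvers such as BlahDNS maintain lists with millions of entries:

```python
# A toy illustration of how an ad-blocking DNS resolver works:
# domains on a blocklist (and their subdomains) resolve to the
# sinkhole address 0.0.0.0, so the browser simply cannot reach the
# tracker. All domains and addresses below are invented examples.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain, upstream):
    """Return a sinkhole address for blocked domains, else ask upstream."""
    labels = domain.lower().rstrip(".").split(".")
    # check the domain itself and every parent domain against the list
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return "0.0.0.0"
    return upstream(domain)

upstream = lambda d: "93.184.216.34"  # stand-in for a real DNS lookup
print(resolve("video.ads.example.com", upstream))  # blocked subdomain
print(resolve("example.org", upstream))            # allowed
```

Because the filtering happens at the name-resolution step, it covers every app on the device at once, which is why setting it on the router protects the whole home network.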

Kaspersky official blog – Read More

EM Eye: data theft from surveillance cameras | Kaspersky official blog

Scientific research of hardware vulnerabilities often paints captivating espionage scenarios, and a recent study by researchers from universities in the United States and China is no exception. They found a way to steal data from surveillance cameras by analyzing their stray electromagnetic emissions — aptly naming the attack EM Eye.

Reconstructing information from stray emissions

Let’s imagine a scenario: a secret room in a hotel with restricted access is hosting confidential negotiations, with the identities of the folks in attendance in this room also deemed sensitive information. There’s a surveillance camera installed in the room running round the clock, but hacking the recording computer is impossible. However, there’s a room next-door to the secret room accessible to other, regular guests of the hotel. During the meeting, a spy enters this adjacent room with a device which, for the sake of simplicity, we’ll consider to be a slightly modified radio receiver. This receiver gathers data that can be subsequently processed to reconstruct the video from the surveillance camera in the secret room! And the reconstructed video would look something like this:

On the left is the original color image from the surveillance camera. On the right are two versions of the image reconstructed from the video camera’s unintentional radio emissions. Source

How is this even possible? To understand this, let’s talk about TEMPEST attacks. This codename, coined by the U.S. National Security Agency, refers to methods of surveillance using unintentional radio emissions, plus countermeasures against those methods. This type of hardware vulnerability was first studied during… World War II. The U.S. Army used an automatic encryption device from the Bell Telephone Company: plaintext input was mixed with a pre-prepared random sequence of characters to produce an encrypted message. The device used electromagnetic relays — essentially large switches.

Think of a mechanical light switch: each time you use it, a spark jumps between its contacts. This electrical discharge generates radio waves. Someone at a distance could tune a radio receiver to a specific frequency and know when you turn the light on or off. This is called stray electromagnetic radiation — an inevitable byproduct of electrical devices.

In the case of the Bell encryption device, the switching of electromagnetic relays generated such interference that its operation could be detected from a considerable distance. And the nature of the interference permitted reconstruction of the encrypted text. Modern computers aren’t equipped with huge electromechanical switches, but they do still generate stray emissions. Each bit of data transmitted corresponds to a specific voltage applied to the respective electrical circuit, or its absence. Changing the voltage level generates interference that can be analyzed.

Research on TEMPEST has been classified for a long time. The first publicly accessible work was published in 1985. Dutch researcher Wim van Eck showed how stray emissions (also known as side-band electromagnetic emissions) from a computer monitor allow the reconstruction of the image displayed on it from a distance.

Images from radio noise

The authors of the recent study, however, work with much weaker and more complex electromagnetic interference. Compared to the encryption devices of the 1940s and computer monitors of the 1980s, data transmission speeds have increased significantly, and though there’s now more stray radiation, it’s weaker due to the miniaturization of components. However, the researchers benefit from the fact that video cameras have become ubiquitous, and their design has become more or less standardized. A camera has a light-sensitive sensor, and the raw data from it is usually transmitted to the graphics subsystem for further processing. It is this process of transmitting raw information that the authors of the study examined.

In some other recent experiments, researchers demonstrated that electromagnetic radiation generated by the data transmission from a video camera sensor can be used to determine the presence of a nearby camera — which is valuable information for protecting against unauthorized surveillance. But, as it turned out, much more information can be extracted from the interference.

Interference depending on the type of image transmitted by the surveillance camera. Source

The researchers had to thoroughly study the methods of data transmission between the video camera sensor and the data processing unit. Manufacturers use different transmission protocols for this. The frequently used MIPI CSI-2 interface transmits data line by line, from left to right — similar to how data is transmitted from a computer to a monitor (which that same Wim van Eck intercepted almost 40 years ago). The illustration above shows the study authors’ experiments. A high-contrast target with dark and light stripes running horizontally or vertically is placed in front of the camera. Next, the stray radiation in a certain frequency range (for example, 204 or 255 megahertz) is analyzed. You can see that the intensity of the radio emission correlates with the dark and light areas of the target.

Improving image quality by combining data from multiple frames. Source

This is essentially the whole attack: capture the stray radio emission from the video camera, analyze it, and reconstruct the unprotected image. However, in practice, it’s not that simple. The researchers were dealing with a very weak and noisy radio signal. To improve the picture, they used a neural network: by analyzing the sequence of stolen frames, it significantly improves the quality of the intercepted video. The result is a transition from “almost nothing is visible” to an excellent image, no worse than the original, except for a few artifacts typical of neural networks (and information about the color of objects is lost in any case).
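The "combining data from multiple frames" step rests on a simple statistical fact: averaging N independent noisy captures of the same scan line shrinks the noise by roughly √N. Here's a minimal sketch of that effect with toy numbers; it bears no relation to the study's actual signal processing, which additionally relies on a neural network:

```python
import random

# Toy illustration of why combining several noisy captures of the
# same frame helps: averaging N independent noisy measurements
# shrinks the noise by roughly sqrt(N), pulling the scan line out
# of the static. Signal values and noise level are invented.
random.seed(1)

true_line = [0.0, 1.0, 1.0, 0.0, 1.0]  # one "scan line": dark/light pixels

def capture(line, noise=0.8):
    """One noisy radio capture of the line."""
    return [p + random.gauss(0, noise) for p in line]

def average(captures):
    """Pixel-wise mean across all captures."""
    n = len(captures)
    return [sum(c[i] for c in captures) / n for i in range(len(captures[0]))]

def error(est, truth):
    """Mean absolute deviation from the true line."""
    return sum(abs(a - b) for a, b in zip(est, truth)) / len(truth)

one = capture(true_line)
many = average([capture(true_line) for _ in range(200)])
print(f"error with 1 frame:    {error(one, true_line):.2f}")
print(f"error with 200 frames: {error(many, true_line):.2f}")
```

The averaged estimate lands far closer to the true dark/light pattern than any single capture, which is why a static surveillance camera repeating similar frames is such a convenient target.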

EM Eye in practice

In numerous experiments with various video cameras, the researchers were able to intercept the video signal at distances of up to five meters. In real conditions, such interception would be complicated by a higher level of noise from neighboring devices. Computer monitors, which operate on a similar principle, “spoil” the signal from the video camera the most. As a recommendation to camera manufacturers, the authors of the study suggest improving the shielding of devices — even providing the results of an experiment in which shielding the vulnerable module with foil seriously degraded the quality of the intercepted image.

Degradation of the intercepted image when shielding the electrical circuits of the video camera. Source

Of course, a more effective solution would be to encrypt the data transmitted from the video camera sensor for further processing.

Pocket spy

But some of the researchers’ findings seem even more troubling. For example, the exact same interference is generated by the camera in your smartphone. OK, if someone starts following their target around with an antenna and a radio receiver, they’ll be noticed. But what if attackers give the potential victim, say, a slightly modified power bank? By its very nature, such a device is likely to stay close to the smartphone. When the victim decides to shoot a video or even take a photo, the advanced “bug” could confidently intercept the resulting image. The illustration below shows how serious the damage from such interception can be when, for example, photographing documents using a smartphone. The quality is good enough to read the text.

Examples of image interception from different devices: smartphone, dashcam, stationary surveillance camera. Source

However, we don’t want to exaggerate the danger of such attacks. This research won’t lead to attackers going around stealing photos tomorrow. But such research is important: ideally, we should apply the same security measures to hardware vulnerabilities as we do to software ones. Otherwise, a situation may arise where all the software protection measures for these smartphone cameras will be useless against a hardware “bug” which, though complex, could be assembled entirely from components available at the nearest electronics store.

Mitigating the risks of residential proxies | Kaspersky official blog

Every day, millions of ordinary internet users grant usage of their computers, smartphones, or home routers to complete strangers — whether knowingly or not. They install proxyware — a proxy server that accepts internet requests from these strangers and forwards them via the internet to the target server. Access to such proxyware is typically provided by specialized companies, which we’ll refer to as residential proxy providers (RPPs) in this article. While some businesses utilize RPP services for legitimate purposes, more often their presence on work computers indicates illicit activity.

RPPs compete with each other, boasting the variety and quantity of their available IP addresses, which can reach millions. This market is fragmented, opaque, and poses unique risks to organizations and their cybersecurity teams.

Why are residential proxies used?

The age when the internet was the same for everyone has long passed. Today, major online services tailor content based on region, websites filter content (sometimes excluding entire countries and continents), and a service’s features may differ from country to country. Residential proxies offer a way to analyze and bypass such filters. RPPs often advertise use cases for their services such as market research (tracking competitor pricing), ad verification, web scraping for data collection and AI training, search engine result analysis, and more.

While commercial VPNs and data-center proxies offer similar functionalities, many services can detect them based on known data-center IP ranges or heuristics. Residential proxies, operating on actual home devices, are significantly harder to identify.

What RPP websites conveniently omit are the dubious and often downright malicious activities for which residential proxies are systematically used. Among them:

credential stuffing attacks, including password spraying, as in the recent Microsoft breach;
infiltrating an organization using legitimate credentials — using residential proxies from specific regions can prevent suspicious login heuristic rules from triggering;
covering up signs of cyberattacks — it’s harder to trace and attribute the source of malicious activity;
fraudulent schemes involving credit and gift cards. Residential proxies can be used to bypass anti-fraud systems;
conducting DDoS attacks. For example, a large series of DDoS attacks in Hungary was traced back to the White Proxies RPP;
automated market manipulation, such as high-speed bulk purchases of scarce event tickets or limited-edition items (sneaker bots);
marketing fraud — inflating ad metrics, generating fake social media engagement, and so on;
spamming, mass account registration;
CAPTCHA bypass services.

Proxyware: a grey market

The residential proxy market is complex because its sellers, buyers, and participants are not all necessarily legitimate (that is, operating voluntarily and adhering to best practices); some are blatantly illegal. Some RPPs maintain official websites with transparent information, real addresses, recommendations from major clients, and so on. Others operate in the shadows of hacker forums and the dark web, taking orders through Telegram. Even seemingly legitimate providers often lack proper customer verification and struggle to provide clear information about the origins of their “nodes” — that is, the home computers and smartphones on which proxyware is installed. Sometimes this lack of transparency stems from RPPs relying on subcontractors for infrastructure, leaving them unaware of the true source of their proxies.

Where do residential proxies come from?

Let’s list the main methods of acquiring new nodes for a residential proxy network — from the most benign to the most unpleasant:

“earn on your internet” applications. Users are incentivized to run proxyware on their devices to provide others with internet access when the computer and connection channel have light loads. Users are paid for this monthly. While seemingly consensual, these programs often fail to adequately inform users of what exactly will be happening on their computers and smartphones;
proxyware-monetized apps and games. A publisher embeds RPP components within their games or applications, generating revenue based on the traffic routed through users’ devices. Ideally, users or players should have the choice to opt in or choose alternative monetization methods like ads or buying the application. However, transparency and user choice are often neglected;
covert installation of proxyware. An application or an attacker can install an RPP app or library on a computer or smartphone without user consent. However, if they’re lucky, the owner can notice this “feature” and remove it relatively easily;
proxyware installed by malware. This scenario mirrors the previous one in that user consent is ignored, but the persistence and concealment techniques are more complex. Criminal proxyware uses all available means to help attackers gain a foothold in the system and hide their activity. Malware may even spread within the local network, compromising additional devices.

How to address proxyware risks in an organization’s cybersecurity policy

Proxyware infections. Organizations may discover one or more computers exhibiting proxyware activity. A common and relatively harmless scenario involves employees installing free software that was covertly bundled with proxyware. In this scenario, the company not only pays for unauthorized bandwidth usage, but also risks ending up on various ban lists if malicious activity is found to originate from the compromised device. In particularly severe cases, companies may need to prove to law enforcement that they aren’t harboring hackers.

The situation becomes even more complex when proxyware is just one element of a broader malware infection. Proxyware often goes hand in hand with mining — both are attempts to monetize access to the company’s resources if other options seem less profitable or have already been exploited. Therefore, upon detecting proxyware, thorough log analysis is crucial to determine the infection vector and identify other malicious activities.

To mitigate the risk of malware, including proxyware, organizations should consider implementing allowlisting policies on work computers and smartphones, restricting software installation and launch only to applications approved by the IT department. If strict allowlisting isn’t feasible, adding known proxyware libraries and applications to your EPP/EDR denylist is essential.

An additional layer of protection involves blocking communication with known proxyware command and control servers across the entire internal network. Implementing these policies effectively requires access to threat intelligence sources in order to regularly update rules with new data.
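As an illustration of such a rule, here is a minimal sketch of matching outbound connections against a threat-intelligence denylist. All indicators and hosts below are invented for the example:

```python
# A minimal sketch of denylist matching for outbound connections:
# flag any connection whose destination hostname or IP appears in a
# threat-intelligence feed of known proxyware command-and-control
# servers. Every indicator below is a made-up example.
PROXYWARE_C2 = {"203.0.113.7", "proxynode.example.com"}

connections = [
    {"host": "update.vendor.example", "dst": "198.51.100.2"},
    {"host": "proxynode.example.com", "dst": "203.0.113.7"},
]

def flag_proxyware(conns, indicators):
    """Return connections whose hostname or destination IP is a known IOC."""
    return [c for c in conns
            if c["dst"] in indicators or c["host"] in indicators]

for hit in flag_proxyware(connections, PROXYWARE_C2):
    print("ALERT: possible proxyware C2 traffic:", hit["host"])
```

In practice this logic lives in a firewall, proxy, or EDR rule rather than a script, and the indicator set must be refreshed from a threat intelligence feed to stay useful.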

Credential stuffing and password spraying attacks involving proxyware. Attackers often attempt to leverage residential proxies in regions close to the targeted organization’s office to bypass geolocation-based security rules. The rapid switching between proxies enables them to circumvent basic IP-based rate limiting. To counter such attacks, organizations need rules that detect unusual spikes in failed login attempts. Identifying other suspicious user behavior such as frequent IP changes and failed login attempts across multiple applications is also crucial. For organizations with multi-factor authentication (MFA), implementing rules that trigger upon rapid, repeated MFA requests can also be effective, as this could indicate an ongoing MFA fatigue attack. The ideal environment for implementing such detection logic is offered by SIEM or XDR platforms, if the company has either.
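The detection logic described above can be sketched as a simple sliding-window rule: alert when one account racks up many failed logins from many distinct IPs within a short period. The thresholds here are illustrative; a real SIEM or XDR rule would be tuned to the environment:

```python
from collections import defaultdict

# A sketch of a simple credential-stuffing detection rule: alert when
# one account accumulates many failed logins from many distinct IPs
# inside a short window, the signature of attacks routed through
# rotating residential proxies. Thresholds are illustrative.
WINDOW = 300           # seconds
MAX_FAILS = 10
MAX_DISTINCT_IPS = 5

def detect_stuffing(events):
    """events: time-sorted (timestamp, user, src_ip, success) tuples."""
    alerts = set()
    per_user = defaultdict(list)   # user -> [(ts, ip), ...] of failures
    for ts, user, ip, ok in events:
        if ok:
            continue
        per_user[user].append((ts, ip))
        # keep only failures inside the sliding window
        per_user[user] = fails = [f for f in per_user[user]
                                  if ts - f[0] <= WINDOW]
        if (len(fails) >= MAX_FAILS
                and len({i for _, i in fails}) >= MAX_DISTINCT_IPS):
            alerts.add(user)
    return alerts

# 12 failures for "alice" from 12 different IPs within 12 seconds
events = [(t, "alice", f"10.0.0.{t}", False) for t in range(12)]
print(detect_stuffing(events))
```

The distinct-IP condition is what separates proxy-driven stuffing from an ordinary forgotten password: a legitimate user retries from one address, while a botnet rotates through many.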

Legitimate business use of proxies. If your organization requires residential proxies for legitimate purposes like website testing, meticulous vendor (that is, RPP) selection is critical. Prioritize RPPs with demonstrably lawful practices, relevant certifications, and documented compliance with data processing and storage regulations across all regions of operation. Ensure they provide comprehensive security documentation and transparency regarding the origins of the proxies used in their network. Avoid providers that lack customer verification, accept payment in cryptocurrencies, or operate from jurisdictions with lax internet regulations.

Transatlantic Cable podcast episode 342 | Kaspersky official blog

Episode 342 of the Transatlantic Cable podcast focuses on political news this week, kicking off with a story about China being accused of using AI-generated content to sow discontent ahead of the upcoming American election. From there, the team look at news that YouTube is being accused of complacency in blocking malicious video advertisements ahead of the upcoming Indian elections.

To wrap up, the team look at news that a spear-phishing / honey-trap campaign is being orchestrated in the UK parliament, with several members confessing to being targets. But who’s behind the attacks?

If you liked what you heard, please consider subscribing.

China Using AI-Generated Content to Sow Division in US
YouTube failed to block disinformation about Indian elections
UK minister confirmed as 12th target in Westminster ‘honey trap’ scandal
