Is it safe to message other apps from WhatsApp? | Kaspersky official blog

The EU’s Digital Markets Act (DMA) requires major tech companies to make their products more open and interoperable in order to increase competition. Thanks to the DMA, iOS will soon permit third-party app stores to be installed on it, and major messaging platforms will need to allow communication with other similar apps — creating cross-platform compatibility. Meta (Facebook) engineers recently detailed how this compatibility will be implemented in WhatsApp and Messenger. The benefits of interoperability are clear to anyone who’s ever texted or emailed. You’ll be able to send or receive messages without worrying about what phone, computer, or app the other person is using, or what country they’re in. However, there are downsides: first, third parties (from intelligence agencies to hackers) often have access to your correspondence; second, such messages are prime targets for spam and phishing. So, can the DMA deliver the benefits of interoperability while eliminating its drawbacks?

It’s important to note that while the DMA’s impact on the iOS App Store will only affect EU users, cross-platform messaging will likely impact everyone — even if it will be only EU partners that connect to the WhatsApp infrastructure.

Can you chat on WhatsApp with users of other platforms?

Theoretically, yes, but not yet in practice. Meta has published specifications and technical requirements for partners who want their apps to be interoperable with WhatsApp or Messenger. It’s now up to these partners to climb aboard and develop a working bridge between their service and WhatsApp. To date, no such partnerships have been announced.

Owners and developers of other messaging services may be reluctant to implement such functionality. Some consider it insecure; others are unwilling to invest resources into rather complex integration. Meta requires potential partners to implement end-to-end encryption (E2EE) no weaker than in WhatsApp, which is a significant challenge for many platforms.

Even when (or if) third-party services show up, only those WhatsApp users who explicitly opt in will be able to message across platforms. It won’t be enabled by default.

What will such messaging look like?

Based on WhatsApp beta versions, messages with users on other platforms will be housed in a separate section of the app to distinguish them from chats with WhatsApp users.

Initially, only one-on-one messaging and file/image/video sharing will be supported. Calls and group chats won’t be available for at least a year.

User identification remains an open question. In WhatsApp, users find each other by phone number, while on Facebook, they do it by name, workplace, school, friends of friends, or other similar identifiers (and ultimately by a unique ID). Other platforms might use incompatible identifiers, like short usernames in Discord, or alphanumeric IDs in Threema. This is likely to impede automatic search and user matching, and at the same time facilitate impersonation attacks by scammers.

Encryption challenges

One of the key challenges with integrating different messaging platforms is implementing reliable encryption. Even if two platforms use the same encryption protocol, technical issues arise regarding key storage and agreement, user authentication, and more.

If the encryption method differs significantly, a bridge — an intermediary server that decrypts messages from one protocol and re-encrypts them into another — will likely be needed. If it seems to you that this is a man-in-the-middle (MITM) attack waiting to happen, where hacking this server would allow eavesdropping, your misgivings would be on the money. The failed Nothing Chats app, which used a similar scheme to enable iMessage on Android, recently demonstrated this vulnerability. Even Meta’s own efforts are illustrative: encrypted messaging between Messenger and Instagram was announced over five years ago, but full-scale encryption in Messenger only arrived last December, and seamless E2EE in Instagram remains not fully functional to this day. As this in-depth article explains, it’s not a matter of laziness or lack of time, but rather the significant technical complexity of the project.
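The risk is easy to demonstrate in miniature. The sketch below is a toy model, not any real messaging protocol: the XOR "cipher", the keys, and the Bridge class are all hypothetical stand-ins, used only to show exactly where a decrypt-and-re-encrypt bridge gets to see plaintext.

```python
# Toy illustration of why a decrypt-and-re-encrypt bridge breaks E2EE.
# This is NOT a real messaging protocol: keys and the "cipher" (XOR) are
# simplified stand-ins, used only to show where plaintext becomes visible.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: encrypting and decrypting are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Bridge:
    """Hypothetical interop bridge between two platforms' incompatible 'protocols'."""
    def __init__(self, key_a: bytes, key_b: bytes):
        self.key_a, self.key_b = key_a, key_b
        self.seen_plaintext = []          # what an attacker on the bridge would get

    def relay(self, ciphertext_a: bytes) -> bytes:
        plaintext = xor_cipher(ciphertext_a, self.key_a)   # decrypt platform A's format
        self.seen_plaintext.append(plaintext)              # <-- the exposure point
        return xor_cipher(plaintext, self.key_b)           # re-encrypt for platform B

key_a, key_b = b"platform-A-key", b"platform-B-key"
bridge = Bridge(key_a, key_b)

msg = b"meet at 6pm"
ct_b = bridge.relay(xor_cipher(msg, key_a))     # sender -> bridge -> recipient
assert xor_cipher(ct_b, key_b) == msg           # the recipient reads the message...
assert bridge.seen_plaintext == [msg]           # ...but so could anyone who hacks the bridge
```

Whatever real ciphers replace the toy XOR here, the structural problem is the same: the message exists in plaintext inside the bridge, so the conversation is only as secure as that server.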

Cryptographers are generally highly skeptical about the idea of cross-platform E2EE. Some experts believe the problem can be solved — for example, by placing the bridge directly on the user’s computer or by having all platforms adopt a single, decentralized messaging protocol. However, the big fish in the messaging market aren’t swimming in that direction at all. It’s hard to accuse them of idleness or inertia — all practical experience demonstrates that reliable and user-friendly message encryption within open ecosystems is difficult to implement. Just look at the saga of PGP encryption in email, and the confessions of top cryptography experts.

We’ve compiled information on the WhatsApp/Messenger integration plans of major communication platforms, and assessed the technical feasibility of cross-platform functionality:

| Service | Statement on WhatsApp compatibility | Encryption compatibility |
|---|---|---|
| Discord | None | No E2EE support; integration unlikely |
| iMessage | None | Uses its own encryption, comparable in strength to WhatsApp’s |
| Matrix | Interested in technical integration with WhatsApp, and supports the DMA in general | Uses its own encryption, comparable in strength to WhatsApp’s |
| Signal | None | Uses the Signal protocol, as does WhatsApp |
| Skype | None | Uses the Signal protocol (as does WhatsApp), but for private conversations only |
| Telegram | None | Most chats are unencrypted, and private conversations are encrypted with an unreliable algorithm |
| Threema | Concerned about privacy risks associated with WhatsApp integration; integration unlikely | Uses its own encryption, comparable in strength to WhatsApp’s |
| Viber | None | Uses its own encryption, comparable in strength to WhatsApp’s |

Security concerns

Beyond encryption issues, integrating various services introduces additional challenges in protecting against spam, phishing, and other cyberthreats. Should you receive spam on WhatsApp, you can block the offender there and then. After being blocked by several users, the spammer will have limited ability to message strangers. To what extent such anti-spam techniques will work with third-party services remains to be seen.

Another issue is the moderation of unwanted content — from pornography to fake giveaways. When algorithms and experts from not one but two companies are involved, response speed and quality are bound to suffer.

Privacy concerns will also become more complex. Say you install the Skype app — in doing so, you share data with Microsoft, which will store it. However, as soon as you message someone on WhatsApp from Skype, certain information about you and your activity will land on Meta’s servers. Incidentally, WhatsApp already has a so-called guest agreement in place for this case. It’s this issue that the Swiss team behind Threema finds unsettling, for fear that messaging with WhatsApp users could lead to the de-anonymization of Threema users.

And let’s not forget that the news of cross-platform support is music to the ears of malware authors — it will be much easier to lure victims with “WhatsApp mods for messaging with Telegram” or other fictitious offerings. Of all the issues, however, this one is the easiest to solve: just install apps only from official stores and use reliable protection on your smartphones and computers.

What to do?

If you use WhatsApp and want to message users of other services

Count up roughly how many non-WhatsAppers there are in your circle who use other platforms that have announced interoperability with WhatsApp. If there aren’t many, it’s better not to enable support for any and all third-party messengers: the risks of spam and unwanted messages outweigh the potential benefits.

If there are many such people, consider whether you discuss confidential topics. Even with Meta’s encryption requirements, cross-platform messaging through a bridge should be considered vulnerable to interception and unauthorized modification. Therefore, it’s best to use the same secure messenger (such as Signal) for confidential communication.

If you decide that WhatsApp + third-party messenger is the winning formula, be sure to max out the privacy settings in WhatsApp, and be wary of odd messages, especially from strangers, but also from friends writing on unusual topics. Try to double-check that the sender really is who they claim to be, and not some scammer messaging you through a third-party service.

If you use another messenger that has announced interoperability with WhatsApp

While gaining access to all WhatsApp users within your favorite messenger is appealing, if you use a different messenger for increased privacy, connecting to WhatsApp will likely diminish it. Meta services will collect certain metadata during conversations, potentially leading to account de-anonymization, and the encryption bridge may be vulnerable to eavesdropping. In general, we don’t recommend activating this feature in secure messengers, should it ever become available.

Tips for everyone

Beware of “mods” and little-known apps that promise cross-platform messaging and other wonders. Lurking behind the seductive interface is probably malware. Be sure to install protection on your computer and smartphone to prevent attackers from stealing your correspondence right inside legitimate messengers.

Kaspersky official blog – ​Read More

Transatlantic Cable podcast episode 343 | Kaspersky official blog

Episode 343 of the Transatlantic Cable podcast begins with news that Instagram is testing a tool to help tackle “sextortion”, or intimate image abuse. Following that, the team discusses how criminals are increasingly using AI to defraud consumers out of their money.

The last two stories look at X and ransomware. The first focuses on how X is automatically removing “twitter” from URLs, handing scammers a real opportunity. Finally, the last story looks at how some ransomware gangs are trying their luck at calling the front desks of businesses to pressure them into paying, though it doesn’t always go to plan.

If you like what you heard, please consider subscribing.

Instagram to test new tools to fight so-called sextortion
Criminals ramp up social engineering and AI tactics to steal consumer details
X automatically changed ‘Twitter’ to ‘X’ in users’ posts, breaking legit URLs
Ransomware gang’s new extortion trick? Calling the front desk

Kaspersky official blog – ​Read More

How to prevent surveillance through banner ads | Kaspersky official blog

The industrial scale of surveillance of internet users is a topic we keep returning to. Every click on a website, every scroll in a mobile app, and every word you type into a search bar is tracked by dozens of tech companies and advertising firms. And it affects not only phones and computers, but also smart watches, smart TVs and speakers — even cars. As it turns out, these motherlodes of information are used not only by advertisers offering vacuum cleaners or travel insurance. Through various intermediary companies, this data is snapped up by security agencies of all stripes: police, intelligence, you name it. See here for the latest investigation into such practices, focusing on the Patternz platform and the “advertising” firm Nuviad. Previously, similar investigations probed Rayzone, Near Intelligence, and others. These companies, their jurisdictions of incorporation, and their client lists vary, but the general formula is always the same: collect and save proprietary information generated by advertising, then resell it to law enforcement agencies worldwide.

Behind the scenes of contextual advertising

We’ve already described in detail how data is collected on web pages and in apps — but not how it gets put to use. In overly simplified terms, behind every banner display or advertising link in today’s online world, there is some lightning-fast, super-complex trading. Advertisers upload their ads and audience requirements to a demand-side platform (DSP), which finds suitable sites or apps to display such advertising. The DSP then takes part in an auction for the types of advertising (banner, video, and so on) to be displayed on these sites and apps. Depending on who views the ads and how well they match the advertiser’s requirements, a particular type of ad may win the auction. This process is known as real-time bidding (RTB). During the bidding, participants receive information about the potential ad consumer: previously collected data on the individual is condensed into a brief description card. Depending on the platform, the composition of this data may vary, but a fairly typical set would be the consumer’s approximate or precise location, the device in use, the OS version, as well as “demographic and psychographic attributes” — that is, gender, age, family members, hobbies, and other topics of interest to the user.
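As a rough illustration, here is a toy auction round in Python. The field names loosely echo OpenRTB-style bid requests, but the structures, DSP names, segments, and pricing logic are all simplified assumptions for illustration, not a real exchange API.

```python
# Simplified sketch of one real-time bidding (RTB) round. Field names loosely
# echo OpenRTB-style requests, but this is an illustration, not a real API.

bid_request = {
    "device": {"os": "Android 14", "geo": {"country": "DE", "city": "Berlin"}},
    "user": {"segments": ["new_parent", "travel"], "yob": 1990},
    "imp": [{"format": "banner", "bidfloor": 0.10}],   # CPM floor, USD
}

def bid(dsp_name, request, target_segments, base_cpm):
    """Each DSP prices the impression from the profile attached to the request."""
    overlap = set(request["user"]["segments"]) & target_segments
    return (dsp_name, base_cpm * (1 + len(overlap)))   # better match, higher bid

bids = [
    bid("dsp_strollers", bid_request, {"new_parent"}, 1.00),
    bid("dsp_insurance", bid_request, {"travel", "new_parent"}, 0.80),
    bid("dsp_sneakers",  bid_request, {"sports"}, 1.20),
]

winner = max(bids, key=lambda b: b[1])   # highest bid wins the impression
# The privacy catch: every participant, winner or not, has seen the profile.
print(winner)
```

The point of the sketch is the last comment: the winning ad is displayed once, but the description card attached to the bid request is broadcast to every auction participant, which is exactly the data the surveillance intermediaries collect.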

How RTB data is used for surveillance

A 404 Media investigation found that the Patternz platform advertised to clients that it processed 90 terabytes of data daily, covering the actions of around five billion user IDs. Note that there are far fewer real users than IDs, since each person can have several IDs. Because advertising is global, so too is the scope of data collection.

Collecting and analyzing the above data allows precision tracking of:

potential consumers’ movements
times when they leave or visit certain places
times when they are located close to certain people
their interests and search queries
history of changing interests
affiliation to certain segments, for example, “recently had a baby” or “just went on vacation”

This information makes it possible to discover lots of curious things: where the person is during the day and at night, who they like to spend time with, who they travel with by car and where, and masses of other personal information. As stated by the U.S. Office of the Director of National Intelligence (ODNI), such depth of data collection was previously only possible through physical surveillance or targeted wiretapping.

Is such data collection legal? Although laws vary greatly from country to country, in most cases intelligence agencies’ carrying out mass surveillance — especially with the use of commercial data — finds itself in a gray area.

Bonus game: surveillance through push notifications

There’s another unrelated, but no less unpleasant method of centralized surveillance of users. In this case, the role of treasure trove falls to Apple and Google, which send centralized push notifications to all iOS and Android devices, respectively. To save power on smartphones, almost all app notifications are delivered through Apple or Google servers; and depending on the app’s architecture, a notification may contain information that’s easy to see and of interest to third parties. It turns out that some intelligence agencies have tried to gain access to notification data. What’s more, a recent study found that a significant number of apps abuse notifications to collect data about the device (and the user) at the time the notification is received — even if the user is not in the relevant app at that moment or on their phone at all.

How to guard against surveillance through advertising

Since all of the above-mentioned companies collect data using central hubs in the shape of large ad exchanges, no amount of denylisting apps and sites will protect you from being tracked. Any banner ad, video insert, or social network advertising generates events for trackers.

The only way to achieve any meaningful reduction in the scale of surveillance is with quite radical anti-advertising measures. Not all of them are convenient or suitable for everyone, but the more tips from the list you can apply, the fewer “events” involving you will end up on the servers of Rayzone or other such companies. In a nutshell:

Use apps that don’t display ads. This doesn’t guarantee the absence of web beacons and tracking, but will at least reduce the intensity.
Block ads and tracking in web browsers. Mozilla Firefox and Safari have built-in anti-surveillance protection, while anti-spyware and anti-advertising add-ons are available for all popular browsers in the official add-on stores.
For maximum protection, turn on Private Browsing in Kaspersky Standard, Kaspersky Plus, or Kaspersky Premium.
Disable auto-downloading of images in emails.
Configure secure DNS on your smartphone, computer, and home router by specifying an ad-blocking server, say, BlahDNS.
Check your smartphone’s privacy settings. Make it a habit to reset your advertising ID at least once a month. Prevent apps from collecting data for personalized ads and showing location-based ads (Apple, Google).
Revoke permissions to access location and other sensitive data from all apps that do not require it for their primary function.
Completely disable push notifications in your smartphone settings for all apps that can do without it.

Kaspersky official blog – ​Read More

EM Eye: data theft from surveillance cameras | Kaspersky official blog

Scientific research of hardware vulnerabilities often paints captivating espionage scenarios, and a recent study by researchers from universities in the United States and China is no exception. They found a way to steal data from surveillance cameras by analyzing their stray electromagnetic emissions — aptly naming the attack EM Eye.

Reconstructing information from stray emissions

Let’s imagine a scenario: a secret room in a hotel with restricted access is hosting confidential negotiations, with the identities of the folks in attendance in this room also deemed sensitive information. There’s a surveillance camera installed in the room running round the clock, but hacking the recording computer is impossible. However, there’s a room next-door to the secret room accessible to other, regular guests of the hotel. During the meeting, a spy enters this adjacent room with a device which, for the sake of simplicity, we’ll consider to be a slightly modified radio receiver. This receiver gathers data that can be subsequently processed to reconstruct the video from the surveillance camera in the secret room! And the reconstructed video would look something like this:

On the left is the original color image from the surveillance camera. On the right are two versions of the image reconstructed from the video camera’s unintentional radio emissions. Source

How is this even possible? To understand this, let’s talk about TEMPEST attacks. This codename, coined by the U.S. National Security Agency, refers to methods of surveillance using unintentional radio emissions, plus countermeasures against those methods. This type of hardware vulnerability was first studied during… World War II. The U.S. Army used an automatic encryption device from the Bell Telephone Company: plaintext input was mixed with a pre-prepared random sequence of characters to produce an encrypted message. The device used electromagnetic relays — essentially large switches.

Think of a mechanical light switch: each time you use it, a spark jumps between its contacts. This electrical discharge generates radio waves. Someone at a distance could tune a radio receiver to a specific frequency and know when you turn the light on or off. This is called stray electromagnetic radiation — an inevitable byproduct of electrical devices.

In the case of the Bell encryption device, the switching of electromagnetic relays generated such interference that its operation could be detected from a considerable distance. And the nature of the interference permitted reconstruction of the encrypted text. Modern computers aren’t equipped with huge electromechanical switches, but they do still generate stray emissions. Each bit of data transmitted corresponds to a specific voltage applied to the respective electrical circuit, or its absence. Changing the voltage level generates interference that can be analyzed.

Research on TEMPEST has been classified for a long time. The first publicly accessible work was published in 1985. Dutch researcher Wim van Eck showed how stray emissions (also known as side-band electromagnetic emissions) from a computer monitor allow the reconstruction of the image displayed on it from a distance.

Images from radio noise

The authors of the recent study, however, work with much weaker and more complex electromagnetic interference. Compared to the encryption devices of the 1940s and computer monitors of the 1980s, data transmission speeds have increased significantly, and though there’s now more stray radiation, it’s weaker due to the miniaturization of components. However, the researchers benefit from the fact that video cameras have become ubiquitous, and their design — more or less standardized. A camera has a light-sensitive sensor — the raw data from which is usually transmitted to the graphics subsystem for further processing. It is this process of transmitting raw information that the authors of the research studied.

In some other recent experiments, researchers demonstrated that electromagnetic radiation generated by the data transmission from a video camera sensor can be used to determine the presence of a nearby camera — which is valuable information for protecting against unauthorized surveillance. But, as it turned out, much more information can be extracted from the interference.

Interference depending on the type of image transmitted by the surveillance camera. Source

The researchers had to study thoroughly the methods of data transmission between the video camera sensor and the data processing unit. Manufacturers use different transmission protocols for this. The frequently used MIPI CSI-2 interface transmits data line by line, from left to right — similar to how data is transmitted from a computer to a monitor (which that same Wim van Eck intercepted almost 40 years ago). The illustration above shows the experiments of the authors of the study. A high-contrast target with dark and light stripes running horizontally or vertically is placed in front of the camera. Next, the stray radiation in a certain frequency range (for example, 204 or 255 megahertz) is analyzed. You can see that the intensity of the radio emission correlates with the dark and light areas of the target.
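A toy model can show why line-by-line readout leaks image structure. The sketch below uses the number of bit transitions in each serialized pixel row as a crude stand-in for emission energy; this is a drastic simplification of the physics, purely for illustration.

```python
# Toy model: why line-by-line sensor readout leaks image structure.
# The number of bit transitions while clocking out a row is used as a crude
# proxy for electromagnetic emission energy - a big simplification of the
# real physics, for illustration only.

def row_emission(row):
    """Count bit transitions while serializing one row of 8-bit pixels."""
    bits = "".join(f"{px:08b}" for px in row)
    return sum(bits[i] != bits[i + 1] for i in range(len(bits) - 1))

WIDTH = 64
dark_row   = [0x00] * WIDTH   # uniform black: no transitions at all
bright_row = [0xAA] * WIDTH   # 10101010...: a transition on every bit

# A target with horizontal stripes: alternating dark and bright rows.
frame = [dark_row if r % 2 == 0 else bright_row for r in range(8)]

signature = [row_emission(row) for row in frame]
# Rows with different content produce clearly different "emission" levels,
# so an eavesdropper sampling intensity over time can recover the stripes.
assert signature[0] < signature[1]
assert all(s == signature[0] for s in signature[::2])
```

Real interception is far noisier, of course, which is why the researchers needed a neural network to clean up the recovered frames; but the correlation between row content and emission intensity is the signal it works from.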

Improving image quality by combining data from multiple frames. Source

This is essentially the whole attack: capture the stray radio emission from the video camera, analyze it, and reconstruct the unprotected image. However, in practice, it’s not that simple. The researchers were dealing with a very weak and noisy radio signal. To improve the picture, they used a neural network: by analyzing the sequence of stolen frames, it significantly improves the quality of the intercepted video. The result is a transition from “almost nothing is visible” to an excellent image, no worse than the original, except for a few artifacts typical of neural networks (and information about the color of objects is lost in any case).

EM Eye in practice

In numerous experiments with various video cameras, the researchers were able to intercept the video signal at distances of up to five meters. In real conditions, such interception would be complicated by a higher level of noise from neighboring devices. Computer monitors, which operate on a similar principle, “spoil” the signal from the video camera the most. As a recommendation to camera manufacturers, the authors of the study suggest improving the shielding of devices — even providing the results of an experiment in which shielding the vulnerable module with foil seriously degraded the quality of the intercepted image.

Degradation of the intercepted image when shielding the electrical circuits of the video camera. Source

Of course, a more effective solution would be to encrypt the data transmitted from the video camera sensor for further processing.

Pocket spy

But some of the researchers’ findings seem even more troubling. For example, the exact same interference is generated by the camera in your smartphone. OK, if someone starts following their target around with an antenna and a radio receiver, they’ll be noticed. But what if attackers give the potential victim, say, a slightly modified power bank? By definition, such a device is likely to stay close to the smartphone. When the victim decides to shoot a video or even take a photo, the advanced “bug” could confidently intercept the resulting image. The illustration below shows how serious the damage from such interception can be when, for example, photographing documents using a smartphone. The quality is good enough to read the text.

Examples of image interception from different devices: smartphone, dashcam, stationary surveillance camera. Source

However, we don’t want to exaggerate the danger of such attacks. This research won’t lead to attackers going around stealing photos tomorrow. But such research is important: ideally, we should apply the same security measures to hardware vulnerabilities as we do to software ones. Otherwise, a situation may arise where all the software protection measures for these smartphone cameras will be useless against a hardware “bug” which, though complex, could be assembled entirely from components available at the nearest electronics store.

Kaspersky official blog – ​Read More

Mitigating the risks of residential proxies | Kaspersky official blog

Every day, millions of ordinary internet users grant usage of their computers, smartphones, or home routers to complete strangers — whether knowingly or not. They install proxyware — a proxy server that accepts internet requests from these strangers and forwards them via the internet to the target server. Access to such proxyware is typically provided by specialized companies, which we’ll refer to as residential proxy providers (RPPs) in this article. While some businesses utilize RPP services for legitimate purposes, more often their presence on work computers indicates illicit activity.

RPPs compete with each other, boasting the variety and quantity of their available IP addresses, which can reach millions. This market is fragmented, opaque, and poses unique risks to organizations and their cybersecurity teams.

Why are residential proxies used?

The age when the internet was the same for everyone has long passed. Today, major online services tailor content based on region, websites filter content — excluding entire countries and continents, and a service’s functionalities may differ across countries. Residential proxies offer a way to analyze and bypass such filters. RPPs often advertise use cases for their services like market research (tracking competitor pricing), ad verification, web scraping for data collection and AI training, search engine result analysis, and more.

While commercial VPNs and data-center proxies offer similar functionalities, many services can detect them based on known data-center IP ranges or heuristics. Residential proxies, operating on actual home devices, are significantly harder to identify.

What RPP websites conveniently omit are the dubious and often downright malicious activities for which residential proxies are systematically used. Among them:

credential stuffing attacks, including password spraying, as in the recent Microsoft breach;
infiltrating an organization using legitimate credentials — using residential proxies from specific regions can prevent suspicious login heuristic rules from triggering;
covering up signs of cyberattacks — it’s harder to trace and attribute the source of malicious activity;
fraudulent schemes involving credit and gift cards. Residential proxies can be used to bypass anti-fraud systems;
conducting DDoS attacks. For example, a large series of DDoS attacks in Hungary was traced back to the White Proxies RPP;
automated market manipulation, such as high-speed bulk purchases of scarce event tickets or limited-edition items (sneaker bots);
marketing fraud — inflating ad metrics, generating fake social media engagement, and so on;
spamming, mass account registration;
CAPTCHA bypass services.

Proxyware: a grey market

The residential proxy market is complex because not all of its sellers, buyers, and participants are legitimate (that is, voluntary and adhering to best practices); some are blatantly illegal. Some RPPs maintain official websites with transparent information, real addresses, recommendations from major clients, and so on. Others operate in the shadows of hacker forums and the dark web, taking orders through Telegram. Even seemingly legitimate providers often lack proper customer verification and struggle to provide clear information about the origins of their “nodes” — that is, home computers and smartphones on which proxyware is installed. Sometimes this lack of transparency stems from RPPs relying on subcontractors for infrastructure, leaving them unaware of the true source of their proxies.

Where do residential proxies come from?

Let’s list the main methods of acquiring new nodes for a residential proxy network — from the most benign to the most unpleasant:

“earn on your internet” applications. Users are incentivized to run proxyware on their devices to provide others with internet access when the computer and connection channel have light loads. Users are paid for this monthly. While seemingly consensual, these programs often fail to adequately inform users of what exactly will be happening on their computers and smartphones;
proxyware-monetized apps and games. A publisher embeds RPP components within their games or applications, generating revenue based on the traffic routed through users’ devices. Ideally, users or players should have the choice to opt in or choose alternative monetization methods like ads or buying the application. However, transparency and user choice are often neglected;
covert installation of proxyware. An application or an attacker can install an RPP app or library on a computer or smartphone without user consent. However, if they’re lucky, the owner can notice this “feature” and remove it relatively easily;
criminal proxyware delivered by malware. This scenario mirrors the previous one in that user consent is ignored, but the persistence and concealment techniques are more complex. Such proxyware uses all means available to help attackers gain a foothold in the system and hide their activity. The malware may even spread within the local network, compromising additional devices.

How to address proxyware risks in an organization’s cybersecurity policy

Proxyware infections. Organizations may discover one or more computers exhibiting proxyware activity. A common and relatively harmless scenario involves employees installing free software that was covertly bundled with proxyware. In this scenario, the company not only pays for unauthorized bandwidth usage, but also risks ending up on various ban lists if malicious activity is found to originate from the compromised device. In particularly severe cases, companies may need to prove to law enforcement that they aren’t harboring hackers.

The situation becomes even more complex when proxyware is just one element of a broader malware infection. Proxyware often goes hand in hand with mining — both are attempts to monetize access to the company’s resources if other options seem less profitable or have already been exploited. Therefore, upon detecting proxyware, thorough log analysis is crucial to determine the infection vector and identify other malicious activities.

To mitigate the risk of malware, including proxyware, organizations should consider implementing allowlisting policies on work computers and smartphones, restricting software installation and launch only to applications approved by the IT department. If strict allowlisting isn’t feasible, adding known proxyware libraries and applications to your EPP/EDR denylist is essential.

An additional layer of protection involves blocking communication with known proxyware command and control servers across the entire internal network. Implementing these policies effectively requires access to threat intelligence sources in order to regularly update rules with new data.

Credential stuffing and password spraying attacks involving proxyware. Attackers often attempt to leverage residential proxies in regions close to the targeted organization’s office to bypass geolocation-based security rules. The rapid switching between proxies enables them to circumvent basic IP-based rate limiting. To counter such attacks, organizations need rules that detect unusual spikes in failed login attempts. Identifying other suspicious user behavior such as frequent IP changes and failed login attempts across multiple applications is also crucial. For organizations with multi-factor authentication (MFA), implementing rules that trigger upon rapid, repeated MFA requests can also be effective, as this could indicate an ongoing MFA fatigue attack. The ideal environment for implementing such detection logic is offered by SIEM or XDR platforms, if the company has either.
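The detection logic described above can be sketched as a simple correlation rule. The thresholds, event format, and function name below are illustrative assumptions, not taken from any particular SIEM or XDR product:

```python
from collections import defaultdict

# Toy detection rule: flag a user if, within a sliding time window, failed
# logins arrive both in volume and from many distinct IPs -- the pattern
# typical of credential stuffing through rotating residential proxies.
# All thresholds are illustrative and would be tuned per environment.
WINDOW_SECONDS = 300
MAX_FAILURES = 5
MAX_DISTINCT_IPS = 3

def suspicious_users(events):
    """events: list of (timestamp, user, ip, success) tuples, sorted by time."""
    flagged = set()
    failures = defaultdict(list)  # user -> [(timestamp, ip), ...]
    for ts, user, ip, success in events:
        if success:
            continue
        # Keep only failures inside the window, then add the new one.
        window = [(t, i) for t, i in failures[user] if ts - t <= WINDOW_SECONDS]
        window.append((ts, ip))
        failures[user] = window
        if (len(window) >= MAX_FAILURES
                and len({i for _, i in window}) >= MAX_DISTINCT_IPS):
            flagged.add(user)
    return flagged
```

A real deployment would feed this from authentication logs and pair it with the MFA-fatigue rule mentioned above (rapid repeated MFA prompts for one account).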

Legitimate business use of proxies. If your organization requires residential proxies for legitimate purposes like website testing, meticulous vendor (that is, RPP) selection is critical. Prioritize RPPs with demonstrably lawful practices, relevant certifications, and documented compliance with data processing and storage regulations across all regions of operation. Ensure they provide comprehensive security documentation and transparency regarding the origins of the proxies used in their network. Avoid providers that lack customer verification, accept payment in cryptocurrencies, or operate from jurisdictions with lax internet regulations.

Kaspersky official blog – ​Read More

Transatlantic Cable podcast episode 342 | Kaspersky official blog

Episode 342 of the Transatlantic Cable podcast focuses on political news this week, kicking off with a story that China is being accused of using AI-generated content to sow discontent ahead of the upcoming American election. From there, the team look at news that YouTube is being accused of complacency in blocking malicious video advertisements ahead of the upcoming Indian elections.

To wrap up, the team look at news that a spear-phishing / honey-trap campaign is being orchestrated in the UK parliament, with several members confessing to being targets – but who’s behind the attacks?

If you liked what you heard, please consider subscribing.

China Using AI-Generated Content to Sow Division in US
YouTube failed to block disinformation about Indian elections
UK minister confirmed as 12th target in Westminster ‘honey trap’ scandal

Kaspersky official blog – ​Read More

Kaspersky Next: our new portfolio | Kaspersky official blog

We’ve decided to revise our portfolio and make it as seamless and customer-friendly as possible. This post explains what exactly we’re changing and why.

The evolution of protection

As the threat landscape constantly changes, so do corporate security needs. Just a decade ago, the only tool required to protect a company against most cyberattacks was an endpoint protection platform (EPP). Since then, attackers’ methods have grown ever more sophisticated — to the point where simply scanning workstations and servers is no longer sufficient to detect malicious activity.

Modern cyberattacks can be carried out under the guise of legitimate processes — without the use of malware at all. Increasingly, mass threats are beginning to deploy tactics and techniques previously associated only with targeted attacks. To detect such activity and ensure proper incident investigation, companies now need to collect and correlate data from endpoints, identify suspicious activity in their infrastructure, and, most importantly, take prompt countermeasures: isolate suspicious files, halt malicious processes, and sever network connections. To adequately respond to the increased complexity of threats, other tools are now indispensable: Endpoint Detection & Response (EDR) at a minimum, and ideally — Extended Detection and Response (XDR).

Yet EDR is no replacement for EPP. These are different solutions that solve different problems. For effective infrastructure protection, they need to work in tandem. As a result, customers have found themselves having to purchase both tools to ensure an adequate level of information security. We decided to simplify this process by rolling out a new line of products that deliver the security processes necessary in today’s world — with EDR and XDR capabilities at the core.

Simplified product line

Another reason for rethinking our product line was the ever-increasing variety of the solutions we offer. Customers had to study many different products, which of course takes a lot of precious time. Therefore, we decided to simplify the line and make sure that each tier of Kaspersky Next covers the main needs of particular groups (or rather, profiles) of corporate users. This approach provides room for maneuver while allowing us to use resources to develop the tools necessary to hone our XDR — a single console for products that protect different assets, expanded capabilities for the integration needed for cross-detection of threats, and the launch of new products to further enhance our XDR.

Our new Kaspersky Next approach guarantees maximum transparency of our products’ capabilities. With the particular kinds of threats that are relevant to your company in mind — combined with an accurate assessment of the skill level of your security team — you can choose one of the three Kaspersky Next tiers’ basic solutions, and then expand its capabilities with, first, additional products that cover specific attack vectors, and, second, services that provide expert assistance when and where your in-house team needs it.

What about the old licenses?

We’ve no intention of abandoning customers who use our time-tested solutions. Nor do we plan to cease selling them right away. At least until the end of this year, companies have the option to buy both old and new products. In time, we’ll stop selling licenses for legacy solutions; however, we understand that abrupt migration to new software can have an impact on companies’ workflows, so we’ll continue to renew already purchased licenses as required. The retirement of legacy products won’t occur in the short term.

For customers wishing to switch from older products to the Kaspersky Next line, we offer a flexible license renewal scheme involving trade-in mechanisms.

To learn more about Kaspersky Next, please visit our official page.

Kaspersky official blog – ​Read More

How to verify the authenticity and origin of photos and videos | Kaspersky official blog

Over the past 18 months or so, we seem to have lost the ability to trust our eyes. Photoshop fakes are nothing new, of course, but the advent of generative artificial intelligence (AI) has taken fakery to a whole new level. Perhaps the first viral AI fake was the 2023 image of the Pope in a white designer puffer jacket, but since then the number of high-quality eye deceivers has skyrocketed into the many thousands. And as AI develops further, we can expect more and more convincing fake videos in the very near future.

One of the first deepfakes to go viral worldwide: the Pope sporting a trendy white puffer jacket

This will only exacerbate the already knotty problem of fake news and accompanying images. These might show a photo from one event and claim it’s from another, put people who’ve never met in the same picture, and so on.

Image and video spoofing has a direct bearing on cybersecurity. Scammers have been using fake images and videos to trick victims into parting with their cash for years. They might send you a picture of a sad puppy they claim needs help, an image of a celebrity promoting some shady schemes, or even a picture of a credit card they say belongs to someone you know. Fraudsters also use AI-generated images for profiles for catfishing on dating sites and social media.

The most sophisticated scams make use of deepfake video and audio of the victim’s boss or a relative to get them to do the scammers’ bidding. Just recently, an employee of a financial institution was duped into transferring $25 million to cybercrooks! They had set up a video call with the “CFO” and “colleagues” of the victim — all deepfakes.

So what can be done to deal with deepfakes or just plain fakes? How can they be detected? This is an extremely complex problem, but one that can be mitigated step by step — by tracing the provenance of the image.

Wait… haven’t I seen that before?

As mentioned above, there are different kinds of “fakeness”. Sometimes the image itself isn’t fake, but it’s used in a misleading way. Maybe a real photo from a warzone is passed off as being from another conflict, or a scene from a movie is presented as documentary footage. In these cases, looking for anomalies in the image itself won’t help much, but you can try searching for copies of the picture online. Luckily, we’ve got tools like Google Reverse Image Search and TinEye, which can help us do just that.

If you’ve any doubts about an image, just upload it to one of these tools and see what comes up. You might find that the same picture of a family made homeless by fire, or a group of shelter dogs, or victims of some other tragedy has been making the rounds online for years. Incidentally, when it comes to false fundraising, there are a few other red flags to watch out for besides the images themselves.

Dog from a shelter? No, from a photo stock
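Under the hood, reverse image search relies on compact fingerprints that survive resizing and recompression. The toy “average hash” below is a deliberate simplification (real services such as TinEye use far more robust features), but it illustrates the matching principle:

```python
def average_hash(pixels):
    """Toy perceptual hash over a grayscale pixel grid.

    Each pixel contributes one bit: 1 if it is at or above the image's
    average brightness, 0 otherwise. Similar images yield similar bit
    patterns even after mild edits like brightening.
    """
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    return sum(1 << i for i, v in enumerate(flat) if v >= avg)

def hamming(a, b):
    """Number of differing bits between two hashes (0 = near-identical)."""
    return bin(a ^ b).count("1")
```

Comparing hashes instead of raw pixels is what lets a search engine find the same shelter-dog photo across thousands of re-uploads.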

Photoshopped? We’ll soon know.

Since photoshopping has been around for a while, mathematicians, engineers, and image experts have long been working on ways to detect altered images automatically. Some popular methods include image metadata analysis and error level analysis (ELA), which checks for JPEG compression artifacts to identify modified portions of an image. Many popular image analysis tools, such as Fake Image Detector, apply these techniques.

Fake Image Detector warns that the Pope probably didn’t wear this on Easter Sunday… Or ever
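The idea behind ELA can be modeled in a few lines. In this sketch a crude quantization step stands in for JPEG compression (a deliberate simplification): pixels that have already been through compression barely change on a second pass, while freshly edited pixels produce larger error levels:

```python
def quantize(img, step=16):
    # Crude stand-in for lossy compression: snap pixel values to a grid.
    return [[(v // step) * step for v in row] for row in img]

def error_levels(img, step=16):
    """ELA principle: re-'compress' the image and measure per-pixel change.

    Regions that were already compressed once change little on the second
    pass; regions edited after compression show larger error levels and
    so stand out as likely modifications.
    """
    recompressed = quantize(img, step)
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(img, recompressed)]
```

Real ELA tools do the same thing with actual JPEG encoding and visualize the error map as an image, with edited regions glowing brighter.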

With the emergence of generative AI, we’ve also seen new AI-based methods for detecting generated content, but none of them are perfect. Here are some of the relevant developments: detection of face morphing; detection of AI-generated images and determining the AI model used to generate them; and an open AI model for the same purposes.

With all these approaches, the key problem is that none gives you 100% certainty about the provenance of the image, guarantees that the image is free of modifications, or makes it possible to verify any such modifications.

WWW to the rescue: verifying content provenance

Wouldn’t it be great if there were an easier way for regular users to check if an image is the real deal? Imagine clicking on a picture and seeing something like: “John took this photo with an iPhone on March 20”, “Ann cropped the edges and increased the brightness on March 22”, “Peter re-saved this image with high compression on March 23”, or “No changes were made” — and all such data would be impossible to fake. Sounds like a dream, right? Well, that’s exactly what the Coalition for Content Provenance and Authenticity (C2PA) is aiming for. C2PA includes some major players from the computer, photography, and media industries: Canon, Nikon, Sony, Adobe, AWS, Microsoft, Google, Intel, BBC, Associated Press, and about a hundred other members — basically all the companies that could have been individually involved in pretty much any step of an image’s life from creation to publication online.

The C2PA standard developed by this coalition is already out there and has even reached version 1.3, and now we’re starting to see the pieces of the industrial puzzle necessary to use it fall into place. Nikon is planning to make C2PA-compatible cameras, and the BBC has already published its first articles with verified images.

BBC talks about how images and videos in its articles are verified

The idea is that when responsible media outlets and big companies switch to publishing images in verified form, you’ll be able to check the provenance of any image directly in the browser. You’ll see a little “verified image” label, and when you click on it, a bigger window will pop up showing you what images served as the source, and what edits were made at each stage before the image appeared in the browser and by whom and when. You’ll even be able to see all the intermediate versions of the image.

History of image creation and editing

This approach isn’t just for cameras; it can work for other ways of creating images too. Services like Dall-E and Midjourney can also label their creations.

This was clearly created in Adobe Photoshop

The verification process is based on public-key cryptography similar to the protection used in web server certificates for establishing a secure HTTPS connection. The idea is that every image creator — be it Joe Bloggs with a particular type of camera, or Angela Smith with a Photoshop license — will need to obtain an X.509 certificate from a trusted certificate authority. This certificate can be hardwired directly into the camera at the factory, while for software products it can be issued upon activation. When processing images with provenance tracking, each new version of the file will contain a large amount of extra information: the date, time, and location of the edits, thumbnails of the original and edited versions, and so on. All this will be digitally signed by the author or editor of the image. This way, a verified image file will have a chain of all its previous versions, each signed by the person who edited it.

This video contains AI-generated content
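The chain of signed versions can be modeled as follows. This is only a conceptual sketch, not the C2PA format: real manifests use X.509 certificates and asymmetric signatures, whereas here an HMAC stands in for the signature because Python’s standard library has no asymmetric crypto:

```python
import hashlib
import hmac
import json

def sign_version(prev_record, edit_info, author_key):
    """Append a new 'signed' version to an image's provenance chain.

    Each record embeds the hash of the previous record, so the full edit
    history forms a tamper-evident chain, then gets a keyed signature.
    """
    record = {
        "edit": edit_info,
        "prev_hash": hashlib.sha256(
            json.dumps(prev_record, sort_keys=True).encode()
        ).hexdigest() if prev_record else None,
    }
    record["signature"] = hmac.new(
        author_key, json.dumps(record, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return record

def verify(record, author_key):
    """Recompute the signature over everything except the signature itself."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    expected = hmac.new(
        author_key, json.dumps(unsigned, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Tampering with any earlier version breaks either its signature or the `prev_hash` link in every later record, which is what makes the history verifiable end to end.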

The authors of the specification were also concerned with privacy features. Sometimes, journalists can’t reveal their sources. For situations like that, there’s a special type of edit called “redaction”. This allows someone to replace some of the information about the image creator with zeros and then sign that change with their own certificate.

To showcase the capabilities of C2PA, a collection of test images and videos was created. You can check out the Content Credentials website to see the credentials, creation history, and editing history of these images.

The Content Credentials website reveals the full background of C2PA images

Natural limitations

Unfortunately, digital signatures for images won’t solve the fakes problem overnight. After all, there are already billions of images online that haven’t been signed by anyone and aren’t going anywhere. However, as more and more reputable information sources switch to publishing only signed images, any photo without a digital signature will start to be viewed with suspicion. Real photos and videos with timestamps and location data will be almost impossible to pass off as something else, and AI-generated content will be easier to spot.

Kaspersky official blog – ​Read More

Note-taking apps and to-do lists with end-to-end encryption | Kaspersky official blog

Peeking into someone’s personal diaries or notebooks has always been seen as an invasion of privacy. And since to-do lists and diaries went digital, it’s not just nosy friends you have to worry about — tech companies are in on the action too. They used to pry into your documents to target you with ads, but now there’s a new game in town: using your data to train AI. Just in the past few weeks, we learned that Reddit, Tumblr, and even DocuSign are using or selling texts generated by their users to train large language models. And in light of recent years’ large-scale ransomware incidents, hacking of note-taking apps and a mass leak of user data — your data! — is a possibility you shouldn’t ignore.

So, how do you keep your digital notes both convenient and secure? Enter end-to-end encryption. You might be familiar with the concept from secure messaging apps: your messages can only be decrypted and viewed on your device and the device of the person you’re texting. The company running the service can’t see a thing because they don’t have the decryption key.
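The key property of end-to-end encryption (the relaying service never learns the decryption key) comes from key-agreement schemes such as Diffie–Hellman. Here is a toy sketch of the idea; real apps use vetted elliptic-curve variants like X25519, and the parameters below are chosen purely for demonstration:

```python
import secrets

# Toy Diffie-Hellman key agreement. The modulus (a Mersenne prime) is
# used only so the demo is self-contained -- it is NOT a secure choice.
P = 2 ** 607 - 1
G = 5

def keypair():
    private = secrets.randbelow(P - 2) + 1
    return private, pow(G, private, P)

# Alice and Bob exchange only their PUBLIC values through the service.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key.
# The service relaying alice_pub and bob_pub never sees the shared key,
# so it cannot decrypt the notes or messages protected with it.
shared_alice = pow(bob_pub, alice_priv, P)
shared_bob = pow(alice_pub, bob_priv, P)
```

Both sides arrive at the same shared secret independently, which then serves as the key for a symmetric cipher protecting the actual content.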

Although most users prefer note apps that come with their phones (like Apple’s Notes) or office suite (like Microsoft OneNote), these apps aren’t exactly Fort Knox when it comes to privacy. Some, like Google Keep, don’t even offer end-to-end encryption. Others, such as Apple’s Notes, support it only for individual notes or folders. That’s why there are dedicated, albeit lesser-known apps for truly confidential notes. Let’s take a look at a few and see how they stack up.

Joplin

Platforms: Windows (32/64 bit), macOS (Intel/Apple Silicon), Linux, iOS, Android

Personal license: free

Sync options: proprietary Joplin Cloud, Dropbox, ownCloud, Nextcloud, OneDrive, S3, WebDAV via plug-ins

Native platform sync: starts at €2.99/month

Open format: no, but you can export to text

Open source: yes

Website: joplinapp.org

Joplin feels like it was designed by someone who likes the idea behind Evernote, but who has been put off by the bloat and closed-source nature of that app in recent years. Notes are stored in markdown text format. Joplin supports attachments, nested folders, tags, and notebooks. There are just two templates: “note” and “to-do list”. Searching is lightning fast.

Syncing between devices relies on “drivers” — basically plug-ins written for each service. Joplin’s developers maintain almost a dozen of these drivers for all the popular sync services, such as Dropbox. Smooth collaboration and extra features such as emailing a note to yourself require a subscription to the proprietary Joplin Cloud, but it’s pretty affordable. Students and teachers get a 50% discount.

End-to-end encryption is disabled by default, but once you turn it on, your entire database and all attachments are encrypted automatically. There’s a slight quirk: on a PC, the developers have made an odd architectural choice by storing attachments in both encrypted and unencrypted versions.

Joplin has over 200 plug-ins to add features, but setting them up can be a bit of a hassle.

Recently, the developers added text recognition for images. However, since notes are encrypted, the server can’t read them, so searching within photos and PDFs only works after processing the note on your computer.

Joplin can import notes in the proprietary Evernote format and export all data as sets of plaintext files.

Obsidian

Platforms: Windows (32/64 bit, ARM), macOS (Intel/Apple Silicon), Linux, iOS, Android

Personal license: free

Sync options: proprietary service, FTP, Dropbox, S3, and other services via plug-ins

Native platform sync: starts at $4/month

Open format: yes

Open source: no

Website: obsidian.md

Obsidian differs from other note-taking apps through its strong emphasis on organization. It’s super easy to link notes together, create groups and hierarchies, and even build mindmaps in canvas mode. Each note is just a text file stored locally, so you can work on any of them in other apps too.

Obsidian also has a thriving online community, which has built over 1500 plug-ins. These let you connect Obsidian to dozens of external services, handle specific types of notes (from recipes to chemical formulas), automatically process text with ChatGPT, and much more.

To sync your data between devices, you can subscribe to Obsidian’s own paid service, use a third-party plug-in, or just store your notes in a shared cloud folder on Dropbox or OneDrive. Of these, only the native Obsidian Sync service provides encryption. When you enable sync, you can choose between “managed” and “end-to-end” encryption. It goes without saying that the latter is the right choice.

You can import notes from a bunch of different formats using a dedicated plug-in created by the Obsidian team. These include Notion, Evernote, Apple Notes, Microsoft OneNote, and Google Keep.

Students and teachers get a 40% discount.

Standard Notes

Platforms: Windows (64 bit), macOS (Intel/Apple Silicon), Linux, iOS, Android, Web

Personal license: free

Sync options: native or self-hosted

Native platform sync: starts at $7.50/month ($90 billed annually)

Open format: no, but you can export to text

Open source: yes

Website: standardnotes.com

Standard Notes is built on two core principles: flexible note templates for various needs, and a high level of privacy. End-to-end sync encryption is on by default, your notes are encrypted on your device, and you need two-factor authentication to log in. Unlike its competitors discussed above, Standard Notes has a web application, so you can enjoy all of its features in a browser.

As for the note templates, you can use these to store anything you want: from code snippets and to-do lists to financial spreadsheets and even passwords. Speaking of which, Standard Notes can be used for both storing passwords and generating one-time authentication codes (TOTPs). You can even protect individual notes with an extra password for an extra layer of security.

One cool feature of Standard Notes is its “infinite undo”: according to the developers, the app keeps the edit history for each note from the moment it’s created. This might be a lifesaver when working on larger documents like a book or doctoral thesis. Standard Notes supports plug-ins, but there aren’t many to choose from.

Sync options include self-hosting a Standard Notes server or using the proprietary cloud. The Productivity plan will set you back $90 annually, or you can store and sync simple text notes with end-to-end encryption on the free Standard plan. Some of the features we mentioned are only available in the $120-per-year Professional plan, which also includes 100 GB of encrypted file storage, and subscription-sharing with up to five accounts. If you self-host, you still need to buy a license, but it comes at a heavy discount: $39 annually or $113.42 for five years. Students get a 30% discount.

Standard Notes can import data from Evernote, Apple’s Notes, Simplenote, Google Keep, or a set of plain text files.

Extra security

Of course, encryption is of no use if someone steals the data from your computer directly. Data thieves typically use a special type of malware called “infostealers”. These can snatch your files and even intercept passwords as you type them. So, in addition to one of these privacy-focused note-taking apps, make sure to use a comprehensive security system on all your smartphones and computers.

Kaspersky official blog – ​Read More

Transatlantic Cable podcast episode 341 | Kaspersky official blog

Episode 341 of the Transatlantic Cable podcast kicks off with news that a data broker leak has revealed sensitive data about people who visited the infamous island. From there, the team discuss news that UN peacekeepers are being told to shore up their cyber-defences, after warnings that nation-state attackers are actively looking to target them.

To wrap up, the team look at two final stories. The first is baffling in itself: one of the world’s most wanted men has been leaving restaurant reviews on Google, and has done so for the last five years. The second concerns Elon Musk’s Neuralink project, with the first-ever patient using the implant to play Mario Kart with his dad.

If you liked what you heard, please consider subscribing.

Jeffrey Epstein’s Island Visitors Exposed by Data Broker
UN Peace Operations Under Fire From State-Sponsored Hackers
Investigation finds Christopher Kinahan Sr left ‘digital trail’ of Google reviews
I’m world’s first Neuralink patient

Kaspersky official blog – ​Read More