NIST introduces first post-quantum encryption standards | Kaspersky official blog

After many years of research and testing, in mid-August 2024, the U.S. National Institute of Standards and Technology (NIST) finally introduced fully-fledged post-quantum encryption standards — FIPS 203, FIPS 204, and FIPS 205. So let’s discuss them and see why they should be adopted as soon as possible.

Why do we need post-quantum cryptography?

First, let’s briefly outline the threat quantum computers pose to cryptography. The issue is that quantum computing can be used to break asymmetric encryption. Why is this important? Because today’s communication encryption typically uses a dual system:

All messages are encrypted using a symmetric algorithm (like AES), which involves a single key shared by all participants. Symmetric algorithms work well and fast, but there’s a problem: the key must be somehow securely transmitted between interlocutors without being intercepted.
That’s why asymmetric encryption (like RSA or ECDH) is used to transmit this key. Here, each participant has a pair of mathematically related keys: a private one and a public one. Messages are encrypted with the public key and decrypted only with the private one. Asymmetric encryption is slower, so it’s impractical to use it for all messages.

The privacy of correspondence is ensured by the fact that calculating a private key from the corresponding public key is an extremely resource-intensive task — potentially taking decades, centuries, or even millions of years to solve. That is — if we’re using traditional computers.
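To make the asymmetric half concrete, here’s a toy Diffie-Hellman key exchange in Python. The prime is deliberately tiny and the sketch is in no way secure; real deployments use 2048-bit-plus groups or elliptic curves:

```python
import secrets

# Toy Diffie-Hellman key exchange (illustration only -- the modulus is
# far too small for real use).
p = 0xFFFFFFFB  # a small public prime modulus
g = 5           # a public generator

# Each side keeps a private exponent and publishes g^x mod p.
# Recovering x from g^x mod p (the discrete logarithm) is what's hard.
a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key
A = pow(g, a, p)                   # Alice's public key
B = pow(g, b, p)                   # Bob's public key

# Both sides compute the same shared secret without ever transmitting it;
# it can then serve as a symmetric (e.g. AES) key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A, and B, but deriving the shared secret from them requires solving the discrete-logarithm problem — exactly the kind of problem Shor’s algorithm makes easy.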

Quantum computers significantly speed up such calculations. Specifically, Shor’s quantum algorithm can recover private keys for asymmetric encryption far faster than any classical method: in minutes or hours rather than years or centuries.
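Shor’s speedup comes from fast period finding. As a sketch of why the period matters, the code below finds the period of a^x mod N classically for a tiny N, then uses it to recover the factors; a quantum computer does the period-finding step exponentially faster:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), found by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

# Factor N = 15 using the period of f(x) = a^x mod N.
N, a = 15, 7
r = order(a, N)            # r = 4
half = pow(a, r // 2, N)   # a^(r/2) mod N
p, q = gcd(half - 1, N), gcd(half + 1, N)
print(p, q)  # 3 5
```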

Once the private key for asymmetric encryption has been calculated, the symmetric key used to encrypt the main correspondence can also be obtained. Thus, the entire conversation can be read.

In addition to communication protocols, this also puts digital signatures at risk. In the majority of cases, digital signatures rely on the same asymmetric encryption algorithms (RSA, ECDSA) that are vulnerable to attacks by quantum computers.

Today’s symmetric encryption algorithms, on the other hand, are much less at risk from quantum computers than asymmetric ones. For example, in the case of AES, finding a 256-bit key using Grover’s quantum algorithm is like finding a 128-bit key on a regular computer. The same applies to hashing algorithms.
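The arithmetic behind that equivalence is simple: Grover’s algorithm needs on the order of the square root of the search-space size, so the effective key length is halved:

```python
from math import isqrt

aes256_keys = 2 ** 256                   # classical brute-force search space
grover_iterations = isqrt(aes256_keys)   # ~sqrt(2^256) quantum iterations
assert grover_iterations == 2 ** 128     # i.e. AES-128-level effort remains
```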

The trio of post-quantum cryptography standards: FIPS 203, FIPS 204, and FIPS 205

The primary task for cryptographers has become the development of quantum-resistant asymmetric encryption algorithms, which could be used in key transfer and digital signature mechanisms. The result of this effort: the post-quantum encryption standards FIPS 203, FIPS 204, and FIPS 205, introduced by the U.S. National Institute of Standards and Technology (NIST).

FIPS 203

FIPS 203 describes a key encapsulation mechanism based on lattice theory — ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism). This asymmetric cryptographic system — which is resistant to quantum algorithm attacks — is designed to transfer encryption keys between interlocutors.

ML-KEM was developed as part of CRYSTALS (Cryptographic Suite for Algebraic Lattices) and is also known as CRYSTALS-Kyber, or simply Kyber.

FIPS 203 features three parameter variants for ML-KEM:

ML-KEM-512: Security level 1 (equivalent to AES-128);
ML-KEM-768: Security level 3 (equivalent to AES-192);
ML-KEM-1024: Security level 5 (equivalent to AES-256).
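A key-encapsulation mechanism exposes three operations: key generation, encapsulation (the sender derives a random shared secret plus a ciphertext from the recipient’s public key), and decapsulation. The sketch below mimics that interface with toy Diffie-Hellman arithmetic just to show the data flow; it is not ML-KEM and is not secure:

```python
import hashlib
import secrets

P, G = 0xFFFFFFFB, 5  # tiny public parameters, illustration only

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk            # (public key, private key)

def encapsulate(pk):
    eph = secrets.randbelow(P - 2) + 1  # ephemeral secret
    ct = pow(G, eph, P)                 # "ciphertext" sent to the recipient
    ss = hashlib.sha256(str(pow(pk, eph, P)).encode()).digest()
    return ct, ss                       # (ciphertext, shared secret)

def decapsulate(sk, ct):
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

pk, sk = keygen()
ct, sender_secret = encapsulate(pk)
assert decapsulate(sk, ct) == sender_secret  # both sides now share a key
```

In ML-KEM the hard problem underneath is lattice-based rather than discrete-log-based, which is what makes it resistant to quantum attacks.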

FIPS 204

FIPS 204 defines a digital signature mechanism, also based on algebraic lattices, called ML-DSA (Module-Lattice-Based Digital Signature Algorithm). Previously known as CRYSTALS-Dilithium, this mechanism was developed within the same CRYSTALS project as Kyber.

FIPS 204 has three parameter variants for ML-DSA:

ML-DSA-44: Security level 2 (equivalent to SHA3-256);
ML-DSA-65: Security level 3;
ML-DSA-87: Security level 5.

FIPS 205

The third standard, FIPS 205, describes an alternative digital signature mechanism: SLH-DSA (Stateless Hash-Based Digital Signature Algorithm). Unlike the other two cryptosystems, which are based on algebraic lattices, SLH-DSA is based on hashing. This mechanism is also known as SPHINCS+.

This standard uses both the SHA-2 hash function, which has a fixed output length, and the SHAKE function, which has an arbitrary one. For each base cryptographic-strength level, SLH-DSA offers sets of parameters optimized either for higher speed (f — fast) or for smaller signature size (s — small). Thus, FIPS 205 has more variety, with as many as 12 parameter options:

SLH-DSA-SHA2-128s, SLH-DSA-SHAKE-128s, SLH-DSA-SHA2-128f, SLH-DSA-SHAKE-128f: Security level 1;
SLH-DSA-SHA2-192s, SLH-DSA-SHAKE-192s, SLH-DSA-SHA2-192f, SLH-DSA-SHAKE-192f: Security level 3;
SLH-DSA-SHA2-256s, SLH-DSA-SHAKE-256s, SLH-DSA-SHA2-256f, SLH-DSA-SHAKE-256f: Security level 5.
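To see how a signature can be built from nothing but hash functions, here’s the classic ancestor of this family, a Lamport one-time signature. SLH-DSA is far more sophisticated (stateless, many-time, tree-based), but the core idea of revealing hash preimages selected by message bits is the same:

```python
import hashlib
import secrets

def lamport_keygen():
    # Private key: two rows of 256 random values; public key: their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    pk = [[hashlib.sha256(x).digest() for x in row] for row in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message digest select which preimages to reveal.
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(hashlib.sha256(s).digest() == pk[b][i]
               for i, (b, s) in enumerate(zip(bits(msg), sig)))

sk, pk = lamport_keygen()
sig = sign(sk, b"post-quantum")
assert verify(pk, b"post-quantum", sig)
assert not verify(pk, b"tampered", sig)
```

Security rests only on the hash function being one-way, which is why such schemes survive quantum attacks so well.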

HNDL, and why it’s time to start using post-quantum encryption

For now, the threat of quantum algorithms breaking asymmetric encryption is mostly theoretical. Existing quantum computers lack the power to actually do it in practice.

Until recently, it was believed that sufficiently powerful quantum systems were still a decade away. However, a 2023 paper suggested ways to optimize such attacks by combining classical and quantum computing. As a result, the timeline appears to have shortened: RSA-2048 could plausibly be broken within just a few years.

It’s also important to remember the concept of HNDL — “harvest now, decrypt later” (or SNDL — “store now, decrypt later”). Attackers with significant resources could already be collecting and storing data that can’t currently be decrypted. Once quantum computers with sufficient power become available, they’ll immediately begin retroactive decryption. Of course, when this fateful moment comes, it will already be too late, so quantum-resistant encryption standards should be implemented right now.

The ideal approach to deploying post-quantum cryptography based on established IT industry practices is hybrid encryption; that is, encrypting data in two layers: first with a classical algorithm, then with a post-quantum one. This forces attackers to contend with both cryptosystems — significantly lowering the chances of a successful breach. This approach is already being used by Signal, Apple, Google, and Zoom.
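A minimal sketch of the idea: run both exchanges, then derive the session key from the combination of the two shared secrets. (Real protocols use a proper KDF such as HKDF; plain SHA-256 and the random stand-in secrets below are simplifications.)

```python
import hashlib
import secrets

# Stand-ins for the two independently negotiated secrets.
classical_secret = secrets.token_bytes(32)     # e.g. from an ECDH exchange
post_quantum_secret = secrets.token_bytes(32)  # e.g. from an ML-KEM exchange

# An attacker must recover BOTH inputs to reconstruct the session key.
session_key = hashlib.sha256(classical_secret + post_quantum_secret).digest()
assert len(session_key) == 32
```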

Kaspersky official blog – Read More

Deep-TEMPEST: image hijacking via HDMI | Kaspersky official blog

Thanks to scientists at the University of the Republic (Uruguay), we now have a much better understanding of how to reconstruct an image from the spurious radio emissions of monitors; more specifically, from signals leaked during data transmission via HDMI connectors and cables. Using state-of-the-art machine-learning algorithms, the Uruguayan researchers demonstrated how such radio noise can be used to reconstruct text displayed on an external monitor.

What, no one’s done it before?

Sure, it’s not the first attempt at a side-channel attack aimed at reconstructing an image from radio emissions. A method of intercepting radio noise from a display in a neighboring room, known as a TEMPEST attack (or van Eck phreaking), was described in a study published back in… 1985! Dutch researcher Wim van Eck demonstrated then that it was possible to intercept a signal from a nearby monitor. In our post about the related EM Eye attack, we covered these historical studies extensively, so we won’t repeat ourselves here.

However, van Eck’s experiment has lost much of its usefulness today. It used a monitor from 40 years ago with a cathode-ray tube and analog data transmission. Also, the captured image back then was easy to analyze, with white letters on a black background and no graphics. Today, with a digital HDMI interface, it’s much more difficult to intercept the image, and, more importantly, to restore data. But that’s precisely what the Uruguayan team has managed to do.

How does the modern-day van Eck-like interception work?

Data is transmitted digitally to the monitor via an HDMI cable. The volume of data involved is vast. The computer transmits 60 or more frames to the monitor every second, with each frame containing millions of different-colored dots. Using a software-defined radio (SDR), we can intercept signals generated by this data stream. But can we then extract useful information from this extremely weak noise?
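The scale of that data stream is easy to estimate. For an uncompressed 1080p picture at 60 frames per second (ignoring blanking intervals and TMDS encoding overhead):

```python
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24
pixel_rate = width * height * fps        # ~124 million pixels per second
bit_rate = pixel_rate * bits_per_pixel   # raw video payload in bits/s
print(round(bit_rate / 1e9, 2), "Gbit/s")  # 2.99 Gbit/s
```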

Schematic of the new spying method proposed by the Uruguayan team. Source

The authors called this attack Deep-TEMPEST — a nod to the use of deep-learning AI. The diagram clearly shows how noisy the intercepted data is before processing: we see a discolored shadow of the original image, in which only the location of the main elements can be guessed (a browser window with an open Wikipedia page was used for the experiment). It’s just about possible to distinguish the navigation menu at the top and the image in the center of the screen, but absolutely impossible to read the text or make out the image.

Image captured and processed by Deep-TEMPEST. Source

And here’s the result after processing. The picture quality hasn’t improved, so making out the image is no easier. But the text was recognized in its entirety, and even if the machine-learning algorithm tripped up on a couple of letters, it doesn’t greatly affect the final result. Let’s look at another example:

Deep-TEMPEST attack result in detail. Source

Above is the captured image. Some letters are distinguishable, but the text is basically unreadable. Below is the original image – a screenshot fragment. In the middle is the image after processing by the machine-learning algorithm. Some adjacent letters are hard to discern, but overall the text is quite easy to read.

How did the researchers get this result?

The Uruguayan team’s main achievement is the data-analysis method they developed. Part of it is enhanced neural-network training that allows text recognition from a rough image. To train the network, the team needed pairs consisting of an original screenshot and the corresponding SDR-captured image. Building a dataset big enough for training (several thousand pairs) is a difficult, time-consuming task. So the researchers took a slightly different path: they obtained about half of the dataset by displaying images on the screen and intercepting the signal, and simply generated the other half using their own algorithm, which produces a realistic approximation of a captured image from the corresponding screenshot. This proved sufficient to train the machine-learning model.
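As a rough illustration of synthetic pair generation (not the team’s actual signal model, which simulates the HDMI transmission path), one can degrade “ground truth” pixel rows with blur and noise:

```python
import random

def degrade(row, noise=40):
    """Approximate a captured signal: blur a pixel row, then add noise."""
    blurred = [
        (row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) // 3
        for i in range(len(row))
    ]
    return [min(255, max(0, p + int(random.gauss(0, noise)))) for p in blurred]

clean = [255 if 10 <= i < 20 else 0 for i in range(32)]  # a white "stroke"
captured = degrade(clean)  # a noisy "interception" paired with the original
assert len(captured) == len(clean)
```

Each (clean, captured) pair can then serve as one training example for a denoising network.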

The team’s second stroke of genius was using a neural network that delivered high-quality results without much expense. The test bed was built from relatively affordable radio-interception tools and open-source software. As we said, HDMI carries vast amounts of data to the connected monitor. To analyze spurious radio emissions during such transmission, it’s important to intercept a large spectrum of radio frequencies: the wider the band, the better the result. Ideally, what’s needed is a high-end SDR receiver capable of capturing a frequency band of up to 3200 megahertz, a piece of kit that costs about US$25,000. In this case, however, the researchers got by with a USRP B200-mini receiver (US$1500), capable of analyzing a much narrower frequency band of up to 56 megahertz. But thanks to the enhanced neural network trained to recognize such partial information, they could compensate for the lack of raw data.

Deep-TEMPEST attack test bed. On the left is the target computer connected to a monitor. Key: (1) antenna, (2) radio signal filters and amplifier, (3) SDR receiver, (4) laptop for intercepting radio emissions and analyzing the data. Source

Open-source software and libraries were used to process the data. Code, screenshots and other resources have been made available on GitHub, so anyone who wishes to can reproduce the results.

Limited scope of application

In the 1999 novel Cryptonomicon by Neal Stephenson, one of the characters, upon discovering that he’s being monitored by “van Eck phreaking”, starts making things difficult for those spying on him by changing the color of the letters and replacing the monochrome text background with a video clip. Generally speaking, the countermeasures against TEMPEST-type attacks described by Stephenson a quarter of a century ago are still effective. You can add noise to an image in a way the user won’t even notice, making interception impossible.

Naturally, the question arises: is the juice worth the squeeze? Is it really necessary to defend against such highly specialized attacks? In the vast majority of practical cases there’s nothing to fear from this attack; it’s much better to focus on guarding against real threats posed by malware. But if you work with super-valuable data that super-professionals are after, it might be worth including such attacks in your threat model.

Also, don’t disregard this study out of hand just because it describes interception from an external monitor. Okay, you might use a laptop, but the image is sent to the built-in display using roughly the same principles — only the transmission interface may be slightly different, while the radiation level will be slightly lower. But this can be addressed by refining the algorithms and upgrading the test equipment. So hats off to the Uruguayan researchers — for showing us once again just how complex the real world is beyond “software” and “operating systems”.


How to protect and preserve your data in Telegram in 2024 | Kaspersky official blog

At the time of writing, Pavel Durov has been charged in France but hasn’t yet appeared in court. How things will pan out remains very unclear, but in the meantime scammers are already exploiting the massive attention and panic surrounding Telegram, and much dubious advice about what to do with the app is circulating on social media. Our two cents in a nutshell: Telegram users should remain calm and act only on the facts as they currently stand. Now for what we can recommend today in detail…

Chat privacy and the “keys to Telegram”

Put simply, most chats on Telegram cannot be considered confidential — and this has always been the case. If you’ve been exchanging sensitive information on Telegram without using secret chats, consider it compromised. Move your private communications to another messenger following these recommendations.

Many news outlets suggest that the main complaint against Durov and Telegram is their refusal to cooperate with the French authorities and hand over the “keys to Telegram”. Supposedly, Durov possesses some kind of cryptographic keys that can be used to read users’ messages.

In fact, few people really know how the Telegram server side is structured, but from the available information it’s known that the bulk of correspondence is stored on servers in minimally encrypted form; that is, the decryption keys are kept within the same Telegram infrastructure. The creators claim that chats are stored in one country while keys are stored in another, but considering that all the servers communicate with each other, it’s unclear how effective this measure is in practice: it would help if servers were seized in only one country, but that’s about it.

End-to-end encryption, which is standard in other messengers (WhatsApp, Signal, and even Viber), is called “secret chat” in Telegram. It’s somewhat hidden in the depths of the interface and needs to be manually activated for selected personal chats. All group chats, channels, and standard personal correspondence lack end-to-end encryption and can be read at least on Telegram’s servers. Moreover, for both secret chats and everything else, Telegram uses its own non-standard protocol, MTProto, which has been found to contain serious cryptographic vulnerabilities. Therefore, Telegram correspondence can theoretically be read by:

Telegram server administrators
Hackers who’ve successfully breached Telegram servers and installed spyware
Third parties with some kind of access granted by Telegram administrators
A third party that has discovered cryptographic vulnerabilities in Telegram protocols and can read (selectively or in full) at least non-secret chats by intercepting the traffic of some users

Deleting correspondence

Some categories of users have been advised to delete old chats in Telegram, such as work-related ones. This advice seems questionable, because in databases (where correspondence is stored on the server), entries are rarely actually deleted; they’re simply marked as such. Moreover, like any major IT infrastructure, Telegram likely implements a robust data backup system, meaning “deleted” messages will be kept at least in database backups. It may be more effective for both chat participants (or group admins) to completely delete the chat. However, the issue of backups would still remain.

Backing up chats

A number of observers have expressed concerns that Telegram could be removed from app stores, blocked, or otherwise disrupted. While this seems unlikely, backing up important correspondence, photos and documents is still good practice in digital hygiene.

To save a backup of important personal correspondence, you need to install Telegram on your computer (official client here), log into your account, and then navigate to Settings → Advanced → Export Telegram data.

In the pop-up window, you can select the data you want to export (personal chats, group chats — with or without photos and videos), set download size limits, and choose the data format — HTML, which can be viewed in any browser, or JSON for automated processing by third-party apps.

Downloading the data to your computer could take several hours and may require dozens or even hundreds of gigabytes of free space, depending on how much you use Telegram and the export settings. You can close the export window, but be sure not to exit the app itself or disconnect your computer from the internet or the mains. We recommend only using the backup feature in the official client.

“Preventing Telegram’s deletion” from smartphones

First, let’s look at iOS. The folks at Cupertino don’t remove apps from users’ smartphones even when those apps are removed from the App Store, so any advice about stopping Telegram from being deleted from iPhones is bogus. Moreover, a popular “Telegram deletion prevention” method circulating online, which uses the Screen Time menu, doesn’t prevent Apple from deleting apps; it only prevents certain users (children, for example) from deleting apps themselves: it’s a parental control feature. And there’s more: Durov’s arrest has revived the old false claim about Telegram being remotely removed from iPhones, which both Apple and Telegram officially denied back in 2021.

As for Android, Google also doesn’t typically delete apps, except when an app is outright malicious. True, such guarantees may not extend to every other vendor’s ecosystem (Samsung, Xiaomi, and so on), but on Android it’s easy to install Telegram directly from the Telegram website.

Alternative clients

There are unofficial but still functional and legal clients for Telegram, and even an “official alternative client”, Telegram X. These clients all use the Telegram API, but it’s unclear whether they provide any additional benefits or increased security. The top five alternative clients on Google Play all advertise “improved security”, but refer only to features like hiding chats on the device.

Of course, you may end up downloading malware disguised as an alternative Telegram client — scammers don’t miss an opportunity to exploit the app’s popularity. If you’re considering alternative clients, follow these safety guidelines:

Download them only from official app stores.
Make sure the app has been around for a while, and has high ratings and a large number of downloads.
Use reliable antivirus protection, such as Kaspersky Premium, across all platforms.

Fundraising for Durov and defending free speech

This isn’t directly related to Telegram chats, but it’s also important to beware of scammers posing as fundraisers for Pavel Durov’s legal defense (as if he really needs the cash), while actually aiming to steal payment information or cryptocurrency donations. Treat such requests with extreme suspicion, and verify whether the alleged organization really exists and really is running such a campaign. For more on charity scams, check out our dedicated article.


Top-5 leaks of all time | Kaspersky official blog

Recent years have seen a steady rise in the amount of compromised data out there. News reports about new leaks and hacks are an almost daily occurrence, and we at Kaspersky continue to use plenty of electronic ink to tell you about the need for robust protection — now more than ever.

Today we take a dive into history and recall (with a shudder) the biggest and baddest data breaches (DBs) of all time. To find out how much and what kind of information was leaked, who was affected, and much more besides — read on…

1. RockYou2024

In brief: hackers collected data from past leaks, and rolled out the largest-ever compilation of real user passwords: 10 billion records!

When: 2024.

Who was affected: users worldwide without strong protection.

RockYou2024 is the king of leaks, and a thorn in the side of anyone who thought hackers weren’t interested in them. In July 2024, cybercriminals leaked a gigantic collection of passwords on a hacking forum: 9,948,575,739 unique records in total. Despite being a compilation based on the old RockYou2021 leak, RockYou2024 still… rocks, so to speak.

Our expert, Alexey Antonov, analyzed the breach and found that 83% of the leaked passwords could be cracked by a smart guessing algorithm in under an hour, while only 4% of them (328 million) could be considered strong, i.e., requiring over a year to crack with a smart algorithm. For details on how smart algorithms work, see our password strength study, which, analyzing real user passwords leaked on the dark web, shows that far too many of us are still shockingly blasé about password security.
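The gap between weak and strong passwords is easy to put in numbers. Assuming a hypothetical cracking rig testing ten billion guesses per second (the rate is an assumption for illustration only):

```python
guesses_per_second = 10 ** 10  # assumed attacker speed (illustrative)
weak = 26 ** 8                 # 8 lowercase letters
strong = 70 ** 16              # 16 chars from a ~70-symbol alphabet

print(weak / guesses_per_second, "seconds")            # ~21 seconds
print(strong / guesses_per_second / 3.156e7, "years")  # ~10^12 years
```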

In analyzing the latest leak, Alexey filtered out all non-relevant records, and worked with the remaining array of… 8.2 billion passwords stored somewhere in plaintext!

2. CAM4

In brief: a misconfigured server exposed 11 billion customer records to the public domain — sensitive information indeed given that CAM4 is… an adult site!

When: 2020.

Who was affected: users of the adult site CAM4.

This story is of interest for two reasons: what information was leaked, and how. Among the “standard” leaked details (first name, last name, email address, payment logs, etc.) was information of a far more intimate nature: gender preferences and sexual orientation. Users had to give this information at signup before they could enjoy the content of the adult streaming platform.

The leak was caused by an insecure Elasticsearch database. However, the story ended less badly, and less embarrassingly, than it could have. If we were to compile all the reports of leaks related to this database technology into a physical book, we’d get quite a doorstop, in which the story of CAM4 would occupy a small but important chapter: “The largest data leak in history that never was”. Fortunately, the database was shut down within half an hour of the error being discovered, and was later moved to an internal local network. Users’ personal data was deleted.

3. Yahoo

In brief: A hacker attack affected all three billion users of the platform — but Yahoo admitted this only three years later.

When: 2012, 2013… or was it 2014? Even Yahoo doesn’t know for sure.

Who was affected: all Yahoo users.

More than a decade ago now, Yahoo was hacked (it all started with a phishing email), leading to a series of news stories about a rumored data leak. Initial reports mentioned a couple of hundred million hacked accounts, then that rose to around 500 million, then, in 2017, on the eve of the company’s deal with Verizon, it turned out that all three billion accounts were affected. The hackers got hold of names, email addresses, dates of birth, and phone numbers. Even worse, they had access to the accounts of users who went years without changing their passwords. Now do you see why it’s so important to change passwords regularly and delete old profiles?

This incident is yet further proof that even tech giants sometimes fail to store user data properly. In the case of Yahoo, attackers found a database of unencrypted security questions and answers, and some accounts had no two-factor authentication at all. So, the moral of the story is: don’t rely on social networks or online platforms to secure your personal accounts. Make up or generate strong passwords and store them in Kaspersky Password Manager. And if you’re worried your data may already have leaked, install any of our home security solutions: Kaspersky Standard and Kaspersky Plus both let you specify all the email addresses that you and your family use to sign in to online services. The application regularly checks these addresses and reports any data breaches involving accounts linked to them.

In Kaspersky Premium, in addition to an email list, you can add phone numbers — these are usually used to identify users of more sensitive online services such as banking. Our application searches for these numbers and addresses in all fresh database leaks, and, if found, warns you and advises what to do (read more about how we protect you against personal data leaks online or on the dark web).

4. UIDAI (Aadhaar)

In brief: the biometric data of almost all citizens and residents of India went up for sale.

When: 2018.

Who was affected: 1.1 billion citizens and residents of India.

The Unique Identification Authority of India (UIDAI) operates the largest bio-identification system in the world, storing the personal data, fingerprints, and iris photos of more than a billion folks in India.

While many countries around the world are only planning to implement biometric identification, India has had such a system in place for over a decade already. UIDAI was set up so that every single resident of India would have a unique official state identity number, Aadhaar.

But in 2018, following a string of data leaks, cybercriminals not only got their hands on the database, but sold it for as little as 500 rupees (about US$6 at today’s exchange rate). Another massive data breach occurred in 2023, this time impacting 815 million Indians.

Banks and law enforcement agencies continue to advise victims of the leaks to disable biometric authentication for financial services. But that’s no guarantee of security, since their names, passport numbers, photos, fingerprints, and other information are likely in cybercriminal hands.

5. Facebook

In brief: the company failed to notify users about a data breach it had known about for a full two years.

When: 2019.

Who was affected: 533 million Facebook users.

No one is surprised anymore at seeing the words “Facebook” and “leak” side by side. The platform regularly falls victim to hacker attacks and internal leaks. This particular breach — the largest in the company’s history — saw the names, phone numbers, and location data of 533 million users fall into the clutches of cybercriminals. They then posted the data on a hacking forum where anyone could download it all for free. And not only regular users’ account data, but that of public figures, including EU Justice Commissioner Didier Reynders, and then-Prime Minister (now Foreign Minister) Xavier Bettel of Luxembourg.

If you suspect that you too may have been hit by the Facebook data leak, use our Password Checker tool to find out whether your password was compromised in this or other leaks.

The leaked data was current as of 2018–2019, although information about it appeared only in 2021. How did that happen? Hackers exploited a vulnerability in 2019, which Facebook patched straight away but then forgot (or preferred not) to tell users about. As a result, Meta faced more heavy criticism, plus a hefty €265 million fine (~US$276 million).

What do these leaks teach us?

The common thread linking all these stories is: “Big Tech helps those who help themselves”. In other words, we are primarily responsible for the security of our data; not Facebook, not Yahoo, not even governments. Look after your accounts yourself, make up or generate strong passwords, store them in a secure password manager, and take special care when it comes to biometric data.

Do not reuse passwords. If you’re a “one password for all occasions” kind of person and have been using the internet for at least a few years, we’ve some bad news for you (in the link).
Check if your passwords have been compromised. If you have our protection, you can use our Data Leak Checker tool to enter a list of email addresses and check your user accounts. Kaspersky Premium users also have the option to check phone numbers using the Identity Theft Protection feature. The applications automatically check this information for exposure in new leaks. And in our password manager, just select Password Check from the menu, or click the key icon on the taskbar, and all stored passwords are checked for strength, uniqueness, and leaks. Everyone else can use our free Password Checker.
Use two-factor authentication (2FA) wherever possible.
Do not store passwords in browsers. Use a password manager to generate unique, cryptographically strong passwords for all important accounts; then you need to think up and remember just one main password that serves as the master key to all the others. It protects and encrypts your password vault and other vital data.
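For those curious what “generate strong passwords” means in practice, here’s how it can be done with Python’s standard library (a password manager performs the equivalent, plus secure storage and syncing):

```python
import secrets
import string

# A ~94-symbol alphabet: letters, digits, and punctuation.
alphabet = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Build a random password using a cryptographically secure RNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

pwd = generate_password()
assert len(pwd) == 20
```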


Safe LibreOffice settings for all platforms | Kaspersky official blog

The aggressive introduction of AI into Microsoft products, geopolitical tensions, and a series of cybersecurity incidents involving the Redmond giant are pushing many organizations worldwide to switch to open-source alternatives to Windows and Office. To replace the latter, both OpenOffice and its offshoot LibreOffice are very popular. They’re available on all major platforms, including Linux, offer functionality comparable to MS Office, and come with licenses suitable for large companies.

Due to their similarity to MS Office, the risks associated with using these suites are also similar: software vulnerabilities or insecure settings can result in the execution of malicious code on the computer, or stealthily redirect the user to phishing links. And these threats aren’t mere theory: malicious documents in .odt files and other “open” document formats have been encountered in the wild. To mitigate these risks, the German Federal Office for Information Security (BSI) has issued public recommendations for secure LibreOffice settings. Let’s look together at the most important ones for using LibreOffice in organizations.

Configuration tips

The tips below apply to the safe setup of LibreOffice on Linux, macOS, or Windows in a managed corporate environment (through group policies and other centralized control tools). They concern the Writer, Calc, Impress, Base, Math, and Draw components of version 7.2.x. The recommended settings are based on the following considerations:

The end user should make the fewest possible decisions affecting security.
The functionality of the application should not be significantly reduced.
Unnecessary features should be deactivated to reduce the attack surface.
Whenever possible, transfer of data from the product to the manufacturer should be disabled.
External cloud services should be avoided unless they’re necessary for the organization’s business processes.

Configuration storage

LibreOffice settings can be modified by the administrator or by the user. Initial administrative settings are stored in the LibreOffice folder. On all platforms, the settings are applied as XML files (settings.xml), but they can also be stored in platform-specific formats (registry in Windows, dconf in Linux). For medium and large organizations, XML is recommended.

If a setting shouldn’t be modified by users, it can be marked as finalized in the administrator settings.
For example, below is a settings snippet that disables saving the document-author information (the RemovePersonalInfoOnSaving setting in the group org.openoffice.Office.Common/Security/Scripting) and prohibits changing this setting:

<item oor:path="/org.openoffice.Office.Common/Security/Scripting">
<prop oor:name="RemovePersonalInfoOnSaving" oor:finalized="true" oor:op="fuse" oor:type="xs:boolean">
<value>true</value>
</prop>
</item>

Folders for administrative settings (in version 7.2) are listed below:

Linux: /opt/libreoffice7.2/share/registry/res
macOS: /Applications/LibreOffice.app/Contents/Resources/registry/res
Windows: C:\Program Files\LibreOffice\share\registry\res

Settings to change

Many of LibreOffice’s settings are secure by default. Here, we’ll focus on those that need to be tightened.

Macro execution

By default, any signed macro is executed, so this setting must be tightened to the maximum — allowing only macros from trusted folders to be executed. To do this, in the group org.openoffice.Office.Common/Security/Scripting, set MacroSecurityLevel to 3:

<prop oor:name="MacroSecurityLevel" oor:finalized="true" oor:op="fuse" oor:type="xs:int">
<value>3</value>
</prop>

To disable macros entirely, set the DisableMacrosExecution option from the same group to true with the finalized tag.
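Following the same pattern as the snippet above, such an entry might look like this (a sketch; check the exact option name against the documentation for your LibreOffice version):

```xml
<item oor:path="/org.openoffice.Office.Common/Security/Scripting">
<prop oor:name="DisableMacrosExecution" oor:finalized="true" oor:op="fuse" oor:type="xs:boolean">
<value>true</value>
</prop>
</item>
```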

Trusted folders

By default, LibreOffice updates the list of trusted folders based on user activity — often including folders like Downloads. To clearly set trusted document storage locations, list them in the SecureURL option. The list can be left empty.

<item oor:path="/org.openoffice.Office.Common/Security/Scripting">
<prop oor:name="SecureURL" oor:finalized="true" oor:op="fuse" oor:type="oor:string-list"/>
</item>

Loading external images

Images from external sources can be embedded into documents. This creates significant risks of phishing and vulnerability exploitation, so this option should be disabled: set BlockUntrustedRefererLinks to true with the finalized tag in the /org.openoffice.Office.Common/Security/Scripting group.
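In the administrator settings file, this might look as follows (a sketch using the same pattern as the other snippets in this post):

```xml
<item oor:path="/org.openoffice.Office.Common/Security/Scripting">
<prop oor:name="BlockUntrustedRefererLinks" oor:finalized="true" oor:op="fuse" oor:type="xs:boolean">
<value>true</value>
</prop>
</item>
```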

Updating linked data

Linked content loaded in Calc can also be malicious, so updates should be blocked by setting the Link option to 1+finalized in the /org.openoffice.Office.Calc/Content/Update group.

The corresponding setting in Writer has different numeric values for some reason; block it by setting Link to 0+finalized in /org.openoffice.Office.Writer/Content/Update.
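Put together, the two settings might look like this (a sketch; note the differing numeric values for Calc and Writer, as described above):

```xml
<item oor:path="/org.openoffice.Office.Calc/Content/Update">
<prop oor:name="Link" oor:finalized="true" oor:op="fuse" oor:type="xs:int">
<value>1</value>
</prop>
</item>
<item oor:path="/org.openoffice.Office.Writer/Content/Update">
<prop oor:name="Link" oor:finalized="true" oor:op="fuse" oor:type="xs:int">
<value>0</value>
</prop>
</item>
```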

Exotic files

To disable loading of Abiword, Hangul Office, StarOffice XML, and other irrelevant formats, set LoadExoticFileFormats to 0 in the /org.openoffice.Office.Common/Security group.

Additionally, any of the 100+ supported file formats can be blocked by setting the Enabled option to false+finalized for any format in the group
/org.openoffice.TypeDetection.Filter/Filters/org.openoffice.TypeDetection.Filter:Filter['NAME'].
Replace NAME with the name of the format to be blocked.

System authentication

LibreOffice applications can automatically access external URLs using the credentials of the current user, potentially leading to credential leakage. To disable this behavior, set an empty list in the AuthenticateUsingSystemCredentials option:

<item oor:path="/org.openoffice.Office.Common/Passwords">
<prop oor:name="AuthenticateUsingSystemCredentials" oor:finalized="true" oor:op="fuse" oor:type="oor:string-list"/>
</item>

Installing extensions

It’s recommended to disable user installation of extensions and allow extensions to be added only centrally through administrator privileges: set DisableExtensionInstallation to true+finalized in the /org.openoffice.Office.ExtensionManager/ExtensionSecurity group.

To centralize the removal of extensions and disable the ability to do this manually by the user, set DisableExtensionRemoval to true+finalized in the same group.

Updates

LibreOffice applications automatically check for updates, and prompt the user to install them. If updates and patches are managed centrally within the organization, this option can be disabled by setting AutoCheckEnabled to false+finalized in the /org.openoffice.Office.Jobs/Jobs/org.openoffice.Office.Jobs:Job['UpdateCheck']/Arguments group.

Installation of fonts, language packs, and databases (Linux only)

Although these additions may seem harmless, for security reasons, automatic installation should be disabled. Set the EnableFontInstallation, EnableLangpackInstallation, and EnableBaseInstallation options to false+finalized in the /org.openoffice.Office.Common/PackageKit group.

Disable telemetry

Set the CollectUsageInformation and CrashReport options to false+finalized in the /org.openoffice.Office.Common/Misc group.

Document-signing certificates (Linux only)

By default, any folder can be chosen for the NSS database, which stores certificates. This isn’t secure and can lead to certificate leaks from uncontrolled locations. The administrator should specify a storage location designated by the organization using the CertDir option:

<item oor:path="/org.openoffice.Office.Common/Security/Scripting">
<prop oor:name="CertDir" oor:op="fuse" oor:type="xs:string"/>
</item>

Removing personal data (document author data)

If document distribution cannot be controlled, author data often needs to be hidden. To make LibreOffice remove this data when saving a document, add the RemovePersonalInfoOnSaving setting (true+finalized) in the /org.openoffice.Office.Common/Security/Scripting group.

This mode makes it more complicated to collaborate on a document as it’s harder to identify the author of any changes, so it’s not suitable for all organizational roles.

BSI also recommends disabling the saving of full PGP keys in signed documents, as they also contain author’s personal data: set MinimalKeyExport to true+finalized in the /org.openoffice.Office.Common/Security/OpenPGP group.

Settings to lock

These settings are initially set to be secure, but should be prevented from being changed by adding the finalized attribute.

Setting name, value, and group name:

ooInetProxyType = 1 in /org.openoffice.Inet/Settings
HyperlinksWithCtrlClick = true in /org.openoffice.Office.Common/Security/Scripting
Open = 1 in /org.openoffice.Office.Security/Hyperlinks
CheckDocumentEvents = true in /org.openoffice.Office.Common/Security/Scripting
UseStorage = false in /org.openoffice.Office.Common/Passwords
TrySystemCredentialsFirst = false in /org.openoffice.Office.Common/Passwords
ExtendedUserAgent = false in /org.openoffice.Office.Jobs/Jobs/org.openoffice.Office.Jobs:Job['UpdateCheck']/Arguments
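To lock such a setting at its secure default, add the finalized attribute while keeping the value unchanged; for example, for ooInetProxyType (a sketch following the pattern of the earlier snippets):

```xml
<item oor:path="/org.openoffice.Inet/Settings">
<prop oor:name="ooInetProxyType" oor:finalized="true" oor:op="fuse" oor:type="xs:int">
<value>1</value>
</prop>
</item>
```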


Additional protective layers

On any platform, users may encounter targeted cyberattacks and malicious documents. Therefore, secure OS and office suite settings should be complemented by a comprehensive set of layered defense measures:

Multi-factor authentication
Centralized access rights management
Mandatory EDR agent on all workstations and servers
Centralized security event monitoring using SIEM, or preferably XDR solutions.

Kaspersky official blog – ​Read More

How to hack wireless bicycle gears | Kaspersky official blog

I’ve worked in cybersecurity for years, and sometimes I think I’ve seen it all: there’s nothing hackers could possibly do that would surprise, much less shock me. Baby monitors? Hacked. Cars? Hacked, over and over — and all kinds of makes. And not just cars, but car washes too. Toy robots, pet feeders, TV remotes… Fish tank anyone? No – really: it’s been done!

But what about bicycles? They seemed to be hackproof — until recently. In mid-August 2024, researchers published a paper describing a successful cyberattack on a bike. More precisely — on one fitted with Shimano Di2 gear-shifting technology.

Electronic gears — Shimano Di2 and the like

First, a few words of clarification for those not up to speed, so to speak, with the latest trends in cycling technology. Let’s start by saying that Japan’s Shimano is the world’s largest maker of key components for bicycles; basically – the main parts that are added to a frame to make up a working bicycle, such as drivetrains, braking systems, and so on. Although the company specializes in traditional mechanical equipment, for some time now (since 2001) it has been experimenting with electronics.

Classic gear-shifting systems on bikes rely on cables that physically connect the gear-derailleurs (bike-chain guiders across sprockets) to the gear-shifters on the handlebars. With electronic systems, however, there’s no such physical connection: the shifter normally sends a command to the derailleur wirelessly, and this changes gear with the help of a small electric motor.

Electronic gear-shifting systems can also be wired. In this case, instead of a cable, a wire connects the shifter and the derailleur through which commands are transmitted. Most in vogue of late, however, are wireless systems, in which the shifter sends commands to the derailleur with a radio signal.

Shimano Di2 electronic gear-shifting systems currently dominate the high-end segment of the company’s product line. The same is happening across the model lineups of its main competitors: America’s SRAM (which introduced wireless gear shifters first) and Italy’s Campagnolo.

In other words, a great many road, gravel and mountain bikes in the upper price band have been using electronic gear shifters for quite a while already, and increasingly these are wireless.

The wireless version of the Shimano Di2 actually isn’t all that wireless. Inside the bike frame there are quite a few wires: A and B represent wires that run from the battery to the front and rear derailleurs, respectively. Source

The switch from mechanics to electronics makes sense on the face of it — among other things, electronic systems offer greater speed, precision, and ease of use. That said, going wireless does look like innovation for the sake of innovation, as the practical benefits for the cyclist aren’t all too obvious. At the same time, the smarter a system becomes, the more troubles could arise.

And now it’s time to get to the heart of this post: bike hacking…

Security study of the Shimano Di2 wireless gear-shifting system

A team of researchers from Northeastern University (Boston) and the University of California (San Diego) analyzed the security of the Shimano Di2 system. The specific groupsets they looked at were the Shimano 105 Di2 (for mid-range road bikes) and the Shimano DURA-ACE Di2 (the very top of the line for professional cyclists).

In terms of communication capabilities, these two systems are identical and fully compatible. They both use Bluetooth Low Energy to communicate with the Shimano smartphone app, and the ANT+ protocol to connect to the bike’s computers. More importantly, however, the shifters and derailleurs communicate using Shimano’s proprietary protocol on the fixed frequency of 2.478 GHz.

This communication is, in fact, rather primitive: the shifter commands the derailleur to change gear up or down, and the derailleur confirms receipt of the command; if confirmation isn’t received, the command is resent. All commands are encrypted, and the encryption key appears to be unique for each paired set of shifters and derailleurs. All looks hunky-dory save for one thing: the transmitted packets have neither a timestamp nor a one-time code. Accordingly, the commands are always the same for each shifter/derailleur pair, which makes the system vulnerable to a replay attack. This means that attackers don’t even need to decrypt the transmitted messages — they can intercept the encrypted commands and use them to shift gears on a victim’s bike.
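The core weakness can be shown with a toy model: when commands are encrypted deterministically, with no timestamp or counter mixed in, the same command always produces the same bytes over the air, so a captured packet remains valid forever. The snippet below is a deliberately simplified illustration of this property, not Shimano's actual scheme:

```python
import hashlib

def encode_command(key: bytes, command: bytes) -> bytes:
    # Toy stand-in for deterministic encryption: the output depends
    # only on the key and the command, with no nonce or counter.
    return hashlib.sha256(key + command).digest()

pairing_key = b"shifter-derailleur-secret"

# The packet sniffed during one ride...
captured = encode_command(pairing_key, b"SHIFT_UP")

# ...is byte-for-byte identical to a legitimate packet sent later,
# so simply replaying it shifts the victim's gears.
assert captured == encode_command(pairing_key, b"SHIFT_UP")
```

A timestamp or one-time counter inside the encrypted payload would make each packet unique, which is exactly what the researchers found missing.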

To intercept and replay commands, the researchers used an off-the-shelf software-defined radio. Source

Using a software-defined radio (SDR), the researchers were able to intercept and replay commands, and thus gain control over the gear shifting. What’s more, the effective attack range — even without modifying the equipment or using amplifiers or directional antennas — was 10 meters, which is more than enough in the real world.

Why Shimano Di2 attacks are dangerous

As the researchers note, professional cycling is a highly competitive sport with big money involved. Cheating — especially the use of banned substances — is no stranger to the sport. And an equally underhand advantage could be gained by exploiting vulnerabilities in a competitor’s equipment. Therefore, cyberattacks in the world of professional cycling could easily become a thing.

The equipment used for such attacks can be miniaturized and hidden either on a cheating cyclist or a support vehicle, or even set up somewhere on the race track or route. Moreover, malicious commands can be sent remotely by a support group.

A command to upshift gear during a climb or sprint, for instance, could seriously affect an opponent’s performance. And an attack on the front derailleur, which changes gears more abruptly, could bring the bike to a halt. In a worst-case scenario, an unexpected and abrupt gear change could damage the chain or cause it to fly off, potentially injuring the cyclist.

Vulnerabilities in the Shimano Di2 allow an attacker to remotely control a bike’s gear shifting or carry out a DoS attack. Source

Besides malicious gear-shifting, the researchers also explored the possibility of what they call “targeted jamming” of communications between the shifters and derailleurs. The idea is to send continuous repeat commands to the victim’s bike at a certain frequency. For example, if the upshift command is repeated over and over, the gear shifter will hit top gear and stay there, no longer responding to genuine commands from the shifter (based on the rider’s selection). This is essentially a DoS attack on the gear-shifting system.

The upshot

As the authors note, they chose Shimano as the subject of their study simply because the company has the largest market share. They didn’t examine the wireless systems of Shimano’s competitors, SRAM and Campagnolo, but admit that these too may well be vulnerable to such attacks.

Shimano was informed of the vulnerability, and appears to have taken it seriously — having already developed an update. At the time of publication, however, only professional cycling teams had received it. Shimano has promised to make the update available to the general public later; bikes can be updated via the E-TUBE PROJECT Cyclist app.

The good news for non-professional cyclists is that the risk of exploitation is negligible. But if your bike is fitted with the Shimano Di2 wireless version, be sure to install the update when it becomes available — just in case.

Kaspersky official blog – ​Read More

Episode 360 looks at fake Taylor Swift, Nvidia in the dock, TV ads and much more! | Kaspersky official blog

Episode 360 of the transatlantic cable podcast kicks off with news that Nvidia is on the receiving end of a class-action lawsuit alleging that it scraped YouTube videos without creators' consent. From there, the team discuss news around Taylor Swift AI images being shared by Donald Trump, and an additional story about how photography is quickly being swamped by generative AI.

To close, the team discuss a story around how your humble television is being invaded by advertisers.

If you like what you heard, please consider subscribing.

Nvidia Sued for Scraping YouTube After 404 Media Investigation
Swift Could Sue Trump Under State Law for Fake AI Endorsement
The AI photo editing era is here, and it’s every person for themselves
Your TV set has become a digital billboard

Kaspersky official blog – ​Read More

Improvements to our SIEM in Q2 2024 | Kaspersky official blog

We meticulously study the techniques most frequently used by attackers, and promptly refine or add detection logic to our SIEM system to identify them. Specifically, in the update to the Kaspersky Unified Monitoring and Analysis Platform released in the second quarter of 2024, we supplemented and expanded the logic for detecting the technique of disabling/modifying a local firewall (Impair Defenses: Disable or Modify System Firewall, T1562.004 in the MITRE classification), which ranks among the top tactics, techniques, and procedures (TTPs) used by attackers.

How attackers disable or modify a local firewall

The T1562.004 technique allows attackers to bypass defenses and gain the ability to connect to C2 servers over the network or enable an atypical application to have basic network access.

There are two common methods for modifying or disabling the host firewall: (i) using the netsh utility, or (ii) modifying the Windows registry settings. Here are examples of popular command lines used by attackers for these purposes:

netsh firewall add allowedprogram
netsh firewall set opmode mode=disable
netsh advfirewall set currentprofile state off
netsh advfirewall set allprofiles state off

Example of a registry key and value added by attackers, allowing incoming UDP traffic for the application C:\Users\<user>\AppData\Local\Temp\server.exe:

Registry key: HKLM\SYSTEM\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy\FirewallRules

Registry value name: {20E9A179-7502-465F-99C4-CC85D61E7B23}

Registry value: v2.10|Action=Allow|Active=TRUE|Dir=In|Protocol=17|Profile=Public|App=C:\Users\<user>\AppData\Local\Temp\server.exe|Name=server.exe|

Another method attackers use to disable the firewall is stopping the mpssvc service. This is typically done with the net utility:

net stop mpssvc

How our SIEM solution detects T1562.004

This is achieved using the new R240 rule; in particular, by detecting and correlating the following events:

Attacker stopping the local firewall service to bypass its restrictions
Attacker disabling or modifying the local firewall policy to bypass it (configuring or disabling the firewall via netsh.exe)
Attacker changing local firewall rules through the registry to bypass its restrictions (modifying rules through the Windows registry)
Attacker disabling the local firewall through the registry
Attacker manipulating the local firewall by modifying its policies

With its latest update, the platform now offers more than 605 rules, including 474 containing direct detection logic. We’ve also refined 20 existing rules by fixing or adjusting their conditions.

Why we focus on the MITRE classification

MITRE ATT&CK for Enterprise serves as the de facto industry standard guideline for classifying and describing cyberattacks and intrusions, and is made up of 201 techniques, 424 sub-techniques, and thousands of procedures. Therefore, when deciding how to further develop our SIEM platform — the Kaspersky Unified Monitoring and Analysis Platform — we rely, among other things, on the MITRE classification.

As per our plan set out in a previous post, we’ve started labeling current rules in accordance with MITRE attack methods and tactics — aiming to expand the system’s functionality and reflect the level of protection against known threats. This is important because it allows us to structure the detection logic and ensure that the rules are comprehensive — with no “blind spots”. We also rely on MITRE when developing OOTB (out-of-the-box) content for our SIEM platform. Currently, our solution covers 309 MITRE ATT&CK techniques and sub-techniques.

Other additions and improvements to the SIEM system

In addition to the detection logic for T1562.004 mentioned above, we’ve added normalizers to the Kaspersky Unified Monitoring and Analysis Platform SIEM system to support the following event sources:

[OOTB] Microsoft Products, [OOTB] Microsoft Products for Kaspersky Unified Monitoring and Analysis Platform 3, [OOTB] Microsoft Products via KES WIN: normalizers to process some events from the Security and System logs of the Microsoft Windows Server operating system. The [OOTB] Microsoft Products via KES WIN normalizer supports a limited number of audit event types transmitted to KUMA by KES WIN 12.6 through syslog.
[OOTB] Extreme Networks Summit Wireless Controller: a normalizer for certain audit events from the Extreme Networks Summit wireless controller (model: WM3700, firmware version: 5.5.5.0-018R).
[OOTB] Kaspersky Security for MS Exchange SQL: a normalizer for Kaspersky Security for Exchange (KSE) version 9.0 system events stored in the database.
[OOTB] TIONIX VDI file: a normalizer supporting the processing of some TIONIX VDI (version 2.8) system events stored in the tionix_lntmov.log file.
[OOTB] SolarWinds Dameware MRC xml: a normalizer supporting the processing of some Dameware Mini Remote Control (MRC) version 7.5 system events stored in the Windows Application log. The normalizer processes events created by the “dwmrcs” provider.
[OOTB] H3C Routers syslog: a normalizer for certain types of events coming from H3C (Huawei-3Com) SR6600 network devices (Comware 7 firmware) through syslog. The normalizer supports the “standard” event format (RFC 3164-compliant format).
[OOTB] Cisco WLC syslog: a normalizer for certain types of events coming from Cisco WLC network devices (2500 Series Wireless Controllers, 5500 Series Wireless Controllers, 8500 Series Wireless Controllers, Flex 7500 Series Wireless Controllers) through syslog.
[OOTB] Huawei iManager 2000 file: a normalizer supporting the processing of some of the Huawei iManager 2000 system events stored in the client\logs\rpc and client\logs\deploy\ossDeployment files.

Our experts have also refined the following normalizers:

For Microsoft products: the redesigned Windows normalizer is now publicly available.
For the PT NAD system: a new normalizer has been developed for PT NAD versions 11.1, 11.0.
For UNIX-like operating systems: additional event types are now supported.
For Check Point: improvements to the normalizer supporting Check Point R81.
For the Citrix NetScaler system: additional events from Citrix ADC 5550 — NS13.0 are now supported.
For FreeIPA: the redesigned normalizer is now publicly available.

In total, we now support around 250 sources, and we keep expanding this list while improving the quality of each connector. The full list of supported event sources in the Kaspersky Unified Monitoring and Analysis Platform (version 3.2) can be found in the technical support section. Information on out-of-the-box correlation rules is also available there.

Kaspersky official blog – ​Read More

Windows Downdate: exploitation techniques and countermeasures

All software applications, including operating systems, contain vulnerabilities, so regular updates to patch them are a cornerstone of cybersecurity. The researchers who invented the Windows Downdate attack targeted this very update mechanism, aiming to stealthily roll back a fully updated Windows system to an older version containing vulnerable files and services. This leaves the system exposed to well-known exploits and deep-level compromise — including the hypervisor and secure kernel. Worse, standard update and system-health checks will report that everything’s up to date and fine.

Attack mechanism

The researchers actually found two separate flaws with slightly different operating mechanisms. One vulnerability — assigned the CVE-2024-21302 ID and dubbed Downdate — is based on a flaw in the update installation process: the downloaded update components are controlled, protected from modification, and digitally signed, but at one of the intermediate installation stages (between reboots), the update procedure creates and then uses a file containing a list of planned actions (pending.xml). If attackers are able to create their own version of that file and then add information about it to the registry, Windows Modules Installer service (TrustedInstaller) will execute the instructions in it upon reboot.

In actual fact, the contents of pending.xml do get verified, but this happens during earlier installation stages — TrustedInstaller doesn't re-verify it. Of course, it's impossible to write whatever you like to the file and install arbitrary files this way — since they must be signed by Microsoft — but replacing system files with older files developed by Microsoft is quite feasible. This can re-expose the system to long-patched vulnerabilities — including critical ones. Adding the necessary keys related to pending.xml to the registry requires administrator privileges, after which a system reboot must be initiated. However, these are the only significant limitations. The attack doesn't trigger a UAC elevation prompt (for which Windows dims the display and asks the admin for additional confirmation), and most security tools won't flag the actions performed during the attack as suspicious.

The second vulnerability — CVE-2024-38202 — allows an actor to manipulate the Windows.old folder, where the update system stores the previous Windows installation. Although modifying files in this folder requires special privileges, an attacker with regular user rights can rename the folder, create a new Windows.old from scratch, and place outdated, vulnerable versions of Windows system files in it. Initiating a system restore then rolls Windows back to the vulnerable installation. Certain privileges are required for system restoration, but these aren't administrator privileges, and are sometimes granted to regular users.

VBS bypass and password theft

Since 2015, the Windows architecture has been redesigned to prevent a Windows kernel compromise leading to that of the whole system. This involves a range of measures collectively known as virtualization-based security (VBS). Among other things, the system hypervisor is used to isolate OS components and create a secure kernel for performing the most sensitive operations, storing passwords, and so on.

To prevent attackers from disabling VBS, Windows can be configured to make this impossible — even with administrator rights. The only way to disable this protection is by rebooting the computer in a special mode and entering a keyboard command. This feature is called a Unified Extensible Firmware Interface (UEFI) lock. The Windows Downdate attack bypasses this restriction as well by replacing files with modified, outdated, and vulnerable versions. VBS doesn’t check system files for up-to-dateness, so they can be substituted with older, vulnerable versions with no detectable signs or error messages. That is, VBS isn’t disabled technically, but the feature no longer performs its security function.

This attack allows for the replacement of secure-kernel and hypervisor files with two-year-old versions containing multiple vulnerabilities whose exploitation leads to privilege escalation. As a result, attackers can gain maximum system privileges, full access to the hypervisor and memory-protection processes, and the ability to easily read credentials, hashed passwords, and also NTLM hashes from memory (which can be used for expanding the network attack).

Protection against Downdate

Microsoft was informed of the Downdate vulnerabilities in February 2024, but it wasn’t until August that details were released as part of its monthly Patch Tuesday rollout. Fixing the bugs proved to be a tough task fraught with side effects — including the crashing of some Windows systems. Therefore, instead of rushing to publish another patch, Microsoft for now has simply issued some tips to mitigate the risks. These include the following:

Auditing users authorized to perform system-restore and update operations, minimizing the number of such users, and revoking permissions where possible.
Implementing access control lists (ACL/DACL) to restrict access to, and modification of update files.
Configuring event monitoring for instances where elevated privileges are used to modify or replace update files — this could be an indicator of vulnerability exploitation.
Similarly, monitoring the modification and replacement of files associated with the VBS subsystem and system-file backups.

Monitoring these events using SIEM and EDR is relatively straightforward. However, false positives can be expected, so distinguishing legitimate sysadmin activity from that of hackers ultimately falls to the security team.

All of the above applies not only to physical, but also virtual Windows machines in cloud environments. For virtual machines in Azure, we also advise tracking unusual attempts to log in with administrator credentials. Enable MFA and change the credentials in case such an attempt is detected.

One other, more drastic tip: revoke administrator privileges for employees who don’t need them, and mandate that genuine administrators (i) only perform administrative actions under their respective account, and (ii) use a separate account for other work.

Risky fixes

For those looking for more security, Microsoft offers the update KB5042562, which mitigates the severity of CVE-2024-21302. With this installed, outdated versions of VBS system files are added to the revoked list and can no longer be run on an updated computer. This policy (SkuSiPolicy.p7b) is applied at the UEFI level, so when using it you need to update not only the OS but also backup removable boot media. It’s also important to be aware that rollback to older installations of Windows would no longer be possible. What’s more, the update forcibly activates the User Mode Code Integrity (UMCI) feature, which itself can cause compatibility and performance issues.

In general, administrators are advised to carefully weigh the risks, and thoroughly study the procedure and its potential side effects. Going forward, Microsoft promises to release patches and additional security measures for all relevant versions of Windows — up to Windows 10, version 1507, and Windows Server 2016.

Kaspersky official blog – ​Read More

Privacy-Preserving Attribution by Mozilla: what is it and what’s it for? | Kaspersky official blog

In July 2024, with the latest version of its Firefox browser, Mozilla introduced a technology called Privacy-Preserving Attribution (PPA) — designed to track how effective online advertising is. The feature is enabled by default in Firefox 128.

This has already caught the eye of online privacy advocates, and led to headlines like “Now Mozilla too is selling user data”. The clamor got so loud that Firefox CTO, Bobby Holley, had to take to Reddit to explain to users what Mozilla actually did and why.

Now’s the time to take a closer look at what PPA is, why it’s needed in the first place, and why it’s appeared now.

Google Ad Topics and Facebook Link History

First, a bit of backstory. As you may recall, way back in 2019 the developers of the world’s most popular browser — Google Chrome — began hatching plans to completely disable support for third-party cookies.

These tiny files have been tracking user actions online for 30 years now. The technology is both the backbone of the online advertising industry, and the chief means of violating users’ privacy.

Some time ago, as a replacement, Google unveiled an in-house development called Ad Topics. With this technology, tracking is based on users’ Chrome browser history, and interaction history with Android apps. The rollout of Ad Topics was expected to be followed by the phasing out of support for third-party cookies in Chrome in H2 2024.

Another major digital advertising player to develop its own user-tracking technology is Meta, which likewise relies on third-party cookies. Called Link History, it makes sure that all external links in the Facebook mobile apps now get opened in its built-in browser — where the company can still snoop on your actions.

The bottom line is that ending support for third-party cookies hands even more control over to Google and Meta — owner of the world’s most popular browser and mobile OS, and of the world’s most popular social network, respectively — while smaller players will become even more dependent on them.

At the same time, user data continues to be collected on an industrial scale, and primarily by the usual suspects when it comes to claims of privacy violation: yes, Google and Facebook.

The question arises: is it not possible to develop some mechanism to allow advertisers to track the effectiveness of advertising without mass collection of user data? The answer comes in the shape of Privacy-Preserving Attribution.

Meet Prio, a privacy-preserving aggregation system

To better understand the history of this technology, we have to go back a bit in time — to 2017, when cryptographers Henry Corrigan-Gibbs and Dan Boneh of Stanford University presented a research paper. In it, they described a privacy-oriented system for collecting aggregated statistics, which they called Prio.

To greatly simplify matters, Prio is based on the following mechanism. Let’s say you’re interested in the average age of a certain number of users, but you want to preserve their privacy. You set up two (or more) piggy banks and ask each user to count out the number of coins corresponding to their age and, without showing them to anyone, randomly split the coins between the piggy banks.

Then you tip the coins out of the piggy banks into a single pile, count them, and divide by the number of users. The result is what you wanted: the average age of the users. And as long as at least one of the piggy banks keeps its secret (i.e., doesn’t tell anyone what went into it), it’s impossible to determine how many coins any one user contributed.
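The piggy-bank analogy maps directly onto what cryptographers call additive secret sharing. Here’s a minimal sketch in Python; the modulus, the two-share setup, and the function names are purely illustrative — real Prio works over finite fields and adds proofs and encryption on top:

```python
import secrets

# Illustrative sketch of additive secret sharing (the piggy-bank idea).
MOD = 2**32  # all arithmetic is done modulo a fixed value

def split_into_shares(value: int) -> tuple[int, int]:
    """Split a private value into two random shares, one per aggregator."""
    share_a = secrets.randbelow(MOD)       # looks like random noise
    share_b = (value - share_a) % MOD      # also looks like random noise
    return share_a, share_b

ages = [25, 34, 41, 19]  # each user's private input

# Each "piggy bank" (aggregator) only ever sees its own pile of shares.
agg_a, agg_b = 0, 0
for age in ages:
    a, b = split_into_shares(age)
    agg_a = (agg_a + a) % MOD
    agg_b = (agg_b + b) % MOD

# Combining the two aggregate sums reveals only the total,
# never any individual user's age.
total = (agg_a + agg_b) % MOD
print(total / len(ages))  # average age: 29.75
```

Neither aggregator can recover an individual age on its own: each share is uniformly random, and only the sum of both aggregates is meaningful.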

Prio’s main stages of information processing. Source

Prio overlays this basic mechanism with a lot of cryptography to protect the information from interception and to ensure the validity of the data received: there’s no way for users to slip bogus answers into the system that would distort the results. The main concept is the use of two or more aggregators, each collecting random shares of the sought information.
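Why does validity matter so much? Because the aggregators, by design, can’t see individual values — so without extra checks a single dishonest client could skew the result arbitrarily. A toy illustration in Python (in real Prio, clients attach zero-knowledge proofs that each submitted value lies in an allowed range, and such inputs are rejected):

```python
import secrets

MOD = 2**32

def split(value: int) -> tuple[int, int]:
    """Split a value into two random additive shares."""
    a = secrets.randbelow(MOD)
    return a, (value - a) % MOD

honest = [1, 0, 1, 1]   # e.g. "did you see the ad?" — must be 0 or 1
bogus = 1_000_000       # a cheater submits an impossible value

agg_a = agg_b = 0
for v in honest + [bogus]:
    a, b = split(v)
    # The cheater's shares are indistinguishable from honest ones,
    # so without a range proof the aggregators happily accept them.
    agg_a = (agg_a + a) % MOD
    agg_b = (agg_b + b) % MOD

print((agg_a + agg_b) % MOD)  # 1000003 instead of the true count of 3
```

This is precisely the hole that Prio’s validity proofs close: each client must prove its value is well-formed without revealing what it is.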

Prio’s algorithms have another key feature: they greatly improve system performance compared to previous methods of reliable anonymized data collection — by 50–100 times, say the researchers.

Distributed Aggregation Protocol

Mozilla got interested in Prio back in 2018. The first fruit of this interest was its development of the experimental system Firefox Origin Telemetry — based on Prio. Notably, this system was designed to privately gather telemetry on the browser’s ability to combat ad trackers.

Then, in February 2022, Mozilla unveiled Interoperable Private Attribution (IPA) technology, developed jointly with Meta, which appears to have served as the prototype for PPA.

May 2022 saw the publication of a zero draft of the Prio-based Distributed Aggregation Protocol (DAP). The draft was authored by representatives of Mozilla and the Internet Security Research Group (ISRG) — a non-profit known for the Let’s Encrypt project to democratize the use of HTTPS — as well as two Cloudflare employees.

While working on the protocol, ISRG was also building a DAP-based system for collecting anonymized statistics, known as Divvi Up. This system is primarily intended to collect technical telemetry for improving website performance, such as page load times.

Schematic of the basic operating principle of the DAP protocol. Source

Finally, in October 2023, Divvi Up and Mozilla announced a collaboration to implement DAP in the Firefox browser. As part of this joint effort, a system of two aggregators was created — one of which operates on the Mozilla side, the other on the Divvi Up side.

How PPA works

It’s this Divvi Up/Mozilla system that’s currently being deployed with PPA technology. So far, it’s just an experiment involving a limited number of sites.

In general outline, it works as follows:

The website asks the browser to remember instances of successful ad views.
If the user performs some action that the site considers useful (for example, buys a product), the site queries the browser to find out if the user saw the ad.
The browser doesn’t tell the site anything, but sends information through the DAP protocol to the aggregation servers.
All such reports are accumulated in aggregators, and the site periodically receives a summary.
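The four steps above can be simulated in a few lines of Python. Everything here is illustrative — the class names, the two-aggregator setup, and the report format are assumptions for the sketch, not the actual DAP wire protocol:

```python
import secrets
from collections import defaultdict

MOD = 2**32

class Browser:
    """Stores ad impressions locally; never reveals them to the site."""
    def __init__(self):
        self.impressions = set()

    def record_impression(self, ad_id):
        self.impressions.add(ad_id)                        # step 1

    def report_conversion(self, ad_id, agg_a, agg_b):
        converted = 1 if ad_id in self.impressions else 0  # step 2
        share_a = secrets.randbelow(MOD)                   # step 3:
        share_b = (converted - share_a) % MOD              # split & send
        agg_a.submit(ad_id, share_a)
        agg_b.submit(ad_id, share_b)

class Aggregator:
    """Accumulates shares; sees only random-looking numbers."""
    def __init__(self):
        self.sums = defaultdict(int)

    def submit(self, ad_id, share):
        self.sums[ad_id] = (self.sums[ad_id] + share) % MOD

agg_a, agg_b = Aggregator(), Aggregator()
users = [Browser() for _ in range(5)]
for u in users[:3]:
    u.record_impression("ad-42")      # 3 of 5 users saw the ad

for u in users:                       # all 5 later "buy the product"
    u.report_conversion("ad-42", agg_a, agg_b)

# Step 4: the site periodically receives only the combined summary.
conversions = (agg_a.sums["ad-42"] + agg_b.sums["ad-42"]) % MOD
print(conversions)  # 3 — with no idea which users those were
```

The site ends up with a single number per ad, while each aggregator holds only uniformly random shares — the same privacy property as in the piggy-bank example.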

As a result, the site learns that out of X number of users who saw a certain ad, Y number of users performed actions deemed useful for the site. But neither the site nor the aggregation system knows anything about who these users were, what else they did online, etc.

Why we need PPA

In the above-mentioned statement on Reddit, Firefox’s CTO explained what Mozilla was aiming for by introducing PPA along with the new version of its browser.

The company’s reasoning is roughly the following. Online advertising, at least at this stage of the internet’s development, is a necessary evil. And it’s understandable that advertisers want to be able to measure its effectiveness. But the tools currently used for this disregard user privacy.

Meanwhile, any talk about how to somehow restrict advertisers’ tracking of users’ actions is met with protests from the former. No data collection, they argue, means they’re deprived of a tool for assessing online advertising.

Basically, PPA is an experimental tool that allows advertisers to get the feedback they need without collecting and storing data on what users did.

If the experiment shows the technology can satisfy advertisers’ needs, it will give privacy advocates a weighty argument in future dealings with regulators and lawmakers. Broadly speaking, it will prove that total online surveillance is unnecessary, and should be limited by law.

Block third-party cookies now

As it happens, almost immediately after the uproar surrounding Mozilla’s new rollout, Google announced a complete reversal of its plans to disable third-party cookies. Getting rid of stale technology can be harder than it might seem — as Microsoft found out when trying to bury Internet Explorer.

The good news is that, unlike Internet Explorer, which is indeed hard to weed out of Windows, third-party cookies are something that users can handle on their own. All modern browsers make it easy to block them — see our guide for full details.

Bear in mind that Google’s refusal to get rid of cookies doesn’t spell the end of Ad Topics — the company intends to continue the experiment. So we recommend disabling this feature too, and here’s how to do it in Chrome and Android.

And if you use the Facebook mobile app, it’s worth turning off Link History. Again, our guide explains how.

Also, you can and should make use of the Private Browsing feature in our Kaspersky Standard, Kaspersky Plus and Kaspersky Premium subscription plans to block ad trackers (by no means all of which use cookies).

Lastly, we recommend using our free Privacy Checker service, where you can find instructions on setting up privacy for the most common applications, services and social networks for different operating systems.

As for PPA, the technology looks pretty useful. If you think otherwise, here are simple instructions to disable it in Firefox. Personally, I prefer to support the development of this technology, so I’ll continue to use it in my browser.

Kaspersky official blog – ​Read More