Transatlantic Cable podcast episode 349 | Kaspersky official blog

Episode 349 of the Transatlantic Cable podcast kicks off with a discussion of Microsoft’s newly announced Recall feature for Copilot+ PCs. The feature, touted as giving PCs a “photographic memory”, raises significant privacy concerns because it logs everything a user does by taking screenshots every few seconds. Privacy advocates fear the potential for exploitation by hackers and the implications of such extensive data collection.

Next, the podcast discusses the recent floods in Rio Grande do Sul, Brazil, and the rise of AI-generated misinformation during the disaster. The team highlights how false images and videos have been spreading on social media, complicating rescue efforts and public awareness.

The episode then delves into the vulnerabilities of high-end car keyless entry systems. Despite advancements like ultra-wideband communications, a recent demonstration by Chinese researchers showed that the latest Tesla Model 3 is still susceptible to relay attacks, allowing thieves to unlock and steal the vehicle with minimal equipment.

To wrap up, the team discusses the arrest of Lin Rui-siang, who was living a double life as an IT specialist and a dark web drug market operator. Lin, under the alias “Pharoah,” ran the Incognito Market, which facilitated over $100 million in narcotics sales before executing an exit scam and attempting to extort users. His arrest at JFK airport by the FBI brought an end to his criminal activities.

If you liked what you heard, please consider subscribing.

Microsoft’s AI screenshot function is being called a privacy nightmare.

Brazil’s flood disaster set off a torrent of AI misinformation.

Teslas can still be stolen with a cheap radio hack despite new keyless tech.

He Trained Cops to Fight Crypto Crime—and Allegedly Ran a $100M Dark-Web Drug Market.


The most dangerous CVEs of 2023 and 2024: fix these today

The number of software vulnerabilities discovered annually continues to grow, and is fast approaching the 30,000-per-year mark. But it’s important for cybersecurity teams to identify precisely which vulnerabilities attackers are actually exploiting. Changes in the list of criminals’ favorite vulnerabilities greatly influence which updates or countermeasures should be prioritized. That’s why we regularly monitor these changes. Here are the conclusions that can be drawn from our Exploit and Vulnerability Report for Q1 2024.

Vulnerabilities are becoming more critical, and exploits more readily available

Thanks to bug bounty programs and automation, vulnerability hunting has increased significantly in scale. This means vulnerabilities are discovered more frequently, and when researchers find an interesting attack vector, the first identified vulnerability is often followed by a whole series of others – as we recently saw with Ivanti solutions. 2023 set a five-year record for the number of critical vulnerabilities found. At the same time, vulnerabilities are becoming accessible to an ever-wider range of attackers and defenders: for more than 12% of discovered vulnerabilities, a proof of concept (PoC) became publicly available shortly after disclosure.

Exponential growth of Linux threats

Although the myth that “no one attacks Linux” has already been dispelled, many specialists still underestimate the scale of Linux threats. Over the last year, the number of exploited CVEs in Linux and popular Linux applications increased more than threefold. The lion’s share of exploitation attempts targets servers, as well as various devices based on *nix systems.

A striking example of attackers’ interest in Linux was the multi-year operation to compromise the XZ library and utilities in order to create an SSH backdoor in popular Linux distributions.

OSs contain more critical flaws, but other applications are exploited more often

Operating systems were found to contain the largest number of critical vulnerabilities with available exploits; however, critical defects in OSs are rarely useful for initial penetration of an organization’s information infrastructure. Therefore, if you look at the top vulnerabilities actually exploited in APT cyberattacks, the picture changes significantly.

In 2023, the top spot in the exploited vulnerabilities list changed hands: after MS Office held it for many years, WinRAR took its place with CVE-2023-38831, which many espionage and criminal groups used to deliver malware. However, the second, third, and fifth places in 2023 were still occupied by Office flaws, with the infamous Log4Shell joining them in fourth place. Two vulnerabilities in MS Exchange were also among the most frequently exploited.

In the first quarter of 2024, the situation changed completely: very convenient security holes opened up for attackers in internet-facing services, allowing mass exploitation – namely in the MSP application ConnectWise ScreenConnect, and also Ivanti’s Connect Secure and Policy Secure. In the popularity ranking, WinRAR dropped to third place, and Office disappeared from the top altogether.

Organizations are too slow in patching

Only three of last year’s top 10 exploited vulnerabilities were actually discovered in 2023. The rest of the actively exploited CVEs date back to 2022, 2020, and even 2017. This means that a significant number of companies either update their IT systems selectively or leave some issues unaddressed for years without applying any countermeasures at all. IT departments can rarely allocate enough resources to patch everything on time, so a smart medium-term solution is to invest in products that automatically detect vulnerable assets in the IT infrastructure and update software.
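To make the prioritization logic such products automate more concrete, here is a minimal Python sketch. The product names and “fixed in” versions are simplified illustrations (check vendor advisories for real data), and a real tool would pull them from a live vulnerability feed rather than a hard-coded dictionary.

```python
# Illustration only: the "fixed in" data below is simplified; real tools pull
# this from an up-to-date vulnerability feed and vendor advisories.
KNOWN_EXPLOITED = {
    "CVE-2023-38831": ("winrar", "6.23"),
    "CVE-2021-44228": ("log4j", "2.17.1"),
    "CVE-2024-27198": ("teamcity", "2023.11.4"),
}

# What the asset inventory says is actually installed
inventory = {"winrar": "6.20", "log4j": "2.17.1", "teamcity": "2023.11.3"}

def version_tuple(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

for cve, (product, fixed_in) in KNOWN_EXPLOITED.items():
    installed = inventory.get(product)
    if installed and version_tuple(installed) < version_tuple(fixed_in):
        print(f"PATCH FIRST: {product} {installed} is below {fixed_in} ({cve} is actively exploited)")
```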

The first weeks after a vulnerability is publicly disclosed are the most critical

Attackers try to take full advantage of newly published vulnerabilities, so the first weeks after an exploit appears see the most activity. This should be considered when planning update cycles. It’s essential to have a response plan in case a critical vulnerability appears that directly affects your IT infrastructure and requires immediate patching. Of course, the automation tools mentioned above greatly assist in this.
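As a rough illustration of such a response trigger, the sketch below polls a known-exploited-vulnerabilities feed and flags newly listed entries that mention products you actually run. The CISA KEV feed URL is our assumption of where the catalog lives at the time of writing, and the keyword watchlist is invented; a real response plan would feed matches into ticketing and patch workflows.

```python
import json
import urllib.request

# Feed URL assumed from CISA's public KEV catalog; verify it before relying on this.
KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
WATCHLIST = {"ivanti", "connectwise", "winrar", "exchange"}  # crude keyword match

with urllib.request.urlopen(KEV_FEED, timeout=10) as response:
    vulnerabilities = json.load(response)["vulnerabilities"]

for entry in vulnerabilities:
    product = f'{entry["vendorProject"]} {entry["product"]}'.lower()
    if any(keyword in product for keyword in WATCHLIST):
        print(f'{entry["cveID"]}: {product} (listed {entry["dateAdded"]}) - invoke response plan')
```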

New attack vectors

You can’t focus only on office applications and “peripheral” services. Depending on an organization’s IT infrastructure, significant risks can arise from the exploitation of other vectors — less popular but very effective for achieving specific malicious goals. Besides the already mentioned CVE-2024-3094 in XZ Utils, other vulnerabilities of interest to attackers include CVE-2024-21626 in runc — allowing escape from a container, and CVE-2024-27198 in the CI/CD tool TeamCity — providing access to software developer systems.

Protection recommendations

Maintain an up-to-date and in-depth understanding of the company’s IT assets, keeping detailed records of existing servers, services, accounts, and applications.

Implement an update management system that ensures the prompt identification of vulnerable software and patching. The Kaspersky Vulnerability Assessment and Patch Management solution combined with the Kaspersky Vulnerability Data Feed is ideal for this.

Use security solutions capable of both preventing the launch of malware and detecting and stopping attempts to exploit known vulnerabilities on all computers and servers in your organization.

Implement a comprehensive multi-level protection system that can detect anomalies in the infrastructure and targeted attacks on your organization, including attempts to exploit vulnerabilities and the use of legitimate software by attackers. For this, the Kaspersky Symphony solution, which can be adapted to the needs of companies of varying size, is perfectly suited.


New security and privacy features in Android 15 | Kaspersky official blog

At the recent I/O 2024 developer conference in California, Google presented the second beta version of its Android 15 operating system — codenamed Vanilla Ice Cream. The company also gave us a closer look at the new security and privacy features coming with the update.

While the final release of Android 15 is still a few months away — slated for the third quarter of 2024 — we can already explore the new security features this operating system has in store for Android users.

AI-powered smartphone theft protection

The most significant security upgrade (but by no means the only one) is a suite of new features designed to protect against theft of the smartphone and the user data contained within. Google plans to make some of these features available not only in Android 15 but also for older versions of the operating system (starting with Android 10) through service updates.

First up is factory reset protection. To prevent thieves from wiping a stolen phone and quickly selling it, Android 15 will let you set up a lock that prevents resetting the device without the owner’s password.

Android 15 will also introduce a so-called “private space” for apps. Selected apps – such as banking apps or instant messengers – can be hidden and protected with an additional PIN code, preventing thieves from accessing sensitive data.

Android 15 will feature a “private space” to hide and protect selected apps with a separate PIN code

Furthermore, Google plans to add protection for the most critical settings in case a thief manages to get hold of an unlocked smartphone. Disabling Find My Device or changing the screen lock timeout will require authentication using a PIN, password, or biometrics.

But that’s not all: there’ll also be protection against thieves who’ve snooped on or otherwise obtained the PIN code. Accessing critical settings like changing the PIN, disabling anti-theft, or using passkeys will require biometric authentication. According to Google, this settings protection will be available on some devices “later this year”.

Additional anti-theft features in Android

Now let’s talk about the new features that will be available not only in Android 15 but also in versions 10 and above. First, there’s AI-powered, accelerometer-based automatic screen locking. The screen will automatically lock if the system detects movements characteristic of someone snatching the phone and quickly running or driving away.

Android will automatically lock if it detects movement patterns indicative of smartphone theft

Additionally, the smartphone will automatically lock if a thief tries to keep it disconnected from the internet for a long time. Automatic locking can also be set for other situations – for example, after a significant number of unsuccessful authentication attempts. Finally, Android will feature remote locking – allowing you to lock the phone’s screen from a different device.

Smartphones can also be remotely locked

Protection of personal data when screen sharing and recording

Android 15 also focuses on protecting user data from scams such as fake tech support. Attackers might ask the user to share their screen (or record their actions and send a video) and instruct them to perform dangerous actions (such as logging in to an account). This way, scammers can obtain valuable information like login credentials, financial data, and so on.

First, screen sharing in Android 15 will (by default) only share the specific app the user is interacting with, and not the system interface (such as the status bar and notifications, which might contain personal information). But switching to full-screen sharing will still be possible if needed.

Android 15 will hide notification content during screen sharing

Second, regardless of the screen sharing mode, the system will only display notification content if the app developer has provided a special “public version” for it. Otherwise the content will be hidden.

Third, Android 15 will automatically detect and hide windows that contain one-time passwords. If a user opens an app window with a one-time password (for example, Messages) while sharing or recording their screen, the window contents won’t be displayed. Additionally, Android 15 will automatically hide login, password, and card data entered during screen sharing.

During screen sharing, Android 15 will automatically detect and hide windows containing one-time passwords

These measures protect not only against attackers specifically targeting user data, but also against accidental disclosure of personal information during screen sharing or recording.

Enhanced Restricted Settings

We’ve already discussed the so-called Restricted Settings that Android features from version 13 onward. This is additional protection against the misuse of two potentially dangerous features — access to notifications and Accessibility services.

You can read about the risks associated with these features at the link above. Here, let’s briefly recall the main idea of this protection: Restricted Settings prevent users from granting permission to these features for apps not downloaded from the app store.

When a user tries to grant dangerous permissions to an app downloaded from outside the store, a window titled “Restricted Settings” appears

Unfortunately, in both Android 13 and 14, this protective mechanism is very easy to bypass. The problem is that the system determines whether an app was downloaded from the store or not by the method used to install it. This allows a malicious app downloaded from any source using an “incorrect” method to subsequently install another malicious app using the “correct” method.

As a result of this two-step process, the second app is no longer considered dangerous, isn’t subject to restrictions, and can both request and gain access to notifications and Accessibility services.

In Android 15, Google plans to use a slightly different mechanism called Enhanced Confirmation Mode. From the user’s perspective, nothing will change — the interface will function as before. However, “under the hood”, instead of checking the app installation method, this mechanism will refer to an XML file built into the operating system containing a list of trusted installers.

Simply put, Google is going to hardcode a list of safe sources for downloading apps. Apps downloaded from elsewhere will be automatically blocked from accessing notifications and Accessibility services. Whether this will close the loophole, we’ll find out after the official release of Android 15.

Protecting one-time codes in notifications

In addition to the improved Restricted Settings, Android 15 will feature additional protection against apps intercepting one-time passwords when accessing notifications from other apps.

Here’s how it works: when an app requests access to a notification, the operating system analyzes the notification and removes the one-time password from its contents before passing it to the app.
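Conceptually, the idea resembles the toy redactor below. This is only a hedged Python sketch of the general approach (spot text that looks like a one-time code and mask it before an app with notification access sees it), not Android’s actual implementation; the pattern and keywords are invented.

```python
import re

OTP_PATTERN = re.compile(r"\b\d{4,8}\b")          # 4-8 digit codes
OTP_KEYWORDS = ("code", "otp", "password", "verification")

def redact_otp(notification_text: str) -> str:
    """Return the notification text with likely one-time codes masked."""
    if any(keyword in notification_text.lower() for keyword in OTP_KEYWORDS):
        return OTP_PATTERN.sub("******", notification_text)
    return notification_text

print(redact_otp("Your verification code is 482913"))  # -> Your verification code is ******
```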

However, some app categories – for example, apps for wearables connected through the Companion Device Manager – will still have access to the full content of notifications. Therefore, malware creators may be able to exploit this loophole to continue intercepting one-time passwords.

Warnings about insecure cellular networks

Android 15 will also introduce new features to protect against attackers using malicious cellular base stations to intercept data or spy on smartphone owners.

Firstly, the operating system will warn users if their cellular connection is unencrypted — meaning their calls and text messages could be intercepted in plain text.

Android 15 will warn about insecure cellular connections

Secondly, Android 15 will notify users if a malicious base station or specialized tracking device is recording their location using their device identifiers (IMSI or IMEI). To do this, the operating system will monitor requests from the cellular network for these identifiers.

It should be noted that both these functions must be supported by the smartphone’s hardware. Therefore, they’re unlikely to appear on older devices upgraded to Android 15. Even among new models initially shipping with Vanilla Ice Cream, probably not all will support these features — it’ll be up to the smartphone manufacturers whether to implement these functions or not.

New app protection features

Next up in the Android 15 security enhancements are improvements to the Play Integrity API. This service allows Android app developers to identify fraudulent activity within their apps, as well as instances where the user is at risk, and use various additional security measures in such cases.

In particular, in Android 15, app developers will be able to check if another app is running simultaneously with their app and recording the screen, displaying its windows on top of their app’s interface, or controlling the device on behalf of the user. If such threats are detected, developers can, for example, hide certain information or warn the user about the threat.

Play Integrity API enables app developers to detect malicious activity and take steps to mitigate threats

Developers will also be able to check if Google Play Protect is running on the device and if any known malware has been detected in the system. Again, if a threat is detected, the app can restrict certain actions, request additional confirmation from the user, and so on.

On-device Google Play Protect

Finally, another security innovation in Android 15 is that Google Play Protect will now operate not only within the official Google Play app store but also directly on user devices. Google calls this “live threat detection”.

The operating system (with the help of AI) will analyze app behavior — in particular, the use of dangerous permissions and interaction with other apps and services. If potentially dangerous behavior is detected, the app will be sent to Google Cloud for review.

“Unsafe app” warning from Google Play Protect

Does this mean you can now ditch your third-party antivirus for Android? Not so fast, tiger. Ultimately, the effectiveness of anti-malware protection depends on how thoroughly a vendor can search for and study new threats.

Automation is certainly important here — that’s why we started using machine learning for threat research many years ago, long before it became trendy. But the work of human experts is equally crucial. And on this score, as numerous cases of malware infiltrating Google Play demonstrate, Google is still not doing so well — often lacking the resources to solve this problem.

Therefore, we recommend using a comprehensive security solution on all your Android devices – including those running Android 15. It’ll complement the new privacy and security features perfectly. Moreover, much of what will only be introduced in the upcoming update – for example, theft protection, finding your device, and protecting individual apps with a PIN – we implemented long ago, and we support it even on older versions of Android. Check out this detailed review of the most interesting features in Kaspersky: Antivirus & VPN.


Unsaflok: how to forge keycards for Saflok locks | Kaspersky official blog

A group of researchers has published information about the so-called Unsaflok attack, which exploits a number of vulnerabilities in dormakaba’s Saflok hotel door locks. We explain how this attack works, why it’s dangerous, and how hotel owners and guests can protect themselves against it.

How the Unsaflok attack works

The most important thing to know about the Unsaflok attack is that it permits the forging of keycards for electronic Saflok locks, which are widely used in hotels around the world. All an attacker needs is an RFID keycard from a targeted hotel where these locks are installed. Getting hold of one is easy: for example, the keycard to the attacker’s own room would suffice. Data obtained from this card would be enough to program a keycard that can open any door in the hotel.

No particularly exotic equipment is required for this either: to read legitimate keycards and forge new ones, an attacker can use a laptop with an RFID card reader/writer connected to it. Even a regular Android smartphone with NFC can do the trick.

A laptop with a contactless smart-card reader/writer can be used to forge keycards. However, a regular Android smartphone with NFC would also do. Source

Various hacking tools that work with RFID — such as the popular Flipper Zero or the somewhat more exotic Proxmark3 — can also be used for the Unsaflok attack.

It turns out the researchers discovered the possibility of attacking Saflok locks back in 2022. However, adhering to responsible vulnerability disclosure procedures, they gave the manufacturer considerable time to develop protective measures and begin updating the locks. To protect hotels and their guests, full details of the attack mechanism and the proof of concept have not yet been published. The researchers promise to share more details about Unsaflok in the future.

Which locks are vulnerable to the Unsaflok attack

According to researchers, all locks using the dormakaba Saflok system are vulnerable to the attack, including (but not limited to) the RT Series, MT Series, Quantum Series, Saffire Series, and Confidant Series. According to the dormakaba website, Saflok locks have been manufactured since 1988 — for more than 30 years.

The Saflok RT series is one of the most common types of dormakaba Saflok locks. Source

How common are these locks? As the researchers themselves say, vulnerable Saflok locks are used in over 13,000 hotels in 131 countries worldwide – installed on around three million doors. If the data stating that there are a total of 17.5 million hotel rooms in the world is to be believed, roughly one in six hotel door locks is vulnerable to the Unsaflok attack.

dormakaba developed an update that protects against the Unsaflok attack and began updating the locks in November 2023. However, we’re talking about thousands of hotels and millions of locks, each of which must be individually updated or completely replaced, along with vast quantities of related equipment, so the update process is taking considerable time. According to the researchers, by March 2024, 36% of the vulnerable locks had been updated.

Safety tips for guests

Saflok locks are easy to recognize — the most popular series, which you’re most likely to encounter in hotels, were shown in the illustrations above. And here you can see what the other models of vulnerable locks look like.

However, it’s not possible to distinguish a vulnerable lock from an already updated one by appearance – outwardly they look exactly the same. The type of keycard can help with that: if the hotel uses MIFARE Classic keycards with Saflok locks, these locks are still vulnerable to the Unsaflok attack. If the hotel has already switched to MIFARE Ultralight C keycards, that’s a sign the locks have been updated. You can determine the keycard type using the NFC TagInfo by NXP app (Android, iOS).

The researchers emphasize that the mere use of MIFARE Classic keycards doesn’t necessarily mean that the hotel’s locks are insecure — other lock systems that use these same cards haven’t been found to have problems. The danger lies specifically in the combination of MIFARE Classic cards and Saflok locks. If you come across this combo, be aware that the lock may not provide reliable protection against unauthorized entry into the given room.

It’s worth noting that the internal latch in Saflok locks is also electronically controlled and can be opened with a keycard – including a forged one. Using it to protect against intrusion is therefore pointless. Instead, you should lock the door with a chain, or with a separate deadbolt if there is one.

Safety tips for hotel owners

The researchers note that they aren’t aware of any real-life cases of the Unsaflok attack being used against hotels. However, they don’t rule out the possibility that someone had already discovered the vulnerabilities in Saflok locks before them — after all, these locks have been on the market for several decades.

Therefore, it’s quite possible that malicious actors are already using this attack to break into hotel rooms, and since such an intrusion looks the same as legitimate use of the lock, it’s not so easy to notice a break-in.

The researchers mention that it’s possible to detect an Unsaflok attack by examining the entry/exit logs using the Saflok HH6 programmer: due to the nature of the vulnerability, entries made with a forged keycard might be attributed to an “incorrect keycard or incorrect employee”.

And of course, the main advice: eliminate the vulnerabilities in your dormakaba Saflok locks so as not to put your clients and their property at risk. As you might guess, this means updating your locks as soon as possible. For questions regarding updating Saflok locks, contact the manufacturer’s technical support service.


Transatlantic Cable podcast episode 348 | Kaspersky official blog

Episode 348 of the Transatlantic Cable podcast kicks off with news that Google plan to introduce a new AI tool to help detect if you’re being scammed in a phone call – a boon for those who fall prey to scams. From there the team discuss news that Scarlett Johansson isn’t best pleased about ChatGPT’s new voice, which sounds eerily similar to her own.

To wrap up, the team discuss two stories: the first about how an ‘AI porn-maker’ (yes, people, that’s now a job) accidentally leaked his own customer data. The second story centres around BT’s decision to move away from copper-cable landlines in the UK to an all-digital future – and it’s got several people annoyed.

If you liked what you heard, please consider subscribing.

Android is getting an AI-powered scam call detection feature
ChatGPT suspends Scarlett Johansson-like voice as actor speaks out against OpenAI
Nonconsensual AI Porn Maker Accidentally Leaks His Customers’ Emails
BT scraps digital landline switch deadline


Is it possible to spy on keystrokes from an Android on-screen keyboard? | Kaspersky official blog

“Hackers can spy on every keystroke of Honor, OPPO, Samsung, Vivo, and Xiaomi smartphones over the internet” – alarming headlines like this have been circulating in the media over the past few weeks. Their origin was a rather serious study on vulnerabilities in keyboard traffic encryption. Attackers who are able to observe network traffic, for example, through an infected home router, can indeed intercept every keystroke and uncover all your passwords and secrets. But don’t rush to trade in your Android for an iPhone just yet – this only concerns Chinese language input using the pinyin system, and only if the “cloud prediction” feature is enabled. Nevertheless, we thought it would be worth investigating the situation with other languages and keyboards from other manufacturers.

Why many pinyin keyboards are vulnerable to eavesdropping

The pinyin writing system, also known as the Chinese phonetic alphabet, helps users write Chinese words using Latin letters and diacritics. It’s the official romanization system for the Chinese language, adopted by the UN among others. Drawing Chinese characters on a smartphone is rather inconvenient, so the pinyin input method is very popular, used by over a billion people, according to some estimates. Unlike many other languages, word prediction for Chinese, especially in pinyin, is difficult to implement directly on a smartphone – it’s a computationally complex task. Therefore, almost all keyboards (or more precisely, input methods – IMEs) use “cloud prediction”, meaning they instantaneously send the pinyin characters entered by the user to a server and receive word completion suggestions in return. Sometimes the “cloud” function can be turned off, but this reduces the speed and quality of the Chinese input.

To predict the text entered in pinyin, the keyboard sends data to the server

Of course, all the characters you type are accessible to the keyboard developers due to the “cloud prediction” system. But that’s not all! Character-by-character data exchange requires special encryption, which many developers fail to implement correctly. As a result, all keystrokes and corresponding predictions can be easily decrypted by outsiders.
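For comparison, here is a minimal sketch of what a correctly protected exchange could look like, assuming a purely hypothetical prediction endpoint and payload format: the typed prefix travels over ordinary TLS with certificate and hostname verification, instead of the home-grown or misconfigured encryption the researchers found.

```python
import json
import ssl
import urllib.request

PREDICTION_URL = "https://ime.example.com/cloud-predict"  # hypothetical endpoint

def fetch_predictions(pinyin_prefix: str) -> list[str]:
    payload = json.dumps({"prefix": pinyin_prefix}).encode("utf-8")
    request = urllib.request.Request(
        PREDICTION_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    # create_default_context() verifies the server certificate and hostname;
    # several of the vulnerable IMEs skipped this or rolled their own ciphers.
    context = ssl.create_default_context()
    with urllib.request.urlopen(request, context=context, timeout=5) as response:
        return json.loads(response.read()).get("candidates", [])
```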

You can find details about each of the errors found in the original source, but overall, of the nine keyboards analyzed, only the pinyin IME in Huawei smartphones had correctly implemented TLS encryption and resisted attacks. However, IMEs from Baidu, Honor, iFlytek, OPPO, Samsung, Tencent, Vivo, and Xiaomi were found to be vulnerable to varying degrees, with Honor’s standard pinyin keyboard (Baidu 3.1) and QQ pinyin failing to receive updates even after the researchers contacted the developers. Pinyin users are advised to update their IME to the latest version, and if no updates are available, to download a different pinyin IME.

Do other keyboards send keystrokes?

There is no direct technical need for this. For most languages, word and sentence endings can be predicted directly on the device, so popular keyboards don’t require character-by-character data transfer. Nevertheless, data about entered text may be sent to the server for personal dictionary synchronization between devices, for machine learning, or for other purposes not directly related to the primary function of the keyboard – such as advertising analytics.

Whether you want such data to be stored on Google and Microsoft servers is a matter of personal choice, but it’s unlikely that anyone would be interested in sharing it with outsiders. At least one such incident was publicized in 2016 – the SwiftKey keyboard was found to be predicting email addresses and other personal dictionary entries of other users. After the incident, Microsoft temporarily disabled the synchronization service, presumably to fix the errors. If you don’t want your personal dictionary stored on Microsoft’s servers, don’t create a SwiftKey account, and if you already have one, deactivate it and delete the data stored in the cloud by following these instructions.

There have been no other widely known cases of typed text being leaked. However, research has shown that popular keyboards actively monitor metadata as you type. For example, Google’s Gboard and Microsoft’s SwiftKey send data about every word entered: language, word length, the exact input time, and the app in which the word was entered. SwiftKey also sends statistics on how much effort was saved: how many words were typed in full, how many were automatically predicted, and how many were swiped. Considering that both keyboards send the user’s unique advertising ID to the “headquarters”, this creates ample opportunity for profiling – for example, it becomes possible to determine which users are corresponding with each other in any messenger.

If you create a SwiftKey account and don’t disable the “Help Microsoft improve” option, then according to the privacy policy, “small samples” of typed text may be sent to the server. How this works and the size of these “small samples” is unknown.

“Help Microsoft improve”… what? Collecting your data?

Google allows you to disable the “Share Usage Statistics” option in Gboard, which significantly reduces the amount of information transmitted: word lengths and apps where the keyboard was used are no longer included.

Disabling the “Share Usage Statistics” option in Gboard significantly reduces the amount of information collected

In terms of cryptography, data exchange in Gboard and SwiftKey did not raise any concerns among the researchers, as both apps rely on the standard TLS implementation in the operating system and are resistant to common cryptographic attacks. Therefore, traffic interception in these apps is unlikely.

In addition to Gboard and SwiftKey, the authors also analyzed the popular AnySoftKeyboard app. It fully lived up to its reputation as a keyboard for privacy diehards by not sending any telemetry to servers.

Is it possible for passwords and other confidential data to leak from a smartphone?

An app doesn’t have to be a keyboard to intercept sensitive data. For example, TikTok monitors all data copied to the clipboard, even though this function seems unnecessary for a social network. Malware on Android often activates accessibility features and administrator rights on smartphones to capture data from input fields and directly from files of “interesting” apps.

On the other hand, an Android keyboard can “leak” not only typed text. For example, the AI.Type keyboard caused a data leak for 31 million users. For some reason, it collected data such as phone numbers, exact geolocations, and even the contents of address books.

How to protect yourself from keyboard and input field spying

Whenever possible, use a keyboard that doesn’t send unnecessary data to the server. Before installing a new keyboard app, search the web for information about it – if there have been any scandals associated with it, it will show up immediately.
If you’re more concerned about the keyboard’s convenience than its privacy (we don’t judge, the keyboard is important), go through the settings and disable the synchronization and statistics transfer options wherever possible. These may be hidden under various names, including “Account”, “Cloud”, “Help us improve”, and even “Audio donations”.
Check which Android permissions the keyboard needs and revoke any that it doesn’t need. Access to contacts or the camera is definitely not necessary for a keyboard.
Only install apps from trusted sources, check the app’s reputation, and, again, don’t give it excessive permissions.
Use comprehensive protection for all your Android and iOS smartphones, such as Kaspersky Premium.


Updating our SIEM system to version 3.0.3 | Kaspersky official blog

For many InfoSec teams, security information and event management (SIEM) is at the heart of what they do. A company’s security depends to a large extent on how well its SIEM system allows experts to focus directly on combating threats and avoid routine tasks. That’s why almost every update of our Kaspersky Unified Monitoring and Analysis Platform is aimed at improving the user interface, automating routine processes and adding features to make the work of security teams easier. Many of the improvements are based on feedback from our customers’ InfoSec experts. In particular, the latest version of the platform (3.0.3) introduces the following features and improvements.

Writing filter conditions and correlation rules as code

Previously, analysts had to set filters and write correlation rules by clicking the conditions they needed. In this update, the redesigned interface now allows advanced users to write rules and conditions as code. Builder mode remains: filter and selector conditions are automatically translated between builder and code modes.

Same rule condition in builder and code modes

What’s more, builder mode also lets you write conditions using the keyboard. As soon as you start entering a filter condition, Kaspersky Unified Monitoring and Analysis Platform will suggest suitable options from event fields, dictionaries, active lists, etc. To narrow down the range of options, simply enter the appropriate prefix. For your convenience, condition types are highlighted in different colors.

Code mode lets you quickly edit correlation rule conditions, as well as select and copy conditions as code and easily transfer them between different rules or different selectors within a rule. The same code blocks can also be moved to filters (a separate system resource), which greatly simplifies their creation.
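To illustrate the difference in the abstract (this is plain Python, not the platform’s own rule syntax, and the field names are invented): the same selector condition expressed as builder-style structured data and as a single line of code that can be copied between rules.

```python
event = {"EventID": 4624, "DestinationUserName": "svc_backup"}

# "Builder" view: the condition as structured data assembled click by click
builder_condition = {
    "all": [
        {"field": "EventID", "op": "equals", "value": 4624},
        {"field": "DestinationUserName", "op": "startsWith", "value": "svc_"},
    ]
}

# "Code" view: the same condition as one copyable, diffable expression
matches = event["EventID"] == 4624 and event["DestinationUserName"].startswith("svc_")
print(matches)  # True
```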

Extended event schema

Kaspersky Unified Monitoring and Analysis Platform retains the Common Event Format (CEF) as the basis for the event schema, but we have added the ability to create custom fields, which means you can now implement any taxonomy. You’re no longer limited to vendor-defined fields: you can name event fields anything you want to make it easier to write search queries. Custom fields are typed and must begin with a prefix that determines the field’s type, including array types. Fields with arrays can only be used in JSON and KV normalizers.

Example of normalization using CEF fields and custom fields
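As a toy example of the idea (the field names and type prefixes below are invented for illustration, not the product’s actual naming rules), a JSON normalizer can keep the familiar CEF-style fields and add typed custom fields, including arrays:

```python
import json

raw = '{"src": "10.0.0.5", "user": "alice", "tags": ["vpn", "admin"], "latency_ms": 42}'
event = json.loads(raw)

normalized = {
    # standard CEF-style fields
    "SourceAddress": event["src"],
    "SourceUserName": event["user"],
    # custom fields with hypothetical type prefixes (string array, integer)
    "SA.tags": event["tags"],
    "I.latency_ms": event["latency_ms"],
}
print(normalized)
```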

Automatic identification of event source

Kaspersky Unified Monitoring and Analysis Platform administrators no longer need to set up a separate collector for each event type or open firewall ports for each collector – in the new version we have implemented the ability to collect events of different formats with a single collector. The collector selects the correct normalizer based on the source IP address. Chains of normalizers are also supported. For example, the [OOTB] Syslog header normalizer accepts events from multiple servers and allows you to define a DeviceProcessName and direct BIND events to the [OOTB] BIND Syslog normalizer and Squid events to the [OOTB] Squid access Syslog normalizer.

Kaspersky Unified Monitoring and Analysis Platform: Event parsing

The following event normalization options are now available:

1 collector – 1 normalizer. We recommend using this method if you have many events of the same type or many IP addresses from which events of the same type may originate. In terms of SIEM performance, configuring a collector with only one normalizer would be optimal.

1 collector – multiple normalizers, based on IP addresses. This method is available for collectors with a UDP, TCP or HTTP connector. If a UDP, TCP or HTTP connector is specified in the collector at the Transport step, then at the Event Parsing step, on the Parsing settings tab, you can specify multiple IP addresses and select which normalizer to use for events arriving from those addresses. The following types of normalizers are available: JSON, CEF, regexp, Syslog, CSV, KV, XML. For Syslog or regexp normalizers, you can specify additional normalization conditions depending on the value of the DeviceProcessName field.
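The dispatch logic boils down to something like the following sketch (the IP addresses, normalizer names, and parsing details are invented for illustration): one collector receives everything, and the source address decides which normalizer handles the event.

```python
NORMALIZER_BY_SOURCE = {
    "10.0.1.10": "bind_syslog",    # DNS server
    "10.0.1.20": "squid_access",   # proxy server
}

def normalize(source_ip: str, raw_event: str) -> dict:
    normalizer = NORMALIZER_BY_SOURCE.get(source_ip, "generic_syslog")
    if normalizer == "bind_syslog":
        return {"DeviceProduct": "BIND", "Message": raw_event}
    if normalizer == "squid_access":
        return {"DeviceProduct": "Squid", "Message": raw_event}
    return {"DeviceProduct": "Unknown", "Message": raw_event}

print(normalize("10.0.1.20", "GET http://example.com/ 200"))
```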

These are by no means the only updates to Kaspersky Unified Monitoring and Analysis Platform. There are also changes related to context tables, simplified binding of rules to correlators and other improvements. All of them are designed to improve the user experience for InfoSec professionals – see the full list here. To learn more about our SIEM system, Kaspersky Unified Monitoring and Analysis Platform, please visit the official product page.


Transatlantic Cable podcast episode 347 | Kaspersky official blog

Episode 347 of the Transatlantic Cable podcast begins with news that Dell have been hit by a data breach; however, details on the breach are scarce. Following that, the team discuss another data breach, this time affecting Europol.

To wrap up, the team discuss two stories: the first around Spanish police pulling data on suspects from sources such as Proton Mail and Apple. The final story covers Securelist’s latest APT report, looking at Q1 2024.

If you liked what you heard, please consider subscribing.

Dell Discloses Data Breach As Hacker Sells 49 Million Customer Data
Europol Hacked? IntelBroker Claims Major Law Enforcement Breach
Encrypted services Apple, Proton and Wire helped Spanish police identify activist
APT trends report Q1 2024


Two-stage Dropbox spear phishing | Kaspersky official blog

Phishers are increasingly using sophisticated targeted attacks. In addition to leveraging a variety of legitimate online services, they employ social engineering to trick the victim into following a link. We recently uncovered another in a series of unconventional multi-stage phishing schemes that merits at least a warning to employees who handle financial documents.

The first email

The attack begins with an email to the victim that appears to be from a real auditing firm. In it, the sender says that they tried to send an audited financial statement, but it was too large to email, so it had to be uploaded to Dropbox. Note that the email is sent from a real address on the company’s mail server (the attackers most likely hijacked the mailbox).

The first email from an “auditing firm” is intended to soften up the victim

From the perspective of any mail security system, this email is perfectly legitimate – indistinguishable from normal business correspondence. It contains no links, comes from a legitimate company address, and merely informs the recipient of a failed attempt to send an audit via email. This message is bound to get the attention of the accountant reading it. It contains a disclaimer that the content is confidential and intended solely for the recipient, and the company in whose name it was sent has a large online presence. All in all, it looks pretty convincing.

The only small red flag is the information that the report had to be resent using Dropbox Application Secured Upload. There is no such thing. A file uploaded to Dropbox can be password-protected, but nothing more. The real purpose of this phrase is presumably to prepare the recipient for the fact that some form of authentication will be required to download the report.

The second email

Next comes a notification directly from Dropbox itself. It states that the auditor from the previous email has shared a file called “audited financial statements” and asked that it be reviewed, signed, and returned for processing.

A perfectly normal Dropbox notification stating that a file has been shared with the recipient

There is nothing suspicious about this email either. It contains a link to a perfectly legitimate online data storage service (which is why they use Dropbox). If the notification had arrived without any accompanying message, it would most likely have been ignored. However, the recipient has been primed, so they are more likely to go to the Dropbox website and try to view the document.

Dropbox file

When the victim clicks the link, they see a blurred document and a window opens on top of it requesting authentication using office credentials. Here, however, seeing is not believing, for both the blurred background and the window with a button are in fact parts of a single image inserted into a PDF file.

PDF file uploaded to Dropbox that mimics an authentication request

The victim doesn’t even need to click the VIEW DOCUMENT button – the entire surface of the image is essentially one big button. The link underneath it leads (via an intermediate site with a redirect) to a script that launches a form to enter login credentials – just what the attackers want.
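Incidentally, the real destination of such a “button” is easy to check without clicking: the link lives in the PDF’s annotations. Below is a small sketch using the pypdf library (the filename is hypothetical) that prints every URI found in a file.

```python
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("audited_financial_statements.pdf")  # hypothetical file name
for page_number, page in enumerate(reader.pages, start=1):
    for annotation in page.get("/Annots") or []:
        obj = annotation.get_object()
        action = obj.get("/A")
        if action and action.get("/URI"):
            print(f"page {page_number}: link -> {action['/URI']}")
```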

All company employees need to be aware that work passwords should only be entered on sites that clearly belong to their company. Neither Dropbox nor external auditors should know your work password and therefore cannot verify its authenticity.

How to stay safe

As attackers come up with ever more sophisticated schemes to steal corporate credentials, we recommend implementing solutions that provide information security on multiple levels. First, use corporate mail server protection, and second, install a security solution with reliable anti-phishing technologies on all internet-facing work devices.


How carmakers sell driver data to insurers | Kaspersky official blog

Early in the movie “The Fifth Element”, there is a sequence that shows the dystopian nature of the future world: Korben Dallas’s smart taxi fines him for a traffic violation and revokes his license. Back in 1997, this seemed like science fiction – and it was. Today it’s turning into reality. But first things first.

Not so long ago, we looked at the potential dangers associated with the amount of data modern vehicles collect about their owners. Then, even more recently, an investigation revealed what this might mean in practice for drivers.

It turns out that carmakers, through specialized data brokers, are already selling telematics data to insurance companies, who are using it to raise the cost of insurance for careless drivers. Most alarming of all, however, is that car owners are often kept in the dark about all of this. Let’s investigate further.

Gamification of safe driving with far-reaching consequences

It all started in the US, when owners of vehicles made by General Motors (parent company of the Chevrolet, Cadillac, GMC, and Buick brands) noticed a sharp rise in their auto insurance premiums compared to the previous period. The reason, it transpired, was the practice of risk profiling by the data broker LexisNexis. LexisNexis works with auto insurers to supply them with driver information – usually about accidents and traffic fines. But the vehicle owners hit by the premium hike had no history of accidents or dangerous driving!

The profiles compiled by LexisNexis were found to contain detailed data on all trips made in the insured vehicle, including start and end times, duration, distance and, crucially, all instances of hard acceleration and braking. And it was this data that insurers were using to increase insurance premiums for less-than-perfect drivers. Where did the data broker get such detailed information?

From General Motors’ OnStar Smart Driver. That is the name of the “safe driving gamification” feature built into General Motors vehicles and the myChevrolet, myCadillac, myGMC, and myBuick mobile apps. The feature tracks hard acceleration and braking, speeding, and other dangerous events, and rewards “good” driving with virtual awards.

The OnStar Smart Driver safe driving gamification feature is built into myChevrolet, myCadillac, myGMC, and myBuick mobile apps by General Motors. Source

What’s more, according to some car owners, they didn’t enable the feature themselves – the car dealer did it for them. Crucially, neither General Motors’ apps nor the terms of use explicitly warned users that OnStar Smart Driver data would be shared with insurance-related data brokers.

This lack of transparency extended to the privacy statement on the OnStar website. While the statement mentions the possibility of sharing collected data with third parties, insurers are not specifically listed, and the text generally aims for maximum vagueness.

Along the way, LexisNexis was discovered to be working with three other automakers besides General Motors – Kia, Mitsubishi, and Subaru – all of which have similar safe driving gamification programs under names like “Driving Score” or “Driver Feedback”.

According to the LexisNexis website, the companies that work with the data broker include General Motors, Kia, Mitsubishi, and Subaru. Source

At the same time, another data broker – Verisk – was found to be providing telematics data to car insurers. Its automotive clients include General Motors, Honda, Hyundai, and Ford.

Another broker, Verisk, lists General Motors, Honda, Hyundai, and Ford in its telematics sales service description. Source

As a result, many drivers found themselves, in effect, locked into a car insurance policy with costs based on driving habits. It’s just that such programs used to be voluntary, offering a basic discount for participation – and even then, most drivers opted out. Now it appears that carmakers are enrolling customers not only without their consent, but without their knowledge.

According to available information, this is currently only happening to drivers in the US. But what starts in the States usually migrates, so similar practices may soon appear in other regions.

How to protect yourself from data-hungry cars

Unfortunately, there is no silver bullet to stop your automobile from harvesting data. Most new vehicles already come with built-in telematics collection as standard, and their share is only going to grow: in a year or two, such cars will make up more than 90% of the market. Naturally, the maker of your car won’t make it easy – or even possible – to turn off telematics.

If you take seriously the fact that your car collects data about you for third parties (or, to put it plainly, spies on you), read our post with detailed tips on how to try to escape surveillance by carmakers. Spoiler alert: it’s not easy and requires careful study of the documentation, as well as sacrificing some of the benefits of connected cars, so these tips won’t work for everyone.

As for the scenario described in this post of selling driver data to insurers, our advice is to search the in-vehicle menu and mobile app for a safe driving gamification feature and disable it. It may be called “Smart Driver”, “Driving Score”, “Driver Feedback”, or something similar. US-based drivers are also advised to request their data from LexisNexis and Verisk to be prepared for nasty surprises, and to see if it’s possible to delete information that has already been collected.
