Transatlantic Cable podcast episode 330 | Kaspersky official blog

Episode 330 of the Transatlantic Cable podcast kicks things off with a discussion of AI poisoning, which could allow malicious actors to turn AI chatbots into ‘sleeper agents’. From there, the team talk about eBay and a truly bizarre story involving spiders, cockroaches, and death threats, as well as China’s crackdown on casinos, which has led to an underground boom in crypto-casinos.

If you like what you heard, please consider subscribing.

AI poisoning could turn open models into destructive “sleeper agents”
Defending reality: Truth in an age of synthetic media
eBay pays $3m fine in blogger harassment case
China’s gambling crackdown spawned wave of illegal online casinos


Authentication bypass exploit in GoAnywhere MFT | Kaspersky official blog

Researchers have analyzed the CVE-2024-0204 vulnerability in Fortra GoAnywhere MFT software (MFT standing for managed file transfer) and published exploit code that takes advantage of it. We explain the danger, and what organizations that use this software should do about it.

Vulnerability CVE-2024-0204 in GoAnywhere MFT

Let’s start by briefly recounting the story of this vulnerability in GoAnywhere. In fact, Fortra, the company developing this solution, patched this vulnerability back in early December 2023 with the release of GoAnywhere MFT 7.4.1. However, at that time the company chose not to disclose any information about the vulnerability, limiting itself to sending private recommendations to clients.

The essence of the vulnerability is as follows. After a user completes the initial setup of GoAnywhere, the product’s internal logic blocks access to the initial account setup page. Any subsequent attempt to open this page redirects the visitor either to the admin panel (if they’re authenticated as an administrator) or to the authentication page.

However, researchers discovered that an alternative path to the InitialAccountSetup.xhtml file can be used, which the redirection logic does not take into account. In this scenario, GoAnywhere MFT allows anyone to access this page and create a new user account with administrator privileges.

As proof of the attack’s feasibility, the researchers wrote and published a short script that can create admin accounts in vulnerable versions of GoAnywhere MFT. All an attacker needs is to specify a new account name, a password (the only requirement is that it contains at least eight characters, which is interesting in itself), and the path:

Part of the exploit code for the CVE-2024-0204 vulnerability. Highlighted in red is the alternative path to the initial account setup page that enables the creation of users with administrator privileges

In general, this vulnerability closely resembles that discovered in Atlassian Confluence Data Center and Confluence Server a few months ago; there, too, it was possible to create admin accounts in a few simple steps.

Fortra assigned vulnerability CVE-2024-0204 “critical” status, with a CVSS 3.1 score of 9.8 out of 10.

A little context is necessary here. In 2023, the Clop ransomware group had already exploited vulnerabilities in Fortra GoAnywhere MFT and in similar products from other developers (Progress MOVEit, Accellion FTA, and SolarWinds Serv-U) to attack hundreds of organizations worldwide. Victims of the GoAnywhere MFT exploitation included Procter & Gamble, Community Health Systems (CHS, one of the largest hospital networks in the U.S.), and the municipality of Toronto.

How to defend against CVE-2024-0204 exploitation

The obvious way to protect against exploitation of this vulnerability is to immediately update GoAnywhere MFT to version 7.4.1, which fixes the logic that denies access to the InitialAccountSetup.xhtml page.

If you can’t install the update for some reason, you can try one of two simple workarounds:

Delete the InitialAccountSetup.xhtml file in the installation folder and restart the service;

or

Replace InitialAccountSetup.xhtml with a blank file and restart the service.

You should also use an EDR (Endpoint Detection and Response) solution to monitor suspicious activity in the corporate network. If your internal cybersecurity team lacks the skills or resources for this, you can use an external service to continuously hunt for threats to your organization and swiftly respond to them.
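As a purely illustrative aid (not a substitute for EDR or proper monitoring), here’s a minimal Python sketch of one way an administrator might scan a web server’s access log for requests that reach the InitialAccountSetup.xhtml page via an unexpected path. The log format and the “expected” path prefix are assumptions; adjust them to match your actual GoAnywhere deployment.

```python
import re
import sys

# Hedged example: flag access-log entries that request InitialAccountSetup.xhtml
# through anything other than the normal setup location. The log format
# (common/combined) and the path prefix below are assumptions.
SETUP_PAGE = "InitialAccountSetup.xhtml"
EXPECTED_PREFIX = "/goanywhere/wizard/"  # assumed legitimate location; adjust to your setup

request_re = re.compile(r'"(?:GET|POST)\s+(\S+)')

with open(sys.argv[1], encoding="utf-8", errors="replace") as log:
    for line in log:
        match = request_re.search(line)
        if not match:
            continue
        path = match.group(1)
        if SETUP_PAGE in path and not path.startswith(EXPECTED_PREFIX):
            print("Suspicious setup-page request:", line.strip())
```

Any hit on the setup page after initial configuration, whatever the path, is worth investigating.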


How to turn off Facebook link history and why | Kaspersky official blog

Facebook recently launched a new feature called link history. This post explains what link history is, why Facebook rolled it out, why you should turn it off, and most importantly — how.

What is Facebook link history?

Facebook mobile apps come with a built-in browser. Whenever you follow an external link posted on Facebook, it opens in this very browser. Recently the social network decided to start collecting the history of all the links you click, and to use this data to show you targeted ads.

Why does Facebook need it? Because it’s not just the largest social network in the world, but also one of the most powerful global advertising platforms — second only to Google in terms of scale and capabilities. Previously, to collect data on user interests and show targeted ads based on it, Facebook used third-party cookies. However, support for third-party cookies is being phased out in the world’s most popular browser — Google Chrome.

Google has devised its own mechanism for tracking users and targeting ads — known as Google Ad Topics. To collect data, this technology makes active use of the Google Chrome browser and the Android operating system. Not so long ago, we explained how to opt out of this Google tracking.

Now Facebook has decided to track users through the browser built into its various mobile apps. That’s how the link-history feature was born. It offers no real benefit to regular users — despite Facebook trumpeting the convenience of being able to find any link you ever opened at any moment. So if you don’t like the idea of Facebook tracking your every move, it’s best to turn off the feature; thankfully, that’s easy to do.

How to turn off Facebook link history

First, let’s clarify that link history is only available in the Facebook mobile apps. The feature is missing when you use the web version of the social network. It’s also unavailable in Facebook Lite (if only because this app has no built-in browser) and, at least for now, in the Messenger app.

The first time a user opens an external link posted on the social network after Facebook introduced link history, they’re asked for their consent to use the feature.

The screen requesting permission to turn on link history is only shown once

As you’d probably expect, link history is enabled by default. So most users likely give consent without too much thought — just to get Facebook off their backs and get to the page they want.

If you’ve already opted in to link history and now want to turn it off, there are two easy ways to do so.

The first way to turn off link history

In the Facebook app, open Menu by tapping the hamburger icon (the three lines in the upper-right corner on Android), or the Profile icon in the lower-right corner on iOS.
Go to Settings & privacy — the easiest way is by tapping the gear icon.
Scroll down to Browser and tap it.
In the window that opens, toggle Allow link history off.
Also, while you’re at it, tap the Clear button next to Link history.

Turning off Facebook link history through Settings & privacy on Android

The second way to turn off link history

In the app, tap any link posted on Facebook. This will open the app’s built-in browser.
In it, tap the ellipsis icon (upper-right corner on Android, lower-right on iOS).
Select Go to Settings.
In the window that opens, toggle Allow link history off and tap the Clear button next to Link history.

Turning off Facebook link history through the built-in browser on iOS

All done. Facebook will no longer collect your link history. While you’re at it, don’t forget to stop Google tracking you by disabling Google Ad Topics. To avoid online tracking in general, use the Private Browsing feature in Kaspersky applications.


37C3: how ethical hackers broke DRM on trains | Kaspersky official blog

Polish hackers from Dragon Sector told the 37th Chaos Communication Congress (37C3) late last year how they’d hacked into digital rights management (DRM) for trains, and, more importantly — why.

Why Polish hackers broke into trains

Around five years ago, Poland’s Koleje Dolnośląskie (KD) rail operator bought 11 Impuls 45WE trains from domestic manufacturer Newag. Fast-forward to recent times, and after five years of heavy use it was time for a service and some maintenance: a rather complex and expensive process that a train has to undergo after clocking up a million kilometers.

To select a workshop to service the trains, KD arranged a tender. Newag was among the bidders, but they lost to Serwis Pojazdów Szynowych (SPS), which underbid them by a significant margin.

However, once SPS had finished servicing the first of the trains, they found that it simply wouldn’t start up any more — despite seeming to be fine both mechanically and electrically. Every diagnostic instrument showed the train had zero defects, and all the mechanics and electricians who worked on it agreed. No matter: the train simply would not start.

Shortly after, several other trains serviced by SPS — plus another taken to a different shop — ended up in a similar condition. This is when SPS, after trying repeatedly to unravel the mystery, decided to bring in a (white-hat) hacker team.

Inside the driver’s cabin of one of the Newag Impuls trains that were investigated. Source

Manufacturer’s malicious implants and backdoors in the train firmware

The researchers spent several months reverse-engineering, analyzing, and comparing the firmware from the trains that had been bricked and those still running. As a result, they learned how to start up the mysteriously broken-down trains, while at the same time discovering a number of interesting mechanisms embedded in the code by Newag’s software developers.

For example, they found that one of the trains’ computer systems contained code that checked GPS coordinates. If the train spent more than 10 days in any one of certain specified areas, it wouldn’t start anymore. What were those areas? The coordinates were associated with several third-party repair shops. Newag’s own workshops were featured in the code too, but the train lock wasn’t triggered in those, which means they were probably used for testing.

Areas on the map where the trains would be locked. Source

Another mechanism in the code immobilized the train after detecting that the serial number of one of the parts had changed (indicating that this part had been replaced). To mobilize the train again, a predefined combination of keys on the onboard computer in the driver’s cabin had to be pressed.

A further interesting booby trap was found inside one of the trains’ systems: it reported a compressor malfunction if the current day of the month was the 21st or later, the month was November (the 11th) or later, and the year was 2021 or later. It turned out that November 2021 was the scheduled maintenance date for that particular train. The trigger was miraculously avoided because the train left for maintenance earlier than planned and only came back for servicing in January 2022, when the month check no longer matched (January, the 1st month, is obviously before the 11th).
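For clarity, here’s a minimal Python reconstruction of that date condition as described above (not the actual firmware code). All three checks must pass at once, which is why January 2022 slipped through:

```python
from datetime import date

def compressor_failure_trigger(today: date) -> bool:
    # Reconstruction of the described condition: day >= 21, month >= November, year >= 2021.
    return today.day >= 21 and today.month >= 11 and today.year >= 2021

print(compressor_failure_trigger(date(2021, 11, 21)))  # True: the planned service window
print(compressor_failure_trigger(date(2022, 1, 25)))   # False: January fails the month check
```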

Another example: one of the trains was found to contain a device marked “UDP<->CAN Converter”, which was connected to a GSM modem to receive lock status information from the onboard computer.

The most frequently found mechanism — and we should note here that each train had a different set of mechanisms — was designed to lock the train if it remained parked for a certain number of days, which, for a train in active service, would indicate it was in for maintenance. In total, Dragon Sector investigated 30 Impuls trains operated by KD and other rail carriers. A whopping 24 of them were found to contain malicious implants of some sort.

One of the researchers next to the train. Source

How to protect your systems from malicious implants

This story just goes to show that you can encounter malicious implants in the most unexpected of places and in all kinds of IT systems. So, no matter what kind of project you’re working on, if it contains any third-party code — let alone a whole system based on it — it makes sense to at least run an information security audit before going live.


Kaspersky Standard wins Product of the Year award from AV-Comparatives | Kaspersky official blog

Great news! The latest generation of our security solutions for home users has received a Product of the Year 2023 award. It’s the result of extensive multi-stage testing conducted by independent European test lab AV-Comparatives over the course of 2023, which examined and evaluated 16 security solutions from popular vendors. Here’s what this victory means, what it consists of, how the testing was done, and what other awards we picked up.

Our Kaspersky Standard security solution was named Product of the Year 2023 after in-depth testing by AV-Comparatives

What does “Product of the Year” actually mean?

The tests were carried out on our basic security solution for home users — Kaspersky Standard — but its outstanding results apply equally to all our endpoint products. The reason is simple: all our solutions use the same stack of detection and protection technologies that was thoroughly tested by AV-Comparatives.

Thus, this top award, Product of the Year 2023, applies equally to our more advanced home protection solutions — Kaspersky Plus and Kaspersky Premium — and also our business products, such as Kaspersky Endpoint Security for Business and Kaspersky Small Office Security.

So what does it take to earn the coveted Product of the Year title?

A security solution needs to take part in seven tests throughout the year and consistently achieve the highest Advanced+ score in each of them. These tests examine the quality of protection against common threats and targeted attacks, resistance to false positives, and the impact on overall system performance. This golden triad of metrics forms the basis of a comprehensive evaluation of security solution performance.

That the testing is continuous over the course of a year is important since malware developers hardly sit around twiddling their thumbs — new threats emerge all the time, and existing ones evolve with breathtaking speed. Consequently, security solution developers must keep moving forward at the same pace. That’s why assessing performance at a single point in time is misleading — to get a true picture of a solution’s effectiveness requires extensive and repeated testing all year long. Which is precisely what AV-Comparatives does.

AV-Comparatives examined 16 security solutions from the largest vendors in its tests. Winning such a significant contest clearly demonstrates the highest level of protection provided by our products.

The seven rounds of tests — some of which individually lasted several months — that our protection took part in to eventually win the Product of the Year award were the following:

March 2023: Malware Protection Test spring series
April 2023: Performance Test spring series
February–May 2023: Real-World Protection Test first series
September 2023: Malware Protection Test autumn series
September–October 2023: Advanced Threat Protection Test
October 2023: Performance Test autumn series
July–October 2023: Real-World Protection Test second series

To earn AV-Comparatives’ Product of the Year title, a security solution needs to get the highest score in each stage of testing. And our product rose to the challenge: in each of the tests listed above, Kaspersky Standard scooped the top score — Advanced+.

The Product of the Year award went to Kaspersky Standard based on top marks in all seven of a series of AV-Comparatives’ tests in 2023

How AV-Comparatives tests security solutions

Now for a closer look at AV-Comparatives’ testing methodology. The different tests evaluate the different capabilities of the security solutions taking part.

Malware Protection Test

This test examines the solution’s ability to detect prevalent malware. In the first phase of the test, malicious files (AV-Comparatives uses just over 10,000 malware samples) are written to the drive of the test computer, after which they’re scanned by the tested security solution — at first offline, without internet access, and then online. Any malicious files that were missed by the protective solution during static scanning are then run. If the product fails to prevent or reverse all the malware’s actions within a certain time, the threat is considered to have been missed. Based on the number of threats missed, AV-Comparatives assigns a protection score to the solution.

Also during this test, the security solutions are evaluated for false positives. High-quality protection shouldn’t mistakenly flag clean applications or safe activities. After all, if one cries wolf too often, the user will begin to ignore the warnings, and sooner or later malware will strike. Not to mention that false alarms are extremely annoying.

The final score is based on these two metrics. An Advanced+ score means reliable protection with a minimum of false positives.

Real-World Protection Test

This test focuses on protection against the most current web-hosted threats at the time of testing. Malware (both malicious files and web exploits) is out there on the internet, and the solutions being tested can deploy their whole arsenals of built-in security technologies to detect the threats. Detection and blocking of a threat with subsequent rollback of all changes can occur at any stage: when opening a dangerous link, when downloading and saving a malicious file, or when the malware is already running. In any of these cases, the solution is marked a success.

As before, both the number of missed threats and also the number of false positives are taken into account for the final score. Advanced+ is awarded to products that minimize both these metrics.

Advanced Threat Protection Test

This test assesses the ability of the solution to withstand targeted attacks. To this end, AV-Comparatives designs and launches 15 attacks to simulate real-world ones, using diverse tools, tactics and techniques, with various initial conditions and along different vectors.

A test for false positives is also carried out. This checks whether the solution blocks any potentially risky, but not necessarily dangerous, activity (such as opening email attachments), which increases the level of protection at the expense of user convenience and productivity.

Performance Test

Another critical aspect of a security solution’s evaluation is its impact on system performance. Here, the lab engineers emulate a number of typical user scenarios to evaluate how the solution under test affects their run time. The list of scenarios includes:

Copying and recopying files
Archiving and unpacking files
Installing and uninstalling programs
Starting and restarting programs
Downloading files from the internet
Web browsing

Additionally, system-performance drops are measured against the PCMark 10 benchmark.

Based on these measurements, AV-Comparatives calculates the total impact of each solution on system performance (the lower this metric, the better), then applies a statistical model to assign a final score to the products: Advanced+, Advanced, Standard, Tested, Not passed. Naturally, Advanced+ means minimal impact on computer performance.

What other AV-Comparatives awards did Kaspersky pick up in 2023?

Besides Kaspersky Standard being named Product of the Year, our products received several other important awards based on AV-Comparatives’ tests in 2023:

Real World Protection 2023 Silver
Malware Protection 2023 Silver
Advanced Threat Protection Consumer 2023 Silver
Best Overall Speed 2023 Bronze
Lowest False Positives 2023 Bronze
Certified Advanced Threat Protection 2023
Strategic Leader 2023 for Endpoint Prevention and Response Test 2023
Approved Enterprise Business Security 2023

We have a long-standing commitment to using independent research by recognized test labs to impartially assess the quality of our solutions and address identified weaknesses when upgrading our technologies. For 20 years now, the independent test lab AV-Comparatives has been putting our solutions through their paces, confirming time and again our quality of protection and conferring a multitude of awards.

Throughout these two decades, we’ve received the highest Product of the Year award seven times; no other security vendor has won it as often. And if we add to this all the Outstanding Product and Top Rated awards we’ve also received over the years, it turns out that Kaspersky security solutions have earned top recognition from AV-Comparatives’ experts a full 16 times in 20 years!

Besides this, AV-Comparatives has also awarded us:

57 Gold, Silver, and Bronze awards in a variety of specialized tests
Two consecutive Strategic Leader awards, in 2022 and 2023, for the Kaspersky EDR Expert solution’s strong results in protection against targeted attacks
Confirmation of 100% anti-tampering protection (Anti-Tampering Test 2023)
Confirmation of 100% protection against LSASS attacks (LSASS Credential Dumping Test 2022)
Confirmation of top-quality protection for network-attached storage (test of AV solutions for storage)
and numerous other awards

Learn more about the awards we’ve received, and check out our performance dynamics in independent tests from year to year by visiting our TOP 3 Metrics page.


Why using Google OAuth in work applications is unsafe

Organizations sometimes rely on Google OAuth to authenticate users. They tend to assume that Google is all-powerful and wise, so its verdict on whether to grant access to a user is taken as read.

Alas, such blind faith is dangerous: the “Sign in with Google” option is seriously flawed. In December 2023, researcher Dylan Ayrey at Truffle Security discovered a rather nasty vulnerability in Google OAuth that allows employees to retain access to corporate resources after parting company with their employer. There are also ways for a total stranger to exploit this bug and gain access.

What’s wrong with Google OAuth sign-in

The vulnerability exists due to a number of factors. First: Google allows users to create Google accounts using any email — not just Gmail. To sign in to a company’s Google Workspace, email addresses with the domain name of the company are commonly used. For instance, an employee of the hypothetical company Example Inc. might have the email address alanna@example.com.

Google OAuth is used by various work platforms in many organizations. For example, here’s the “Sign In with Google” button on slack.slack.com

Second: Google (along with a number of other online services) supports what is known as sub-addressing. This lets you create alias addresses by adding a plus sign (+) and any string of your choice after the local part of an existing address (the part before the @). One use for this could be managing email flows.

For example, when registering an account with an online bank, one could specify the address alanna+bank@example.com; when registering with a communication service provider — alanna+telco@example.com. Formally, these are different addresses, but emails will arrive in the same mailbox — alanna@example.com. And because the contents of the “To:” field differ, incoming messages can be handled differently with the use of certain rules.

Example of signing in to Slack with Google using an alias email address with a plus sign
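To illustrate why such addresses all land in the same inbox, and how a service could in principle normalize them before treating two sign-ups as different people, here’s a rough Python sketch. The addresses come from the example above; the stripping rule itself depends on the mail provider.

```python
def canonical_mailbox(address: str) -> str:
    """Map a sub-addressed email (alanna+bank@example.com) to its base
    mailbox (alanna@example.com). Illustrative only: whether '+' aliases
    are honored, and how, depends on the mail provider."""
    local, _, domain = address.strip().lower().partition("@")
    base_local = local.split("+", 1)[0]
    return f"{base_local}@{domain}"

assert canonical_mailbox("alanna+bank@example.com") == "alanna@example.com"
assert canonical_mailbox("Alanna+telco@example.com") == "alanna@example.com"
```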

Third: in many work platforms such as Zoom and Slack, authorization through the “Sign In with Google” button uses the domain of the email address specified when registering the Google account. So, in our example, to connect to Example Inc.’s workspace example.slack.com, you need an @example.com address.

Finally, fourth: it’s possible to edit the email address in a Google account. Here, sub-addressing can be employed by changing, say, alanna@example.com to alanna+whatever@example.com. That done, a new Google account can be registered with the address alanna@example.com.

This results in two different Google accounts that can be used to sign in to Example Inc.’s work platforms (like Slack and Zoom) through Google OAuth. The problem is that the second address remains invisible to the corporate Google Workspace administrator, so they’re unable to delete or disable this account. Thus, a laid-off employee could still have access to corporate resources.

Exploiting the Google OAuth vulnerability and gaining entry without initial access

How feasible is all this in practice? Entirely. Ayrey tested the possibility of exploiting the vulnerability in Google OAuth in his own company’s Slack and Zoom, and found that it is indeed possible to create such phantom accounts. Regular, non-expert users could take advantage of it too: no special know-how or skills are needed.

An example of exploiting the vulnerability in Google OAuth to grant Slack access to an account registered to an email sub-address. Source

Note that, besides Slack and Zoom, this vulnerability affects dozens of lesser-known corporate tools that use Google OAuth authentication.

In some cases, attackers can gain access to an organization’s cloud tools even if they didn’t initially have access to the corporate email of the target company. The Zendesk ticketing system, for example, can be used for this purpose.

The idea is that the service allows submitting requests via email. An email address with the company domain is created for the request, and the request creator (that is, anyone) is able to view the contents of all correspondence related to this request. It turns out that it’s possible for a user to register a Google account with this address and, through the request, get an email with a confirmation link. They can then successfully exploit the vulnerability in Google OAuth to sign in to the target company’s Zoom and Slack without having initial access to its resources.

How to protect against the Google OAuth vulnerability

The researcher notified Google about the vulnerability several months ago through its bug bounty program; the company recognized it as an issue (albeit of low priority and severity) and even paid out a reward (of $1337). Ayrey additionally reported the problem to some online services, including Slack.

However, no one is rushing to fix the vulnerability, so protection against it falls on the shoulders of the company employees who administer work platforms. Fortunately, in most cases this poses no particular problem: it suffices to disable the “Sign In with Google” option.

And, naturally, it’s a good idea to guard against possible penetration deeper into the organization’s information infrastructure through platforms like Slack, which means monitoring what’s going on in said infrastructure. If your company’s information security department lacks the resources or expertise for this, deploy an external service such as Kaspersky Managed Detection and Response.


What cybersecurity threats to kids should parents be aware of in 2024? | Kaspersky official blog

In the era of modern technology, children are being introduced to the digital world at an ever younger age. This digital experience, however, can be marred by potential risks lurking online. As technology continues to advance, the tactics and strategies used by cybercriminals to target and exploit young internet users are also evolving.

Therefore, it’s crucial for parents to stay informed about the latest cybersecurity threats targeting kids to better protect them from potential harm. In this post, my colleague Anna Larkina and I will explore some of the key cybersecurity trends that parents should be aware of and provide tips on how to safeguard their children’s online activities.

Children will increasingly use AI tools that, so far, are not ready to provide the necessary level of cybersecurity and age-appropriate content

AI is continuing to revolutionize various industries, and its everyday uses range from chatbots and AI wearables to personalized online shopping recommendations. Naturally, these global trends don’t escape the interest and curiosity of children, who may use AI tools to do their homework or simply chat with AI-enabled chatbots. According to a UN study, about 80 percent of young people said they interact with AI multiple times a day. However, AI applications can pose numerous risks to young users, including loss of data privacy, cyberthreats, and inappropriate content.

With the development of AI, numerous little-known applications have emerged with seemingly harmless features, such as uploading a photo to receive a modified version — whether it be an anime-style image or simple retouching. However, when adults, let alone children, upload their images to such applications, they never know in which databases their photos will ultimately end up or how they might be used further. If your child does decide to play with such an application, it’s essential to use it extremely cautiously: make sure there’s no personal information that could identify the child — such as names combined with addresses, or similar sensitive data — in the photo or its background, or consider avoiding such applications altogether.

Moreover, AI apps – chatbots in particular – can easily serve up age-inappropriate content when prompted. This poses a heightened risk as teenagers might feel more comfortable sharing personal information with a chatbot than with their real-life acquaintances, as evidenced by instances where a chatbot gave advice on masking the smell of alcohol and pot to a user claiming to be 15. On an even more inappropriate level, there are a multitude of AI chatbots specifically designed to provide an “erotic” experience. Although some require a form of age verification, this is a dangerous trend: some children might simply lie about their age, and such checks do little to stop them.

It is estimated that on Facebook Messenger alone, there are over 300,000 chatbots in operation. However, not all of them are safe and may carry various risks, like the ones mentioned earlier. Therefore, it is extremely important to discuss with children the importance of privacy and the dangers of oversharing, as well as talking to them about their online experiences regularly. It also reiterates the significance of establishing a trusting relationship with the child. This will ensure that the child feels comfortable asking their parents for advice rather than turning to a chatbot for guidance.

The growth of malicious actors’ attacks on young gamers

According to statistics, 91 percent of children aged 3-15 in the UK play games on some kind of device. The vast gaming world is open to them, which also makes them vulnerable to cybercriminals’ attacks. For instance, in 2022, our security solutions detected more than 7 million attacks relating to popular children’s games, resulting in a 57 percent increase in attempted attacks compared to the previous year. The top children’s games by the number of users targeted even included games for the youngest children — Poppy Playtime and Toca Life World, which are designed for children aged 3-8.

What raises even more concern is that children sometimes prefer to communicate with strangers on gaming platforms rather than on social media. In some games, unmoderated voice and text chats form a significant part of the experience. As more young people come online, criminals can build trust virtually in the same way they would entice someone in person — by offering gifts or promises of friendship. Once they’ve lured a young victim and gained their trust, cybercriminals move to obtain personal information, suggesting they click on a phishing link or download a malicious file disguised as a game mod for Minecraft or Fortnite, or even groom them for more sinister purposes. This can be seen in the documentary series “hacker:HUNTER”, co-produced by Kaspersky: one of the episodes revealed how cybercriminals identify skilled children through online games and then groom them to carry out hacking tasks.

The number of ways to interact within the gaming world keeps growing, and now includes voice chats as well as AR and VR games. Both cybersecurity and social threats remain particular problems in children’s gaming. Parents must remain vigilant regarding their children’s behavior and maintain open communication to address any potential threats. Identifying a threat involves watching for changes, such as sudden shifts in gaming habits, that may indicate cause for concern. To keep your child safe and stop them from downloading malicious files while gaming, we advise installing a trusted security solution on their device.

The development of the FinTech industry for kids marks the appearance of new threats

An increasing number of banks are providing specialized products and services designed for children, including banking cards for kids as young as 12 years old. This gives parents an array of potential advantages, such as the ability to monitor their child’s expenditures, establish daily spending limits, or remotely transfer funds for the child’s pocket money.

Yet banking cards can also make children susceptible to financially motivated threat actors and vulnerable to conventional scams, such as promises of a free PlayStation 5 or other valuable device in return for entering card details on a phishing site. Using social engineering techniques, cybercriminals might exploit children’s trust by posing as their peers and requesting card details or money transfers to their accounts.

As the Fintech industry for children continues to evolve, it is crucial to educate them not only about financial literacy but also the basics of cybersecurity. To achieve this, you can read Kaspersky Cybersecurity Alphabet together with your child. It is specifically designed to explain key online safety rules in a language easily comprehensible for children.

To avoid concerns about a child losing their card or sharing banking details, we recommend installing a digital NFC card on their phone instead of giving them a physical plastic card. Establish transaction confirmation with the parent, if the bank allows it. And, of course, the use of any technical solutions must be accompanied by an explanation of how to use them safely.

The number of smart home threat cases, with children being potential targets, will increase

In our interconnected world, an increasing number of devices, even everyday items like pet feeders, are becoming “smart” by connecting to the internet. However, as these devices become more sophisticated, they also become more susceptible to cyberattacks. This year, our researchers conducted a vulnerability study on a popular model of smart pet feeder. The findings revealed a number of serious security issues that could allow attackers to gain unauthorized access to the device and steal sensitive information, such as video footage, potentially turning the feeder into a surveillance tool.

Despite the increasing number of threats, manufacturers are not rushing to create cyber-immune devices that preemptively prevent potential exploits of vulnerabilities. Meanwhile, the variety of different IoT devices purchased in households continues to grow. These devices are becoming the norm for children, which also means that children can become tools for cybercriminals in an attack. For instance, if a smart device becomes a fully functional surveillance tool and a child is home alone, cybercriminals could contact them through the device and request sensitive information such as their name, address, or even their parents’ credit card number and times when their parents are not at home. In a scenario such as this one, beyond just hacking the device, there is a risk of financial data loss or even a physical attack.

As we cannot restrict children from using smart home devices, our responsibility as parents is to maximize the security of these devices. This includes at least adjusting default security settings, setting new passwords, and explaining basic cybersecurity rules to children who use IoT devices.

Children will demand that their personal online space is respected

As kids mature, they develop greater self-awareness, encompassing an understanding of their personal space, privacy, and sensitive data, both offline and in their online activities. The increasing accessibility of the internet means more and more children are becoming aware of this. Consequently, when a parent firmly announces their intention to install a digital parenting app on their child’s devices, not every child will take it calmly.

This is why parents now need the skill to discuss their child’s online experience and the importance of digital parenting apps for online safety while respecting the child’s personal space. This involves establishing clear boundaries and expectations, and discussing the reasons for using the app with the child. Regular check-ins are also vital, and restrictions should be adjusted as the child matures and develops a sense of responsibility. Learn more in our guide on the First kids’ gadget, where, together with experienced child psychologist Saliha Afridi, our privacy experts analyze a series of important milestones to understand how to introduce such apps into a child’s life properly and establish a meaningful dialogue about cybersecurity online.

Children are eager to download apps that are unavailable in their country, but stumble upon malicious copies

If an app is unavailable in their region, users go looking for an alternative, which often turns out to be a malicious copy. Even if they turn to official app stores like Google Play, they still run the risk of falling prey to cybercriminals. From 2020 to 2022, our researchers found more than 190 apps on Google Play infected with the Harly Trojan, which signed users up for paid services without their knowledge. A conservative estimate puts the number of downloads of these apps at 4.8 million, but the actual number of victims may be even higher.

Children are not the only ones following this trend; adults are as well, as highlighted in our latest consumer cyberthreats predictions report, part of the annual Kaspersky Security Bulletin. That’s why it’s crucial for kids and their parents alike to understand the fundamentals of cybersecurity. For instance, it’s important to pay attention to the permissions an app requests during installation: a simple calculator shouldn’t need access to your location or contact list.

As we can see, many of the trends that are playing out in society are also affecting children, making them potential targets for attackers. This includes both the development and popularity of AI and smart homes, as well as the expansion of the world of gaming and the FinTech industry. We are convinced that protecting children from cybersecurity threats in 2024 requires proactive measures from parents.

By staying informed about the latest threats and actively monitoring their children’s online activities, parents can create a safer online environment for their kids.
It’s crucial for parents to have open communication with their children about the potential risks they may encounter online and to enforce strict guidelines to ensure their safety.
With the right tools such as Kaspersky Safe Kids, parents can effectively safeguard their children against cyber threats in the digital age.
To help parents introduce their children to cybersecurity amid the evolving threat landscape, our experts have developed the Kaspersky Cybersecurity Alphabet, covering key concepts from the cybersecurity industry. In this book, your kid will get to know new technologies, learn the main cyber-hygiene rules, find out how to avoid online threats, and recognize fraudsters’ tricks. After reading this book together, you’ll be sure that your kid knows how to spot a phishing website, how VPNs and QR codes work, and even what honeypots and encryption are and what role they play in modern cybersecurity. You can download the PDF version of the book or the Kaspersky Cybersecurity Alphabet poster for free and go through the basics of cybersecurity with your child, building their cybersafe future.


Can TVs, smartphones, and smart assistants eavesdrop on your conversations? | Kaspersky official blog

Rumors of eavesdropping smart devices have been circulating for many years. Doubtless, you’ve heard a tale or two about how someone was discussing, say, the new coffee machine at work, and then got bombarded with online ads for, yes, coffee machines. We’ve already tested this hypothesis, and concluded that advertisers aren’t eavesdropping — they have many other less dramatic but far more effective ways of targeting ads. But perhaps the times are changing? News broke recently (here and here) about two marketing firms allegedly bragging about offering targeted ads based on just such eavesdropping. Granted, both companies later retracted their words and removed the relevant statements from their websites. Nevertheless, we decided to take a fresh look at the situation.

What the firms claimed

In calls with clients, podcasts, and blogs, CMG and Mindshift told much the same story — albeit devoid of any technical detail: smartphones and smart TVs allegedly help them recognize predetermined keywords in people’s conversations, which are then used to create custom audiences. These audiences, in the form of lists of phone numbers, email addresses, and anonymous advertising IDs, can be uploaded to various platforms (from YouTube and Facebook to Google AdWords and Microsoft Advertising) and leveraged to target ads at users.

If the second part about uploading custom audiences sounds quite plausible, the first is more than hazy. It’s not clear at all from the companies’ statements which apps and which technologies they use to collect information. But in the long (now deleted) blog post, the following non-technical passage stood out most of all: “We know what you’re thinking. Is this even legal? It is legal for phones and devices to listen to you. When a new app download or update prompts consumers with a multi-page term of use agreement somewhere in the fine print, Active Listening is often included.”

After being pestered by journalists, CMG removed the post from its blog and issued an apology/clarification, adding that there’s no eavesdropping involved, and the targeting data is “sourced by social media and other applications”.

The second company, Mindshift, just quietly erased all marketing messages about this form of advertising from its website.

When did they lie?

Clearly, the marketers “misspoke” either to their clients in promising voice-activated ads, or to the media. Most likely it was the former; here’s why:

Modern operating systems clearly indicate when the microphone is in use by a legitimate app. And if some weather app is constantly listening to the microphone, waiting for, say, the words “coffee machine” to come from your lips, the microphone icon will light up in the notification panel of all the most popular operating systems.
On smartphones and other mobile devices, continuous eavesdropping will drain the battery and eat up data. This will get noticed and cause a wave of hate.
Constantly analyzing audio streams from millions of users would require massive computing power and be financial folly — since advertising profits could never cover the costs of such a targeting operation.

Contrary to popular belief, the annual revenue of advertising platforms per user is quite small: less than $4 in Africa, around $10 on average worldwide, and up to $60 in the U.S. Given that these figures refer to income, not profit, there’s simply no money left for eavesdropping. Doubters are invited to study, for example, Google Cloud’s speech recognition pricing: even at the most discounted wholesale rate (two million+ minutes of audio recordings per month), converting speech to text costs 0.3 cents per minute. Assuming a minimum of three hours of speech recognition per day, the client would have to spend around $200 per year on each individual user — too much even for U.S. advertising firms.
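For the curious, the arithmetic behind that estimate is easy to check using the figures quoted above:

```python
# Back-of-the-envelope check of the eavesdropping-cost estimate (figures from the text).
price_per_minute = 0.003      # USD: the discounted wholesale speech-to-text rate (0.3 cents)
minutes_per_day = 3 * 60      # the assumed three hours of recognized speech per day
yearly_cost = price_per_minute * minutes_per_day * 365
print(f"~${yearly_cost:.0f} per user per year")  # approximately $197, i.e. around $200
```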

What about voice assistants?

That said, the above reasoning may not hold true for devices that already listen to voice commands by nature of their primary purpose. First and foremost are smart speakers, as well as smartphones with voice assistants permanently on. Less obvious devices include smart TVs that also respond to voice commands.

According to Amazon, Alexa is always listening out for the wake word, but only records and sends voice data to the cloud upon hearing it, and stops as soon as interaction with the user is over. The company doesn’t deny that Alexa data is used for ad targeting, and independent studies confirm it. Some users consider such a practice to be illegal, but the lawsuit they filed against Amazon is still ongoing. Meanwhile, another action brought against Amazon by the U.S. Federal Trade Commission resulted in a modest $30 million settlement. The e-commerce giant was ordered to pay out for failing to delete children’s data collected by Alexa, in direct violation of the U.S. Children’s Online Privacy Protection Act (COPPA). The company is also barred from using this illegally harvested data for business needs — in particular, training algorithms.

And it’s long been an open secret that other voice assistant vendors also collect user interaction data: here’s the lowdown on Apple and Google. Now and then, these recordings are listened to by real people — to solve technical issues, train new algorithms, and so on. But are they used to target ads? Some studies confirm such practices on the part of Google and Amazon, although it’s more a case of using voice search or purchase history rather than constant eavesdropping. As for Apple, no study has found any link between Siri and ad targeting.

We did not find a study devoted to smart-TV voice commands, but it has long been known that smart TVs collect detailed information about what users watch — including video data from external sources (a Blu-ray Disc player, computer, and so on). It can’t be ruled out that voice interactions with the built-in assistant are also used more extensively than one might like.

Special case: spyware

True smartphone eavesdropping also occurs, of course, but here it’s not about mass surveillance for advertising purposes but targeted spying on a specific victim. There are many documented cases of such surveillance — the perpetrators of which can be jealous spouses, business competitors, and even bona fide intelligence agencies. But such eavesdropping requires malware to be installed on the victim’s smartphone — and often, “thanks” to vulnerabilities, this can happen without any action whatsoever on the part of the target. Once a smartphone is infected, the attacker’s options are virtually limitless. We have a string of posts dedicated to such cases: read about stalkerware, infected messenger mods, and, of course, the epic saga of our discovery of Triangulation, perhaps the most sophisticated Trojan for Apple devices there has ever been. In the face of such threats, caution alone won’t suffice — targeted measures are needed to keep your smartphone safe, which include installing a reliable protection solution.

How to guard against eavesdropping

Disable microphone permission on smartphones and tablets for all apps that don’t need it. In modern versions of mobile operating systems, in the same place under permissions and privacy management, you can see which apps used your phone’s microphone (and other sensors) and when. Make sure there’s nothing suspicious or unexpected in this list.
Control which apps have access to the microphone on your computer — the permission settings in the latest versions of Windows and macOS are much the same as on smartphones. And install reliable protection on your computer to prevent snooping through malware.
Consider turning off the voice assistant. Although it doesn’t listen in continuously, some unwanted snippets may end up in the recordings of your conversations with it. If you’re worried that the voices of your friends, family, or coworkers might get onto the servers of global corporations, use keyboards, mice, and touchscreens instead.
Turn off voice control on your TV. To make it easier to input names, connect a compact wireless keyboard to your smart TV.
Kiss smart speakers goodbye. For those who like to play music through speakers while checking recipes and chopping vegetables, this is the hardest tip to follow. But a smart speaker is pretty much the only gadget capable of eavesdropping on you that really does it all the time. So, you either have to live with that fact — or power them up only when you’re chopping vegetables.


Cloud SSO implementations, and how to reduce attack risks

Credential leaks are still among attackers’ most-used penetration techniques. In 2023, Kaspersky Digital Footprint Intelligence experts found more than 3,100 darknet ads offering access to corporate resources – some of them belonging to Fortune 500 companies. To manage the associated risks more effectively, minimize the number of vulnerable accounts, and detect and block unauthorized access attempts more quickly, companies are adopting identity management systems, which we covered in detail previously. However, an effective identity management process isn’t feasible until most corporate systems support unified authentication. Internal systems usually depend on a centralized directory – such as Active Directory – for unified authentication, whereas external SaaS systems talk to the corporate identity directory via a single sign-on (SSO) platform, which can be located externally or hosted in the company’s infrastructure (such as ADFS).

For employees, it makes the log-in process as user-friendly as it gets. To sign in to an external system – such as Salesforce or Concur – the employee completes the standard authentication procedure, which includes entering a password and submitting a second authentication factor: a one-time password, USB token, or something else, depending on the company’s policy. No other logins or passwords are needed. Moreover, after you sign in to one of the systems in the morning, you’ll be authenticated in the others by default. In theory, the process is secure, as the IT and infosec teams have full centralized control over accounts, password policies, MFA methods, and logs. In real life, however, the standard of security implemented by external systems that support SSO may not be so high.

SSO pitfalls

When the user signs in to a software-as-a-service (SaaS) system, the system server, the user’s client device, and the SSO platform go through a series of handshakes, during which the platform validates the user and issues authentication tokens to the SaaS system and the device that confirm the user’s permissions. The token can carry a range of security-relevant attributes assigned by the platform. These may include the following (a rough sketch of such a token follows the list):

Token (and session) expiration, which requires the user to get authenticated again
Reference to a specific browser or mobile device
Specific IP addresses or IP range limits, which enable things like geographic restrictions
Extra conditions for session expiration, such as closing the browser or signing out of the SSO platform
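As a rough illustration (not the format used by any particular SSO product), here’s what such restrictions might look like as claims inside a signed token, sketched in Python with the PyJWT library. The custom claim names are invented for the example, and real SSO platforms typically use asymmetric signing rather than the shared secret shown here.

```python
import time
import jwt  # PyJWT

SIGNING_KEY = "replace-with-the-idp-signing-key"  # placeholder for the example

claims = {
    "sub": "employee@example.com",          # the authenticated user
    "aud": "saas-crm",                      # the SaaS system the token is issued for
    "iat": int(time.time()),                # issued at
    "exp": int(time.time()) + 8 * 3600,     # token/session expiration (8 hours)
    # Hypothetical custom restrictions an SSO platform might attach:
    "allowed_ip_range": "198.51.100.0/24",  # office subnet only
    "bound_device_id": "browser-fingerprint-1234",
}

token = jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# A well-behaved SaaS backend verifies the signature, audience, and expiration,
# and honors the custom restrictions rather than ignoring them.
decoded = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience="saas-crm")
```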

The main challenge is that some cloud providers misinterpret or even ignore these restrictions, thus undermining the security model built by the infosec team. On top of that, some SaaS platforms have inadequate token validity controls, which leaves room for forgery.

How SSO implementation flaws are exploited by malicious actors

The most common scenario is some form of token theft. This can mean stealing cookies from the user’s computer, intercepting traffic, or capturing HAR files (traffic archives). The same token being used on a different device and from a different IP address is generally a signal urgent enough for the SaaS platform to demand revalidation and possibly reauthentication. In the real world, though, malicious actors often successfully use stolen tokens to sign in to the system on behalf of the legitimate user, while circumventing passwords, one-time codes, and other infosec protections.
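Building on the invented claims from the sketch above, here’s a hedged example of the kind of per-request check a SaaS backend could run, so that a replayed token from an unexpected network or device triggers reauthentication instead of being silently accepted:

```python
import ipaddress

def request_needs_reauth(claims: dict, request_ip: str, device_fingerprint: str) -> bool:
    """Compare the context the token was bound to at issuance (illustrative
    custom claims) with the context of the current request."""
    ip_ok = ipaddress.ip_address(request_ip) in ipaddress.ip_network(
        claims.get("allowed_ip_range", "0.0.0.0/0")
    )
    device_ok = claims.get("bound_device_id") == device_fingerprint
    return not (ip_ok and device_ok)

# A token bound to the office subnet and a known browser, replayed from elsewhere:
stolen_claims = {"allowed_ip_range": "198.51.100.0/24", "bound_device_id": "browser-fingerprint-1234"}
print(request_needs_reauth(stolen_claims, "203.0.113.7", "another-device"))  # True: challenge the user
```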

Another frequent scenario is targeted phishing that relies on fake corporate websites and, if required, a reverse proxy like evilginx2, which steals passwords, MFA codes, and tokens too.

Improving SSO security

Examine your SaaS vendors. The infosec team can add the SaaS provider’s SSO implementation to the list of questions that vendors are required to answer when submitting their proposals. In particular, these are questions about observing various token restrictions, validation, expiration, and revocation. Further examination steps can include application code audits, integration testing, vulnerability analysis, and pentesting.

Plan compensatory measures. There’s a variety of methods to prevent token manipulation and theft. For example, the use of EDR on all computers significantly lowers the risk of being infected with malware, or redirected to a phishing site. Management of mobile devices (EMM/UEM) can sort out mobile access to corporate resources. In certain cases, we recommend barring unmanaged devices from corporate services.

Configure your traffic analysis and identity management systems to look at SSO requests and responses, so that they can identify suspicious requests that originate from unusual client applications or non-typical users, in unexpected IP address zones, and so on. Tokens that have excessively long lifetimes can be addressed with traffic control as well.

Insist on better SSO implementation. Many SaaS providers view SSO as a customer amenity, and a reason for offering a more expensive “enterprise” plan, whereas information security takes a back seat. You can partner with your procurement team to get some leverage over this, but things will change rather slowly. While talking to SaaS providers, it’s never a bad idea to ask about their plans for upgrading the SSO feature – such as support for the token restrictions mentioned above (geoblocking, expiration, and so on), or any plans to transition to using newer, better-standardized token exchange protocols – such as JWT or CAEP.


What is the principle of least privilege? | Kaspersky official blog

One of the most important concepts in information security is the principle of least privilege. In this post, we explore what it is, how it works, how adhering to this principle benefits businesses, and how to implement the principle of least privilege in practice.

How the principle of least privilege works

The principle of least privilege (PoLP) is also known as the principle of minimal privilege (PoMP) or, less commonly, the principle of least authority (PoLA).

The main idea is that access to resources in a system should be organized in such a way that any entity within the system has access only to the resources it requires for its work, and no more.

In practice, this could involve different systems and different entities within a system. Either way, in terms of applying the principle of least privilege to enterprise security, this can be restated as follows: Any user of the organization’s information infrastructure should only have the right to access the data that is necessary for performing their work tasks.

If, in order to perform certain tasks, a user requires access to information they currently don’t have, their permissions can be elevated. This elevation can be permanent (if required by the user’s role) or temporary (if it’s only needed for a specific project or task; in the latter case, this is called “privilege bracketing”).
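As a toy illustration of privilege bracketing (a sketch, not a real IAM implementation), here’s how a temporary elevation that revokes itself when the task ends might look:

```python
from contextlib import contextmanager

# Toy in-memory role store; a real system would use an IAM platform or directory service.
user_roles = {"alice": {"reader"}}

@contextmanager
def privilege_bracket(user: str, extra_role: str):
    """Grant an extra role only for the duration of a specific task, then revoke it."""
    user_roles[user].add(extra_role)
    try:
        yield
    finally:
        user_roles[user].discard(extra_role)

with privilege_bracket("alice", "project-x-editor"):
    assert "project-x-editor" in user_roles["alice"]   # elevated for the task only
assert "project-x-editor" not in user_roles["alice"]   # automatically revoked afterwards
```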

Conversely, when a user no longer requires access to certain information for some reason, their permissions should be lowered in accordance with the principle of least privilege.

In particular, the principle implies that regular users should never be granted administrator or superuser rights. Not only are such privileges unnecessary for the duties of the average employee, but they also significantly increase risks.

Why is the principle of least privilege needed?

The principle of least privilege helps improve access management, and generally hardens the security of the company’s information infrastructure. Here are some of the important security objectives that can be achieved by applying the principle of least privilege.

Risk mitigation. By restricting access to the minimum necessary for users to perform their tasks, the likelihood of accidental or intentional misuse of privileges can be significantly reduced. This, in turn, helps lower the risks of successful perimeter penetration and unauthorized access to corporate resources.
Data protection. Limiting access helps protect confidential data. Users only have access to the data required for their work, thereby reducing the likelihood of their gaining access to sensitive information or, worse, causing its leakage or theft.
Minimizing the attack surface. Restricting user privileges makes it more difficult for attackers to exploit vulnerabilities and use malware and hacking tools that rely on the user’s privileges, thereby reducing the attack surface.
Localizing security incidents. If an organization’s network is breached, the principle of least privilege helps limit the scope of the incident and its consequences. Because any compromised accounts have minimal rights, potential damage is reduced, and lateral movement within the compromised system or network is impeded.
Identifying users responsible for an incident. Minimizing privileges significantly narrows down the circle of users who could be responsible for an incident. This speeds up the identification of those accountable when investigating security incidents or unauthorized actions.
Compliance with standards and regulations. Many regulatory requirements and standards emphasize the need for access control – particularly the principle of least privilege. Adhering to industry standards and best practices helps organizations avoid unpleasant consequences and sanctions.
Increasing operational efficiency. Implementing the principle of least privilege reduces risks for the organization’s information infrastructure. This includes reducing downtime associated with security incidents, thus improving the company’s operational efficiency.

How to implement the principle of least privilege in your organization

Implementing the principle of least privilege in an organization’s information infrastructure can be broken down into a few basic steps and tasks (a minimal role-based sketch follows the list):

Conduct an inventory of resources, and audit the access rights users currently have.
Classify resources and create an access management model based on roles – each with specific rights.
As a starting point, assign users roles with minimal rights, and elevate their privileges only if necessary for their tasks.
Regularly conduct audits and review permissions – lowering privileges for users who no longer need access to certain resources for their tasks.
Apply the principle of privilege bracketing: when a user needs access to a larger number of resources for a task, try to elevate their privileges temporarily – not permanently.
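To make steps two and three more concrete, here’s the minimal role-based sketch promised above. It follows a default-deny approach: a permission is granted only if one of the user’s roles explicitly includes it. The role and permission names are invented for the example.

```python
# Toy role model with default-deny semantics; names are illustrative only.
ROLE_PERMISSIONS = {
    "accountant": {"invoices:read", "invoices:write"},
    "support":    {"tickets:read", "tickets:write"},
    "auditor":    {"invoices:read", "tickets:read"},
}

def is_allowed(user_roles: set[str], permission: str) -> bool:
    """Grant access only if at least one assigned role explicitly lists the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(is_allowed({"support"}, "tickets:write"))  # True
print(is_allowed({"support"}, "invoices:read"))  # False: no implicit rights
```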

And don’t forget about other protective measures

Of course, applying the principle of least privilege alone isn’t enough to secure a company’s information infrastructure. Other measures are also required:

Regular security audits.
Timely software updates.
Employee training on the basics of cybersecurity.
Deploying reliable protection on all corporate devices.
