Toy robot security issues | Kaspersky official blog

Kaspersky experts recently studied the security of a popular toy robot model, finding major issues that allowed malicious actors to make a video call to any such robot, hijack the parental account, or, potentially, even upload modified firmware. Read on for the details.

What a toy robot can do

The toy robot model that we studied is a kind of hybrid between a smartphone/tablet and a smart speaker, set on wheels that let it move about. The robot has no limbs, so rolling around the house is its only way of physically interacting with its environment.

The robot’s centerpiece is a large touchscreen that can display a control UI, interactive learning apps for kids, and a lively, detailed animated cartoon-like face. Its facial expressions change with context: to their credit, the developers did a great job on the robot’s personality.

You can control the robot with voice commands, but some of its features don’t support these, so sometimes you have to catch the robot and poke its face (that is, the built-in screen).

In addition to a built-in microphone and a rather loud speaker, the robot has a wide-angle camera placed just above the screen. A key feature touted by the vendor is parents’ ability to video-call their kids right through the robot.

On the front face, about halfway between the screen and the wheels, is an extra optical object-recognition sensor that helps the robot avoid collisions. Since obstacle recognition is totally independent of the main camera, the developers very usefully added a physical shutter that completely covers that camera.

So, if you’re concerned that someone might be peeping at you and/or your child through that camera — sadly not without reason as we’ll learn later — you can simply close the shutter. And in case you’re worried that someone might be eavesdropping on you through the built-in microphone, you can just turn off the robot (and judging by the time it takes to boot back up, this is an honest-to-goodness shutdown — not a sleep mode).

As you’d expect, an app for controlling and monitoring the toy is available for parents to use. And, as you must have guessed by now, it’s all connected to the internet and employs a bunch of cloud services under the hood. If you’re interested in the technical details, you can find these in the full version of the security research, which we’ve published on Securelist.

As usual, the more complex the system, the more likely it is to have security holes that someone might try to exploit to do something unsavory. And here we’ve reached the key point of this post: after studying the robot closely, we found several serious vulnerabilities.

Unauthorized video calling

The first thing we found during our research was that malicious actors could make video calls to any robot of this kind. The vendor’s server issued video session tokens to anyone who had both the robot ID and the parent ID. The robot’s ID wasn’t hard to brute-force: every toy had a nine-character ID similar to the serial number printed on its body, with the first two characters being the same for every unit. And the parent’s ID could be obtained by sending a request with the robot ID to the manufacturer’s server without any authentication.
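
To get a feel for the scale of the problem, here’s a rough back-of-the-envelope estimate in Python. The ID alphabet is our assumption for the sake of illustration (we deliberately don’t reproduce the real format here):

```python
# Rough keyspace estimate for a nine-character ID whose first two characters
# are fixed. The alphabet is assumed, not the toy's real one.
free_chars = 9 - 2

digits_only = 10 ** free_chars      # 10,000,000 candidates
alphanumeric = 36 ** free_chars     # ~78 billion if letters are allowed too

print(f"Digits only: {digits_only:,} IDs")
print(f"Alphanumeric: {alphanumeric:,} IDs")
# Either way, with no rate limiting on the server side, enumerating IDs is practical.
```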

Thus, a malicious actor who wanted to call a random child could either try to guess a specific robot’s ID, or play a chat-roulette game by calling random IDs.

Complete parental account hijack

It doesn’t end there. The gullible system let anyone with a robot ID retrieve lots of personal information from the server: IP address, country of residence, kid’s name, gender, age — along with details of the parental account: parent’s email address, phone number, and the code that links the parental app to the robot.

This, in turn, opened the door for a far more hazardous attack: complete parental-account hijack. A malicious actor would only have needed to take a few simple steps:

The first one would have been to log in to the parental account from their own device by using the email address or phone number obtained previously. Authorization required submitting a six-digit one-time code, but login attempts were unlimited, so trivial brute-forcing would have done the trick (see the sketch after this list).
It would only have taken one click to unlink the robot from the true parental account.
Next would have been linking it to the attacker’s account. Account verification relied on the linking-code mentioned above, and the server would send it to all comers.
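
To show just how little protection an unthrottled six-digit code offers, here’s a quick calculation (the request rate is a purely hypothetical figure):

```python
# Back-of-the-envelope: brute-forcing a six-digit one-time code with no attempt limit.
CODES = 10 ** 6          # 000000 through 999999
RATE = 50                # hypothetical sustained login attempts per second

worst_case_hours = CODES / RATE / 3600
print(f"Exhausting the whole space: ~{worst_case_hours:.1f} hours")  # ~5.6 hours
print(f"Expected time to succeed: ~{worst_case_hours / 2:.1f} hours")
```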

A successful attack would have resulted in the parents losing all access to the robot, and recovering it would have required contacting tech support. Even then, the attacker could still have repeated the whole process again, because all they needed was the robot ID, which remained unchanged.

Uploading modified firmware

Finally, as we studied the way that the robot’s various systems functioned, we discovered security issues with the software update process. Update packages came without a digital signature, and the robot installed a specially formatted update archive received from the vendor’s server without running any verifications first.
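
For illustration, here’s a minimal sketch of the kind of check that was missing: verifying a digital signature on the update archive against a vendor public key baked into the device. The key handling, file layout, and choice of Ed25519 are all hypothetical, not the vendor’s actual design:

```python
# Hypothetical update-verification sketch (uses the `cryptography` package).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

VENDOR_PUBLIC_KEY = bytes.fromhex("00" * 32)  # placeholder: burned in at the factory

def update_is_authentic(archive_path: str, signature_path: str) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    with open(archive_path, "rb") as f:
        archive = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, archive)  # raises if anything was tampered with
        return True
    except InvalidSignature:
        return False  # unsigned or modified archive: refuse to install
```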

This opened possibilities for attacking the update server, replacing the archive with a modified one, and uploading malicious firmware that let the attacker execute arbitrary commands with superuser permissions on all robots. In theory, the attackers would then have been able to assume control over the robot’s movements, use the built-in cameras and microphones for spying, make calls to robots, and so on.

How to stay safe

This tale has a happy ending, though. We informed the toy’s developers about the issues we’d discovered, and they took steps to fix them. The vulnerabilities described above have all been fixed.

In closing, here are a few tips on staying safe while using various smart gadgets:

Remember that all kinds of smart devices — even toys — are typically highly complex digital systems whose developers often fail to ensure secure and reliable storage of user data.
As you shop for a device, be sure to closely read user feedback and reviews and, ideally, any security reports if you can find them.
Keep in mind that the mere discovery of vulnerabilities in a device doesn’t make it inferior: issues can be found anywhere. What you need to look for is the vendor’s response: it’s a good sign if any issues have been fixed. It’s not a good thing if the vendor appears not to care.
To avoid being spied or eavesdropped on by your smart devices, turn them off when you’re not using them, and shutter or tape over the camera.
Finally, it goes without saying that you should protect all your family members’ devices with a reliable security solution. A toy-robot hack is admittedly an exotic threat — but the likelihood of encountering other types of online threats is still very high these days.


Apple has released a new way to protect instant messaging in iMessage | Kaspersky official blog

The widespread use of quantum computers in the near future may allow hackers to decrypt messages that were encrypted with classical cryptography methods at astonishing speed. Apple has proposed a solution to this potential problem: after the next update of its OSes, conversations in iMessage will be protected by a new post-quantum cryptographic protocol called PQ3. The protocol reworks iMessage’s end-to-end public-key encryption so that it still runs on classical, non-quantum computers, but provides protection against potential attacks involving future quantum ones.

Today we’ll go over how this new encryption protocol works, and why it’s needed.

How PQ3 works

All popular instant messaging applications and services today implement standard asymmetric encryption methods using a public and private key pair. The public key is used to encrypt sent messages and can be transmitted over insecure channels. The private key is most commonly used to create symmetric session keys that are then used to encrypt messages.
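
As a rough illustration of that classical scheme, here’s a minimal Python sketch using X25519 key agreement to derive a symmetric session key. The algorithm choices are ours for the example; iMessage’s actual protocol is considerably more elaborate:

```python
# Classical (non-post-quantum) hybrid encryption sketch using the `cryptography` package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Each side combines its own private key with the other's public key...
shared_secret = alice_private.exchange(bob_private.public_key())

# ...and derives a symmetric session key from the shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"demo session").derive(shared_secret)

# The session key then encrypts the actual messages
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(session_key).encrypt(nonce, b"hello", None)
```

It’s precisely this key-agreement step that a sufficiently powerful quantum computer could break retroactively; that’s the gap PQ3’s post-quantum component is meant to close.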

This level of security is sufficient for now, but Apple is playing it safe – fearing that hackers may be preparing for quantum computers ahead of time. Due to the low cost of data storage, attackers can collect huge amounts of encrypted data and store it until it can be decrypted using quantum computers.

To prevent this, Apple has developed a new cryptographic protection protocol called PQ3. The key exchange is now protected with an additional post-quantum component. It also minimizes the number of messages that could potentially be decrypted.

Types of cryptography used in messengers.

The PQ3 protocol will be available in iOS 17.4, iPadOS 17.4, macOS 14.4, and watchOS 10.4. The transition to the new protocol will be gradual: first, all user conversations on PQ3-enabled devices will be automatically switched to this protocol; then, later in 2024, Apple plans to completely replace the previously used end-to-end encryption protocol.

Generally, credit is due to Apple for this imminent security boost; however, the company isn’t the first to provide post-quantum security for instant messaging services and applications. In the fall of 2023, Signal’s developers added support for a similar protocol – PQXDH, which provides post-quantum security for users of updated versions of Signal when creating new secure chats.

How the advent of PQ3 will affect the security of Apple users

In essence, Apple is adding a post-quantum component to iMessage’s overall message encryption scheme: PQ3 will be just one element of its security approach, alongside traditional ECDSA asymmetric encryption.

However, relying solely on post-quantum protection technologies isn’t advised. Igor Kuznetsov, Director of Kaspersky’s Global Research and Analysis Team (GReAT), commented on Apple’s innovations as follows:

“Since PQ3 still relies on traditional signature algorithms for message authentication, a man-in-the-middle attacker with a powerful quantum computer (yet to be created) may still have a chance of hacking it.

Does it offer protection against adversaries capable of compromising the device or unlocking it? No, PQ3 only protects the transport layer. Once a message is delivered to an iDevice, there’s no difference – it can be read from the screen, extracted by law enforcement after unlocking the phone, or exfiltrated by advanced attackers using Pegasus, TriangleDB or similar software.”

Thus, those concerned about the protection of their data shouldn’t rely solely on modern post-quantum cryptographic protocols. It’s important to ensure full protection of your device so that third parties can’t reach your instant messages.


Transatlantic Cable podcast episode 334 | Kaspersky official blog

In today’s episode of the Transatlantic Cable podcast, the team look at news that companies at the forefront of generative AI are looking to ‘take action’ on deceptive AI in upcoming elections. From there, the team discuss news that the Canadian government is set to take action against devices such as Flipper Zero, in an apparent fight against criminal activity.

To wrap up, the team discuss news that international police agencies have taken down LockBit – the infamous ransomware gang. Additionally, the team discuss a bizarre story around Artificial Intelligence, blue aliens and job applications – yes, really.

If you liked what you heard, please consider subscribing.

Big tech vows action on ‘deceptive’ AI in elections
Feds Want to Ban the World’s Cutest Hacking Device
UK leads disruption of major cyber-criminal gang
Service Jobs Now Require Bizarre Personality Test From AI Company


Credential phishing targets ESPs through ESPs

Mailing lists that companies use to contact customers have always been an interesting target for cyberattacks. They can be used for spamming, phishing, and even more sophisticated scams. If, besides the databases, the attackers can gain access to a legitimate tool for sending bulk emails, this significantly increases the chances of success of any attack. After all, users who have agreed to receive emails and are accustomed to consuming information in this way are more likely to open a familiar newsletter than some unexpected missive. That’s why attackers regularly attempt to seize access to companies’ accounts held with email service providers (ESPs). In the latest phishing campaign we’ve uncovered, the attack method has been refined to target credentials on the website of the ESP SendGrid by sending phishing emails directly through the ESP itself.

Why is phishing through SendGrid more dangerous in this case?

Among the tips we usually give in phishing-related posts, we most often recommend taking a close look at the domain of the site in the button or text hyperlink that you’re invited to click or tap. ESPs, as a rule, don’t allow direct links to client websites to be inserted in an email, but instead act as a kind of redirect: inside the link, the email recipient sees the domain of the ESP, which then redirects them to the site specified by the mailing’s authors when setting up the campaign. Among other things, this is done to collect accurate analytics.

In this case, the phishing email appears to come from the ESP SendGrid, expressing concern about the customer’s security and highlighting the need to enable two-factor authentication (2FA) to prevent outsiders from taking control of their account. The email explains the benefits of 2FA and provides a link to update the security settings. This leads, as you’ve probably already guessed, to some address in the SendGrid domain (where the settings page would likely be located if the email really was from SendGrid).

To all email scanners, the phishing looks like a perfectly legitimate email sent from SendGrid’s servers with valid links pointing to the SendGrid domain. The only thing that might alert the recipient is the sender’s address. That’s because ESPs put the real customer’s domain and mailing ID there. Most often, phishers make use of hijacked accounts (ESPs subject new customers to rigorous checks, while old ones who’ve already fired off some bulk emails are considered reliable).

An email seemingly from SendGrid sent through SendGrid to phish a SendGrid account.

Phishing site

This is where the attackers’ originality comes to an end. SendGrid redirects the link-clicking victim to a regular phishing site mimicking an account login page. The site domain is “sendgreds”, which at first glance looks very similar to “sendgrid”.

A site mimicking the SendGrid login page. Note the domain in the address bar

How to stay safe

Since the email is sent through a legitimate service and shows no typical phishing signs, it may slip through the net of automatic filters. Therefore, to protect company users, we always recommend deploying solutions with advanced anti-phishing technology not only at the mail gateway level but on all devices that have access to the internet. This will block any attempted redirects to phishing sites.

And yes, for once it’s worth heeding the attackers’ advice and enabling 2FA. But do so not through a link in a suspicious email, but in the settings of your account on the ESP’s website.


The biggest ransomware attacks of 2023 | Kaspersky official blog

Time was when any ransomware incident would spark a lively press and public reaction. Fast forward to the present, and the word “ransomware” in a headline doesn’t generate nearly as much interest: such attacks have become commonplace. Nonetheless, they continue to pose a grave threat to corporate security. This review spotlights the biggest and most high-profile incidents that occurred in 2023.

January 2023: LockBit attack on the UK’s Royal Mail

The year kicked off with the LockBit group attacking Royal Mail, the UK’s national postal service. The attack paralyzed international mail delivery, leaving millions of letters and parcels stuck in the company’s system. On top of that, the parcel tracking website, online payment system, and several other services were also crippled; and at the Royal Mail distribution center in Northern Ireland, printers began spewing out copies of the LockBit group’s distinctive orange ransom note.

The LockBit ransom note that printers at the Royal Mail distribution center began printing in earnest.

As is commonly the case with modern ransomware attacks, LockBit threatened to post stolen data online unless the ransom was paid. Royal Mail refused to pay up, so the data ended up being published.

February 2023: ESXiArgs attacks VMware ESXi servers worldwide

February saw a massive automated ESXiArgs ransomware attack on organizations through the RCE vulnerability CVE-2021-21974 in VMware ESXi servers. Although VMware released a patch for this vulnerability back in early 2021, the attack left more than 3000 VMware ESXi servers encrypted.

The attack operators demanded just over 2BTC (around $45,000 at the time of the attack). For each individual victim they generated a new Bitcoin wallet and put its address in the ransom note.

Ransom demand from the original version of ESXiArgs ransomware.

Just days after the attack began, the cybercriminals unleashed a new strain of the cryptomalware, making it far harder to recover encrypted virtual machines. To make their activities more difficult to trace, they also stopped giving out ransom wallet addresses, prompting victims to make contact through the P2P messenger Tox instead.

March 2023: Clop group widely exploits a zero-day in GoAnywhere MFT

In March 2023, the Clop group began widely exploiting a zero-day vulnerability in Fortra’s GoAnywhere MFT (managed file transfer) tool. Clop is well-known for its penchant for exploiting vulnerabilities in such services: in 2020–2021, the group attacked organizations through a hole in Accellion FTA, switching in late 2021 to exploiting a vulnerability in SolarWinds Serv-U.

In total, more than 100 organizations suffered attacks on vulnerable GoAnywhere MFT servers, including Procter & Gamble, the City of Toronto, and Community Health Systems — one of the largest healthcare providers in the U.S.

Map of GoAnywhere MFT servers connected to the internet.

April 2023: NCR Aloha POS terminals disabled by BlackCat attack

In April, the ALPHV group (aka BlackCat — after the ransomware it uses) attacked NCR, a U.S. manufacturer and servicer of ATMs, barcode readers, payment terminals, and other retail and banking equipment.

The ransomware attack shut down the data centers handling the Aloha POS platform — which is used in restaurants, primarily fast food — for several days.

NCR Aloha POS platform disabled by the ALPHV/BlackCat group.

Essentially, the platform is a one-stop shop for managing catering operations: from processing payments, taking online orders, and operating a loyalty program, to managing the preparation of dishes in the kitchen and payroll accounting. As a result of the ransomware attack on NCR, many catering establishments were forced to revert to pen and paper.

May 2023: Royal ransomware attack on the City of Dallas

Early May saw a ransomware attack on municipal services in Dallas, Texas — the ninth most populous city in the U.S. Most affected were IT systems and communications of the Dallas Police Department, and printers on the City of Dallas network began churning out ransom notes.

The Royal ransom note printed out through City of Dallas network printers.

Later that month, there was another ransomware attack on an urban municipality: the target this time was the City of Augusta in the U.S. state of Georgia, and the perpetrators were the BlackByte group.

June 2023: Clop group launches massive attacks through zero-days in MOVEit Transfer

In June, the same Clop group behind the March attacks on Fortra GoAnywhere MFT began exploiting a zero-day vulnerability in another managed file transfer tool — Progress Software’s MOVEit Transfer.

This ransomware attack — one of the largest incidents of the year — affected numerous organizations, including the oil company Shell, the New York City Department of Education, the BBC media corporation, the British pharmacy chain Boots, the Irish airline Aer Lingus, the University of Georgia, and the German printing equipment manufacturer Heidelberger Druckmaschinen.

The Clop website instructs affected companies to contact the group for negotiations.

July 2023: University of Hawaii pays ransom to the NoEscape group

In July, the University of Hawaii admitted to paying off ransomwarers. The incident itself occurred a month earlier when all eyes were fixed on the attacks on MOVEit. During that time, a relatively new group going by the name of NoEscape infected one of the university’s campuses, Hawaiian Community College, with ransomware.

Having stolen 65GB of data, the attackers threatened the university with publication. The personal information of 28,000 people was apparently at risk of compromise. It was this fact that convinced the university to pay the ransom to the extortionists.

NoEscape announces the hack of the University of Hawaii on its website.

Of note is that university staff had to temporarily shut down IT systems to stop the ransomware from spreading. Although the NoEscape group supplied a decryption key upon payment of the ransom, the restoration of the IT infrastructure was expected to take two months.

August 2023: Rhysida targets the healthcare sector

August was marked by a series of attacks by the Rhysida ransomware group on the healthcare sector. Prospect Medical Holdings (PMH), which operates 16 hospitals and 165 clinics across several American states, was the organization that suffered the most.

The hackers claimed to have stolen 1TB of corporate documents and a 1.3TB SQL database containing 500,000 social security numbers, passports, driver’s licenses, patient medical records, as well as financial and legal documents. The cybercriminals demanded a 50BTC ransom (then around $1.3 million).

Ransom note from the Rhysida group.

September 2023: BlackCat attacks Caesars and MGM casinos

In early September, news broke of a ransomware attack on two of the biggest U.S. hotel and casino chains — Caesars and MGM — in one stroke. Behind the attacks was the ALPHV/BlackCat group, mentioned above in connection with the assault on the NCR Aloha POS platform.

The incident shut down the companies’ entire infrastructure — from hotel check-in systems to slot machines. Interestingly, the victims responded in very different ways. Caesars decided to pay the extortionists $15 million, half of the original $30 million demand.

MGM chose not to pay up, but rather to restore the infrastructure on its own. The recovery process took nine days, during which time the company lost $100 million (its own estimate), of which $10 million was direct costs related to restoring the downed IT systems.

Caesars and MGM own more than half of Las Vegas casinos

October 2023: BianLian group extorts Air Canada

A month later, the BianLian group targeted Canada’s flag carrier, Air Canada. The attackers claimed to have stolen more than 210GB of various information, including employee and supplier data and confidential documents. In particular, they managed to steal information on technical violations and security issues at the airline.

The BianLian website demands a ransom from Air Canada

November 2023: LockBit group exploits Citrix Bleed vulnerability

November was remembered for the Citrix Bleed vulnerability being exploited by the LockBit group, which we also discussed above. Although patches for this vulnerability had been published a month earlier, at the time of the large-scale attack more than 10,000 publicly accessible servers remained vulnerable. This is what the LockBit ransomware took advantage of to breach the systems of several major companies, steal data, and encrypt files.

Among the big-name victims was Boeing, whose stolen data the attackers ended up publishing without waiting for the ransom to be paid. The ransomware also hit the Industrial and Commercial Bank of China (ICBC), the largest commercial bank in the world.

The LockBit website demands a ransom from Boeing

The incident badly hurt the Australian arm of DP World, a major UAE-based logistics company that operates dozens of ports and container terminals worldwide. The attack on DP World Australia’s IT systems massively disrupted its logistics operations, leaving some 30,000 containers stranded in Australian ports.

December 2023: ALPHV/BlackCat infrastructure seized by law enforcement

Toward the end of the year, a joint operation by the FBI, the U.S. Department of Justice, Europol, and law enforcement agencies of several European countries deprived the ALPHV/BlackCat ransomware group of control over its infrastructure. Having hacked it, they quietly observed the cybercriminals’ actions for several months, collecting data decryption keys and aiding BlackCat victims.

In this way, the agencies rid more than 500 organizations worldwide of the ransom threat and saved around $68 million in potential payouts. This was followed in December by a final takeover of the servers, putting an end to BlackCat’s operations.

The joint law enforcement operation to seize ALPHV/BlackCat infrastructure.

Various statistics about the ransomware group’s operations were also made public. According to the FBI, during the two years of its activity, ALPHV/BlackCat breached more than a thousand organizations, demanded a total of more than $500 million from victims, and received around $300 million in ransom payments.

How to guard against ransomware attacks

Ransomware attacks are becoming more varied and sophisticated with each passing year, so there isn’t (and can’t be) one killer catch-all tip to prevent incidents. Defense measures must be comprehensive. Focus on the following tasks:

Train employees in cybersecurity awareness.
Implement and refine data storage and employee access policies.
Back up important data regularly and isolate it from the network.
Install robust protection on all corporate devices.
Monitor suspicious activity on the corporate network using an Endpoint Detection and Response (EDR) solution.
Outsource threat search and response to a specialist company if your in-house information security team lacks the capability.


KeyTrap attack can take out a DNS server | Kaspersky official blog

A group of researchers representing several German universities and institutes have discovered a vulnerability in DNSSEC, a set of extensions to the DNS protocol designed to improve its security, and primarily to counter DNS spoofing.

An attack they dubbed KeyTrap, which exploits the vulnerability, can disable a DNS server by sending it a single malicious data packet. Read on to find out more about this attack.

How KeyTrap works and what makes it dangerous

The DNSSEC vulnerability has only recently become public knowledge, but it was discovered back in December 2023 and registered as CVE-2023-50387. It was assigned a CVSS 3.1 score of 7.5, and a severity rating of “High”. Complete information about the vulnerability and the attack associated with it is yet to be published.

Here’s how KeyTrap works. The malicious actor sets up a nameserver that responds to requests from caching DNS servers – that is, those which serve client requests directly – with a malicious packet. Next, the attacker has the caching server request a DNS record from their malicious nameserver. The record sent in response is a cryptographically signed malicious one. The signature is crafted in such a way that the attacked DNS server, in trying to verify it, runs at full CPU capacity for a long period of time.

According to the researchers, a single such malicious packet can freeze the DNS server for anywhere from 170 seconds to 16 hours – depending on the software it runs on. The KeyTrap attack can not only deny access to web content to all clients using the targeted DNS server, but also disrupt various infrastructural services such as spam protection, digital certificate management (PKI), and secure cross-domain routing (RPKI).

The researchers refer to KeyTrap as “the worst attack on DNS ever discovered”. Interestingly enough, the flaws in the signature validation logic making KeyTrap possible were discovered in one of the earliest versions of the DNSSEC specification, published as far back as… 1999. In other words, the vulnerability is about to turn 25!

The origins of KeyTrap can be traced back to RFC-2535, the DNSSEC specification published in 1999

Fending off KeyTrap

The researchers have alerted all DNS server software developers and major public DNS providers. Updates and security advisories to fix CVE-2023-50387 are now available for PowerDNS, NLnet Labs Unbound, and Internet Systems Consortium BIND9. If you are an administrator of a DNS server, it’s high time to install the updates.

Bear in mind, though, that the DNSSEC logic issues that have made KeyTrap possible are fundamental in nature and not easily fixed. Patches released by DNS software developers can only go some way toward solving the problem, as the vulnerability is part of the standard rather than of specific implementations. “If we launch [KeyTrap] against a patched resolver, we still get 100 percent CPU usage but it can still respond,” said one of the researchers.

Practical exploitation of the flaw remains a possibility, with the potential result being unpredictable resolver failures. In case this happens, corporate network administrators would do well to prepare a list of backup DNS servers in advance so they can switch as needed to keep the network functioning normally and let users browse the web resources they need unimpeded.
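
For admins who script such failovers, here’s a minimal health-check sketch using the dnspython package (the server addresses are placeholders):

```python
# Probe a list of resolvers and return the first one that still answers.
import dns.resolver

CANDIDATES = ["10.0.0.53", "10.0.1.53", "1.1.1.1"]  # primary plus backups (placeholders)

def first_working_resolver(candidates, probe_name="example.com"):
    for ip in candidates:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        resolver.lifetime = 2  # seconds to wait before declaring the server unresponsive
        try:
            resolver.resolve(probe_name, "A")
            return ip
        except Exception:
            continue  # frozen or unreachable resolver: try the next one
    return None

print(first_working_resolver(CANDIDATES))
```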


How to run language models and other AI tools locally on your computer | Kaspersky official blog

Many people are already experimenting with generative neural networks and finding regular use for them, including at work. For example, ChatGPT and its analogs are regularly used by almost 60% of Americans (and not always with permission from management). However, all the data involved in such operations — both user prompts and model responses — are stored on servers of OpenAI, Google, and the rest. For tasks where such information leakage is unacceptable, you don’t need to abandon AI completely — you just need to invest a little effort (and perhaps money) to run the neural network locally on your own computer – even a laptop.

Cloud threats

The most popular AI assistants run on the cloud infrastructure of large companies. It’s efficient and fast, but your data processed by the model may be accessible to both the AI service provider and completely unrelated parties, as happened last year with ChatGPT.

Such incidents present varying levels of threat depending on what these AI assistants are used for. If you’re generating cute illustrations for some fairy tales you’ve written, or asking ChatGPT to create an itinerary for your upcoming weekend city break, it’s unlikely that a leak will lead to serious damage. However, if your conversation with a chatbot contains confidential info — personal data, passwords, or bank card numbers — a possible leak to the cloud is no longer acceptable. Thankfully, it’s relatively easy to prevent by pre-filtering the data — we’ve written a separate post about that.

However, in cases where either all the correspondence is confidential (for example, medical or financial information), or the reliability of pre-filtering is questionable (you need to process large volumes of data that no one will preview and filter), there’s only one solution: move the processing from the cloud to a local computer. Of course, running your own version of ChatGPT or Midjourney offline is unlikely to be successful, but other neural networks working locally provide comparable quality with less computational load.

What hardware do you need to run a neural network?

You’ve probably heard that working with neural networks requires super-powerful graphics cards, but in practice this isn’t always the case. Different AI models, depending on their specifics, may be demanding on such computer components as RAM, video memory, drive, and CPU (here, not only the processing speed is important, but also the processor’s support for certain vector instructions). The ability to load the model depends on the amount of RAM, and the size of the “context window” — that is, the memory of the previous conversation — depends on the amount of video memory. Typically, with a weak graphics card and CPU, generation occurs at a snail’s pace (one to two words per second for text models), so a computer with such a minimal setup is only appropriate for getting acquainted with a particular model and evaluating its basic suitability. For full-fledged everyday use, you’ll need to increase the RAM, upgrade the graphics card, or choose a faster AI model.

As a starting point, you can try working with computers that were considered relatively powerful back in 2017: processors no lower than Core i7 with support for AVX2 instructions, 16GB of RAM, and graphics cards with at least 4GB of memory. For Mac enthusiasts, models running on the Apple M1 chip and above will do, while the memory requirements are the same.
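
If you’d like to check your machine against that baseline, a short script will do it. This sketch assumes the third-party psutil and py-cpuinfo packages:

```python
# Quick hardware self-check: CPU model, AVX2 support, and total RAM.
import cpuinfo  # pip install py-cpuinfo
import psutil   # pip install psutil

info = cpuinfo.get_cpu_info()
ram_gb = psutil.virtual_memory().total / 2**30

print(f"CPU: {info.get('brand_raw', 'unknown')}")
print(f"AVX2 support: {'avx2' in info.get('flags', [])}")
print(f"RAM: {ram_gb:.1f} GB (16 GB is a sensible starting point)")
```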

When choosing an AI model, you should first familiarize yourself with its system requirements. A search query like “model_name requirements” will help you assess whether it’s worth downloading this model given your available hardware. There are detailed studies available on the impact of memory size, CPU, and GPU on the performance of different models; for example, this one.

Good news for those who don’t have access to powerful hardware — there are simplified AI models that can perform practical tasks even on old hardware. Even if your graphics card is very basic and weak, it’s possible to run models and their launch environments using only the CPU. Depending on your tasks, these can even work acceptably well.

Examples of how various computer builds work with popular language models

Choosing an AI model and the magic of quantization

A wide range of language models are available today, but many of them have limited practical applications. Nevertheless, there are easy-to-use and publicly available AI tools that are well-suited for specific tasks, be they generating text (for example, Mistral 7B), or creating code snippets (for example, Code Llama 13B). Therefore, when selecting a model, narrow down the choice to a few suitable candidates, and then make sure that your computer has the necessary resources to run them.

In any neural network, most of the memory strain is courtesy of weights — numerical coefficients describing the operation of each neuron in the network. Initially, when training the model, the weights are computed and stored as high-precision fractional numbers. However, it turns out that rounding the weights in the trained model allows the AI tool to be run on regular computers while only slightly decreasing the performance. This rounding process is called quantization, and with its help the model’s size can be reduced considerably — instead of 16 bits, each weight might use eight, four, or even two bits.
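
The savings are easy to estimate. Here’s a back-of-the-envelope sketch for a seven-billion-parameter model, counting weight storage only (real memory use at runtime is somewhat higher):

```python
# Approximate weight-storage size of a 7B-parameter model at different quantization levels.
PARAMS = 7_000_000_000

for bits in (16, 8, 4, 2):
    size_gb = PARAMS * bits / 8 / 2**30
    print(f"{bits:>2}-bit weights: ~{size_gb:.1f} GB")
# 16-bit weights need ~13 GB; 4-bit quantization shrinks that to ~3.3 GB,
# the difference between "needs a workstation" and "runs on a laptop".
```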

According to current research, a larger model with more parameters and quantization can sometimes give better results than a model with precise weight storage but fewer parameters.

Armed with this knowledge, you’re now ready to explore the treasure trove of open-source language models, namely the top Open LLM leaderboard. In this list, AI tools are sorted by several generation quality metrics, and filters make it easy to exclude models that are too large, too small, or too accurate.

List of language models sorted by filter set

After reading the model description and making sure it’s potentially a fit for your needs, test its performance in the cloud using Hugging Face or Google Colab services. This way, you can avoid downloading models which produce unsatisfactory results, saving you time. Once you’re satisfied with the initial test of the model, it’s time to see how it works locally!

Required software

Most of the open-source models are published on Hugging Face, but simply downloading them to your computer isn’t enough. To run them, you have to install specialized software, such as LLaMA.cpp, or — even easier — its “wrapper”, LM Studio. The latter allows you to select your desired model directly from the application, download it, and run it in a dialog box.
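
If you prefer scripting to a GUI, LLaMA.cpp also has Python bindings. Here’s a minimal sketch that assumes the llama-cpp-python package and an already-downloaded GGUF model (the file name is a placeholder):

```python
# Minimal local text generation via the llama-cpp-python bindings for LLaMA.cpp.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder: any quantized GGUF file
    n_ctx=2048,  # size of the context window, in tokens
)

output = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(output["choices"][0]["text"])  # everything ran locally; nothing left the machine
```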

Another “out-of-the-box” way to use a chatbot locally is GPT4All. Here, the choice is limited to about a dozen language models, but most of them will run even on a computer with just 8GB of memory and a basic graphics card.

If generation is too slow, then you may need a model with coarser quantization (two bits instead of four). If generation is interrupted or execution errors occur, the problem is often insufficient memory — it’s worth looking for a model with fewer parameters or, again, with coarser quantization.

Many models on Hugging Face have already been quantized to varying degrees of precision, but if no one has quantized the model you want with the desired precision, you can do it yourself using GPTQ.

This week, another promising tool was released to public beta: Chat With RTX from NVIDIA. The manufacturer of the most sought-after AI chips has released a local chatbot capable of summarizing the content of YouTube videos, processing sets of documents, and much more — provided the user has a Windows PC with 16GB of memory and an NVIDIA RTX 30- or 40-series graphics card with 8GB or more of video memory. “Under the hood” are the same varieties of Mistral and Llama 2 from Hugging Face. Of course, powerful graphics cards can improve generation performance, but according to feedback from the first testers, the existing beta is quite cumbersome (about 40GB) and difficult to install. However, NVIDIA’s Chat With RTX could become a very useful local AI assistant in the future.

The code for the game “Snake”, written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The applications listed above perform all computations locally, don’t send data to servers, and can run offline so you can safely share confidential information with them. However, to fully protect yourself against leaks, you need to ensure not only the security of the language model but also that of your computer – and that’s where our comprehensive security solution comes in. As confirmed in independent tests, Kaspersky Premium has practically no impact on your computer’s performance — an important advantage when working with local AI models.


Secure AI usage both at home and at work | Kaspersky official blog

Last year’s explosive growth in AI applications, services, and plug-ins looks set to only accelerate. From office applications and image editors to integrated development environments (IDEs) such as Visual Studio — AI is being added to familiar and long-used tools. Plenty of developers are creating thousands of new apps that tap the largest AI models. However, no one in this race has yet been able to solve the inherent security issues — first and foremost, minimizing confidential data leaks, and also the risk of account/device hacking through various AI tools — let alone create proper safeguards against a futuristic “evil AI”. Until someone comes up with an off-the-shelf solution for protecting the users of AI assistants, you’ll have to pick up a few skills and help yourself.

So, how do you use AI without regretting it later?

Filter important data

The privacy policy of OpenAI, the developer of ChatGPT, unequivocally states that any dialogs with the chatbot are saved and can be used for a number of purposes. First, these are solving technical issues and preventing terms-of-service violations: in case someone gets an idea to generate inappropriate content. Who would have thought it, right? In that case, chats may even be reviewed by a human. Second, the data may be used for training new GPT versions and making other product “improvements”.

Most other popular language models — be it Google’s Bard, Anthropic’s Claude, or Microsoft’s Bing and Copilot — have similar policies: they can all save dialogs in their entirety.

That said, inadvertent chat leaks have already occurred due to software bugs, with users seeing other people’s conversations instead of their own. The use of this data for training could also lead to a data leak from a pre-trained model: the AI assistant might give your information to someone if it believes it to be relevant for the response. Information security experts have even designed multiple attacks (one, two, three) aimed at stealing dialogs, and they’re unlikely to stop there.

So, remember: anything you write to a chatbot can be used against you. We recommend taking precautions when talking to AI.

Don’t send any personal data to a chatbot. No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data that belongs to you, your company, or your customers must end up in chats with an AI. You can replace these with asterisks or “REDACTED” in your request.
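
If you find yourself pasting such material often, the masking can even be scripted. Below is an illustrative sketch; the regular expressions are deliberately simple and nowhere near exhaustive, so don’t mistake it for a production-grade PII filter:

```python
# Mask common personal-data patterns in a prompt before it goes anywhere near a chatbot.
import re

PATTERNS = {
    "card":  re.compile(r"\b(?:\d[ -]?){13,19}\b"),       # payment card numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    "phone": re.compile(r"\+?\d[\d ()-]{7,}\d"),          # phone numbers
}

def redact(prompt: str) -> str:
    for name, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {name.upper()}]", prompt)
    return prompt

print(redact("Write a polite reply to john.doe@example.com, tel. +1 202 555 0101"))
```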

Don’t upload any documents. Numerous plug-ins and add-ons let you use chatbots for document processing. There might be a strong temptation to upload a work document to, say, get an executive summary. However, by carelessly uploading a multi-page document, you risk leaking confidential data, intellectual property, or a commercial secret such as the release date of a new product or the entire team’s payroll. Or, worse than that, when processing documents received from external sources, you might be targeted with an attack that counts on the document being scanned by a language model.

Use privacy settings. Carefully review your large-language-model (LLM) vendor’s privacy policy and available settings: these can normally be leveraged to minimize tracking. For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use the API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.

Sending code? Clean up any confidential data. This tip goes out to those software engineers who use AI assistants for reviewing and improving their code: remove any API keys, server addresses, or any other information that could give away the structure of the application or the server configuration.

Limit the use of third-party applications and plug-ins

Follow the above tips every time — no matter what popular AI assistant you’re using. However, even this may not be sufficient to ensure privacy. The use of ChatGPT plug-ins, Bard extensions, or separate add-on applications gives rise to new types of threats.

First, your chat history may now be stored not only on Google or OpenAI servers but also on servers belonging to the third party that supports the plug-in or add-on, as well as in unlikely corners of your computer or smartphone.

Second, most plug-ins draw information from external sources: web searches, your Gmail inbox, or personal notes from services such as Notion, Jupyter, or Evernote. As a result, any of your data from those services may also end up on the servers where the plug-in or the language model itself is running. An integration like that may carry significant risks: for example, consider this attack that creates new GitHub repositories on behalf of the user.

Third, the publication and verification of plug-ins for AI assistants are currently a much less orderly process than, say, app-screening in the App Store or Google Play. Therefore, your chances of encountering a poorly working, badly written, buggy, or even plain malicious plug-in are fairly high — all the more so because it seems no one really checks the creators or their contacts.

How do you mitigate these risks? Our key tip here is to give it some time. The plug-in ecosystem is too young, the publication and support processes aren’t smooth enough, and the creators themselves don’t always take care to design plug-ins properly or comply with information security requirements. This whole ecosystem needs more time to mature and become more secure and reliable.

Besides, the value that many plug-ins and add-ons add to the stock ChatGPT version is minimal: minor UI tweaks and “system prompt” templates that customize the assistant for a specific task (“Act as a high-school physics teacher…”). These wrappers certainly aren’t worth trusting with your data, as you can accomplish the task just fine without them.

If you do need certain plug-in features right here and now, take the maximum precautions available before using them.

Choose extensions and add-ons that have been around for at least several months and are being updated regularly.
Consider only plug-ins that have lots of downloads, and carefully read the reviews for any issues.
If the plug-in comes with a privacy policy, read it carefully before you start using the extension.
Opt for open-source tools.
If you possess even rudimentary coding skills — or coder friends — skim the code to make sure that it only sends data to declared servers and, ideally, AI model servers only.

Execution plug-ins call for special monitoring

So far, we’ve been discussing risks relating to data leaks; but this isn’t the only potential issue when using AI. Many plug-ins are capable of performing specific actions at the user’s command — such as ordering airline tickets. These tools provide malicious actors with a new attack vector: the victim is presented with a document, web page, video, or even an image that contains concealed instructions for the language model in addition to the main content. If the victim feeds the document or link to a chatbot, the latter will execute the malicious instructions — for example, by buying tickets with the victim’s money. This type of attack is referred to as prompt injection, and although the developers of various LLMs are trying to develop a safeguard against this threat, no one has managed it — and perhaps never will.

Luckily, most significant actions — especially those involving payment transactions such as purchasing tickets — require a double confirmation. However, interactions between language models and plug-ins create an attack surface so large that it’s difficult to guarantee consistent results from these measures.

Therefore, you need to be really thorough when selecting AI tools, and also make sure that they only receive trusted data for processing.


Cyberthreats to marketing | Kaspersky official blog

When it comes to attacks on businesses, the focus is usually on four aspects: finance, intellectual property, personal data, and IT infrastructure. However, we mustn’t forget that cybercriminals can also target company assets managed by PR and marketing — including e-mailouts, advertising platforms, social media channels, and promotional sites. At first glance, these may seem unattractive to the bad guys (“where’s the revenue?”), but in practice each can serve cybercriminals in their own “marketing activities”.

Malvertising

To the great surprise of many (even InfoSec experts), cybercriminals have been making active use of legitimate paid advertising for a number of years now. In one way or another they pay for banner ads and search placements, and employ corporate promotion tools. There are many examples of this phenomenon, which goes by the name of malvertising (malicious advertising). Usually, cybercriminals advertise fake pages of popular apps, fake promo campaigns of famous brands, and other fraudulent schemes aimed at a wide audience. Sometimes threat actors create an advertising account of their own and pay for advertising, but this method leaves too much of a trail (such as payment details). So a different method is more attractive to them: stealing login credentials and hacking the advertising account of a straight-arrow company, then promoting their sites through it. This has a double payoff for the cybercriminals: they get to spend others’ money without leaving excess traces. But the victim company, besides a gutted advertising account, gets one problem after another — including potentially being blocked by the advertising platform for distributing malicious content.

Downvoted and unfollowed

A variation of the above scheme is a takeover of social networks’ paid advertising accounts. The specifics of social media platforms create additional troubles for the target company.

First, access to corporate social media accounts is usually tied to employees’ personal accounts. It’s often enough for attackers to compromise an advertiser’s personal computer or steal their social network password to gain access not only to likes and cat pics, but also to the scope of action granted by the company they work for. That includes posting on the company’s social network page, sending emails to customers through the built-in communication mechanism, and placing paid advertising. Revoking these functions from a compromised employee is easy as long as they aren’t the main administrator of the corporate page — in which case, restoring access will be labor-intensive in the extreme.

Second, most advertising on social networks takes the form of “promoted posts” created on behalf of a particular company. If an attacker posts and promotes a fraudulent offer, the audience immediately sees who published it and can voice their complaints directly under the post. In this case, the company will suffer not just financial but visible reputational damage.

Third, on social networks many companies save “custom audiences” — ready-made collections of customers interested in various products and services or who have previously visited the company’s website. Although these usually can’t be pulled (that is, stolen) from a social network, unfortunately it’s possible to create malvertising on their basis that’s adapted to a specific audience and is thus more effective.

Unscheduled circular

Another effective way for cybercriminals to get free advertising is to hijack an account on an email service provider. If the attacked company is large enough, it may have millions of subscribers in its mailing list.

This access can be exploited in a number of ways: by mailing an irresistible fake offer to email addresses in the subscriber database; by covertly substituting links in planned advertising emails; or by simply downloading the subscriber database in order to send them phishing emails in other ways later on.

Again, the damage suffered is financial, reputational, and technical. By “technical” we mean the blocking of future incoming messages by mail servers. In other words, after the malicious mailouts, the victim company will have to resolve matters not only with the mailing platform, but also potentially with specific email providers that have blocked it as a source of fraudulent correspondence.

A very nasty side effect of such an attack is the leakage of customers’ personal data. This is an incident in its own right — capable of inflicting not only reputational damage but also landing you with a fine from data protection regulators.

Fifty shades of website

A website hack can go unnoticed for a long time — especially for a small company that does business primarily through social networks or offline. From the cybercriminals’ point of view, the goals of a website hack vary depending on the type of site and the nature of the company’s business. Leaving aside cases when website compromise is part of a more sophisticated cyberattack, we can generally delineate the following varieties.

First, threat actors can install a web skimmer on an e-commerce site. This is a small, well-disguised piece of JavaScript embedded directly in the website code that steals card details when customers pay for a purchase. The customer doesn’t need to download or run anything — they simply pay for goods or services on the site, and the attackers skim off the money.

Second, attackers can create hidden subsections on the site and fill them with malicious content of their choosing. Such pages can be used for a wide variety of criminal activity, be it fake giveaways, fake sales, or distributing Trojanized software. Using a legitimate website for these purposes is ideal, just as long as the owners don’t notice that they have “guests”. There is, in fact, a whole industry centered around this practice. Especially popular are unattended sites created for some marketing campaign or one-time event and then forgotten about.

The damage to a company from a website hack is broad-ranging, and includes: increased site-related costs due to malicious traffic; a decrease in the number of real visitors due to a drop in the site’s SEO ranking; potential wrangles with customers or law enforcement over unexpected charges to customers’ cards.

Hotwired web forms

Even without hacking a company’s website, threat actors can use it for their own purposes. All they need is a website function that generates a confirmation email: a feedback form, an appointment form, and so on. Cybercriminals use automated systems to exploit such forms for spamming or phishing.

The mechanics are straightforward: the target’s address is entered into the form as a contact email, while the text of the fraudulent email itself goes in the Name or Subject field, for example, “Your money transfer is ready for issue (link)”. As a result, the victim receives a malicious email that reads something like: “Dear XXX, your money transfer is ready for issue (link). Thank you for contacting us. We’ll be in touch shortly”. Naturally, the anti-spam platforms eventually stop letting such emails through, and the victim company’s form loses some of its functionality. In addition, all recipients of such mail think less of the company, equating it with a spammer.
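
On the website owner’s side, the validation and rate limiting recommended in the tips below can be prototyped in a few lines. Here’s an illustrative, framework-agnostic sketch with arbitrary example limits:

```python
# Reject links in text-only fields and cap per-IP submissions before any email is sent.
import re
import time
from collections import defaultdict

URL_RE = re.compile(r"(https?://|www\.)", re.IGNORECASE)
submissions = defaultdict(list)  # in-memory log; a real site would use Redis or similar

def form_is_acceptable(name: str, subject: str, client_ip: str,
                       limit: int = 5, window: int = 86400) -> bool:
    # Links have no business being in a name or subject field
    if URL_RE.search(name) or URL_RE.search(subject):
        return False
    # Allow at most `limit` submissions per IP per `window` seconds
    now = time.time()
    recent = [t for t in submissions[client_ip] if now - t < window]
    if len(recent) >= limit:
        return False
    recent.append(now)
    submissions[client_ip] = recent
    return True
```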

How to protect PR and marketing assets from cyberattacks

Since the described attacks are quite diverse, in-depth protection is called for. Here are the steps to take:

Conduct cybersecurity awareness training across the entire marketing department. Repeat it regularly;
Make sure that all employees adhere to password best practices: long, unique passwords for each platform and mandatory use of two-factor authentication — especially for social networks, mailing tools, and ad management platforms;
Eliminate the practice of using one password for all employees who need access to a corporate social network or other online tool;
Instruct employees to access mailing/advertising tools and the website admin panel only from work devices equipped with full protection in line with company standards (EDR or internet security, EMM/UEM, VPN);
Urge employees to install comprehensive protection on their personal computers and smartphones;
Introduce the practice of mandatory logout from mailing/advertising platforms and other similar accounts when not in use;
Remember to revoke access to social networks, mailing/advertising platforms, and website admin immediately after an employee departs the company;
Regularly review email lists sent out and ads currently running, together with detailed website traffic analytics so as to spot anomalies in good time;
Make sure that all software used on your websites (content management system, its extensions) and on work computers (such as OS, browser, and Office), is regularly and systematically updated to the very latest versions;
Work with your website support contractor to implement form validation and sanitization; in particular, to ensure that links can’t be inserted into fields that aren’t intended for such a purpose. Also set a “rate limit” to prevent the same actor from making hundreds of requests a day, plus a smart captcha to guard against bots.



Navigating the risks of online dating | Kaspersky official blog

Navigating the current dating landscape can be perplexing; it’s filled with apps, websites, catfishing, and lurking stalkers. While pre-Tinder dating had its challenges, it sure seemed to be less intricate.

Complicating matters is the heightened uncertainty about the identity of your virtual conversational partner, and the disconcerting possibility of digital stalking.

In fact, we recently commissioned a report on digital stalking to ascertain the reality of these risks and concerns. We engaged with over 21,000 participants to cast light on the alarming prevalence of digital abuse experienced by those in pursuit of love.

Revelations from the survey

As per our survey findings, 34% of respondents believe that googling or checking social media accounts of someone they’ve just started dating is a form of “due diligence”. While seemingly harmless, 23% reported encountering some form of online stalking from a new romantic interest, suggesting that some individuals may take a swift Google search a bit too far.

Furthermore, and somewhat alarmingly, over 90% of respondents expressed a willingness to share or consider sharing passwords that grant access to their location. While seemingly innocuous on the surface, there looms the specter of stalkerware: silent software capable of continuously tracking user whereabouts and spying on messages.

How to protect yourself? Tips from the experts

We’ve compiled advice from leading online security, dating, and safety experts to help you navigate the waters of love safely this Valentine’s Day!

Enhanced password safety measures

Create complex passwords using a mix of letters, numbers, and symbols.
Never reuse passwords across different sites and apps; keep them private.
Use two-factor authentication for an added layer of security.
Change your password immediately if you’ve shared it with someone you’ve been dating but are no longer in touch with.
Use a password manager to keep all your passwords strong and safe.

Proactive verification techniques of online dating profiles

Run a reverse-image search for that profile; if it appears on multiple pages under various names, it’s likely a catfisher.
Look for inconsistencies in daters’ stories and profile details.
Be wary of sudden, intense expressions of love, or requests for money.
Use video calls to verify a dater’s identity before meeting in person.

Maximizing online dating profile security:

Conduct your own privacy audit of your social media accounts to understand what’s publicly visible.
Customize your privacy settings to control who can see your posts and personal information.
Regularly review your friends/followers list to ensure you know who has access to your information.

Strategic sharing guidelines:

Avoid posting details that could disclose your location, workplace, or routines.
Think twice before sharing emotionally charged or intimate content.
Be mindful of metadata or other identifiable clues in photos (like geotags) that can reveal your identity, location, or details you’d rather keep private.
Set personal boundaries on the type of information you share early on in a relationship; only reveal personal details gradually as trust builds over time.
Listen to your instincts – if something feels off, take a step back and give yourself a moment.
Consider how the data you share could be used to piece together a profile or compromise your physical safety.

Comprehensive safety plan for offline meetings:

Choose well-lit, public places for initial meetings.
Avoid sharing or displaying personal items that might reveal your address or sensitive information.
Arrange your own transportation to and from the meeting place.
Have a check-in system with a friend or family member.

As we embrace the possibilities for romance and connection in the digital age, let’s not forget the importance of our safety and wellbeing. By implementing these strategies, you can confidently explore the world of online dating while safeguarding both your digital and physical self. For more details, please take a look at our safe dating guide. And our premium security solution with identity protection and privacy features can help you keep calm and carry on… dating!
