How to run language models and other AI tools locally on your computer | Kaspersky official blog

Many people are already experimenting with generative neural networks and finding regular use for them, including at work. For example, ChatGPT and its analogs are regularly used by almost 60% of Americans (and not always with permission from management). However, all the data involved in such operations — both user prompts and model responses — are stored on servers of OpenAI, Google, and the rest. For tasks where such information leakage is unacceptable, you don’t need to abandon AI completely — you just need to invest a little effort (and perhaps money) to run the neural network locally on your own computer – even a laptop.

Cloud threats

The most popular AI assistants run on the cloud infrastructure of large companies. It’s efficient and fast, but your data processed by the model may be accessible to both the AI service provider and completely unrelated parties, as happened last year with ChatGPT.

Such incidents present varying levels of threat depending on what these AI assistants are used for. If you’re generating cute illustrations for some fairy tales you’ve written, or asking ChatGPT to create an itinerary for your upcoming weekend city break, it’s unlikely that a leak will lead to serious damage. However, if your conversation with a chatbot contains confidential info — personal data, passwords, or bank card numbers — a possible leak to the cloud is no longer acceptable. Thankfully, it’s relatively easy to prevent by pre-filtering the data — we’ve written a separate post about that.

However, in cases where either all the correspondence is confidential (for example, medical or financial information), or the reliability of pre-filtering is questionable (you need to process large volumes of data that no one will preview and filter), there’s only one solution: move the processing from the cloud to a local computer. Of course, running your own version of ChatGPT or Midjourney offline is unlikely to be successful, but other neural networks working locally provide comparable quality with less computational load.

What hardware do you need to run a neural network?

You’ve probably heard that working with neural networks requires super-powerful graphics cards, but in practice this isn’t always the case. Different AI models, depending on their specifics, place different demands on computer components such as RAM, video memory, storage, and the CPU (here, not only the processing speed is important, but also the processor’s support for certain vector instructions). The ability to load the model at all depends on the amount of RAM, and the size of the “context window” — that is, the memory of the previous conversation — depends on the amount of video memory. Typically, with a weak graphics card and CPU, generation occurs at a snail’s pace (one to two words per second for text models), so a computer with such a minimal setup is only appropriate for getting acquainted with a particular model and evaluating its basic suitability. For full-fledged everyday use, you’ll need to increase the RAM, upgrade the graphics card, or choose a faster AI model.

As a starting point, you can try working with computers that were considered relatively powerful back in 2017: processors no lower than Core i7 with support for AVX2 instructions, 16GB of RAM, and graphics cards with at least 4GB of memory. For Mac enthusiasts, models running on the Apple M1 chip and above will do, while the memory requirements are the same.
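
If you’re unsure whether a machine clears this bar, a quick script can check. Here’s a minimal sketch in Python, assuming the third-party psutil and py-cpuinfo packages are installed:

```python
# Minimal sketch: check RAM and AVX2 support before downloading a model.
# Assumes the third-party packages psutil and py-cpuinfo are installed:
#   pip install psutil py-cpuinfo
import psutil
import cpuinfo

ram_gb = psutil.virtual_memory().total / 2**30
flags = cpuinfo.get_cpu_info().get("flags", [])

print(f"RAM: {ram_gb:.1f} GB ({'OK' if ram_gb >= 16 else 'below the suggested 16GB'})")
print(f"AVX2: {'supported' if 'avx2' in flags else 'not supported'}")
```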

When choosing an AI model, you should first familiarize yourself with its system requirements. A search query like “model_name requirements” will help you assess whether it’s worth downloading this model given your available hardware. There are detailed studies available on the impact of memory size, CPU, and GPU on the performance of different models; for example, this one.

Good news for those who don’t have access to powerful hardware — there are simplified AI models that can perform practical tasks even on old hardware. Even if your graphics card is very basic and weak, it’s possible to run models and launch environments using only the CPU. Depending on your tasks, these can even work acceptably well.

Examples of how various computer builds work with popular language models

Choosing an AI model and the magic of quantization

A wide range of language models are available today, but many of them have limited practical applications. Nevertheless, there are easy-to-use and publicly available AI tools that are well-suited for specific tasks, be they generating text (for example, Mistral 7B), or creating code snippets (for example, Code Llama 13B). Therefore, when selecting a model, narrow down the choice to a few suitable candidates, and then make sure that your computer has the necessary resources to run them.

In any neural network, most of the memory strain is courtesy of weights — numerical coefficients describing the operation of each neuron in the network. Initially, when training the model, the weights are computed and stored as high-precision fractional numbers. However, it turns out that rounding the weights in the trained model allows the AI tool to be run on regular computers while only slightly decreasing the performance. This rounding process is called quantization, and with its help the model’s size can be reduced considerably — instead of 16 bits, each weight might use eight, four, or even two bits.
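
To make the idea concrete, here’s a toy illustration of the principle (not a production algorithm such as GPTQ): float32 weights are mapped onto a 16-level grid, stored compactly, and restored with only a small loss of precision.

```python
import numpy as np

# Toy illustration of quantization: map float32 weights onto a 4-bit grid.
weights = np.random.default_rng(42).normal(size=8).astype(np.float32)

levels = 2**4                      # 4 bits -> 16 representable values
scale = np.abs(weights).max() / (levels // 2 - 1)
quantized = np.clip(np.round(weights / scale), -levels // 2, levels // 2 - 1)
restored = quantized * scale       # what the model actually computes with

print("original:", np.round(weights, 3))
print("restored:", np.round(restored, 3))  # close, but not identical
```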

According to current research, a larger model with more parameters and quantization can sometimes give better results than a model with precise weight storage but fewer parameters.

Armed with this knowledge, you’re now ready to explore the treasure trove of open-source language models, namely the Open LLM Leaderboard. In this list, AI tools are sorted by several generation-quality metrics, and filters make it easy to exclude models that are too large, too small, or stored at too high a precision for your hardware.

List of language models sorted by filter set

After reading the model description and making sure it’s potentially a fit for your needs, test its performance in the cloud using Hugging Face or Google Colab services. This way, you can avoid downloading models which produce unsatisfactory results, saving you time. Once you’re satisfied with the initial test of the model, it’s time to see how it works locally!

Required software

Most of the open-source models are published on Hugging Face, but simply downloading them to your computer isn’t enough. To run them, you have to install specialized software, such as LLaMA.cpp, or — even easier — its “wrapper”, LM Studio. The latter allows you to select your desired model directly from the application, download it, and run it in a dialog box.

Another “out-of-the-box” way to use a chatbot locally is GPT4All. Here, the choice is limited to about a dozen language models, but most of them will run even on a computer with just 8GB of memory and a basic graphics card.
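
If you prefer code to GUIs, the same models can also be driven from a script. Here’s a minimal sketch using the llama-cpp-python bindings for LLaMA.cpp; the model file name is a placeholder for whichever quantized GGUF model you’ve downloaded:

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The model path is a placeholder for any quantized GGUF file you've downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

output = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```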

If generation is too slow, then you may need a model with coarser quantization (two bits instead of four). If generation is interrupted or execution errors occur, the problem is often insufficient memory — it’s worth looking for a model with fewer parameters or, again, with coarser quantization.

Many models on Hugging Face have already been quantized to varying degrees of precision, but if no one has quantized the model you want with the desired precision, you can do it yourself using GPTQ.
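
As a rough sketch of what do-it-yourself quantization looks like with the AutoGPTQ library (the model ID and the single calibration sample here are placeholders; real calibration uses a representative dataset):

```python
# Rough sketch of 4-bit GPTQ quantization with AutoGPTQ (pip install auto-gptq).
# The model ID and single calibration sample are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQ picks the rounding that minimizes error on calibration samples
examples = [tokenizer("Quantization trades precision for memory.")]

model = AutoGPTQForCausalLM.from_pretrained(model_id, BaseQuantizeConfig(bits=4, group_size=128))
model.quantize(examples)
model.save_quantized("opt-125m-4bit-gptq")
```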

This week, another promising tool was released to public beta: Chat With RTX from NVIDIA. The manufacturer of the most sought-after AI chips has released a local chatbot capable of summarizing the content of YouTube videos, processing sets of documents, and much more — provided the user has a Windows PC with 16GB of memory and an NVIDIA RTX 30- or 40-series graphics card with 8GB or more of video memory. “Under the hood” are the same varieties of Mistral and Llama 2 from Hugging Face. Of course, powerful graphics cards can improve generation performance, but according to feedback from the first testers, the existing beta is quite cumbersome (about 40GB) and difficult to install. However, NVIDIA’s Chat With RTX could become a very useful local AI assistant in the future.

The code for the game “Snake”, written by the quantized language model TheBloke/CodeLlama-7B-Instruct-GGUF

The applications listed above perform all computations locally, don’t send data to servers, and can run offline so you can safely share confidential information with them. However, to fully protect yourself against leaks, you need to ensure not only the security of the language model but also that of your computer – and that’s where our comprehensive security solution comes in. As confirmed in independent tests, Kaspersky Premium has practically no impact on your computer’s performance — an important advantage when working with local AI models.


Secure AI usage both at home and at work | Kaspersky official blog

Last year’s explosive growth in AI applications, services, and plug-ins looks set to only accelerate. From office applications and image editors to integrated development environments (IDEs) such as Visual Studio, AI is being added to familiar and long-used tools. Plenty of developers are creating thousands of new apps that tap the largest AI models. However, no one in this race has yet been able to solve the inherent security issues: first and foremost, minimizing confidential data leaks and reducing the risk of account/device hacking through various AI tools — let alone creating proper safeguards against a futuristic “evil AI”. Until someone comes up with an off-the-shelf solution for protecting the users of AI assistants, you’ll have to pick up a few skills and help yourself.

So, how do you use AI without regretting it later?

Filter important data

The privacy policy of OpenAI, the developer of ChatGPT, unequivocally states that any dialogs with the chatbot are saved and can be used for a number of purposes. First, these are solving technical issues and preventing terms-of-service violations: in case someone gets an idea to generate inappropriate content. Who would have thought it, right? In that case, chats may even be reviewed by a human. Second, the data may be used for training new GPT versions and making other product “improvements”.

Most other popular language models — be it Google’s Bard, Anthropic’s Claude, or Microsoft’s Bing and Copilot — have similar policies: they can all save dialogs in their entirety.

That said, inadvertent chat leaks have already occurred due to software bugs, with users seeing other people’s conversations instead of their own. The use of this data for training could also lead to a data leak from a pre-trained model: the AI assistant might give your information to someone if it believes it to be relevant for the response. Information security experts have even designed multiple attacks (one, two, three) aimed at stealing dialogs, and they’re unlikely to stop there.

So, remember: anything you write to a chatbot can be used against you. We recommend taking precautions when talking to AI.

Don’t send any personal data to a chatbot. No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data that belongs to you, your company, or your customers must end up in chats with an AI. You can replace these with asterisks or “REDACTED” in your request.

Don’t upload any documents. Numerous plug-ins and add-ons let you use chatbots for document processing. There might be a strong temptation to upload a work document to, say, get an executive summary. However, by carelessly uploading a multi-page document, you risk leaking confidential data, intellectual property, or a commercial secret such as the release date of a new product or the entire team’s payroll. Or, worse than that, when processing documents received from external sources, you might be targeted with an attack that counts on the document being scanned by a language model.

Use privacy settings. Carefully review your large-language-model (LLM) vendor’s privacy policy and available settings: these can normally be leveraged to minimize tracking. For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use the API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.

Sending code? Clean up any confidential data. This tip goes out to those software engineers who use AI assistants for reviewing and improving their code: remove any API keys, server addresses, or any other information that could give away the structure of the application or the server configuration.
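
For simple cases, even a crude script can handle this kind of pre-filtering before a prompt or code snippet leaves your machine. Here’s a minimal sketch; the patterns below are illustrative, far from exhaustive:

```python
import re

# Crude illustration of pre-filtering a prompt before sending it to a chatbot.
# The patterns are illustrative, not exhaustive.
PATTERNS = {
    r"\b(?:\d[ -]?){13,19}\b": "[REDACTED CARD]",            # card-like number runs
    r"[\w.+-]+@[\w-]+\.[\w.-]+": "[REDACTED EMAIL]",         # e-mail addresses
    r"(?i)\b(api[_-]?key|token|secret)\s*[:=]\s*\S+": r"\1=[REDACTED]",  # keys in code
}

def redact(text: str) -> str:
    for pattern, replacement in PATTERNS.items():
        text = re.sub(pattern, replacement, text)
    return text

print(redact("My card is 4111 1111 1111 1111, api_key=sk-abc123"))
```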

Limit the use of third-party applications and plug-ins

Follow the above tips every time — no matter what popular AI assistant you’re using. However, even this may not be sufficient to ensure privacy. The use of ChatGPT plug-ins, Bard extensions, or separate add-on applications gives rise to new types of threats.

First, your chat history may now be stored not only on Google or OpenAI servers but also on servers belonging to the third party that supports the plug-in or add-on, as well as in unexpected corners of your computer or smartphone.

Second, most plug-ins draw information from external sources: web searches, your Gmail inbox, or personal notes from services such as Notion, Jupyter, or Evernote. As a result, any of your data from those services may also end up on the servers where the plug-in or the language model itself is running. An integration like that may carry significant risks: for example, consider this attack that creates new GitHub repositories on behalf of the user.

Third, the publication and verification of plug-ins for AI assistants are currently a much less orderly process than, say, app-screening in the App Store or Google Play. Therefore, your chances of encountering a poorly working, badly written, buggy, or even plain malicious plug-in are fairly high — all the more so because it seems no one really checks the creators or their contacts.

How do you mitigate these risks? Our key tip here is to give it some time. The plug-in ecosystem is too young, the publication and support processes aren’t smooth enough, and the creators themselves don’t always take care to design plug-ins properly or comply with information security requirements. This whole ecosystem needs more time to mature and become more secure and reliable.

Besides, the value that many plug-ins and add-ons add to the stock ChatGPT version is minimal: minor UI tweaks and “system prompt” templates that customize the assistant for a specific task (“Act as a high-school physics teacher…”). These wrappers certainly aren’t worth trusting with your data, as you can accomplish the task just fine without them.

If you do need certain plug-in features right here and now, try to take maximum precautions available before using them.

Choose extensions and add-ons that have been around for at least several months and are being updated regularly.
Consider only plug-ins that have lots of downloads, and carefully read the reviews for any issues.
If the plug-in comes with a privacy policy, read it carefully before you start using the extension.
Opt for open-source tools.
If you possess even rudimentary coding skills — or coder friends — skim the code to make sure that it only sends data to declared servers and, ideally, AI model servers only.

Execution plug-ins call for special monitoring

So far, we’ve been discussing risks relating to data leaks; but this isn’t the only potential issue when using AI. Many plug-ins are capable of performing specific actions at the user’s command — such as ordering airline tickets. These tools provide malicious actors with a new attack vector: the victim is presented with a document, web page, video, or even an image that contains concealed instructions for the language model in addition to the main content. If the victim feeds the document or link to a chatbot, the latter will execute the malicious instructions — for example, by buying tickets with the victim’s money. This type of attack is referred to as prompt injection, and although the developers of various LLMs are trying to develop a safeguard against this threat, no one has managed it — and perhaps never will.

Luckily, most significant actions — especially those involving payment transactions such as purchasing tickets — require a double confirmation. However, interactions between language models and plug-ins create an attack surface so large that it’s difficult to guarantee consistent results from these measures.

Therefore, you need to be really thorough when selecting AI tools, and also make sure that they only receive trusted data for processing.


Cyberthreats to marketing | Kaspersky official blog

When it comes to attacks on businesses, the focus is usually on four aspects: finance, intellectual property, personal data, and IT infrastructure. However, we mustn’t forget that cybercriminals can also target company assets managed by PR and marketing — including e-mailouts, advertising platforms, social media channels, and promotional sites. At first glance, these may seem unattractive to the bad guys (“where’s the revenue?”), but in practice each can serve cybercriminals in their own “marketing activities”.

Malvertising

To the great surprise of many (even InfoSec experts), cybercriminals have been making active use of legitimate paid advertising for a number of years now. In one way or another they pay for banner ads and search placements, and employ corporate promotion tools. There are many examples of this phenomenon, which goes by the name of malvertising (malicious advertising). Usually, cybercriminals advertise fake pages of popular apps, fake promo campaigns of famous brands, and other fraudulent schemes aimed at a wide audience. Sometimes threat actors create an advertising account of their own and pay for advertising, but this method leaves too much of a trail (such as payment details). So a different method is more attractive to them: stealing login credentials and hacking the advertising account of a straight-arrow company, then promoting their sites through it. This has a double payoff for the cybercriminals: they get to spend others’ money without leaving excess traces. But the victim company, besides a gutted advertising account, gets one problem after another — including potentially being blocked by the advertising platform for distributing malicious content.

Downvoted and unfollowed

A variation of the above scheme is a takeover of social networks’ paid advertising accounts. The specifics of social media platforms create additional troubles for the target company.

First, access to corporate social media accounts is usually tied to employees’ personal accounts. It’s often enough for attackers to compromise an advertiser’s personal computer or steal their social network password to gain access not only to likes and cat pics but to the scope of action granted by the company they work for. That includes posting on the company’s social network page, sending emails to customers through the built-in communication mechanism, and placing paid advertising. Revoking these functions from a compromised employee is easy as long as they aren’t the main administrator of the corporate page — in which case, restoring access will be labor-intensive in the extreme.

Second, most advertising on social networks takes the form of “promoted posts” created on behalf of a particular company. If an attacker posts and promotes a fraudulent offer, the audience immediately sees who published it and can voice their complaints directly under the post. In this case, the company will suffer not just financial but visible reputational damage.

Third, on social networks many companies save “custom audiences” — ready-made collections of customers interested in various products and services or who have previously visited the company’s website. Although these usually can’t be pulled (that is, stolen) from a social network, unfortunately it’s possible to create malvertising on their basis that’s adapted to a specific audience and is thus more effective.

Unscheduled circular

Another effective way for cybercriminals to get free advertising is to hijack an account on an email service provider. If the attacked company is large enough, it may have millions of subscribers in its mailing list.

This access can be exploited in a number of ways: by mailing an irresistible fake offer to email addresses in the subscriber database; by covertly substituting links in planned advertising emails; or by simply downloading the subscriber database in order to send them phishing emails in other ways later on.

Again, the damage suffered is financial, reputational, and technical. By “technical” we mean the blocking of future incoming messages by mail servers. In other words, after the malicious mailouts, the victim company will have to resolve matters not only with the mailing platform but also potentially with specific email providers that have blocked it as a source of fraudulent correspondence.

A very nasty side effect of such an attack is the leakage of customers’ personal data. This is an incident in its own right — capable not only of inflicting reputational damage but also of landing you with a fine from data protection regulators.

Fifty shades of website

A website hack can go unnoticed for a long time — especially for a small company that does business primarily through social networks or offline. From the cybercriminals’ point of view, the goals of a website hack vary depending on the type of site and the nature of the company’s business. Leaving aside cases when website compromise is part of a more sophisticated cyberattack, we can generally delineate the following varieties.

First, threat actors can install a web skimmer on an e-commerce site. This is a small, well-disguised piece of JavaScript embedded directly in the website code that steals card details when customers pay for a purchase. The customer doesn’t need to download or run anything — they simply pay for goods or services on the site, and the attackers skim off the money.

Second, attackers can create hidden subsections on the site and fill them with malicious content of their choosing. Such pages can be used for a wide variety of criminal activity, be it fake giveaways, fake sales, or distributing Trojanized software. Using a legitimate website for these purposes is ideal, just as long as the owners don’t notice that they have “guests”. There is, in fact, a whole industry centered around this practice. Especially popular are unattended sites created for some marketing campaign or one-time event and then forgotten about.

The damage to a company from a website hack is broad-ranging, and includes: increased site-related costs due to malicious traffic; a decrease in the number of real visitors due to a drop in the site’s SEO ranking; potential wrangles with customers or law enforcement over unexpected charges to customers’ cards.

Hotwired web forms

Even without hacking a company’s website, threat actors can use it for their own purposes. All they need is a website function that generates a confirmation email: a feedback form, an appointment form, and so on. Cybercriminals use automated systems to exploit such forms for spamming or phishing.

The mechanics are straightforward: the target’s address is entered into the form as a contact email, while the text of the fraudulent email itself goes in the Name or Subject field, for example, “Your money transfer is ready for issue (link)”. As a result, the victim receives a malicious email that reads something like: “Dear XXX, your money transfer is ready for issue (link). Thank you for contacting us. We’ll be in touch shortly”. Naturally, the anti-spam platforms eventually stop letting such emails through, and the victim company’s form loses some of its functionality. In addition, all recipients of such mail think less of the company, equating it with a spammer.

How to protect PR and marketing assets from cyberattacks

Since the described attacks are quite diverse, in-depth protection is called for. Here are the steps to take:

Conduct cybersecurity awareness training across the entire marketing department. Repeat it regularly;
Make sure that all employees adhere to password best practices: long, unique passwords for each platform and mandatory use of two-factor authentication — especially for social networks, mailing tools, and ad management platforms;
Eliminate the practice of using one password for all employees who need access to a corporate social network or other online tool;
Instruct employees to access mailing/advertising tools and the website admin panel only from work devices equipped with full protection in line with company standards (EDR or internet security, EMM/UEM, VPN);
Urge employees to install comprehensive protection on their personal computers and smartphones;
Introduce the practice of mandatory logout from mailing/advertising platforms and other similar accounts when not in use;
Remember to revoke access to social networks, mailing/advertising platforms, and website admin immediately after an employee departs the company;
Regularly review email lists sent out and ads currently running, together with detailed website traffic analytics so as to spot anomalies in good time;
Make sure that all software used on your websites (content management system, its extensions) and on work computers (such as OS, browser, and Office), is regularly and systematically updated to the very latest versions;
Work with your website support contractor to implement form validation and sanitization; in particular, to ensure that links can’t be inserted into fields that aren’t intended for such a purpose. Also set a “rate limit” to prevent the same actor from making hundreds of requests a day, plus a smart captcha to guard against bots (see the sketch below for both measures).
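
For illustration, here’s a minimal sketch of those two measures, assuming a Python back end built on Flask; the route and field names are placeholders:

```python
# Minimal sketch of two of the measures above, assuming a Flask back end:
# reject links in free-text fields and apply a crude per-IP daily rate limit.
import re
import time
from collections import defaultdict
from flask import Flask, abort, request

app = Flask(__name__)
URL_RE = re.compile(r"https?://|www\.", re.IGNORECASE)
hits = defaultdict(list)  # ip -> timestamps of recent form submissions

@app.route("/contact", methods=["POST"])
def contact():
    ip = request.remote_addr
    now = time.time()
    hits[ip] = [t for t in hits[ip] if now - t < 86400]
    if len(hits[ip]) >= 20:               # crude daily limit per IP
        abort(429)
    hits[ip].append(now)
    if URL_RE.search(request.form.get("name", "")):
        abort(400)                        # links don't belong in a name field
    return "Thanks! We'll be in touch."
```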


Navigating the risks of online dating | Kaspersky official blog

Navigating the current dating landscape can be perplexing; it’s filled with apps, websites, catfishing, and lurking stalkers. While pre-Tinder dating had its challenges, it sure seemed to be less intricate.

Complicating matters is the heightened uncertainty about the identity of your virtual conversational partner, and the disconcerting possibility of digital stalking.

In fact, we recently commissioned a report on digital stalking to ascertain the reality of these risks and concerns. We engaged with over 21,000 participants to cast light on the alarming prevalence of digital abuse experienced by those in pursuit of love.

Revelations from the survey

As per our survey findings, 34% of respondents believe that googling or checking social media accounts of someone they’ve just started dating is a form of “due diligence”. While seemingly harmless, 23% reported encountering some form of online stalking from a new romantic interest, suggesting that some individuals may take a swift Google search a bit too far.

Furthermore, and somewhat alarmingly, over 90% of respondents expressed a willingness to share or consider sharing passwords that grant access to their location. While seemingly innocuous on the surface, there looms the specter of stalkerware: silent software capable of continuously tracking user whereabouts and spying on messages.

How to protect yourself? Tips from the experts

We’ve compiled advice from leading online security, dating, and safety experts to help you navigate the waters of love safely this Valentine’s Day!

Enhanced password safety measures

Create complex passwords using a mix of letters, numbers, and symbols.
Never reuse passwords across different sites and apps; keep them private.
Use two-factor authentication for an added layer of security.
Change your password immediately if you’ve shared it with someone you’ve been dating but are no longer in touch with.
Use a password manager to keep all your passwords strong and safe.

Proactive techniques for verifying online dating profiles

Run a reverse-image search for that profile; if it appears on multiple pages under various names, it’s likely a catfisher.
Look for inconsistencies in daters’ stories and profile details.
Be wary of sudden, intense expressions of love, or requests for money.
Use video calls to verify a dater’s identity before meeting in person.

Maximizing online dating profile security

Conduct your own privacy audit of your social media accounts to understand what’s publicly visible.
Customize your privacy settings to control who can see your posts and personal information.
Regularly review your friends/followers list to ensure you know who has access to your information.

Strategic sharing guidelines

Avoid posting details that could disclose your location, workplace, or routines.
Think twice before sharing emotionally charged or intimate content.
Be mindful of metadata or other identifiable clues in photos (like geotags) that can reveal your identity, location, or details you’d rather keep private.
Set personal boundaries on the type of information you share early on in a relationship; only reveal personal details gradually as trust builds over time.
Listen to your instincts – if something feels off, take a step back and give yourself a moment.
Consider how the data you share could be used to piece together a profile or compromise your physical safety.

Comprehensive safety plan for offline meetings

Choose well-lit, public places for initial meetings.
Avoid sharing or displaying personal items that might reveal your address or sensitive information.
Arrange your own transportation to and from the meeting place.
Have a check-in system with a friend or family member.

As we embrace the possibilities for romance and connection in the digital age, let’s not forget the importance of our safety and wellbeing. By implementing these strategies, you can confidently explore the world of online dating while safeguarding both your digital and physical self. For more details, please take a look at our safe dating guide. And our premium security solution with identity protection and privacy features can help you keep calm and carry on… dating!


One-time passwords and 2FA codes — what to do if you receive one without requesting it | Kaspersky official blog

Over the past few years, we’ve become accustomed to logging into important websites and apps, such as online banking, using a password and another verification method. This could be a one-time password (OTP) sent via a text message, email, or push notification; a code from an authenticator app; or even a special USB device — a token. This way of logging in is called two-factor authentication (2FA), and it makes hacking much more difficult. Stealing or guessing a password alone is no longer enough to hijack an account. But what should you do if you haven’t tried to log in anywhere, but suddenly receive a one-time code or a request to enter it?

There are three reasons why this situation might occur:

A hacking attempt. Hackers have somehow learned, guessed, or stolen your password and are now trying to use it to access your account. You have received a legitimate message from the service they are trying to access.
Preparation for a hack. Hackers have either learned your password or are trying to trick you into revealing it, in which case the OTP message is a form of phishing. The message is fake, although it may look very similar to a genuine one.
Just a mistake. Sometimes online services are set up to first request a confirmation code from a text message, and then a password, or authenticate with just one code. In this case, another user could make a typo and enter your phone/email instead of theirs — and you’ll receive the code.

As you can see, there may be a malicious intent behind this message. But the good news is that at this stage, there has been no irreparable damage, and by taking the right action you can avoid any trouble.

What to do when receiving a code request

Most importantly, do not click the confirmation button if the message is in the “Yes/No” form, do not log in anywhere, and do not share any received codes with anyone.

If the code request message contains links, do not follow them.

These are the most essential rules to follow. As long as you don’t confirm your login, your account is safe. However, it’s highly likely that your account’s password is known to attackers. Therefore, the next thing to do is to change the password for this account. Go to the relevant service by entering its web address manually, not by following a link. Enter your password, get a new (that’s important!) confirmation code, and enter it. Then find the password settings and set a new strong password. If you use the same password for other accounts, you’ll need to change the password for them, too — but make sure to create a unique password for each account. We understand that it’s difficult to remember so many passwords, so we highly recommend storing them in a dedicated password manager.

This stage — changing your passwords — is not so urgent. There’s no need to do it in a rush, but also don’t postpone it for another day. For valuable accounts (like banking), attackers may try to intercept the OTP if it is sent via a text message. This is done through SIM swapping (registering a new SIM card to your number) or attacking via the operator’s service network, utilizing a flaw in the SS7 communications protocol. Therefore, it’s important to change the password before they attempt such an attack. In general, one-time codes sent in text messages are less reliable than authenticator apps and USB tokens. We recommend always using the most secure 2FA method available; a review of different two-factor authentication methods can be found here.
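
Incidentally, it’s easy to see why authenticator apps are more reliable: the code never travels over the network at all, but is derived locally from a shared secret and the current time. Here’s a minimal sketch with the pyotp library (the base32 secret is a placeholder that the service and your app agree on during 2FA enrollment):

```python
# Minimal sketch of how an authenticator app derives codes (pip install pyotp).
# The base32 secret is a placeholder established during 2FA enrollment.
import pyotp

totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")
code = totp.now()          # six-digit code that changes every 30 seconds
print(code)
print(totp.verify(code))   # the server performs the same computation -> True
```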

What to do if you’re receiving a lot of OTP requests

In an attempt to make you confirm a login, hackers may bombard you with codes. They try to log in to the account again and again, hoping either that you’ll make a mistake and click “Confirm”, or go to the service and disable 2FA out of annoyance. It’s important to keep cool and do neither. The best thing to do is go to the service’s site as described above (open the site manually, not through a link) and quickly change the password; but for this, you’ll need to receive and enter your own, legitimate OTP. Some authentication requests (for example, warnings about logging into Google services) have a separate “No, it’s not me” button — usually, this button causes automated systems on the service side to automatically block the attacker and any new 2FA requests. Another option, albeit not the most convenient one, would be to switch the phone to silent or even airplane mode for half an hour or so until the wave of codes subsides.

What to do if you accidentally confirm a stranger’s login

This is the worst-case scenario, as you have likely allowed an attacker into your account. Attackers act quickly, changing settings and passwords, so you’ll have to play catch-up and deal with the consequences of the hack. We’ve provided advice for this scenario here.

How to protect yourself?

The best method of defense in this case is to stay one step ahead of the criminals: si vis pacem, para bellum. This is where our security solution comes in handy. It tracks leaks of your accounts linked to both email addresses and phone numbers, including on the dark web. You can add the phone numbers and email addresses of all your family members, and if any account data becomes public or is discovered in leaked databases, Kaspersky Premium will alert you and give advice on what to do.

Included in the subscription, Kaspersky Password Manager will warn you about compromised passwords and help you change them, generating new uncrackable passwords for you. You can also add two-factor authentication tokens to it or easily transfer them from Google Authenticator in a few clicks. Secure storage for your personal documents will safeguard your most important documents and files, such as passport scans or personal photos, in encrypted form so that only you can access them.

Moreover, your logins, passwords, authentication codes and saved documents will be available from any of your devices — computer, smartphone or tablet — so even if you somehow lose your phone, you won’t lose your data and access, and you’ll be able to easily restore them on a new device. And to access all your data, you only need to remember one password — the main one — which is not stored anywhere except in your head and is used for banking-standard AES data encryption.

With the “zero disclosure principle”, no one can access your passwords and data — not even Kaspersky employees. The reliability and effectiveness of our security solutions have been confirmed by numerous independent tests, and our home protection solutions received the highest award — “Product of the Year 2023” — in tests by the independent European laboratory AV-Comparatives.


Transatlantic Cable podcast episode 333 | Kaspersky official blog

Episode 333 of the Transatlantic Cable Podcast dives into news that a site called ‘OnlyFakes’ is offering deepfake photo IDs. The team also stay on the AI bandwagon with the next story, which covers the recent furore around illicit AI-generated Taylor Swift images.

From there the team discuss two final stories: the first concerns a virus that was released onto the Valheim gaming Discord channels, causing havoc as it spread. The final story looks at a recent Interpol campaign, dubbed ‘Operation Synergia’, which resulted in 31 arrests and over 1,300 C2 (command-and-control) servers being taken down.

If you liked what you heard, please consider subscribing.

Inside the Underground Site Where ‘Neural Networks’ Churn Out Fake IDs
Taylor Swift deepfakes spark calls in Congress for new legislation
Valheim Discord servers locked after hacker releases virus
Interpol operation Synergia takes down 1,300 servers used for cybercrime


What kind of education does a cybersecurity specialist need? | Kaspersky official blog

The labor market has long experienced a shortage of cybersecurity experts. Often, companies in need of information-security specialists can’t find any – at least, not those with specialized formal education and the necessary experience. In order to understand how important it is for a company to have specialists with a formal education in this area, and how well such education meets modern needs, our colleagues conducted a study – A portrait of a modern information security professional – in which they interviewed more than a thousand employees from 29 countries in different regions of the world. Among the respondents were specialists of various levels: from beginners with two years of experience to CIOs and SOC managers with 10. And judging by the respondents’ answers, it looks like classical education isn’t keeping up with InfoSec trends.

First and foremost, the survey showed that not all specialists have a higher education: more than half (53%) of InfoSec workers have no post-graduate education. And among those who do, every second worker doubts that their formal education really helps them perform their job duties.

Cybersecurity is a rapidly changing industry. The threat landscape is changing so fast that even a couple of months lag can be critical – while it can take four to five years to obtain an academic degree. During this time, attackers can modernize their tactics and methods in such a way that a graduate InfoSec “specialist” would have to quickly read all the latest articles about threats and defense methods in the event of an actual attack.

InfoSec specialists with real-life experience argue that educational institutions in any case don’t provide enough practical knowledge – and don’t have access to modern technologies and equipment. Thus, to work in the InfoSec field and to fight real cyberthreats, some additional education is required anyway.

All this, of course, doesn’t mean that cybersecurity professionals with higher education are less competent than their colleagues without it. Ultimately, passion and the ability to continually improve are of the utmost importance in professional development. Many respondents noted that they received more theoretical than practical knowledge in traditional educational institutions, but felt that formal education was still useful since, without a solid theoretical basis, absorption of new knowledge would progress more slowly. On the other hand, specialists who don’t have post-graduate education at all, or who came to information security from another IT industry, can also become effective specialists in protecting against cyberthreats. It really does all depend on the individual.

How to improve the labor market situation

In order for the market to attract a sufficient number of information security experts, the situation needs to be balanced on both sides. First, it makes sense for universities to consider partnering with cybersecurity companies. This would allow them to provide students with more practically applicable knowledge. And second, it’s a good idea for companies to periodically increase the expertise of their employees with the help of specialized educational courses.

You can read the part of the report devoted to InfoSec educational problems on the webpage of the first chapter – Educational background of current cybersecurity experts.


Crypto wallet drainer: what it is and how to defend against it | Kaspersky official blog

A new category of malicious tools has been gaining popularity with crypto scammers lately: crypto wallet drainers. This post will explain what crypto drainers are, how they work, what makes them dangerous — even for experienced users — and how to defend against them.

What a crypto (wallet) drainer is

A crypto drainer — or crypto wallet drainer — is a type of malware that’s been targeting crypto owners since it first appeared just over a year ago. A crypto drainer is designed to (quickly) empty crypto wallets automatically by siphoning off either all or just the most valuable assets they contain, and placing them into the drainer operators’ wallets.

As an example of this kind of theft, let us review the theft of 14 Bored Ape NFTs with a total value of over $1 million, which occurred on December 17, 2022. The scammers set up a fake website for the real Los Angeles-based movie studio Forte Pictures, and contacted a certain NFT collector on behalf of the company. They told the collector that they were making a film about NFTs. Next, they asked the collector if they wanted to license the intellectual property (IP) rights to one of their Bored Ape NFTs so it could be used in the movie.

According to the scammers, this required signing a contract on “Unemployd”, ostensibly a blockchain platform for licensing NFT-related intellectual property. However, after the victim approved the transaction, it turned out that all 14 Bored Ape NFTs belonging to them were sent to the malicious actor for a paltry 0.00000001 ETH (about US¢0.001 at the time).

What the request to sign the “contract” looked like (left), and what actually happened after the transaction was approved (right). Source

The scheme relied to a large extent on social engineering: the scammers courted the victim for more than a month with email messages, calls, fake legal documents, and so on. However, the centerpiece of this theft was the transaction that transferred the crypto assets into the scammers’ ownership, which they undertook at an opportune time. Such a transaction is what drainers rely on.

How crypto drainers work

Today’s drainers can automate most of the work of emptying victims’ crypto wallets. First, they can help to find out the approximate value of crypto assets in a wallet and identify the most valuable ones. Second, they can create transactions and smart contracts to siphon off assets quickly and efficiently. And finally, they obfuscate fraudulent transactions, making them as vague as possible, so that it’s difficult to understand what exactly happens once the transaction is authorized.

Armed with a drainer, malicious actors create fake web pages posing as websites for cryptocurrency projects of some sort. They often register lookalike domain names, taking advantage of the fact that these projects tend to use currently popular domain extensions that resemble one another.

Then the scammers use a technique to lure the victim to these sites. Frequent pretexts are an airdrop or NFT minting: these models of rewarding user activity are popular in the crypto world, and scammers don’t hesitate to take advantage of that.

These X (Twitter) ads promoted NFT airdrops and new token launches on sites that contain the drainer. Source

Also commonplace are some totally unlikely schemes: to draw users to a fake website, malicious actors recently used a hacked Twitter account that belonged to a… blockchain security company!

X (Twitter) ads for a supposedly limited-edition NFT collection on scam websites. Source

Scammers have also been known to place ads on social media and search engines to lure victims to their forged websites. In the latter case, it helps them intercept customers of real crypto projects as they search for a link to a website they’re interested in. Without looking too closely, users click on the “sponsored” scam link, which is always displayed above organic search results, and end up on the fake website.

Google search ads with links to scam websites containing crypto drainers. Source

Then, the unsuspecting crypto owners are handed a transaction generated by the crypto drainer to sign. This can result in a direct transfer of funds to the scammers’ wallets, or more sophisticated scenarios such as transferring the rights to manage assets in the victim’s wallet to a smart contract. One way or another, once the malicious transaction is approved, all the valuable assets get siphoned off to the scammers’ wallets as quickly as possible.

How dangerous crypto drainers are

The popularity of drainers among crypto scammers is growing rapidly. According to a recent study on crypto drainer scams, more than 320,000 users were affected in 2023, with total damage of just under $300 million. The fraudulent transactions recorded by the researchers included around a dozen — worth more than a million dollars each. The largest value of loot taken in a single transaction amounted to a little over $24 million!

Curiously, experienced cryptocurrency users fall prey to scams like this just like newbies. For example, the founder of the startup behind Nest Wallet was recently robbed of $125,000 worth of stETH by scammers who used a fake website promising an airdrop.

How to protect against crypto drainers

Don’t put all your eggs in one basket: try to keep only a portion of your funds that you need for day-to-day management of your projects in hot crypto wallets, and store the bulk of your crypto assets in cold wallets.
To be on the safe side, use multiple hot wallets: one for your Web3 activities (such as drop hunting), another to keep operating funds for these activities, and transfer your profits to cold wallets. You’ll have to pay extra commission for transfers between the wallets, but malicious actors will hardly be able to steal anything from the empty wallet used for airdrops.
Keep checking the websites you visit time and time again. Any suspicious detail is a reason to stop and double-check it all again.
Don’t click on sponsored links in search results: only use links in organic search results – that is, those that aren’t marked “sponsored”.
Review every transaction detail carefully.
Use companion browser extensions to verify transactions. These help identify fraudulent transactions and highlight what exactly will happen as a result of the transaction (a sketch of the idea follows after this list).
Finally, be sure to install reliable security on all devices you use to manage crypto assets.
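
As an illustration of the kind of check such extensions perform, here’s a minimal Python sketch that flags two well-known risky call selectors in raw transaction data; the selector list is illustrative and far from complete:

```python
# Minimal sketch of the check a transaction-verifying extension performs:
# flag well-known risky call selectors in raw transaction data.
# The selector list is illustrative and far from complete.
RISKY_SELECTORS = {
    "095ea7b3": "approve(address,uint256)",          # ERC-20 spending allowance
    "a22cb465": "setApprovalForAll(address,bool)",   # hands over a whole NFT collection
}

def flag_risky(tx_data: str) -> str | None:
    selector = tx_data.lower().removeprefix("0x")[:8]
    return RISKY_SELECTORS.get(selector)

print(flag_risky("0xa22cb465..."))  # -> "setApprovalForAll(address,bool)"
```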

How protection from crypto threats works in Kaspersky solutions

By the way, Kaspersky solutions offer multi-layered protection against crypto threats. Be sure to use comprehensive security on all your devices: phones, tablets, and computers. Kaspersky Premium is a good cross-platform solution. Check that all basic and advanced security features are enabled and read our detailed instructions on protecting both hot and cold crypto wallets.


Using ambient light sensor for spying | Kaspersky official blog

An article in Science Magazine published mid-January describes a non-trivial method of snooping on smartphone users through an ambient light sensor. All smartphones and tablets have this component built-in — as do many laptops and TVs. Its primary task is to sense the amount of ambient light in the environment the device finds itself in, and to alter the brightness of the display accordingly.

But first we need to explain why a threat actor would use a tool ill-suited for capturing footage instead of the target device’s regular camera. The reason is that such “ill-suited” sensors are usually totally unprotected. Let’s imagine an attacker tricked a user into installing a malicious program on their smartphone. The malware will struggle to gain access to oft-targeted components, such as the microphone or camera. But to the light sensor? Easy as pie.

So, the researchers proved that this ambient light sensor can be used instead of a camera; for example, to get a snapshot of the user’s hand entering a PIN on a virtual keyboard. In theory, by analyzing such data, it’s possible to reconstruct the password itself. This post explains the ins and outs in plain language.

“Taking shots” with a light sensor. Source

A light sensor is a rather primitive piece of technology. It’s a light-sensitive photocell for measuring the brightness of ambient light several times per second. Digital cameras use very similar (albeit smaller) light sensors, but there are many millions of them. The lens projects an image onto this photocell matrix, the brightness of each element is measured, and the result is a digital photograph. Thus, you could describe a light sensor as the most primitive digital camera there is: its resolution is exactly one pixel. How could such a thing ever capture what’s going on around the device?

The researchers used the Helmholtz reciprocity principle, formulated back in the mid-19th century. This principle is widely used in computer graphics, for example, where it greatly simplifies calculations. In 2005, the principle formed the basis of the proposed dual photography method. Let’s take an illustration from this paper to help explain:

On the left is a real photograph of the object. On the right is an image calculated from the point of view of the light source. Source

Imagine you’re photographing objects on a table. A lamp shines on the objects, the reflected light hits the camera lens, and the result is a photograph. Nothing out of the ordinary. In the illustration above, the image on the left is precisely that — a regular photo. Next, in greatly simplified terms, the researchers began to alter the brightness of the lamp and record the changes in illumination. As a result, they collected enough information to reconstruct the image on the right — taken as if from the point of view of the lamp. There’s no camera in this position and never was, but based on the measurements, the scene was successfully reconstructed.

Most interesting of all is that this trick doesn’t even require a camera. A simple photoresistor will do… just like the one in an ambient light sensor. A photoresistor (or “single-pixel camera”) measures changes in the light reflected from objects, and this data is used to construct a photograph of them. The quality of the image will be low, and many measurements must be taken — numbering in the hundreds or thousands.
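
The arithmetic behind this is straightforward to demonstrate. Below is a toy NumPy sketch of single-pixel imaging: display known patterns, record one brightness reading per pattern, then solve the resulting linear system to reconstruct the scene.

```python
import numpy as np

# Toy demonstration of single-pixel imaging: recover an n-pixel "scene"
# from scalar brightness readings taken under known display patterns.
rng = np.random.default_rng(0)
n = 16 * 16                       # scene resolution: 16x16 pixels
scene = rng.random(n)             # unknown reflectance to recover

m = 2 * n                         # hundreds of measurements, as in the paper
patterns = rng.integers(0, 2, size=(m, n)).astype(float)  # black/white patterns
readings = patterns @ scene       # one photocell reading per displayed pattern

recovered, *_ = np.linalg.lstsq(patterns, readings, rcond=None)
print(np.allclose(recovered, scene, atol=1e-6))  # True: scene reconstructed
```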

Experimental setup: a Samsung Galaxy View tablet and a mannequin hand. Source

Let’s return to the study and the light sensor. The authors of the paper used a fairly large Samsung Galaxy View tablet with a 17-inch display. Various patterns of black and white rectangles were displayed on the tablet’s screen. A mannequin was positioned facing the screen in the role of a user entering something on the on-screen keyboard. The light sensor captured changes in brightness. In several hundred measurements like this, an image of the mannequin’s hand was produced. That is, the authors applied the Helmholtz reciprocity principle to get a photograph of the hand, taken as if from the point of view of the screen. The researchers effectively turned the tablet display into an extremely low-quality camera.

Comparing real objects in front of the tablet with what the light sensor captured. Source

True, not the sharpest image. The above-left picture shows what needed to be captured: in one case, the open palm of the mannequin; in the other, how the “user” appears to tap something on the display. The images in the center are a reconstructed “photo” at 32×32 pixel resolution, in which almost nothing is visible — too much noise in the data. But with the help of machine-learning algorithms, the noise was filtered out to produce the images on the right, where we can distinguish one hand position from the other. The authors of the paper give other examples of typical gestures that people make when using a tablet touchscreen. Or rather, examples of how they managed to “photograph” them:

Capturing various hand positions using a light sensor. Source

So can we apply this method in practice? Is it possible to monitor how the user interacts with the touchscreen of a tablet or smartphone? How they enter text on the on-screen keyboard? How they enter credit card details? How they open apps? Fortunately, it’s not that straightforward. Note the captions above the “photographs” in the illustration above. They show how slowly this method works. In the best-case scenario, the researchers were able to reconstruct a “photo” of the hand in just over three minutes. The image in the previous illustration took 17 minutes to capture. Real-time surveillance at such speeds is out of the question. It’s also clear now why most of the experiments featured a mannequin’s hand: a human being simply can’t hold their hand motionless for that long.

But that doesn’t rule out the possibility of the method being improved. Let’s ponder the worst-case scenario: if each hand image can be obtained not in three minutes, but in, say, half a second; if the on-screen output is not some strange black-and-white figures, but a video or set of pictures or animation of interest to the user; and if the user does something worth spying on… — then the attack would make sense. But even then — not much sense. All the researchers’ efforts are undermined by the fact that if an attacker managed to slip malware onto the victim’s device, there are many easier ways to then trick them into entering a password or credit card number. Perhaps for the first time in covering such papers (examples: one, two, three, four), we are struggling even to imagine a real-life scenario for such an attack.

All we can do is marvel at the beauty of the proposed method. This research serves as another reminder that the seemingly familiar, inconspicuous devices we are surrounded by can harbor unusual, lesser-known functionalities. That said, for those concerned about this potential violation of privacy, the solution is simple. Such low-quality images are due to the fact that the light sensor takes measurements quite infrequently: 10–20 times per second. The output data also lacks precision. However, that’s only relevant for turning the sensor into a camera. For the main task — measuring ambient light — this rate is even too high. We can “coarsen” the data even more — transmitting it, say, five times per second instead of 20. For matching the screen brightness to the level of ambient light, this is more than enough. But spying through the sensor — already improbable — would become impossible. Perhaps for the best.


Transatlantic Cable podcast episode 332 | Kaspersky official blog

Episode 332 of the Kaspersky Transatlantic Cable podcast kicks off with news that, after the recent AI-generated sketch, George Carlin’s estate has decided to pursue legal action against its creators. From there, discussion turns to Mozilla’s worry about Apple’s new browser rules, and British lawmakers questioning the legality of live facial recognition.

To wrap up, the team discuss news around the recent 23andMe data breach. If you like what you heard, please consider subscribing.

George Carlin’s Family Takes This AI Bullsh*t to Court

Mozilla says Apple’s new browser rules are ‘as painful as possible’ for Firefox

British lawmakers question legality of live facial recognition technology

23andMe data breach: Hackers stole raw genotype data, health reports
