Impact of Microsoft Copilot+ Recall on corporate cybersecurity
Throughout May and June, the IT world watched the unfolding drama of Copilot+ Recall. First came Microsoft’s announcement of Recall, a “memory” feature that takes screenshots of everything happening on a computer every few seconds and extracts all useful information from them into a searchable database. Then cybersecurity researchers criticized Recall’s implementation, exposing security flaws and demonstrating the potential for data exfiltration, including remote exfiltration. This forced Microsoft to backpedal: it first stated the feature wouldn’t be enabled by default and promised improved encryption, and then delayed the mass rollout of Recall entirely, opting to test it first in the Windows Insider Program beta. Despite this setback, Redmond remains committed to the project and plans to launch it on a broad range of computers, including those with AMD and Intel CPUs.
In a workplace context, especially if a company allows BYOD, Recall clearly violates corporate data retention policies and significantly amplifies the potential damage if a network is compromised by infostealers or ransomware. More concerning still is the clear intention of Microsoft’s competitors to follow this trend. The recently announced Apple Intelligence is still shrouded in marketing language, but the company claims that Siri will have “onscreen awareness” when processing requests, and that text-handling tools available across all apps will be capable of either local or ChatGPT-powered processing. While Google’s equivalent features remain under wraps, the company has confirmed that Project Astra, the visual assistant announced at Google I/O, will eventually find its way onto Chromebooks, using screenshots as the input data stream. How should IT and cybersecurity teams prepare for this deluge of AI-powered features?
Risks of visual assistants
In an earlier article, we discussed how to mitigate the risks of employees’ unchecked use of ChatGPT and other AI assistants. That discussion, however, focused on the deliberate adoption of additional apps and services by employees themselves, a new and troublesome breed of shadow IT. OS-level assistants present a more complex challenge:
The assistant can take screenshots, recognize text on them, and store any information displayed on an employee’s screen — either locally or in a public cloud. This occurs regardless of the information’s sensitivity, current authentication status, or work context. For instance, an AI assistant could create a local, or even cloud-based, copy of an encrypted email requiring a password.
Captured data might not adhere to corporate data-retention policies; data requiring encryption might be stored without it; data scheduled for deletion might persist in an unaccounted copy; data meant to remain inside the company’s perimeter might end up in a cloud — potentially under an unknown jurisdiction.
The problem of unauthorized access is exacerbated because AI assistants might bypass the additional authentication measures implemented for sensitive services within an organization. Roughly speaking: to view financial transaction data, even after being authorized in the system you would normally need to start an RDP session, present a certificate, log in to the remote system, and enter the password once more. With an AI assistant such as Recall, you could simply view the same data from its stored snapshots.
Users, and even IT administrators, have limited control over the AI assistant. OS vendors have a track record of activating additional OS functions remotely, whether by accident or by design. In essence, Recall or a similar feature could appear on a computer unexpectedly and without warning as part of an update.
Although all the tech giants claim to pay close attention to AI security, the practical implementation of their security measures must stand the test of reality. Microsoft’s initial claims that data would be processed locally and stored in encrypted form proved inaccurate: the encryption in question was in fact just BitLocker, which effectively protects data only when the computer is turned off. Now we have to wait for cybersecurity professionals to assess Microsoft’s updated encryption and whatever Apple eventually releases. Apple claims that some information is processed locally, some within its own cloud using secure computing principles without storing data post-processing, and some is transmitted to OpenAI in anonymized form. Google’s approach remains to be seen, but the company’s track record speaks for itself.
AI assistant implementation policies
Considering the substantial risks and overall lack of maturity in this domain, a conservative strategy is recommended for deploying visual AI assistants:
Collaboratively determine (involving IT, cybersecurity, and business teams) which employee workflows would benefit enough from visual AI assistants to justify the additional risks.
Establish a company policy and inform employees that the use of system-level visual AI assistants is prohibited. Grant exceptions on a case-by-case basis for specific uses.
Take measures to block the spontaneous activation of visual AI: use Microsoft group policies and block the execution of AI applications at the EDR or EMM/UEM level (a minimal policy-deployment sketch follows this list). Keep in mind that older computers might not be able to run AI components due to technical limitations, but manufacturers are working to bring these features to earlier system versions as well.
Ensure that security policies and tools are applied to all devices used by employees for work — including personal computers.
If the first-stage discussion identifies a group of employees who could significantly benefit from visual AI, launch a pilot program with just a few of them. IT and cybersecurity teams should develop recommended visual-assistant settings tailored to employee roles and company policies. In addition to configuring the assistant, implement enhanced security measures, such as strict user authentication policies and more stringent SIEM and EDR monitoring settings, to prevent data leaks and protect the pilot computers from unwanted or malicious software (a simple audit sketch for spotting Recall data stores on pilot machines also follows this list). Ensure that the AI assistant is activated only by an administrator and only with these specific settings.
Regularly and thoroughly analyze the pilot group’s performance compared to a control group, along with the behavior of company computers with the AI assistant activated. Based on this analysis, decide whether to expand or discontinue the pilot program.
Appoint a dedicated resource to monitor cybersecurity research and threat intelligence regarding attacks targeting visual AI assistants and their stored data. This will allow for timely policy adjustments as this technology evolves.
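For centrally managed Windows fleets, the most direct way to block Recall is the registry-backed group policy that turns off snapshot saving. Below is a minimal sketch in Python, assuming the DisableAIDataAnalysis policy value under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsAI as documented at the time of writing; confirm the current policy name before relying on it, and in production push the same value through GPO or your EMM/UEM tooling rather than an ad-hoc script.

```python
# Minimal sketch: disable Recall snapshot collection via the registry-backed
# group policy value. Assumes the DisableAIDataAnalysis policy documented at
# the time of writing; requires administrator rights on Windows.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsAI"

def disable_recall_snapshots() -> None:
    # Create (or open) the WindowsAI policy key under HKLM and set
    # DisableAIDataAnalysis = 1, which turns off saving snapshots.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "DisableAIDataAnalysis", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_recall_snapshots()
    print("Recall snapshot saving disabled by policy; a reboot or gpupdate may be required.")
```

The same value can also be set per user under HKEY_CURRENT_USER on machines outside central management; the corresponding setting appears in the Group Policy editor as the “Turn off saving snapshots for Windows” policy, though names may shift as the feature evolves.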
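For the pilot group, monitoring can start with a simple audit of whether, and where, Recall’s local data store shows up on each machine. The sketch below assumes the ukg.db database location under %LOCALAPPDATA%\CoreAIPlatform.00\UKP that security researchers reported for the preview builds; the path may change in later releases, so treat it as an assumption to verify rather than a stable contract.

```python
# Minimal audit sketch: look for Recall snapshot databases (ukg.db) in local
# user profiles. The on-disk location is the one reported by researchers for
# the preview builds and may change in later Windows releases.
from pathlib import Path

def find_recall_databases(users_root: str = r"C:\Users") -> list[Path]:
    hits: list[Path] = []
    for profile in Path(users_root).iterdir():
        base = profile / "AppData" / "Local" / "CoreAIPlatform.00" / "UKP"
        if base.is_dir():
            # Each Recall store sits in a GUID-named subfolder.
            hits.extend(base.glob("*/ukg.db"))
    return hits

if __name__ == "__main__":
    found = find_recall_databases()
    if not found:
        print("No Recall databases found.")
    for db in found:
        print(f"Recall database found: {db}")
```

The same check can be folded into an EDR or SIEM rule that alerts when processes other than the assistant itself read these files.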