How businesses should respond to employees using personal AI apps
A recent MIT report, The GenAI Divide: State of AI in Business 2025, brought on a significant cooling of tech stocks. While the report offers interesting observations on the economics and organization of AI implementation in business, it also contains valuable insights for cybersecurity teams. The authors weren’t concerned with security issues: the words “security”, “cybersecurity”, or “safety” don’t even appear in the report. However, its findings can and should be considered when planning new corporate AI security policies.
The key observation is that while only 40% of surveyed organizations have purchased an LLM subscription, 90% of employees regularly use personal AI-powered tools for work tasks. And this “shadow AI economy” — the term used in the report — is said to be more effective than the official one. A mere 5% of corporations see economic benefit from their AI implementations, whereas employees are successfully boosting their personal productivity.
The top-down approach to AI implementation is often unsuccessful. Therefore, the authors recommend “learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives”. So how does this advice align with cybersecurity rules?
A complete ban on shadow AI
A policy favored by many CISOs is to test and implement — or better yet, build one’s own — AI tools and then simply ban all others. This approach can be economically inefficient and may cause the company to fall behind its competitors. It’s also hard to enforce: monitoring compliance is both technically challenging and expensive. Nevertheless, for some highly regulated industries, or for business units that handle extremely sensitive data, a prohibitive policy may be the only option. The following methods can be used to implement it:
- Block access to all popular AI tools at the network level using a network filtering solution, such as DNS or URL filtering (a minimal domain-blocking sketch follows this list).
- Configure a DLP system to monitor and block data from being transferred to AI applications and services; this includes preventing the copying and pasting of large text blocks via the clipboard.
- Use an application allowlist policy on corporate devices to prevent employees from running third-party applications that could be used for direct AI access or to bypass other security measures.
- Prohibit the use of personal devices for work-related tasks.
- Use additional tools, such as video analytics, to detect and limit employees’ ability to take pictures of their computer screens with personal smartphones.
- Establish a company-wide policy that prohibits the use of any AI tools except those on a management-approved list and deployed by corporate security teams. This policy should be formally documented, and employees should receive appropriate training.
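To make the network-level blocking from the first bullet concrete, below is a minimal sketch written as a mitmproxy addon. The domain list and the choice of mitmproxy are illustrative assumptions only; in a real deployment, an NGFW or secure web gateway would rely on vendor-maintained URL categories rather than a hand-curated list.

```python
# Illustrative sketch: block requests to a hand-picked set of public AI services.
# The domain list is an assumption for demonstration, not an exhaustive catalog.
from mitmproxy import http

BLOCKED_AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def _is_ai_service(host: str) -> bool:
    host = host.lower().rstrip(".")
    # Treat both the domain itself and any of its subdomains as a match.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

class BlockShadowAI:
    def request(self, flow: http.HTTPFlow) -> None:
        if _is_ai_service(flow.request.pretty_host):
            # Short-circuit the request with a policy message instead of forwarding it.
            flow.response = http.Response.make(
                403,
                b"Access to public AI services is prohibited by corporate policy.",
                {"Content-Type": "text/plain"},
            )

addons = [BlockShadowAI()]
```

If mitmproxy were used, the addon could be loaded with `mitmdump -s block_ai.py`; the same matching logic maps directly onto DNS sinkholes or proxy category rules in commercial filtering tools.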
Unrestricted use of AI
If the company considers the risks of using AI tools to be insignificant, or has departments that don’t handle personal or other sensitive data, the use of AI by these teams can be left all but unrestricted. By setting a short list of hygiene measures and restrictions, the company can observe LLM usage habits, identify popular services, and use this data to plan future actions and refine its security measures. Even with this democratic approach, it’s still necessary to:
- Train employees in the basics of responsible AI use as part of cybersecurity training. Good starting points are our recommendations, or a specialized course added to the company’s security awareness platform.
- Set up detailed application traffic logging to analyze the patterns of AI use and the types of services being accessed (see the log-analysis sketch after this list).
- Make sure that all employees have an EPP/EDR agent installed on their work devices, and a robust security solution on their personal gadgets. (“ChatGPT app” has been scammers’ bait of choice for spreading infostealers in 2024–2025.)
- Conduct regular surveys to find out how often AI is being used and for what tasks. Based on telemetry and survey data, measure the benefits and risks of AI use, and adjust your policies accordingly.
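To make the traffic-logging point concrete, here is a minimal sketch that counts requests to known AI services per department from an exported proxy log. The CSV column names and the domain list are assumptions made for illustration; in practice this data would come from NGFW, CASB, or DNS telemetry.

```python
# Illustrative sketch (hypothetical log format): count which AI services
# each department uses, based on an exported proxy log.
import csv
from collections import Counter

# Illustrative list; a real deployment would use an up-to-date category feed.
AI_DOMAINS = ("openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")

def summarize(log_path: str) -> Counter:
    usage = Counter()
    with open(log_path, newline="") as f:
        # Assumed columns: timestamp, user, department, destination_host
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            for domain in AI_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    usage[(row["department"], domain)] += 1
    return usage

if __name__ == "__main__":
    for (dept, domain), count in summarize("proxy_log.csv").most_common(10):
        print(f"{dept:<15} {domain:<25} {count}")
```

Even a simple tally like this shows which services are popular and in which teams, which is exactly the data the MIT report suggests learning from before procuring enterprise alternatives.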
Balanced restrictions on AI use
When it comes to company-wide AI usage, neither extreme — a total ban or total freedom — is likely to fit. A more versatile option is a policy that allows different levels of AI access depending on the type of data involved. Full implementation of such a policy requires:
- A specialized AI proxy that both cleans queries on the fly by removing specific types of sensitive data (such as names or customer IDs) and uses role-based access control to block inappropriate use cases (a minimal redaction and RBAC sketch follows this list).
- An IT self-service portal for employees to declare their use of AI tools — from basic models and services to specialized applications and browser extensions.
- A solution (NGFW, CASB, DLP, or other) for detailed monitoring and control of AI usage at the level of specific requests for each service.
- Only for companies that build software: modified CI/CD pipelines and SAST/DAST tools to automatically identify AI-generated code, and flag it for additional verification steps.
- As with the unrestricted scenario, regular employee training, surveys, and robust security for both work and personal devices.
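The query-cleaning AI proxy from the first bullet could, in its simplest form, look like the sketch below: regex-based redaction of obvious identifiers plus a role-based allowlist of use cases. The patterns, roles, and the forwarding stub are illustrative assumptions, not a production-grade DLP engine (which would also need, for example, named-entity recognition to catch personal names).

```python
# Illustrative sketch of an "AI proxy": redact obvious sensitive tokens and
# enforce a role-based allowlist before a prompt is forwarded to an external LLM.
# Patterns, roles, and the forwarding stub are assumptions for demonstration.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CUSTOMER_ID": re.compile(r"\bCUST-\d{6,}\b"),      # hypothetical ID format
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

# Hypothetical role-based access control: which use cases each role may send out.
ALLOWED_USE_CASES = {
    "marketing": {"ad_copy", "social_post"},
    "developer": {"code_review", "doc_draft"},
}

def redact(prompt: str) -> str:
    # Replace each sensitive match with a neutral placeholder label.
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

def submit(role: str, use_case: str, prompt: str) -> str:
    if use_case not in ALLOWED_USE_CASES.get(role, set()):
        raise PermissionError(f"Use case '{use_case}' is not allowed for role '{role}'")
    clean_prompt = redact(prompt)
    # In a real proxy, the approved external LLM API would be called here.
    return clean_prompt

print(submit("marketing", "ad_copy",
             "Write a follow-up for jane.doe@example.com about order CUST-004512"))
```

The same allowlist structure is what later lets the policy table below be enforced per role and per data type, rather than as a single company-wide switch.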
Armed with the listed requirements, a policy needs to be developed that covers different departments and various types of information. It might look something like this:
| Data type | Public-facing AI (from personal devices and accounts) | External AI service (via a corporate AI proxy) | On-premise or trusted cloud AI tools |
|---|---|---|---|
| Public data (such as ad copy) | Permitted (declared via the company portal) | Permitted (logged) | Permitted (logged) |
| General internal data (such as email content) | Discouraged but not blocked; requires declaration | Permitted (logged) | Permitted (logged) |
| Confidential data (such as application source code, legal or HR communications) | Blocked by DLP/CASB/NGFW | Permitted for specific, manager-approved scenarios (personal data must be removed; code requires both automated and manual checks) | Permitted (logged, with personal data removed as needed) |
| High-impact regulated data (financial, medical, and so on) | Prohibited | Prohibited | Permitted with CISO approval, subject to regulatory storage requirements |
| Highly critical and classified data | Prohibited | Prohibited | Prohibited (exceptions possible only with board of directors approval) |
To enforce the policy, a multi-layered organizational approach is necessary in addition to technical tools. First and foremost, employees need to be trained on the risks associated with AI — from data leaks and hallucinations to prompt injections. This training should be mandatory for everyone in the organization.
After the initial training, it’s essential to develop more detailed policies and provide advanced training for department heads. This will empower them to make informed decisions about whether to approve or deny requests to use specific data with public AI tools.
Initial policies, criteria, and measures are just the beginning; they need to be regularly updated. This involves analyzing data, reviewing real-world AI use cases, and monitoring popular tools. A self-service portal is needed as a stress-free environment where employees can explain what AI tools they’re using and for what purposes. This valuable feedback enriches your analytics, helps build a business case for AI adoption, and helps shape the role-based model used to apply the right security policies.
Finally, a multi-tiered system for responding to violations is a must. Possible steps:
- An automated warning, and a mandatory micro-training course on the given violation.
- A private meeting between the employee and their department head and an information security officer.
- A temporary ban on AI-powered tools.
- Strict disciplinary action through HR.
A comprehensive approach to AI security
The policies discussed here cover a relatively narrow range of risks associated with the use of SaaS solutions for generative AI. To create a full-fledged policy that addresses the whole spectrum of relevant risks, see our guidelines for securely implementing AI systems, developed by Kaspersky in collaboration with other trusted experts.