Artificial intelligence has become woven into how knowledge work gets done. But a significant portion of that AI usage is happening outside the view of IT departments, security teams, and organizational leadership. This phenomenon—commonly called "shadow AI"—represents one of the more complex challenges facing businesses today.
What Is Shadow AI?
Shadow AI refers to the use of AI tools and services by employees without explicit organizational approval or oversight. It's the AI equivalent of shadow IT—but with some important distinctions that make it worth understanding on its own terms.
Unlike traditional software that often requires installation or procurement, most AI tools are accessible through a web browser with nothing more than an email signup. This accessibility means that adoption can outpace organizational awareness almost instantly.
We explored related dynamics in our piece on AI coding agents and organizational considerations, but shadow AI extends well beyond development teams.
How Widespread Is the Issue?
Research suggests that shadow AI use is both common and growing:
- According to an UpGuard report, more than 80% of workers—including nearly 90% of security professionals—use unapproved AI tools in their jobs
- A WalkMe survey found that 78% of employees admit to using AI tools not approved by their employer
- A Cybernews survey of U.S. employees found that 93% of executives and senior managers report using shadow AI tools—higher than any other employee group
- In sectors like healthcare, manufacturing, and financial services, shadow AI tool usage has reportedly increased by more than 200% year over year
The pattern is consistent across multiple studies: shadow AI isn't a fringe behavior confined to a few tech-savvy employees. It's widespread, it's routine, and it reaches all the way to leadership.
Why Employees Turn to Unapproved Tools
Understanding why shadow AI happens is essential for understanding how to address it. Research indicates several common factors:
Familiarity: Many employees already use consumer AI tools in their personal lives. Surveys suggest that around 41% of employees using unapproved AI tools do so because they already rely on them personally.
Lack of Alternatives: Approximately 28% of employees say their organization doesn't provide an approved AI alternative for their needs.
Productivity Pressure: Employees often perceive AI tools as genuinely helpful for their work and may prioritize completing tasks over navigating approval processes.
This isn't fundamentally different from the dynamics that drove shadow IT adoption over the past decade. We touched on similar themes in our discussion of what business apps reveal about data.
Considerations for Business Leaders
Shadow AI creates several categories of considerations that may warrant attention:
Data Handling Questions
When employees use consumer AI tools for work tasks, organizational data may be processed by third-party services. Depending on the tool and its terms of service, this data might be:
- Stored on external servers
- Used to train AI models
- Subject to different privacy regulations than expected
- Accessible to the AI provider's personnel
For businesses handling sensitive client information or operating under regulatory requirements, these data flows may have compliance implications. We discussed related considerations in our article on Canada's privacy landscape for small businesses.
Security Considerations
According to IBM's Cost of a Data Breach report (covering incidents from March 2024 to February 2025), approximately 20% of organizations that experienced breaches reported that shadow AI was a contributing factor. The same research found that breaches involving shadow AI cost an average of $200,000 more than other breaches.
The security implications extend beyond direct breaches. Shadow AI usage can create:
- Gaps in organizational visibility and audit trails (a minimal log-review sketch appears below)
- Inconsistent data handling practices
- Potential vectors for social engineering and phishing
- Challenges for incident response when something goes wrong
We covered related attack vectors in our piece on AI-powered cyber threats.
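Closing the visibility gap usually starts with data the organization already has. As a minimal illustration, the Python sketch below counts requests to a handful of well-known AI service domains in a proxy or DNS log export. The domain list, the CSV format, and the column names ("user", "destination_host") are assumptions and would need to be adapted to whatever logging is actually in place.

```python
import csv
from collections import Counter

# Hypothetical list of AI-service domains to look for; a real inventory
# would be larger and maintained over time.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def summarize_ai_traffic(log_path: str) -> Counter:
    """Count requests to known AI domains, grouped by user.

    Assumes a CSV proxy/DNS log with 'user' and 'destination_host'
    columns; adjust the field names to your actual log schema.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Print the ten heaviest user/service combinations.
    for (user, host), count in summarize_ai_traffic("proxy_log.csv").most_common(10):
        print(f"{user:20} {host:30} {count}")
```

A review like this only surfaces traffic on managed networks; tools used on personal devices or home connections won't show up, so it will understate actual usage.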
Governance Challenges
Research suggests that while many organizations have developed AI policies (one study found 81.8% of IT leaders report having documented AI governance policies), actual enforcement and employee compliance vary significantly.
The gap between policy and practice creates ambiguity. When 78% of employees use unapproved tools despite policies, the policies themselves may need examination, whether that means communicating them better, enforcing them differently, or reconsidering which tools receive approval.
The Complexity of Platform Dominance
One finding worth noting: research from Reco's 2025 State of Shadow AI Report found that OpenAI's services account for approximately 53% of all shadow AI usage in studied organizations—more than the next nine AI platforms combined.
This concentration creates a particular dynamic. When most unauthorized AI usage flows through a small number of platforms, organizations face questions about whether to:
- Formally approve widely used platforms (potentially with enterprise agreements)
- Attempt to block access to specific services
- Accept some level of unmanaged usage while focusing on data sensitivity boundaries
- Invest in providing comparable approved alternatives (a rough sketch of these options follows the list)
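These options aren't mutually exclusive, and many organizations end up with a mix. As a rough way to make the trade-offs concrete, the sketch below expresses them as a tiered policy in Python: some services approved, some blocked, everything else monitored by default. The domain names and tier assignments are purely illustrative, not recommendations.

```python
from enum import Enum

class Tier(Enum):
    APPROVED = "approved"   # enterprise agreement or formal sign-off in place
    BLOCKED = "blocked"     # explicitly disallowed
    MONITOR = "monitor"     # unmanaged for now; usage tracked, not blocked

# Hypothetical tier assignments -- every organization's list will differ.
POLICY = {
    "api.openai.com": Tier.APPROVED,
    "claude.ai": Tier.APPROVED,
    "example-ai-notetaker.com": Tier.BLOCKED,
}

def classify(domain: str) -> Tier:
    """Anything not explicitly listed defaults to 'monitor' rather than 'block'."""
    return POLICY.get(domain.lower(), Tier.MONITOR)

print(classify("claude.ai"))          # Tier.APPROVED
print(classify("new-ai-startup.io"))  # Tier.MONITOR
```

Even a toy framing like this surfaces the real decisions: who maintains the list, and whether the default for an unknown service should be to monitor or to block.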
There's no universally correct answer—the appropriate approach depends on an organization's specific circumstances, risk tolerance, and resources.
Questions Worth Considering
Rather than offering prescriptive solutions that may not fit every situation, here are questions that may help clarify your organization's position:
- Do you have visibility into which AI tools employees are currently using?
- Does your organization provide approved AI tools that meet employee needs?
- Are existing policies clear about what's permitted, what's prohibited, and why?
- How would you know if sensitive data were processed through an unapproved service? (A small illustration follows this list.)
- What's the process for employees to request approval for new AI tools?
- Have you assessed which roles and functions have the greatest AI adoption—approved or otherwise?
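The question about sensitive data is the hardest to answer after the fact, which is why some organizations screen outbound text before it reaches an external service. The sketch below shows the idea at its most basic, flagging a couple of common patterns in a prompt; the patterns and the example text are illustrative only, and real data-loss-prevention tooling is considerably more sophisticated.

```python
import re

# A couple of illustrative patterns; real DLP rule sets are much broader
# and tuned to the data the organization actually handles.
PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: client Jane Doe, jane.doe@example.com, card 4111 1111 1111 1111"
findings = flag_sensitive(prompt)
if findings:
    print("Review before sending to an external AI service:", findings)
```

Pattern matching of this kind produces false positives and misses context-dependent sensitivity, so it is best treated as a prompt for human review rather than a verdict.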
The research consistently shows that leadership and security professionals are among the heaviest shadow AI users. Any approach that treats this as primarily an employee compliance problem may be missing important dynamics.
The Broader Context
Shadow AI represents a collision between the pace of technological change and the pace of organizational adaptation. AI capabilities have advanced rapidly, while governance frameworks, procurement processes, and acceptable use policies were often designed for different categories of tools.
Organizations that acknowledge this gap—and approach it as a design challenge rather than purely a compliance problem—may be better positioned to capture AI's benefits while managing associated considerations.
We explored similar themes around AI governance in our discussion of Claude Code and Cowork. The dynamics are consistent: powerful tools that are easy to adopt create both opportunities and organizational questions that deserve thoughtful attention.
This article is intended for informational purposes only and does not constitute professional security, legal, or compliance advice. Organizations should consult with qualified professionals to assess their specific situation and develop appropriate policies.