
2025 RESEARCH REPORT
2025 State of AI Security

What the latest data reveals about AI security risks, investment priorities and the biggest threats ahead.

Enterprises expect
AI security disasters
they can't prevent.

KEY INSIGHTS - AI SECURITY
Top Findings

Confronting the hard truths about enterprise AI security.

70%
lack optimized AI governance
50%
Data Loss via GenAI

50% expect a data loss incident in the next 12 months.

49%
Shadow AI Risk

Nearly half expect to have an incident caused by Shadow AI.

38%
AI Runtime Security

Runtime is the biggest enterprise AI security challenge.

#1
AI Supply Chain

AI Supply Chain security is the top budget priority over the next 12 months.

AI SECURITY MATURITY
AI Governance is in the Slow Lane
70%
lack optimized AI governance

Nearly 70% of organizations lack optimized AI governance maturity

32%
Managed AI governance

Only 32% of respondents say their AI governance is managed.

~40%
Not optimized or managed

Nearly 40% do not have managed or optimized AI governance.

DATA LOSS VIA AI TOOLS
AI Adoption Will Drive the Next Data Exposure Crisis

50%
expect a data breach via AI tools
49%
Shadow AI

49% of respondents view Shadow AI as the second biggest threat.

~41%
AI-Driven Insider Threats

Nearly 41% of organizations anticipate an AI-driven insider threat.

AI SECURITY RESPONSIBILITY
Who Owns AI Security? The CIO Takes the Torch.

The CIO leads at 29%, while CISOs do not even rank among the top three owners.

29%
identify the CIO as the owner of AI security
17%
Chief Data Officer

17% of organizations place AI security responsibility with data governance.

15%
Infrastructure and Operations

Only 15% state that AI security is under infrastructure and operations.

SECURITY BUDGET
AI Supply Chain Security: The Next Frontier?

AI supply chain security has emerged as the top budget priority for organizations.

#1
security budget priority: AI supply chain security over the next 12 months
25%
AI Agent Security

A quarter of respondents plan to invest in AI agent security.

Shadow AI Management

Respondents recognize Shadow AI as a security investment area.


2025 STATE OF AI SECURITY
Behind the Data

METHODOLOGY
275 enterprise security leaders share their AI security reality

275 security leaders from mid-market to enterprise organizations answered hard questions about AI security readiness. What they revealed challenges every assumption about how enterprises are managing AI risk. The scattered ownership, AI governance gaps, and incident expectations documented here show an industry caught unprepared for the security challenges created by artificial intelligence.

FAQ - 2025 STATE OF AI SECURITY
Your biggest questions about the findings, answered.

This report reveals that AI security represents a fundamental departure from how organizations have historically approached technology security. Unlike previous transitions where existing frameworks could be adapted to new technologies, AI is changing the nature of risk itself.

The research demonstrates that the challenges enterprises face are deeply interconnected – governance gaps directly contribute to fragmented ownership, which in turn explains why organizations expect widespread incidents while remaining unprepared across multiple threat categories. Most significantly, the report shows that enterprises are dealing with what amounts to a systemic readiness crisis rather than isolated security problems.

This systemic view helps explain why traditional security approaches are failing and why organizations need to build entirely new capabilities rather than simply applying familiar controls.

The report provides critical insights on how enterprise priorities are shifting in ways that challenge conventional wisdom. While industry discussions often focus heavily on model sourcing and software bills of materials, the data reveals organizations are actually most concerned with data sources and embeddings (31%) and external APIs with SaaS-embedded features (29%). Model sourcing ranks at only 13% of concerns. This disconnect between industry focus and actual organizational priorities suggests many security investments may be misaligned.

The report also reveals that AI supply chain security fundamentally differs from software supply chain security because it must address components that behave dynamically during runtime, requiring continuous monitoring rather than primarily pre-deployment controls.

The fundamental challenge is that AI operates differently from traditional IT systems that security teams are trained to protect. According to our research, 70% of organizations lack optimized AI governance, and nearly 40% have no AI-specific governance at all. This isn’t due to negligence—it’s because AI dissolves the traditional boundaries that cybersecurity was designed to defend.

Traditional security models rely on clear perimeters: network security protects the network layer, application security focuses on software, and data security manages information assets. AI, however, operates across all these domains simultaneously. Every query can potentially leak data, every plugin creates new attack vectors, and AI features embedded in everyday business applications open pathways that conventional security tools cannot monitor or control.

This systemic challenge is compounded by fragmented ownership. Our survey reveals that CIOs control AI security decisions in 29% of organizations, while CISOs—who traditionally own security—rank fourth at just 14.5%. This unusual distribution suggests organizations haven’t figured out whether AI security is a technology deployment issue, a data governance challenge, or a traditional security concern. Until enterprises develop unified governance structures and security models designed specifically for AI’s cross-domain nature, they’ll continue facing risks they can identify but cannot adequately address.

AI supply chain security has emerged as the #1 investment focus for organizations, representing one of the most significant shifts in enterprise security spending in decades.

This prioritization reflects a critical realization: AI components behave fundamentally differently from traditional software components.

Unlike conventional software supply chains that focus on static code and known vulnerabilities, AI supply chains include dynamic components that only reveal their full risk profile during runtime. Organizations identify data sources and embeddings as their greatest concern (31%), followed by external APIs and SaaS-embedded AI features (29%). These components can process sensitive data, retain information in ways organizations cannot control, and interact with users in unpredictable ways.

The investment surge recognizes that AI supply chain security requires continuous monitoring rather than pre-deployment checks. While 38% of organizations identify runtime as their most vulnerable phase, an additional 27% view risks as spanning the entire AI lifecycle. This means security teams need visibility into how AI agents access multiple systems, how embeddings process sensitive data, and how APIs enable real-time AI capabilities across enterprise applications. Traditional software bills of materials and vulnerability scanning simply cannot address these dynamic, runtime-based risks that emerge when AI components interact with real data and users in production environments.

Shadow AI represents an existential threat to enterprise data security, with 49% of organizations expecting Shadow AI incidents within the next 12 months—making it the second most likely AI-related security incident. Simultaneously, 23% acknowledge they’re inadequately prepared to address unapproved AI tools and services.

The threat differs fundamentally from traditional Shadow IT because AI tools directly process and potentially retain the sensitive data employees input. When workers use platforms like ChatGPT, Claude, or Midjourney without approval, they may inadvertently expose confidential information, intellectual property, or customer data to external services. This information could be stored indefinitely, used to train models accessible to other users, or processed in jurisdictions with different data protection standards.

Most concerning is how Shadow AI proliferates through seemingly legitimate channels. Our research shows 18% of organizations worry about GenAI features embedded within approved SaaS applications—capabilities often enabled automatically in tools like Zoom, Salesforce, Adobe, and Grammarly. Employees may not even realize they’re using AI functionality that’s analyzing and processing company data. Traditional security approaches fail because they focus on blocking unauthorized applications, but Shadow AI often operates within authorized software or through browser-based interfaces that bypass corporate controls.

Without specialized AI security and AI governance solutions, organizations have no visibility into what AI capabilities employees are using or what data is being exposed.

Runtime emerges as the critical vulnerability point, with 38% of security leaders identifying it as their most vulnerable phase and 27% admitting it’s where they’re least prepared. This vulnerability-preparedness gap represents the most dangerous aspect of enterprise AI security.

Runtime is where AI systems become truly unpredictable. During this phase, AI actively processes real user inputs, accesses live data, makes autonomous decisions, and potentially interacts with multiple enterprise systems. Pre-deployment testing and static security controls cannot anticipate the full range of behaviors that emerge when AI encounters actual business data, creative user prompts, or unexpected input combinations.

The challenge intensifies because runtime AI security requires capabilities most organizations haven’t developed. Traditional security tools excel at monitoring known patterns and blocking identified threats, but AI runtime security demands real-time analysis of context, intent, and semantic meaning. Security teams need to detect when legitimate AI use crosses into data exfiltration, identify prompt injection attempts that appear as normal queries, and recognize when AI agents exceed their intended scope—all without disrupting legitimate business operations. Our research shows organizations understand this criticality but lack the tools, skills, and frameworks to address it. This explains why enterprises simultaneously deploy AI capabilities while expecting incidents they feel powerless to prevent.

2025 STATE OF AI SECURITY
Download the Report

Explore the 2025 State of AI Security Report with new data on Shadow AI, AI supply chain risk, runtime security, and enterprise AI governance gaps.