
2025: The Year AI Security Became Non-Negotiable

TL;DR: A comprehensive look at the AI security landscape in 2025, from critical LLM vulnerabilities and shadow AI risks to agentic AI threats, new OWASP frameworks, and what enterprise security leaders should prioritize in 2026.

AI security as a discipline barely exists. 

ChatGPT launched in November 2022; the first OWASP Top 10 for LLMs came out in 2023. We’ve had roughly two years to figure out how to secure a technology that’s already embedded in enterprise workflows, processing sensitive data, and, as we start 2026, operating autonomously.

But 2025 was different. 

This was the year of firsts: the first documented autonomous cyberattack campaign run by AI agents, the first OWASP framework for agentic AI security, the first EU AI Act enforcement with meaningful penalties. It was also the year that revealed how unprepared enterprises remain. Our 2025 State of AI Security report found that CIOs, not CISOs, now own AI security in most organizations, a shift that raises questions about whether AI risk is being treated as a security problem at all.

What follows is an accounting of the year, including the incidents, the frameworks, the market shifts, and what they suggest about where this is heading.

The AI Stack Revealed Its Fragility

The most significant security development of 2025 was the discovery that the foundational infrastructure powering enterprise AI harbors serious vulnerabilities. LangChain-core, downloaded 847 million times, became the focal point in December when researcher Yarden Porat at Cyata found “LangGrinch” (CVE-2025-68664). The flaw, scoring 9.3 CVSS, allowed attackers to extract environment secrets, cloud credentials, and API keys through prompt injection.

What made LangGrinch significant was what it revealed about agent architecture. In these frameworks, structured data flows downstream through serialization pipelines that were never designed to handle adversarial input. The vulnerability class will likely persist across similar tools.
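The vulnerability class is easy to illustrate. Below is a minimal, hypothetical sketch (not LangChain's actual code) of a deserializer that instantiates objects from a "type" field in downstream structured data. If prompt injection lets an attacker shape that data, the attacker picks which class gets built, and an internal helper that reads the environment becomes a secret-exfiltration primitive:

```python
import os

class Note:
    """A benign type the pipeline is meant to reconstruct."""
    def __init__(self, text):
        self.text = text

class EnvLookup:
    """A hypothetical internal helper that reads config from the environment."""
    def __init__(self, var):
        self.value = os.environ.get(var)

REGISTRY = {"Note": Note, "EnvLookup": EnvLookup}

def naive_deserialize(payload: dict):
    # The core mistake: trusting the "type" field in data that an
    # LLM output (and therefore a prompt injector) can influence.
    cls = REGISTRY[payload["type"]]
    return cls(**payload["args"])

def safe_deserialize(payload: dict):
    # Mitigation sketch: an explicit allowlist of types that are
    # safe to construct from untrusted input.
    allowed = {"Note"}
    if payload["type"] not in allowed:
        raise ValueError(f"refusing to deserialize {payload['type']!r}")
    return naive_deserialize(payload)
```

With `naive_deserialize`, a payload naming `EnvLookup` reads whatever environment variable the attacker asks for; the allowlisted version refuses anything outside the types the pipeline was designed to carry.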

Langflow suffered worse. CVE-2025-3248 scored 9.8 CVSS and enabled unauthenticated remote code execution through an endpoint that invoked Python’s exec() on user-supplied code. CISA added it to the Known Exploited Vulnerabilities catalog in May after Trend Micro documented active exploitation. Microsoft’s 365 Copilot also saw its first high-severity vulnerability with CVE-2025-32711.
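The Langflow flaw follows a pattern worth spelling out. A hedged sketch (not Langflow's actual handler) of an endpoint that "validates" user code by running it, versus one that parses without executing:

```python
import ast

def validate_flow_unsafe(code: str) -> bool:
    # The vulnerable pattern: checking user-supplied code by executing it.
    # Any unauthenticated caller gets arbitrary code execution on the server.
    try:
        exec(code, {})
        return True
    except Exception:
        return False

def validate_flow_safer(code: str) -> bool:
    # Mitigation sketch: ast.parse checks syntax only and never runs
    # the code, so malicious input cannot execute here.
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False
```

Even the safer version only proves the code parses; actually running user-authored flows still requires real isolation (sandboxed workers, restricted credentials), not in-process execution.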

Supply chain attacks accelerated in parallel. JFrog documented a 6.5-fold increase in malicious models on Hugging Face, with attackers using “nullifAI” techniques to evade security scanners. Palo Alto Networks Unit 42 identified a particularly clever attack vector: when Hugging Face authors delete accounts, threat actors can re-register those namespaces and upload poisoned model versions. Both Google Vertex AI and Microsoft Azure AI Foundry were found hosting vulnerable “orphaned” models.

Shadow AI Proved More Expensive Than Anyone Expected

February brought the largest AI conversation leak in history. A threat actor breached OmniGPT, an aggregator platform connecting users to ChatGPT-4, Claude 3.5, and Gemini, exposing 34 million lines of conversations, 30,000 user credentials, and uploaded files containing business documents. OmniGPT never publicly acknowledged the breach.

IBM’s 2025 Cost of a Data Breach Report put numbers to the shadow AI problem: one in five organizations experienced breaches from unsanctioned AI usage, costing $670,000 more than traditional breaches and taking 247 days to detect. The report found 97% of organizations with AI breaches lacked proper access controls, while 63% either had no AI governance policy or were still developing one.

Acuvity’s 2025 State of AI Security survey of 275 enterprise security leaders found similar patterns. Half expect data leakage through generative AI tools in the next 12 months; 49% anticipate shadow AI incidents. The primary concerns: standalone GenAI tools adopted without IT approval (21%) and AI features embedded in SaaS applications that employees may not even realize are processing sensitive data (18%). AI-driven insider threats round out the top three at 41%.

The scale of unsanctioned AI usage is staggering. Zscaler’s ThreatLabz documented 3,000% year-on-year growth in enterprise AI/ML transactions, with organizations sending 4.5 petabytes of data to AI tools. Enterprises responded by blocking roughly 60% of AI/ML transactions. Skyhigh Security found the average enterprise running over 320 unsanctioned AI applications.

The AI Governance Gap Is Worse Than It Appears

The governance problem runs deeper than shadow AI. Acuvity’s research found 70% of organizations lack optimized AI governance—meaning they can’t demonstrate board-level risk reviews, automated monitoring, or policies updated after incidents. More troubling: 39% operate below managed governance levels entirely, relying on ad hoc practices or nothing at all.

This creates a peculiar situation where organizations deploy AI at scale while expecting incidents they feel unprepared to handle. Only 32% operate with measured effectiveness and reporting.

AI security ownership has fragmented in ways that don’t match how enterprises typically organize. Acuvity found CIOs lead AI security in 29% of enterprises, followed by Chief Data Officers at 17% and infrastructure teams at 15%. CISOs rank fourth at 14.5%, a departure from traditional security domains where security leadership holds primary responsibility.

This isn’t necessarily wrong, but it raises questions. Is AI security being treated as a security discipline? Or has it defaulted to whoever owns the technology deployment, without adequate consideration of the risk management expertise required?

Runtime Is Where the Risk Concentrates

Where do AI risks actually materialize? Not where traditional security thinking would suggest. Acuvity’s survey found 38% of organizations identify runtime as their most vulnerable phase, with another 27% citing it as where they’re least prepared.

Pre-deployment concerns rank far lower: 13% cite dataset integrity, 12% point to model provenance. The gap suggests organizations understand something important – AI components create risks through live interactions that can’t be fully assessed before deployment. Static inventories and provenance tracking matter, but the security implications only become visible when systems operate in production with real data and real users.

This has implications for how we think about AI security investment. The “shift-left” approaches that work for traditional software don’t map cleanly to AI systems where behavior emerges at runtime.

Standards and Frameworks Tried to Catch Up

OWASP’s 2025 releases became the de facto standards for AI application security. The Top 10 for LLM Applications added System Prompt Leakage (reflecting real attacks extracting valuable IP) and Vector and Embedding Weaknesses (addressing the 53% of companies using RAG architectures). The framework’s treatment of “Excessive Agency” proved prescient as agentic deployments accelerated.

MITRE’s ATLAS framework reached 66 techniques across 15 tactics by October, adding 14 agentic AI attack techniques. Their SAFE-AI framework created the first systematic mapping between AI threats and NIST SP 800-53 controls – finally giving security teams a translation layer between AI risks and established compliance frameworks.

NIST released a preliminary draft of the Cybersecurity Framework Profile for AI in December, developed with 6,500 contributors. Gartner’s Market Guide for AI TRiSM offered a sobering prediction: through 2026, at least 80% of unauthorized AI transactions will stem from internal policy violations rather than external attacks.

Regulations Entered the Chat

The EU AI Act crossed its first enforcement thresholds. February 2 brought prohibitions on manipulative AI and mandatory AI literacy requirements. August 2 made GPAI rules binding, with penalties reaching €35 million or 7% of global annual turnover. OpenAI, Anthropic, Microsoft, and Google signed the voluntary GPAI Code of Practice in July.

The US regulatory picture grew more complicated. The Trump administration’s January Executive Order revoked Biden-era safety reporting requirements, while December’s EO 14319 signaled federal preemption of state AI laws. But enforcement continued: the SEC pursued “AI washing” cases while the FTC acted against companies misrepresenting AI capabilities.

States pushed forward regardless. New York’s RAISE Act established the first comprehensive state reporting requirements for frontier AI models, mandating 72-hour safety incident reporting. Colorado delayed its AI Act to June 2026. California clarified that employment discrimination laws apply to automated-decision systems.

China implemented mandatory AI content labeling effective September 1. The convergence of global regulatory approaches (prescriptive EU rules, Chinese technical standards, sector-specific US enforcement) created the compliance complexity that will define 2026 planning.

AI Agents Became Weapons

In November, Anthropic disclosed GTG-1002, the first documented large-scale cyberattack executed primarily by AI. Chinese state-sponsored actors used a jailbroken instance of Claude Code to attack approximately 30 organizations across technology, finance, and government. The AI performed 80-90% of operations independently: reconnaissance, vulnerability discovery, exploit development, and data exfiltration, at rates impossible for human operators.

In August, Anthropic disrupted a separate campaign using Claude for theft and extortion targeting 17 organizations including healthcare and government agencies. The pattern was clear: adversaries recognized that agentic capabilities could be turned against defenders.

OWASP responded in December with the Top 10 for Agentic Applications, identifying fundamental threats: agent behavior hijacking, tool misuse, identity and privilege abuse, memory poisoning, cascading hallucination attacks. SailPoint found 80% of organizations had already encountered risky AI agent behaviors.

OpenAI’s December security update for ChatGPT’s Atlas browser acknowledged what many suspected: “Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved.’” They’ve built an LLM-based automated attacker, trained with reinforcement learning, to find injection vectors: AI fighting AI.

Investment Is Shifting Toward AI Supply Chain Security

One clear signal in Acuvity’s findings: AI supply chain security emerged as the top investment priority, with 31% planning their largest security investments there over the next 12 months. AI agent security follows at 25%, shadow AI management at 16%.

This represents a significant reorientation—from isolated tool management toward comprehensive oversight of models, datasets, agents, plugins, APIs, and SaaS AI features.

Organizations define AI supply chain risk differently than traditional software security frameworks suggest. Data sources and embeddings rank as the greatest concern at 31%, followed by external APIs and SaaS-embedded AI features at 29%. Model sourcing and provenance, which dominate traditional supply chain discussions, concern only 13%.

The Vendor Landscape Consolidated

Major security vendors moved to capture the market. Palo Alto Networks acquired Protect AI in July and launched Prisma AIRS 2.0. Their December State of Cloud Security Report found 99% of respondents reported at least one attack on AI systems in the past year.

CrowdStrike unveiled Falcon AI Detection and Response in December, with AI-SPM capabilities for shadow AI detection. Microsoft expanded Security Copilot with 12 AI-powered agents, reporting 22.8% fewer alerts per incident and malicious email detection up to 550% faster for organizations using it.

Google Cloud made Model Armor generally available for runtime protection across Gemini, OpenAI, Anthropic, and open-source models. Wiz launched AI-SPM with agentless discovery of AI services, while Google’s proposed $32 billion acquisition of the company awaits regulatory review.

Market projections converged on aggressive growth: Mordor Intelligence valued AI cybersecurity at $30.92 billion for 2025 with 22.8% CAGR toward $86 billion by 2030. Gartner projected total security spending of $213 billion in 2025.

The Industry Hype Requires Careful Navigation

The agent security market warrants skepticism. Gartner’s June analysis placed AI agents at the “Peak of Inflated Expectations” and predicted over 40% of agentic AI projects will be canceled by end of 2027 due to escalating costs, unclear value, or inadequate risk controls. They warned explicitly of “agent washing”—vendors rebranding RPA, chatbots, or AI assistants as “agentic” without meaningful capability changes.

The legitimate concerns are documented: privilege escalation when agents inherit human-scoped permissions, prompt injection exploiting autonomous tool access, multi-step attack chains across connected systems. LangGrinch and the Microsoft Copilot CVE prove the attack surface is real. But vendor marketing has outpaced enterprise adoption: only 45% of enterprises run even one AI agent with access to critical systems.

What’s substantiated: prompt injection remains unsolved, credential exposure in agent workflows is a genuine risk, OWASP’s framework reflects community consensus. What’s speculative: agent-to-agent collusion at scale, cascading hallucination attacks across production systems, claims of “entirely new security paradigms.”

The practical path forward: inventory first (know what agents are running), implement least-privilege access controls, establish behavioral monitoring before purchasing specialized tools. Identity management, input validation, network segmentation – the foundational principles remain relevant.
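The access-control and monitoring steps can be sketched concretely. Here is a minimal, hypothetical toolbox wrapper (the class and tool names are illustrative, not any vendor's API) that routes every agent tool call through an allowlist and an audit log:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GatedToolbox:
    """Sketch of least-privilege tool access for an AI agent:
    agents reach tools only through an allowlist, and every call
    (allowed or denied) is recorded for behavioral monitoring."""
    tools: dict[str, Callable] = field(default_factory=dict)
    allowed: set[str] = field(default_factory=set)
    audit_log: list[tuple] = field(default_factory=list)

    def register(self, name: str, fn: Callable, allowed: bool = False):
        # Default-deny: tools are callable only if explicitly allowed.
        self.tools[name] = fn
        if allowed:
            self.allowed.add(name)

    def call(self, agent_id: str, name: str, *args):
        # Log first, so denied attempts are visible to monitoring too.
        self.audit_log.append((agent_id, name, args))
        if name not in self.allowed:
            raise PermissionError(f"{agent_id} denied tool {name!r}")
        return self.tools[name](*args)
```

Registering a destructive tool without `allowed=True` means an injected prompt that tries to invoke it produces a logged `PermissionError` instead of an action, which is exactly the inventory-plus-least-privilege posture described above.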

What 2026 Likely Holds

Analyst predictions converge on several themes. Gartner projects over 50% of enterprises will use AI security platforms by 2028 (up from under 10% today) while predicting 40% of AI data breaches by 2027 will stem from improper cross-border GenAI usage. Forrester warns an agentic AI deployment will cause a publicly disclosed breach in 2026.

Google Cloud’s Cybersecurity Forecast anticipates “the first sustained, automated campaigns where threat actors use agentic AI to autonomously discover and exploit vulnerabilities faster than human defenders can patch.” Microsoft predicts AI agents will require the same security protections as human users. Palo Alto Networks sees identity as the primary battleground, with machine identities outnumbering humans 82-to-1.

The regulatory trajectory intensifies: EU AI Act high-risk requirements take effect in August 2026, South Korea’s AI Basic Act enforces in January, and US federal-state tensions will dominate compliance planning. IDC projects organizations will spend over $100 billion securing AI infrastructure by 2029.

Where Do We Go From Here?

2025 established that AI security is no longer a specialized concern but a foundational requirement. Shadow AI creates material breach risk. Foundational frameworks harbor critical vulnerabilities. Adversaries have weaponized AI agents.

Three imperatives for 2026. First, visibility precedes protection: you can’t secure AI you haven’t inventoried, and enterprises averaging 320+ unsanctioned AI applications face significant exposure. Second, agent security requires architectural thinking: privilege escalation, prompt injection, and multi-step attacks demand security embedded in design, not added afterward. Third, governance frameworks have matured enough to implement: OWASP, MITRE ATLAS, and the pending NIST profile provide guidance organizations can no longer claim doesn’t exist.

As Acuvity CEO Satyam Sinha put it: “AI is changing the nature of risk itself, forcing leaders to confront incidents they admit they aren’t ready to manage.” The 70% of organizations lacking optimized governance aren’t failing from poor planning—they’re facing a technology that dissolves the boundaries cybersecurity was designed to defend.

References

Vulnerability Disclosures and Incidents

  • Cyata. “LangGrinch: LangChain Core CVE-2025-68664.” December 2025. https://cyata.ai/blog/langgrinch-langchain-core-cve-2025-68664/ 
  • The Hacker News. “Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection.” December 2025.
  • Hackread. “OmniGPT AI Chatbot Breach: Hacker Leaks 34M Messages.” February 2025.
  • TechCrunch. “OpenAI says AI browsers may always be vulnerable to prompt injection attacks.” December 2025.
  • OpenAI. “Continuously hardening ChatGPT Atlas against prompt injection attacks.” December 2025.

Industry Research and Surveys

  • Acuvity. “2025 State of AI Security Report.” September 2025.
  • IBM. “2025 Cost of a Data Breach Report.” 2025.
  • Zscaler ThreatLabz. “AI Security Report 2025.”
  • Skyhigh Security. “2025 Cloud Adoption and Risk Report.”
  • SiliconANGLE. “Enterprise AI adoption jumps 30-fold as organizations face growing cybersecurity risks.” March 2025.

Frameworks and Standards

  • OWASP. “Top 10 for LLM Applications 2025.”
  • OWASP. “Top 10 Risks and Mitigations for Agentic AI Security.” December 2025. https://genai.owasp.org/
  • MITRE. “ATLAS Framework.” https://atlas.mitre.org/
  • MITRE. “SAFE-AI: A Framework for Securing AI Systems.” April 2025.
  • NIST. “Draft Cybersecurity Framework Profile for Artificial Intelligence (IR 8596).” December 2025.
  • Gartner. “Market Guide for AI Trust, Risk and Security Management.” February 2025.

Analyst Predictions and Market Research

  • Gartner. “Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027.” June 2025.
  • Gartner. “Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027.” February 2025.
  • Gartner. “Top Strategic Technology Trends for 2026.” October 2025.
  • Gartner. “Worldwide End-User Spending on Information Security to Total $213 Billion in 2025.” July 2025.
  • Forrester. “2026 Technology & Security Predictions.” October 2025.
  • Mordor Intelligence. “AI in Security Market Size & Share Analysis.” 2025.
  • McKinsey & Company. “Deploying Agentic AI with Safety and Security.” 2025.

Regulatory Developments

  • EU AI Act enforcement milestones (February 2, August 2, 2025).
  • Metricstream. “2026 Guide to AI Regulations and Policies in the US, UK, and EU.”
  • Hogan Lovells. “White House Issues Executive Order to Establish Federal AI Policy.” 2025.
  • Seyfarth Shaw. “AI Legal Roundup: Colorado, California, Illinois Developments.” 2025.

Vendor Announcements

  • Palo Alto Networks. “Completes Acquisition of Protect AI.” July 2025.
  • Palo Alto Networks. “Launches Prisma AIRS 2.0.” 2025.
  • CrowdStrike. “Announces GA of Falcon AI Detection and Response.” December 2025.
  • Microsoft. “Unveils Security Copilot Agents and New Protections for AI.” March 2025.
  • Google Cloud. “Model Armor for AI Threat Protection.” 2025.
  • Wiz. “Wizdom 2025: AI-SPM Launch.”

AI Supply Chain Security

  • VentureBeat. “Seven steps to AI supply chain visibility.” 2025.
  • Traxtech. “Hugging Face Model Hijacking Threatens AI Supply Chain Security.” 2025.

Threat Intelligence

  • Anthropic. GTG-1002 disclosure. November 2025.
  • Infosecurity Magazine. “Forrester: Agentic AI-Powered Breach Will Happen in 2026.”
  • Obsidian Security. “The 2025 AI Agent Security Landscape.”
  • Harvard Business Review / Palo Alto Networks. “6 Cybersecurity Predictions for the AI Economy in 2026.” December 2025.