Foundation Capital’s thesis on context graphs captures where enterprise AI is heading—systems that accumulate decision traces and turn precedent into searchable institutional memory. But a source of truth is only as reliable as the actors that interact with it. If agents reason on poisoned context, exceed the scope of user intent, or generate inaccurate traces, the entire precedent base becomes suspect. Agent integrity is the missing piece that makes context graphs trustworthy.
Policy as Code: Managing Agent Security Across Heterogeneous Deployments
Enterprise AI agent adoption is creating a security challenge that traditional approaches weren’t designed to handle. When five development teams deploy hundreds of agents across different frameworks, clouds, and architectures, security teams lose the ability to maintain consistent oversight. At Acuvity, we use policy as code to solve this problem by treating agent security policies as declarative, version-controlled configuration that operates independently of how developers build their agents.
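The policy-as-code idea described above can be sketched in a few lines. The schema here is purely illustrative, not Acuvity's actual policy format: a declarative, version-controllable policy record that is evaluated against a proposed agent action, independent of which framework built the agent.

```python
from dataclasses import dataclass

# Hypothetical, framework-agnostic policy record. Field names and the
# permits() check are assumptions for illustration only.
@dataclass(frozen=True)
class AgentPolicy:
    name: str
    allowed_tools: frozenset[str]
    max_context_tokens: int
    redact_pii: bool = True

    def permits(self, tool: str, context_tokens: int) -> bool:
        """Check a proposed agent action against this policy."""
        return tool in self.allowed_tools and context_tokens <= self.max_context_tokens

# The same declarative policy applies whether the agent was built with
# LangChain, AutoGen, or a bespoke framework: the check sits outside
# the agent's code and can live in version control alongside it.
policy = AgentPolicy(
    name="finance-agents-baseline",
    allowed_tools=frozenset({"search_docs", "summarize"}),
    max_context_tokens=8_000,
)

print(policy.permits("search_docs", 4_000))   # True
print(policy.permits("delete_records", 100))  # False
```

Because the policy is plain data plus a pure check, it can be reviewed in pull requests, diffed across versions, and enforced uniformly across teams, which is the core of the policy-as-code approach.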
How MCP Servers Became Shadow AI’s Stealth Mode
Note: This was originally published to Forbes Technology Council. — Model Context Protocol (MCP) has emerged as a way for AI applications to connect to enterprise data sources. But MCP servers themselves have become a form of Shadow AI when deployed…
Why AI Breaks The Old Cybersecurity Model
This Forbes Technology Council analysis explores how artificial intelligence is transforming cybersecurity: dismantling traditional defense frameworks and forcing organizations to rethink identity, attribution, and control in the modern threat landscape. It explains why legacy models fail against AI-driven risks and what next-generation security strategies must address to protect digital systems.
What Our Latest 2025 AI Security Research Reveals About Enterprise Risk
We just released our inaugural State of AI Security report, based on research with 275 security and IT leaders across the United States. The findings confirm what I’ve been observing in conversations with enterprise leaders: they’re struggling to secure and govern…
Acuvity vs SASE/CASB: Choosing the Right Tool for Securing Generative AI
Background As generative AI becomes embedded across modern enterprise workflows, organizations are under pressure to address a fast-evolving risk landscape. From employees using ChatGPT to AI agents operating autonomously, the security perimeter has shifted and traditional data governance tools are not…
Why AI Security is Mission-Critical for AppSec Teams
How Application Security can stay ahead in the age of AI-powered development. The rise of Generative AI (Gen AI) is transforming how software is built, tested, and deployed, and Application Security (AppSec) teams are on the front lines of this shift. As…
AI Security Series: What It Really Takes to Secure Gen AI
This is Acuvity’s AI Security Series, a comprehensive exploration of securing AI systems, with a particular focus on Large Language Models (LLMs) and agentic applications. Each installment delves into a critical component of AI security, providing insights and strategies enterprises can use to protect their…
AI Security Series 5 – Model Training
As enterprises increasingly adopt Large Language Models (LLMs), some choose to pre-train or fine-tune models themselves. This post describes the problems to be aware of when training models. In this part of the series we will…
AI Security Series 4 – Model Usage
At the heart of any AI application or agentic system are LLMs. Your developers and vendors use multiple LLMs to balance quality and cost in delivering workflow automations and agentic systems. In this section we…