The Trust Layer Context Graphs Need

TL;DR: Foundation Capital’s thesis on context graphs captures where enterprise AI is heading. But for these systems to deliver on their promise, they need something the current conversation hasn’t fully addressed: agent integrity.

Jaya Gupta and Ashu Garg at Foundation Capital recently published what I consider the clearest articulation of where enterprise AI is heading.¹ Their thesis is elegant: the last generation of enterprise software created trillion-dollar companies by becoming systems of record for objects — customers in Salesforce, employees in Workday, operations in SAP. The next generation will be built by those who capture something these systems never stored: the reasoning behind decisions.

They call this accumulated structure a “context graph” — a living record of decision traces stitched across entities and time, where precedent becomes searchable. The context graph explains not just what happened, but why it was allowed to happen. And as AI agents take on more complex workflows, that explanation becomes essential. Agents encounter the same ambiguity humans navigate every day, but they lack the institutional memory and judgment that lets experienced employees know when to make exceptions and when to escalate. Context graphs provide that memory.

Context graphs don’t fully exist yet, at least not in the form Foundation Capital describes. But companies are already building toward this vision, and the trajectory is worth taking seriously.

I find myself nodding along to nearly all of it. The framing resonates because it names something many of us building in the AI infrastructure space have been circling around without quite pinning down. But as someone who has spent years thinking about how to secure systems that operate with increasing autonomy, I want to extend their argument in a direction the piece gestures toward but doesn’t fully develop.

Foundation Capital’s thesis is fundamentally about trust. They argue that context graphs become “the real source of truth for autonomy.” But a source of truth is only as reliable as the actors that interact with it. If agents query poisoned context, they reason on bad data. If agents take actions beyond what users intended, those actions get recorded as legitimate precedent. If decision traces can be manipulated after the fact, the entire historical record becomes suspect.

The integrity of a context graph depends on the integrity of the agents that read from it and write to it. And that integrity doesn’t emerge automatically. It has to be engineered.

Agent Integrity: The Missing Piece

Context graphs are infrastructure for agentic AI. Agents query them to understand precedent. Agents contribute to them when they make decisions. The value of the context graph — its ability to guide autonomous systems and serve as an authoritative record — is directly proportional to the confidence enterprises can place in the agents operating within it.

Agent integrity means several things:

Trustworthy context: When an agent pulls data from multiple systems to inform a decision, was that context reliable? Could it have been tampered with? Indirect prompt injection—where malicious instructions are planted in documents, emails, or tickets that agents process—is a form of prompt injection, the #1 vulnerability in OWASP’s Top 10 for LLM Applications.² An agent reasoning on poisoned context will generate compromised decision traces, regardless of whether the agent itself is functioning correctly.

Intent alignment: Agents have broad permissions by design. They need access to email, calendars, CRM, support systems, and more — that’s the whole value proposition. But broad permissions create a gap between what an agent is authorized to do and what the user actually asked it to do. We call this semantic privilege escalation: when an agent uses its legitimate access to take actions beyond the scope of the task it was given.³ Every credential is valid, every API call succeeds, but the agent is doing something no one requested.

Accurate decision traces: For a context graph to serve as a system of record, the traces it contains must be complete and tamper-resistant. Which agent acted? Under what policy? With what permissions? Based on what context? If any of this is missing or can be altered after the fact, the precedent base is unreliable.
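To make that last point concrete, here is one way a decision trace record could be structured. This is a minimal illustrative sketch in Python, not Foundation Capital’s schema or our implementation at Acuvity; the field names and the simple hash-chain are assumptions, chosen only to show how agent identity, policy, permissions, context provenance, intent, and tamper evidence might travel together in a single record.

```python
# Illustrative only: a minimal decision-trace record with a simple
# hash-chain for tamper evidence. All field names are hypothetical.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class DecisionTrace:
    agent_id: str                 # which agent acted
    policy_id: str                # under what policy
    permissions: list[str]        # with what permissions
    context_sources: list[dict]   # where the context came from, with trust labels
    user_intent: str              # the request the action was meant to serve
    action: dict                  # the action actually taken
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    prev_hash: str = ""           # digest of the preceding trace in the chain

    def digest(self) -> str:
        """Hash the full record so any later alteration is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


# Example: record an agent's refund decision and chain it to the prior trace.
trace = DecisionTrace(
    agent_id="support-agent-7",
    policy_id="refunds-v3",
    permissions=["crm:read", "billing:refund"],
    context_sources=[{"system": "ticketing", "record": "ticket-4821", "trusted": True}],
    user_intent="Process the refund requested in ticket 4821",
    action={"type": "refund", "order": "1093", "amount": 42.50},
    prev_hash="",  # digest of the previous trace would go here
)
print(trace.digest())
```

Any equivalent append-only or signed log would serve the same purpose. The point is that each trace binds the acting agent, the governing policy, the permissions in play, and the context it reasoned on into one record whose later alteration is detectable.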

Traditional security tools can’t verify any of this because they operate at the wrong layer. Your SIEM sees successful API calls, your IAM system confirms valid credentials, and your network monitoring shows normal traffic patterns — but none of them can tell you whether the agent did what the user intended, based on trustworthy context, or whether the resulting trace is an accurate record.

Why This Matters for Context Graphs

Foundation Capital emphasizes that context graphs gain value over time. Every exception becomes training data. Every override becomes a case study. The graph doesn’t just store history; it makes history useful by turning past decisions into precedent that future agents can query.

This compounding nature is what makes agent integrity so critical. A single compromised decision doesn’t just affect one transaction — it becomes part of the precedent base. Future agents reference it. The corruption propagates not through code execution or data exfiltration, but through the reasoning layer, quietly, over time, at scale.

Consider what this looks like concretely. An agent manipulated by injected instructions accesses data it shouldn’t have and takes an action the user never requested. That action gets recorded as a legitimate decision trace. Months later, another agent handling a similar request queries the context graph, finds that precedent, and follows it. The original compromise — whether caused by injection, ambiguity, or scope drift — is now encoded as standard practice.

This is a fundamentally different threat model from the one enterprise security architectures are designed to handle. We have mature frameworks for protecting data at rest and in transit. We understand how to validate inputs and sanitize outputs. But context graphs introduce a new category: protecting the integrity of reasoning itself.

What Agent Integrity Requires

For context graphs to function as trustworthy systems of record, organizations need capabilities that operate at the semantic layer, not just the permission layer:

Intent capture and tracking: Security starts with knowing what the user actually asked for. This means recording the original request, tracking how intent evolves as users refine their instructions, and maintaining that context through every downstream operation. Without a clear baseline of intent, there’s no way to evaluate whether subsequent actions are appropriate.

End-to-end workflow tracing: When an agent chains together multiple tool calls across multiple systems, organizations need visibility into the full path—not just what happened, but why the agent decided to take each step. This goes beyond traditional logging to capture reasoning, actions, and data flows as an integrated trace.

Security-annotated logging: Raw trace data is too voluminous for manual review. The system must enrich logs with security-relevant markers at collection time: sensitivity flags for PII or credentials, behavioral indicators that note deviations from historical patterns, and scope alignment assessments that evaluate whether actions match intent.

Intent-action alignment evaluation: This is the core detection mechanism. At runtime, every action must be evaluated against the captured intent. If a user asks for a document summary and the agent starts scanning Drive for API keys, that’s a violation—even though the agent has permission to access Drive. This evaluation must happen with low enough latency to intervene before damage occurs.
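To illustrate what such a runtime check might look like, the sketch below compares a proposed tool call against the captured intent and attaches security annotations before the action proceeds. The names here (evaluate_action, ProposedAction, the keyword-based scope check) are hypothetical simplifications, not Acuvity’s product logic or Gartner’s framework; a real system would derive the allowed scope from the intent semantically rather than from a hand-supplied tool list.

```python
# Illustrative only: a toy intent-action alignment check with
# security annotations. Names and policy logic are hypothetical.
from dataclasses import dataclass

SENSITIVE_MARKERS = ("api_key", "password", "credential", "ssn")


@dataclass
class ProposedAction:
    tool: str         # e.g. "drive.search"
    arguments: dict   # parameters the agent wants to pass


@dataclass
class Verdict:
    allowed: bool
    annotations: list[str]


def evaluate_action(user_intent: str, allowed_tools: set[str],
                    action: ProposedAction) -> Verdict:
    """Flag actions that fall outside the captured intent or touch sensitive data."""
    annotations = []

    # Scope check: is this tool within what the stated task plausibly requires?
    if action.tool not in allowed_tools:
        annotations.append(
            f"scope-violation: {action.tool} is not needed to satisfy '{user_intent}'"
        )

    # Sensitivity check: annotate arguments that look like credential hunting.
    arg_text = " ".join(str(v).lower() for v in action.arguments.values())
    if any(marker in arg_text for marker in SENSITIVE_MARKERS):
        annotations.append("sensitive-data: arguments reference credentials or secrets")

    allowed = not any(a.startswith("scope-violation") for a in annotations)
    return Verdict(allowed=allowed, annotations=annotations)


# Example: the user asked for a document summary, but the agent tries to
# search Drive for API keys. The tool is permitted by IAM, yet the action
# falls outside the captured intent.
verdict = evaluate_action(
    user_intent="Summarize the Q3 planning document",
    allowed_tools={"drive.get_document", "llm.summarize"},
    action=ProposedAction(tool="drive.search", arguments={"query": "api_key"}),
)
print(verdict.allowed)      # False
print(verdict.annotations)  # scope and sensitivity annotations
```

The annotations such a check produces are exactly the security-relevant markers the logging layer needs, and the verdict itself belongs in the decision trace sketched earlier.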

These capabilities are what we’ve built at Acuvity. The analyst community is converging on the same requirements—Gartner’s AI TRiSM framework explicitly calls out audit trails of all state changes to AI artifacts, alignment of agent activity with agent intentions, and detection of anomalous activities and security events.⁴ AI governance isn’t a compliance checkbox—it’s operational infrastructure that determines whether autonomous systems can be trusted.

Autonomy Requires Integrity

There’s a tendency in conversations about AI agents to treat autonomy as a spectrum that organizations move along based on comfort level — start with human-in-the-loop for everything, gradually relax oversight as confidence builds. But this framing misses something important. Autonomy isn’t just about risk tolerance; it’s about whether the systems that support autonomous action have the integrity guarantees that make autonomy defensible.

An organization might be entirely comfortable with an agent making certain decisions independently. But if they can’t demonstrate that the agent reasoned on trustworthy context, acted within the scope of user intent, and generated an accurate decision trace, they face liability exposure that no risk appetite can absorb. This is especially true in regulated industries, where the ability to explain and justify automated decisions isn’t optional.

Context graphs, done right, provide the substrate for that explanation. But “done right” means agent integrity from the foundation, not bolted on after the fact.

Building for the World Foundation Capital Describes

I’m genuinely excited about the context graph thesis because it articulates a vision of enterprise AI that goes beyond chatbots and copilots to something more fundamental: systems that accumulate institutional knowledge and apply it with increasing sophistication over time. That vision is worth building toward.

But the path from here to there requires agent integrity. When agents operate across system boundaries, pulling context from sources with varying levels of trustworthiness, how do you ensure the context hasn’t been poisoned? When agents have broad permissions to act on behalf of users, how do you ensure their actions stay within the scope of what was actually requested? When decision traces become precedent that shapes future behavior, how do you ensure those traces are accurate and tamper-resistant?

These are the requirements that determine whether context graphs can actually function as systems of record. Foundation Capital has identified the destination. Agent integrity is what makes it safe to get there.

That’s the work we’re focused on at Acuvity — ensuring that the agents interacting with enterprise systems operate with the integrity that context graphs require. The gap between what an agent is authorized to do and what it should do given the user’s actual intent is the gap that determines whether a context graph can be trusted. Closing that gap is what makes autonomy possible.


References

  1. Gupta, J. & Garg, A. “AI’s Trillion-Dollar Opportunity: Context Graphs.” Foundation Capital, December 2025. https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/ 

  2. OWASP Top 10 for LLM Applications 2025 ranks prompt injection as the #1 critical vulnerability in production AI deployments. See also: Palo Alto Networks Unit 42, “New Prompt Injection Attack Vectors Through MCP Sampling,” December 2025.

  3. For a detailed treatment of semantic privilege escalation and the security framework required to address it, see Acuvity, “Defending Against Semantic Privilege Escalation: Four Capabilities Required for an Agent Security Framework,” 2025.

  4. Gartner, “Market Guide for AI Trust, Risk and Security Management,” February 2025. See Table 1: AI Governance Technology Functions and Table 2: AI Runtime Inspection and Enforcement Functions.