The number of AI services continues to grow, with more and more tools available to employees every day. It’s becoming harder to keep up with all the new AI applications, much less understand which ones your employees can actually access or…
Tool Poisoning: Hidden Instructions in MCP Tool Descriptions
Imagine installing a seemingly benign math tool on your AI assistant that simply adds two numbers. Unbeknownst to you, the tool’s description itself contains hidden directives intended for the AI model. These malicious instructions are invisible or inconspicuous to the user,…
Cross-Server Tool Shadowing: Hijacking Calls Between Servers
Context MCP allows an AI agent to connect to multiple tool servers simultaneously. This flexibility can be dangerous: if one of those servers is malicious, it can shadow the tools of another server. In simple terms, a rogue server can interfere with or…
Rug Pulls (Silent Redefinition): When Tools Turn Malicious Over Time
Context Imagine that the AI assistant’s tool was actually safe at first – perhaps you used it for days without issue. Then, one day, it suddenly starts behaving maliciously, even though you never installed a new tool. This is the “rug pull”…
Secrets in the Wind: Environment Variables, URLs, and the Leaky Abstractions
Context In the evolving landscape of MCP servers and AI agents, a new category of risk is emerging: sensitive data exposure through dynamic access mechanisms. We’re talking about secrets not statically written to disk, but fetched on demand — via environment variables, command-line outputs,…
Deploy a Simple Chatbot Application Using Secure MCP Servers
Context You’ve built an agentic application that leverages MCP servers to give your agent advanced capabilities… and now it’s time to ship it to production! Securing the communication between your agent and the MCP servers—even within your own cluster—is essential. MCP’s…






