
Cross-Server Tool Shadowing: Hijacking Calls Between Servers
Context: MCP allows an AI agent to connect to multiple tool servers simultaneously. This flexibility can be dangerous: if one of those […]

Rug Pulls (Silent Redefinition): When Tools Turn Malicious Over Time
Context: Imagine that the AI assistant’s tool was actually safe at first – perhaps you used it for days without issue. Then, […]

Secrets in the Wind: Environment Variables, URLs, and Leaky Abstractions
Context: In the evolving landscape of MCP servers and AI agents, a new category of risk is emerging: sensitive data exposure through […]

Tool Poisoning: Hidden Instructions in MCP Tool Descriptions
Imagine installing a seemingly benign math tool on your AI assistant that simply adds two numbers. Unbeknownst to you, the tool’s description […]

Deploy a simple chatbot application using Secure MCP Servers
Context: You’ve built an agentic application that leverages MCP servers to give your agent advanced capabilities… and now it’s time to ship […]

MCP Server: The Dangers of “Plug-and-Play” Code
With great power comes great(er) responsibility. Since its launch in November 2024, MCP (Model Context Protocol) has been adopted across industries, for […]

AI Security Series 5 – Model Training
As enterprises increasingly adopt Large Language Models (LLMs), some choose to pre-train or fine-tune models. This blog describes problems that one […]

AI Security Series 4 – Model Usage
At the heart of any AI application or agentic system are LLMs. Your developers and vendors are using multiple LLMs to achieve […]

AI Security Series 3 – Datastores
Modern AI applications—especially those involving conversational agents, retrieval-augmented generation (RAG), and enterprise copilots—depend heavily on a variety of datastores to supply, retrieve, […]

AI Security Series 2 – Gen AI Application Security Pillars
As enterprises rapidly integrate AI systems into core workflows, the need to adopt a security-first mindset becomes imperative. These systems, especially those […]

AI Security Series 1 – Applications and Agents
Introduction The rapid advancement of AI technologies—particularly large language models (LLMs) and agentic systems—has transformed the way modern applications are built and […]

AI Security Series: What It Really Takes to Secure Gen AI
This is Acuvity’s AI Security Series, which offers a comprehensive exploration of securing AI systems, particularly those built on Large Language Models (LLMs) and […]