Anthropic Warned About AI Employees: Here’s What Comes Next

When Anthropic’s CEO predicted that AI employees would take over white-collar jobs within 12 months, most of the reaction focused on job counts. Which roles disappear? How fast? That debate misses the harder question: what happens inside a company when the “worker” is not a person at all, but software with its own identity, credentials, and the ability to act?

We already see a preview of that shift in how machines represent us and make decisions on our behalf.

From Cars to Companies

Consider a Tesla. You may drive it, but so might your spouse, your teenager, or a visiting relative. Each person interacts with the vehicle differently, but the car also has its own digital identity that systems use to recognize and authorize it.

Now extend that to San Francisco, where thousands of Waymo vehicles navigate the streets without human drivers. Each car authenticates to the fleet. Each passenger brings a personal profile. And above them sits a layer of AI supervisors monitoring the fleet as a whole. Several non-human identities operate alongside multiple human ones, each authorized to act in specific ways but intersecting constantly.

This is the same challenge enterprises face as they introduce AI agents into their operations.

AI as Employees Inside the Enterprise

Organizations are already deploying AI agents to streamline critical workflows:

1. Meeting summarization: Fellow, an AI meeting assistant, joins calls on Zoom, Teams, or Google Meet, transcribes discussions, captures action items, and summarizes decisions. Users can prompt, “Write a follow-up email,” and Fellow delivers, integrating seamlessly with workplace tools.

2. Email management: AI agents scan inboxes to categorize messages, identify key threads, flag priorities, extract action items, and suggest replies. These capabilities now enable efficient handling of large communication flows.

3. Scheduling assistants: Agents learn preferences, such as a taste for morning meetings, buffer time between calls, or ideal days of the week, and automatically schedule across time zones, adapt to changes, and avoid conflicts in a way traditional calendar tools can’t.

4. Enterprise integration at scale: At Morgan Stanley, a custom agent called AI @ Morgan Stanley Debrief summarizes video meetings and drafts follow-up emails, integrating data from Outlook, Zoom, and Salesforce. It’s a homegrown tool, not off-the-shelf, designed to fit seamlessly into existing advisor workflows.

5. ERP workflows in finance: FinRobot, an AI-native agent framework for ERP systems, autonomously handles tasks like budget planning, financial reporting, and even wire transfer processing, managing workflows end-to-end while reducing errors and improving compliance.

Every one of these agents requires system access to function, and at first the arrangement appears manageable: an employee oversees a handful of assistants, each limited to a narrow task. But the number of agents expands quickly. A single employee account can spawn dozens of them, all operating under that individual’s authority yet making independent decisions. Information then moves between systems without anyone reviewing it, and what began as a simple effort to offload routine work evolves into a distributed network of autonomous actors embedded across critical workflows.
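
To make that identity problem concrete, here is a minimal sketch of the alternative to account sharing: minting a distinct, narrowly scoped identity for each agent an employee spawns. All names, scopes, and the token format are illustrative, not any specific vendor’s API.

```python
# Hypothetical per-agent identity: each agent gets its own credential and
# an explicit scope list instead of inheriting its owner's full rights.
from dataclasses import dataclass, field
import secrets

@dataclass
class AgentIdentity:
    agent_id: str        # distinct from the human account that spawned it
    owner: str           # the employee accountable for the agent
    scopes: set[str]     # narrow, task-specific permissions
    token: str = field(default_factory=lambda: secrets.token_hex(16))

def spawn_agent(owner: str, task: str, scopes: set[str]) -> AgentIdentity:
    """Mint a separate identity per agent so its actions stay attributable."""
    return AgentIdentity(agent_id=f"{owner}/{task}", owner=owner, scopes=scopes)

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deny anything outside the agent's explicit scopes."""
    return action in agent.scopes

summarizer = spawn_agent("alice", "meeting-summarizer", {"calendar:read", "mail:draft"})
print(authorize(summarizer, "mail:draft"))    # True: within the task's scope
print(authorize(summarizer, "payroll:read"))  # False: Alice may hold this right; her agent does not
```

The point is attribution and containment: when dozens of agents act under one person’s authority, per-agent identities are what make “who did this, and was it allowed?” answerable at all.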

How Errors Cascade

Imagine a summarization agent scanning project-related emails. Because its permissions extend across the entire mailbox, it also ingests payroll and legal correspondence. The output is posted in Slack, where another agent updates Salesforce records. A reporting agent then generates a dashboard in Google Drive.

No human intervened at any step. What began as a request for a project summary became the uncontrolled spread of sensitive information across three enterprise systems. From the outside, it all looks authorized.
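
A toy reconstruction of that cascade shows why every hop looks legitimate. The mailbox contents, labels, and downstream functions below are invented for illustration; the realistic part is that no step checks what kind of data is flowing.

```python
# Each function stands in for an authorized agent action. Because the
# summarizer's scope covers the whole mailbox, payroll and legal threads
# ride along through every downstream hop without any human review.
mailbox = [
    {"subject": "Q3 project status",   "label": "project"},
    {"subject": "Payroll adjustments", "label": "payroll"},
    {"subject": "Pending litigation",  "label": "legal"},
]

def summarize(messages):
    # Mailbox-wide read scope: every message is fair game, labels ignored.
    return [m["subject"] for m in messages]

def post_to_slack(summary):
    # Second agent: posts the summary to a project channel.
    return {"channel": "#proj", "text": "; ".join(summary)}

def update_crm(slack_msg):
    # Third agent: copies the Slack text into a CRM record.
    return {"crm_note": slack_msg["text"]}

note = update_crm(post_to_slack(summarize(mailbox)))
print(note)  # payroll and legal subjects now live in the CRM
```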

At Black Hat USA 2025, researchers demonstrated “zero-click” prompt injection attacks that hijacked agents in platforms including ChatGPT, Microsoft Copilot, and Salesforce Einstein, exfiltrating data under valid credentials. Weeks later, investigators disclosed CVE-2025-54135 in Cursor AI, a widely used code editor. A single malicious prompt altered configuration files and escalated into full remote code execution.

These examples show what Anthropic hinted at: once AI is an “employee,” mistakes and compromises don’t just happen at the edges. They move through the core of business systems.

Frameworks Built for Humans Don’t Solve the Problem

Identity and access controls were built for people. OAuth, SAML, and consent flows assume a human is on the other end, reading prompts, weighing context, and exercising discretion. An agent does none of this. It executes whatever its permissions allow, guided only by the instructions it was given.

Because creating narrow roles is difficult, many companies take shortcuts, granting agents the same rights as their human operators. One account becomes dozens of processes with broad authority. Enterprises end up with distributed networks of non-human actors moving sensitive data between systems without oversight and without a framework that constrains their choices.
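
Here is a sketch of the difference between that shortcut and least privilege, under the assumption (illustrative, not any specific identity provider) that the platform can intersect an operator’s scopes with a per-task policy before issuing the agent’s credential:

```python
# The "inherit" path is the common shortcut: the agent reuses the operator's
# scopes wholesale. The guarded path intersects them with a task policy.
OPERATOR_SCOPES = {"mail:read", "mail:send", "payroll:read", "crm:write"}
TASK_POLICY = {"meeting-summarizer": {"mail:read", "mail:send"}}  # per-task allow-list

def effective_scopes(task: str, inherit: bool) -> set[str]:
    if inherit:
        return set(OPERATOR_SCOPES)                         # everything the operator can do
    return OPERATOR_SCOPES & TASK_POLICY.get(task, set())   # only what the task needs

print(effective_scopes("meeting-summarizer", inherit=True))
# includes payroll:read and crm:write, far beyond the task
print(effective_scopes("meeting-summarizer", inherit=False))
# only mail:read and mail:send
```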

This is why Anthropic’s framing of “AI employees” matters. We already have the jobs; what we lack are the guardrails.

The Blind Spot Across Systems

The problem is compounded by how deeply AI is embedded in enterprise tools. Monitoring often focuses on traffic to obvious services like ChatGPT or Claude. But AI is already in Figma, Adobe, Microsoft 365, Google Workspace, and Salesforce. Each feature creates new pathways for information to flow into a model, often without showing up as a distinct service call.

When an agent mishandles information in this environment, the exposure does not follow a clean, human-readable trail. Sensitive data can move from email to Slack to CRM to shared drives, all through legitimate agent activity. Logs record nothing unusual. The risk comes not from unauthorized access, but from authorized systems moving information in ways no one expected.
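
One hedged starting point is correlation: if every system tags its audit events with the acting agent, individually unremarkable log entries can be stitched into per-agent trails, and the trails that span multiple systems are the ones worth a human look. The log shape and field names below are invented for illustration.

```python
# Group per-system audit events into per-agent trails, then surface only
# the agents whose activity crosses system boundaries.
from collections import defaultdict

logs = [
    {"system": "email", "agent": "summarizer-01", "action": "read",   "object": "mailbox/*"},
    {"system": "slack", "agent": "summarizer-01", "action": "post",   "object": "#proj"},
    {"system": "crm",   "agent": "crm-sync-02",   "action": "write",  "object": "account/991"},
    {"system": "drive", "agent": "report-03",     "action": "create", "object": "dashboard.pdf"},
]

def cross_system_trails(entries):
    trails = defaultdict(list)
    for e in entries:
        trails[e["agent"]].append((e["system"], e["action"], e["object"]))
    # Keep only agents that touched more than one system.
    return {a: t for a, t in trails.items() if len({s for s, _, _ in t}) > 1}

for agent, trail in cross_system_trails(logs).items():
    print(agent, "->", trail)  # only summarizer-01 spans email and Slack
```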

Here Comes the Security Debt

Companies know they need AI security, but most controls are still aimed at yesterday’s threats: blocking unauthorized logins, filtering malicious inputs, or flagging suspicious behavior. Those measures work when the problem is someone trying to break in. They do not help when the activity is an authorized agent combining information in ways that violate policy.
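
What would help is a rule evaluated at the level of the data flow itself, not the login. As a sketch (labels and rules invented), policy can be expressed over source-label and destination pairs, so an agent that is fully authorized at sign-in can still be stopped from a forbidden combination:

```python
# Access control asks "may this agent touch the CRM?"; flow control asks
# "may payroll-labeled data reach the CRM?" Unreviewed pairs default to deny.
FLOW_RULES = {
    ("project", "crm"):   "allow",
    ("payroll", "crm"):   "deny",
    ("legal",   "slack"): "deny",
}

def check_flow(data_label: str, destination: str) -> str:
    return FLOW_RULES.get((data_label, destination), "deny")

print(check_flow("project", "crm"))  # allow: the expected workflow
print(check_flow("payroll", "crm"))  # deny: authorized agent, forbidden flow
```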

By deploying AI agents at scale, enterprises are building networks of non-human actors that operate continuously, interact with one another, and make decisions without human review. A handful of organizations are experimenting with agent-specific permissions and monitoring, but most still treat agents as if they were conventional applications: predictable, bounded, and static. They are anything but.

This is where Anthropic’s “AI employees” framing is useful. It reminds us that these systems are not just tools. They are active participants in workflows, and they need to be governed as such.

What Comes Next

Anthropic’s warning highlighted the economic disruption of AI replacing jobs. The operational challenge is arriving even sooner: identity complexity and security exposure at enterprise scale. Companies are reaping the benefits of automation, but at the same time they are accumulating security debt.

The next phase of enterprise security will be defined by how quickly organizations accept that they are already managing non-human employees, and how fast they adapt their frameworks to govern them.

Where do we go from here?

Enterprises that continue to treat agents like conventional applications will face mounting security debt. At Acuvity, we’re helping security leaders confront this shift directly, with visibility into how AI agents move, interact, and make decisions across the enterprise.

Explore how our approach to Gen AI security and governance can give your organization the control it needs. Request a demo today.

http://acuvity.ai

Sudeep Padiyar is passionate about cybersecurity and has built compelling products ranging from generative AI to traditional network security. He is currently building cutting-edge security for protecting Gen AI agentic applications and user access at Acuvity. Prior to that, he was on the founding team at Traceable AI, the leading API security startup acquired by Harness, where he played an instrumental part in its tremendous growth and successful exit. He began his security career at Palo Alto Networks, where he spearheaded CN-Series, the industry’s first Kubernetes next-gen firewall, led automation initiatives for cloud security, and managed cloud network security products. He holds an MBA from Santa Clara University and an MS from the State University of New York.