What Anthropic announced
On April 8, 2026, Anthropic launched Claude Managed Agents, currently in public beta. It's not a new model. It's not a Claude update. It's an infrastructure service: a cloud platform for running autonomous AI agents without worrying about servers, containers, recovery or scaling.
Until now, building an agent with Claude meant handling everything yourself: the execution loop, tool orchestration, sandboxing, state persistence. Managed Agents take that work off your plate: you define what the agent should do and which tools it can use; Anthropic handles the rest.
The promise is concrete: cut development timelines from months to days. Early adopters — Notion, Rakuten, Asana, Sentry — report up to 10x faster deployment times.
How they work: four key concepts
The Managed Agents architecture revolves around four elements.
Agent: the agent definition. It includes the Claude model to use, the system prompt, available tools, connected MCP servers and guardrails. You create it once and reuse it across multiple sessions.
Environment: the configured cloud container. You can pre-install packages (Python, Node.js, Go), define network access rules and mount files. This is where the agent operates.
Session: an active instance of the agent executing a specific task. It has a persistent filesystem and conversation history that survives between interactions.
Events: the messages exchanged between your application and the agent. You can send input, receive output via SSE (Server-Sent Events) streaming, and even interrupt or redirect the agent during execution.
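The relationship between these four concepts can be sketched in code. This is an illustrative model only: every class name, field, and method below is an assumption for explanatory purposes, not the actual Managed Agents SDK.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four concepts; names and fields are
# assumptions, not the real Managed Agents API.

@dataclass
class Agent:
    """Reusable agent definition: model, prompt, tools, MCP servers."""
    model: str
    system_prompt: str
    tools: list[str] = field(default_factory=list)
    mcp_servers: list[str] = field(default_factory=list)

@dataclass
class Environment:
    """Configured cloud container: packages, network rules, mounted files."""
    packages: list[str] = field(default_factory=list)
    network_rules: list[str] = field(default_factory=list)
    mounted_files: dict[str, str] = field(default_factory=dict)

@dataclass
class Session:
    """Active instance of an agent with persistent conversation history."""
    agent: Agent
    environment: Environment
    history: list[dict] = field(default_factory=list)

    def send_event(self, role: str, content: str) -> None:
        # Events carry input and output between your app and the agent;
        # history persists between interactions.
        self.history.append({"role": role, "content": content})

# One Agent definition, reused across multiple sessions
support_agent = Agent(model="claude-example", system_prompt="Handle support tickets.")
env = Environment(packages=["python"])
session = Session(agent=support_agent, environment=env)
session.send_event("user", "Summarize the attached ticket")
```

The key design point the sketch captures: the Agent is defined once and is stateless, while each Session carries its own filesystem and history.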
What they can actually do
Managed Agents have access to a full set of built-in tools. They can execute bash commands in the container, read and write files, search the web, browse pages and connect to external services via MCP servers.
This means an agent can, in a single session: analyze an uploaded document, extract data, write code to process it, execute that code, verify the results and generate a report. All without you managing any infrastructure.
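Output from such a session arrives as Server-Sent Events, as noted above. The `event:` and `data:` field names below come from the SSE standard itself; the JSON payload shape is an assumption, not Anthropic's actual event schema. A minimal parser for that wire format:

```python
import json

def parse_sse(raw: str) -> list[dict]:
    """Parse a raw SSE stream into a list of event dicts.

    `event:`/`data:` field names follow the SSE standard; the JSON
    payload structure here is illustrative, not Anthropic's schema.
    """
    events = []
    for block in raw.strip().split("\n\n"):  # events are blank-line separated
        event = {"event": "message", "data": []}
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"].append(line[len("data:"):].strip())
        # Multi-line data fields are joined before decoding
        event["data"] = json.loads("\n".join(event["data"]))
        events.append(event)
    return events

raw = 'event: output\ndata: {"text": "Report generated"}\n\n'
parsed = parse_sse(raw)  # parsed[0]["event"] is "output"
```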
Features in research preview include the ability to spawn sub-agents for parallel tasks (multi-agent), persistent memory across sessions, and automatic prompt refinement — which improved success rates by up to 10 percentage points in internal testing.
For companies already using MCP servers to expose internal data, integration is straightforward: just declare your MCP servers in the agent definition.
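As a sketch of what that declaration might look like: the field names and URLs below are hypothetical placeholders, not the real agent-definition schema.

```python
# Hypothetical agent definition declaring MCP servers; every key,
# value, and URL here is illustrative, not Anthropic's actual schema.
agent_definition = {
    "model": "claude-example",
    "system_prompt": "Answer questions using internal company data.",
    "mcp_servers": [
        {"name": "internal-docs", "url": "https://mcp.example.com/docs"},
        {"name": "crm", "url": "https://mcp.example.com/crm"},
    ],
}
```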
How much they cost: the pricing model
Pricing has two components. The first is infrastructure runtime cost: $0.08 per hour of active session time. Idle time is not counted. The second is Claude token consumption, which follows standard API rates.
A concrete example: a customer service agent handling 20-minute tickets costs roughly $0.027 in runtime per ticket, plus $0.10-0.50 in tokens depending on complexity. For an always-on agent running 24/7, runtime alone is about $58 per month, with token costs on top.
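The runtime figures above follow directly from the $0.08/hour rate; token costs vary and are excluded here:

```python
RUNTIME_RATE = 0.08  # dollars per hour of active session time (from the pricing above)

def runtime_cost(active_hours: float) -> float:
    # Idle time is not billed, so only active hours count
    return RUNTIME_RATE * active_hours

# A 20-minute ticket: roughly $0.027 in runtime
per_ticket = runtime_cost(20 / 60)

# An always-on agent, 24/7 over a 30-day month: about $58
per_month = runtime_cost(24 * 30)

print(round(per_ticket, 3))  # 0.027
print(round(per_month))      # 58
```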
In the broader picture of Claude costs for businesses, the main advantage is that you don't need to invest in dedicated infrastructure: servers, orchestrators and recovery systems are all included.
Who's already using them
Four high-profile companies have integrated Managed Agents into their products.
Notion uses them for parallel task delegation: a primary agent coordinating specialized sub-agents for different tasks. Asana has built "AI teammates" that handle routine project work. Sentry has automated the flow from bug detection to pull request creation. Rakuten went from pilot to production in less than a week.
These are all enterprise cases with significant volume, and the pattern is the same: processes that previously required complex infrastructure setup now launch in days.
What this means for your business
For companies evaluating AI agent adoption, Managed Agents dramatically lower the barrier to entry. No dedicated DevOps team needed, no container management, no building recovery systems from scratch.
Many organizations have limited AI expertise in-house. A managed service that abstracts away the infrastructure lets you focus on what actually matters — defining which processes to automate and setting governance rules — without getting bogged down in technical complexity.
The most immediate use cases: document automation in legal and compliance, financial analysis in private equity, ticket management in customer service, code review and testing in software development.
For those already working with the Claude API or the Agent SDK, Managed Agents represent the next step: from experimentation to production, with infrastructure managed by Anthropic.
If you're evaluating how to integrate Claude into your organization, or want to understand which approach fits best — Managed Agents, direct API or Agent SDK — the Maverick AI team can help with an independent assessment.