Enterprise AI · 9 min read · Published on 2026-04-23

Claude Cowork 3P: Private Inference, Data Under Your Control

How Cowork 3P keeps enterprise data off Anthropic's servers. Inference through your cloud provider, conversations on local devices, zero data egress.

In a nutshell

Cowork 3P routes Claude inference through your own cloud provider (Vertex AI, Bedrock). Conversations stay on the user's device, and no data reaches Anthropic.

The question every enterprise eventually asks about AI

Every company adopting AI runs into the same conversation sooner or later. Not about capabilities. Not about cost. About data.

Where does the prompt go when a user hits send? Who processes it? Who stores the response? Who could, in theory, read the conversation six months from now?

With most AI tools, the answer is uncomfortable. Your data travels to the provider's servers. It gets processed there. It might be retained for abuse monitoring, safety checks, or — in some cases — model improvement. The provider's privacy policy says what they will not do. But the data still leaves your perimeter.

For a marketing team brainstorming campaign ideas, this is fine. For a law firm reviewing a merger agreement, or a pharmaceutical company analyzing clinical trial data, it is not.

Anthropic recognized this tension early. Their API already offers strong contractual guarantees — no training on customer data, SOC 2 Type II certification, configurable retention. But contractual guarantees are promises. Some enterprises need architecture.

That is what Cowork 3P delivers. Not a better promise. A different architecture.

What Cowork 3P actually changes

Cowork 3P is a deployment mode of Claude Desktop designed for enterprises that need to control the entire data path. The name — Third-Party Provider — describes exactly what it does: model inference runs through a cloud provider configured by the company, not through Anthropic's API.

Supported providers today are Google Cloud Vertex AI, Amazon Bedrock, and Microsoft Foundry, plus compatible API gateways. An important distinction: with Vertex AI and Bedrock, Claude models run on Google's and Amazon's infrastructure respectively. Your requests go directly from the user's device to the configured regional endpoint. No data transits through Anthropic.
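The "direct to the regional endpoint" claim is concrete: with Vertex AI and Bedrock, the device calls a host operated by Google or Amazon, not by Anthropic. A minimal sketch of what those endpoint URLs look like (project ID and model IDs below are illustrative placeholders, not values from Cowork 3P itself):

```python
# Sketch: the regional endpoints a Cowork 3P client would call directly.
# Project ID, region, and model IDs are illustrative placeholders.

def vertex_endpoint(project: str, region: str, model: str) -> str:
    """Vertex AI publisher endpoint for Anthropic models (rawPredict)."""
    return (
        f"https://{region}-aiplatform.googleapis.com/v1/projects/{project}"
        f"/locations/{region}/publishers/anthropic/models/{model}:rawPredict"
    )

def bedrock_endpoint(region: str, model_id: str) -> str:
    """Amazon Bedrock runtime endpoint for model invocation."""
    return f"https://bedrock-runtime.{region}.amazonaws.com/model/{model_id}/invoke"

# Requests go from the user's device straight to these hosts;
# no Anthropic-operated host appears anywhere in the path.
print(vertex_endpoint("acme-ai", "europe-west1", "claude-sonnet-4@20250514"))
print(bedrock_endpoint("eu-central-1", "anthropic.claude-sonnet-4-20250514-v1:0"))
```

Note that both hostnames embed the region, which is what makes the data path auditable: a network-level egress log on the device is enough to verify where inference traffic goes.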

With Microsoft Foundry, the situation is different. Claude models still run on Anthropic's infrastructure. Foundry acts as a commercial integration for billing and access through Azure, but the "no data sent to Anthropic" guarantee does not apply. This matters, and companies evaluating Cowork 3P should understand the difference before choosing a provider.

The application itself runs as a local web app bundled in the desktop client. Conversations are stored on the user's device — not on Anthropic's backend, not in the cloud. User identity is local. There is no Anthropic account involved.

The result: a deployment where the company decides which cloud processes the inference, which region hosts it, and where conversations live. The data path is auditable from end to end.

Architecture compared: Standard Cowork vs Cowork 3P

The differences become clearer when you compare the two modes side by side.

In standard Cowork, inference goes through Anthropic's API. The web app loads from claude.ai. User identity is an Anthropic account. Conversations are stored on Anthropic's backend. Configuration happens through the admin console at claude.ai.

In Cowork 3P, inference goes through the configured provider endpoint — your Vertex AI project, your Bedrock setup. The web app is bundled in the desktop application. User identity is local to the device. Conversations live on the user's disk. Configuration is OS-native, managed through MDM systems like Jamf, Intune, Workspace ONE, or Group Policy.

What stays identical: the agentic capabilities. File creation, multi-step research, sub-agent coordination, the Code tab — all of it works the same way. Tool execution runs in a sandboxed local VM in both modes. The user experience does not degrade.

This is the part that matters most for adoption. IT and security teams get the architecture they need. End users get the same Claude they would use otherwise. No compromises on either side.

Want to deploy Claude with full data control?

30 minutes to discuss your specific case.

Book a call

Data residency and regulated industries

Data residency is where Cowork 3P goes from interesting to essential.

With Vertex AI or Bedrock, the request goes directly from the user's device to the configured regional endpoint. Data residency is determined by two factors: the cloud region you select and the physical location of the device. For organizations operating across multiple jurisdictions, you can deploy distinct MDM configuration profiles per geography — EU employees routed to an EU endpoint, US employees to a US endpoint.
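The per-geography profiles can be pictured as a simple lookup that the MDM configuration pins on each device. This is an illustrative sketch, not Cowork 3P's actual configuration schema; the region names are real cloud regions, but the profile keys are assumptions:

```python
# Sketch: an MDM-pushed profile pinning each geography to a regional
# inference endpoint. Keys and structure are illustrative only.

GEO_PROFILES = {
    "EU": {"provider": "vertex", "region": "europe-west1"},
    "UK": {"provider": "vertex", "region": "europe-west2"},
    "US": {"provider": "vertex", "region": "us-east5"},
}

def inference_region(employee_geo: str) -> str:
    """Resolve the cloud region a device in this geography should call."""
    profile = GEO_PROFILES.get(employee_geo)
    if profile is None:
        raise ValueError(f"no configuration profile for geography {employee_geo!r}")
    return profile["region"]

print(inference_region("EU"))  # -> europe-west1: an EU prompt stays in the EU
```

Because the mapping is enforced by device configuration rather than by a central router, there is no intermediate service that could misroute a request across jurisdictions.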

Vertex AI and Bedrock offer Claude in EU, UK, Asia-Pacific, and other sovereign regions. This means a company headquartered in Milan can ensure that every prompt and every response stays within EU borders. No data crosses the Atlantic. No supplementary transfer mechanisms needed.

For highly regulated industries — financial services, healthcare, defense, public sector — this changes the conversation entirely. Instead of negotiating contractual safeguards around data transfers, you eliminate the transfers. Instead of relying on Standard Contractual Clauses to justify sending sensitive data to US servers, you keep the data where your regulator expects it to be.

FedRAMP and ITAR compliance requirements, which effectively prohibit certain data from leaving specific jurisdictions, become addressable. Not through legal workarounds, but through infrastructure design. If you are evaluating Claude for use cases involving personal data and GDPR compliance, Cowork 3P offers the strongest architectural answer available today.

Security, telemetry, and enterprise control

Beyond data residency, Cowork 3P gives IT teams a level of control that most AI deployments simply do not offer.

Tool execution happens in a hardened, sandboxed VM on the local machine — identical to standard Cowork. No code runs on remote servers. No files leave the device unless the user explicitly shares them.

Telemetry is auditable. The system includes a sensitive data scrubber that strips personally identifiable information before any telemetry is transmitted. And if that is still too much, telemetry can be fully disabled. Organizations can also export full session activity to their own OpenTelemetry collector, feeding it into existing SIEM and monitoring infrastructure.
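Feeding session activity into existing monitoring is standard OpenTelemetry plumbing. A minimal collector pipeline might look like the following; the SIEM ingress endpoint is a placeholder for your own infrastructure:

```yaml
# Minimal OpenTelemetry Collector pipeline: receive session telemetry
# over OTLP and forward it to an existing SIEM. Endpoint is a placeholder.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlphttp:
    endpoint: https://siem.example.internal/otlp   # your SIEM's OTLP ingress

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```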

Central management via MDM means IT teams can push configuration changes, enforce policies, and revoke access without touching individual machines. This is not a consumer tool with enterprise features bolted on. It is an enterprise deployment model from the ground up.

One note of transparency: Cowork 3P is currently in Research Preview. It is not yet generally available. Anthropic is actively developing the product and onboarding early enterprise customers. The architecture is solid, the direction is clear, but companies evaluating it today should factor in the maturity stage. For organizations that want to understand the different Claude deployment options for enterprise, we have written a dedicated comparison.

What this means for your AI strategy

The conversation about enterprise AI is shifting. Eighteen months ago, the question was whether AI was good enough for real work. Today, the question is whether AI is safe enough for your most sensitive work.

Cowork 3P represents a new category of answer. Not better terms of service. Not stronger contractual language. A fundamentally different architecture where the company controls inference, storage, and data flow.

This does not mean every organization needs Cowork 3P. For many use cases, Claude's standard API with its existing privacy guarantees is more than adequate. The right deployment model depends on your data sensitivity, your regulatory environment, and your risk tolerance. If you are evaluating how to integrate Claude into your organization, the first step is understanding which deployment model fits your specific requirements.

At Maverick AI, we specialize exclusively in Anthropic's ecosystem. We have designed Claude architectures for companies in financial services, M&A advisory, and other regulated sectors. We help organizations navigate the choice between API, Enterprise, and Cowork 3P — and we configure the infrastructure to match.

If your team needs AI but your data cannot leave your perimeter, we should talk. Get in touch for a free consultation.

Federico Thiella · Founder, Maverick AI

Works with European companies on Claude and Anthropic ecosystem adoption. Has led AI implementations in private equity, consulting, manufacturing and professional services.


Want to deploy Claude with full data control?

We design private Claude architectures on Vertex AI and Bedrock, with data residency, MDM management, and zero data egress to Anthropic.

Write to us

Frequently Asked Questions

What is Cowork 3P?

Cowork 3P is a deployment mode of Claude Desktop that routes all model inference through a third-party cloud provider configured by the company — such as Google Cloud Vertex AI or Amazon Bedrock. Conversations are stored locally on the user's device, and no data is sent to Anthropic's servers. It is designed for enterprises that need full control over their AI data path.

Does any data reach Anthropic?

With Vertex AI or Bedrock as the configured provider, no conversation data reaches Anthropic. Requests go directly from the user's device to the cloud provider's regional endpoint. Telemetry includes a sensitive data scrubber and can be fully disabled. However, with Microsoft Foundry as the provider, Claude models still run on Anthropic's infrastructure, so the zero-egress guarantee does not apply in that configuration.

Which providers are supported?

Cowork 3P currently supports Google Cloud Vertex AI, Amazon Bedrock, Microsoft Foundry, and compatible API gateways. The strongest data privacy guarantees apply with Vertex AI and Bedrock, where Claude runs on the cloud provider's own infrastructure. With Foundry, Claude runs on Anthropic's infrastructure through an Azure commercial integration.

Can data stay within the EU?

Yes. With Vertex AI or Bedrock, you can select an EU cloud region as the inference endpoint. Since requests go directly from the user's device to that endpoint, data never leaves the EU. For multi-region organizations, distinct MDM configuration profiles can be deployed per geography. Both Vertex AI and Bedrock offer Claude in EU, UK, Asia-Pacific, and other sovereign regions.

Is Cowork 3P generally available?

Cowork 3P is currently in Research Preview — it is not yet generally available. Anthropic is actively developing the product and onboarding early enterprise customers. Organizations interested in evaluating it should contact Anthropic directly or work with a specialized partner like Maverick AI to assess readiness and plan the deployment architecture.
