Compliance · 9 min read · Published on 2026-04-23

AI and data sovereignty in Europe: what changes for enterprises using LLMs

GDPR, AI Act, Schrems II: what European enterprises using LLMs like Claude need to do to protect their data. Concrete options and practical advice.

In a nutshell

European enterprises adopting generative AI face a complex regulatory landscape: GDPR, AI Act, Schrems II. Options range from basic DPAs to EU cloud deployment to private inference with Cowork 3P. Companies that act now gain a lasting advantage.

68% of European enterprises use generative AI. Most of them send data to American servers.

The number comes from a McKinsey report from late 2025: nearly seven out of ten European companies have integrated generative AI tools into their workflows. But the vast majority adopted them the fastest way possible, without asking too many questions about where that data ends up.

Every prompt sent to an LLM contains information. Sometimes it is anonymous data, sometimes it is client emails, contract drafts, financial reports, confidential strategies. And in most cases, that data crosses the Atlantic, gets processed on servers in the United States, and remains subject to American jurisdiction.

For a European enterprise, this is not just a technical problem. It is a regulatory problem that becomes more concrete every month. The European regulatory framework on data and AI has evolved rapidly, and ignoring it means exposure to risks that go well beyond fines.

The regulatory framework: three laws that change the rules of the game

Anyone adopting generative AI in Europe must deal with three regulatory pillars that, together, redefine what you can do with corporate data.

The first is GDPR, which everyone knows but few truly apply to AI tools. The critical point is the transfer of personal data outside the European Union: doing so requires a valid transfer mechanism, such as an adequacy decision or Standard Contractual Clauses (SCCs), on top of the Data Processing Agreement (DPA) that any processor relationship already requires. For a deeper dive into how this applies to Claude, read our article on compliance and privacy.

The second is the Schrems II ruling by the Court of Justice of the EU, which in 2020 invalidated the Privacy Shield and established a clear principle: SCCs alone are not enough if the destination country does not offer protections equivalent to those in Europe. The 2023 EU-US Data Privacy Framework partially addressed the issue, but its long-term stability remains uncertain and many legal experts consider it vulnerable to a future Schrems III challenge.

The third is the AI Act, the European regulation on artificial intelligence that entered into force in August 2024 with progressive application. Provisions on prohibited practices became effective in February 2025, those on general-purpose AI models (GPAI) in August 2025, and full enforcement for high-risk systems kicks in from August 2026. For enterprises using LLMs, the AI Act introduces transparency, governance, and documentation obligations that cannot be improvised.

Additionally, the Data Governance Act, operational since 2023, creates a framework for secure data sharing between organisations. It does not directly target LLMs, but it helps define the context in which they operate.

What the AI Act changes for enterprises using LLMs

The AI Act classifies AI systems by risk level. Most companies using Claude or other LLMs for document analysis, decision support, or process automation fall into the limited risk category. But the line is thin.

If the AI system is used to assess a client's creditworthiness, screen job candidates, or make decisions that impact people's rights, you enter high-risk territory. And there the obligations become heavy: conformity assessments, detailed technical documentation, human oversight, risk management, and registration in a European database.

Even for limited-risk uses, the AI Act requires transparency: people must know when they are interacting with an AI system, and the company must be able to document how the system works and what data it uses.

For general-purpose AI models like Claude, the AI Act imposes specific obligations on providers: technical documentation, copyright policies, transparency about training data. Anthropic, as the provider, must comply with these requirements. But the enterprise integrating Claude into its own processes has its own responsibilities: it must assess the risk of its specific use case and document the measures adopted.

The key point is that signing a contract with the provider is not enough. The enterprise is responsible for how it uses the system, not just for who supplies it.


Concrete options: from DPAs to private inference

European enterprises have several options for adopting generative AI while respecting the regulatory framework. They can be organised on an ascending scale of protection.

The baseline is API access with a DPA and SCCs. Anthropic offers a GDPR-compliant Data Processing Agreement for API customers. Data is processed on Anthropic's servers, but with contractual guarantees of no-training and limited retention. This is sufficient for many use cases, but data still crosses the Atlantic.
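Even at this baseline level, many teams add a pseudonymisation step so obvious personal identifiers never leave the company in clear form. A minimal, illustrative sketch (the regex patterns and the `pseudonymise` helper are our own example, not part of any official SDK; real pseudonymisation needs far more robust detection):

```python
import re

# Illustrative pre-processing step: mask obvious personal data
# (emails, phone-like numbers) before a prompt is sent to an external LLM API.
# Production-grade pseudonymisation requires NER models and curated dictionaries.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d ()./-]{7,}\d"),
}

def pseudonymise(prompt: str) -> str:
    """Replace detected personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

masked = pseudonymise("Contact Anna at anna.rossi@example.com or +39 02 1234 5678.")
# Emails and phone-like sequences are replaced with [EMAIL] / [PHONE] placeholders.
```

The placeholders keep the prompt useful for the model while the mapping back to real identifiers stays inside the company.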

The second level is Claude enterprise plans with enhanced contractual guarantees: SSO, audit logs, customisable retention policies, dedicated support for legal teams. For a complete overview of integration options, see our guide on how to integrate Claude.

The third level is EU cloud deployment. Through Google Cloud Vertex AI (EU region) or AWS Bedrock (Frankfurt), you can use Claude models with the guarantee that data never leaves European territory. The infrastructure belongs to the cloud provider, but data residency is European. For many companies in regulated sectors, this is the optimal solution.
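In practice, EU residency is pinned by the region passed to the cloud provider's client. The region identifiers below are the real Google Cloud and AWS names (model availability per region should be checked in the provider's documentation); the `eu_deployment` helper itself is a hypothetical sketch, not an official API:

```python
# Illustrative sketch: selecting an EU region for Claude inference.
# Region identifiers are real; the helper is an example of our own.

EU_REGIONS = {
    "vertex": "europe-west1",   # Google Cloud Vertex AI, EU region (Belgium)
    "bedrock": "eu-central-1",  # AWS Bedrock, Frankfurt
}

def eu_deployment(platform: str) -> str:
    """Return the EU region to pass to the chosen cloud provider's client."""
    try:
        return EU_REGIONS[platform]
    except KeyError:
        raise ValueError(f"No EU deployment configured for {platform!r}") from None

# With the official anthropic Python SDK, the region is then passed to the
# platform-specific client, e.g.:
#   AnthropicVertex(region=eu_deployment("vertex"), project_id="...")
#   AnthropicBedrock(aws_region=eu_deployment("bedrock"))
```

Centralising the region choice in one place makes it auditable: a reviewer can verify at a glance that no code path points inference outside the EU.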

The fourth level is Cowork 3P: inference runs through the company's own cloud provider, conversations stay on the local device, no data passes through Anthropic. This option offers maximum control over data sovereignty, because it combines the power of the model with an architecture where the company retains complete ownership of its data.

Five things to do today, without waiting for full enforcement

Full AI Act enforcement for high-risk systems arrives in August 2026. But waiting until the last moment is a losing strategy, for at least two reasons: first, the provisions on GPAI models and prohibited practices are already in effect; second, compliance requires time, expertise, and organisational changes that cannot be improvised.

Here is what to do now.

Map AI usage across your organisation. Conduct a census of all generative AI tools in use: by whom, with what data, and for what purposes. You cannot manage what you do not know.

Classify the risk of each use case. Use the AI Act categories: unacceptable, high, limited, minimal risk. For every high-risk use, start the conformity assessment.

Review contracts with AI providers. Do you have a DPA? Does it include SCCs? Does the provider commit to not training on your data? If you use Claude via API, these guarantees exist. If you use consumer tools, they probably do not.

Evaluate your data residency architecture. For the most sensitive data, consider EU cloud deployment or private inference. The additional cost is modest compared to the regulatory and reputational risk.

Document everything. The AI Act requires technical documentation, usage records, and risk assessments. Start creating these documents now, even in simplified form. When enforcement arrives, you will already have the structure in place.
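The five steps above converge on one artifact: an internal register of AI use cases. A minimal sketch of what such a record could hold (the class, field names, and example entries are hypothetical; only the four risk tiers come from the AI Act):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical internal register of AI use cases. The Risk tiers mirror
# the AI Act's categories; everything else is an illustrative structure.

class Risk(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str             # e.g. "contract summarisation"
    tool: str             # e.g. "Claude via Vertex AI (EU)"
    data_categories: list # e.g. ["client contracts", "personal data"]
    risk: Risk
    dpa_in_place: bool
    eu_residency: bool
    notes: str = ""

    def needs_conformity_assessment(self) -> bool:
        # Under the AI Act, conformity assessments apply to high-risk uses.
        return self.risk is Risk.HIGH

register = [
    AIUseCase("CV screening", "LLM assistant", ["candidate data"],
              Risk.HIGH, dpa_in_place=True, eu_residency=False),
    AIUseCase("internal doc search", "Claude via API", ["internal docs"],
              Risk.LIMITED, dpa_in_place=True, eu_residency=True),
]

to_assess = [u.name for u in register if u.needs_conformity_assessment()]
# CV screening is high-risk, so it lands in the conformity-assessment queue.
```

Even a simple structure like this answers the questions an auditor will ask first: who uses what, with which data, at which risk level.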

Data sovereignty is a competitive advantage

There is a wrong way to approach this: seeing it as a cost, a bureaucratic obstacle, yet another European requirement that slows down innovation.

And there is the right way: understanding that data sovereignty is a competitive advantage. European enterprises that adopt generative AI with compliant architectures can do so with confidence, scale without legal risk, and differentiate themselves from competitors who built on urgency without thinking about sustainability.

Clients, especially in finance, healthcare, legal, and public administration, are starting to demand guarantees on data handling even when AI tools are involved. Companies that can demonstrate a solid architecture have a tangible commercial advantage.

Maverick AI works with Italian and European enterprises on exactly this: we help choose the right architecture, from enterprise plans to EU cloud deployment to private inference, based on the specific risk profile and operational needs.

Let's talk. A thirty-minute call is enough to understand which option is right for your situation.

Federico Thiella · Founder, Maverick AI

Works with European companies on Claude and Anthropic ecosystem adoption. Has led AI implementations in private equity, consulting, manufacturing and professional services.

LinkedIn

Want to adopt generative AI with your data in Europe?

We help you choose the right architecture: EU cloud, private inference, or enterprise plans with contractual guarantees.

Write to us

Frequently Asked Questions

Can Claude be used in a GDPR-compliant way?

Yes. With the Claude API, Anthropic provides a GDPR-compliant Data Processing Agreement (DPA) with Standard Contractual Clauses (SCCs) for data transfers outside the EU. The provider commits to not using customer data for model training. For a higher level of protection, you can use Claude through EU cloud providers (Vertex AI or Bedrock in European regions) or with Cowork 3P, which keeps data entirely under the company's control.

Does the AI Act apply to companies that use Claude, or only to Anthropic?

Yes, it applies to both. The AI Act distinguishes between the model provider (Anthropic) and the deployer (the company using it). The provider has documentation and transparency obligations. The company integrating Claude into its processes must classify the risk level of its use case, ensure transparency towards users, and, for high-risk uses, conduct conformity assessments and maintain detailed technical documentation. Provisions on GPAI models have been effective since August 2025.

Where is data processed when using Claude?

With Anthropic's direct API, data is processed on the provider's servers (mainly in the United States), with contractual guarantees of no-training and limited retention. With EU cloud deployment, such as Google Cloud Vertex AI in a European region or AWS Bedrock in Frankfurt, data never leaves EU territory. Processing occurs on the European cloud provider's infrastructure, offering native EU data residency.

What are the penalties for non-compliance with the AI Act?

The AI Act provides for significant penalties: up to 35 million euros or 7% of global turnover for the most serious violations (prohibited practices), and up to 15 million or 3% for non-compliance with high-risk system obligations. But the risk is not only financial: a non-compliant company faces reputational damage and loss of trust from clients and partners who require guarantees on data governance.

What is Cowork 3P?

Cowork 3P is the Claude mode that routes inference through the company's own cloud provider (Google Cloud Vertex AI or AWS Bedrock) and saves conversations exclusively on the user's local device. No data passes through Anthropic's servers. This means the company maintains complete control over its data, meeting the strictest data residency and data sovereignty requirements under European regulation.


Want to learn more?

Contact us to find out how we can help your company with tailored AI solutions.

Anthropic implementation partner in Italy. We work with companies in PE, pharma, fashion, manufacturing and consulting.

Get in Touch