68% of European enterprises use generative AI. Most of them send data to American servers.
The number comes from a McKinsey report from late 2025: nearly seven out of ten European companies have integrated generative AI tools into their workflows. But the vast majority did it the fastest way possible, without asking too many questions about where that data ends up.
Every prompt sent to an LLM contains information. Sometimes it is anonymous data, sometimes it is client emails, contract drafts, financial reports, confidential strategies. And in most cases, that data crosses the Atlantic, gets processed on servers in the United States, and remains subject to American jurisdiction.
For a European enterprise, this is not just a technical problem. It is a regulatory problem that becomes more concrete every month. The European regulatory framework on data and AI has evolved rapidly, and ignoring it means exposure to risks that go well beyond fines.
The regulatory framework: three laws that change the rules of the game
Anyone adopting generative AI in Europe must deal with three regulatory pillars that, together, redefine what you can do with corporate data.
The first is GDPR, which everyone knows but few truly apply to AI tools. The critical point is the transfer of personal data outside the European Union. Doing so requires a solid legal basis: an adequacy decision, Standard Contractual Clauses (SCCs), or a Data Processing Agreement (DPA) with the provider. For a deeper dive into how this applies to Claude, read our article on compliance and privacy.
The second is the Schrems II ruling by the Court of Justice of the EU, which in 2020 invalidated the Privacy Shield and established a clear principle: SCCs alone are not enough if the destination country does not offer protections equivalent to those in Europe. The 2023 EU-US Data Privacy Framework partially addressed the issue, but its long-term stability remains uncertain and many legal experts consider it vulnerable to a future Schrems III challenge.
The third is the AI Act, the European regulation on artificial intelligence that entered into force in August 2024 with progressive application. Provisions on prohibited practices became effective in February 2025, those on general-purpose AI models (GPAI) in August 2025, and full enforcement for high-risk systems kicks in from August 2026. For enterprises using LLMs, the AI Act introduces transparency, governance, and documentation obligations that cannot be improvised.
Additionally, the Data Governance Act, operational since 2023, creates a framework for secure data sharing between organisations. It does not directly target LLMs, but it helps define the context in which they operate.
What the AI Act changes for enterprises using LLMs
The AI Act classifies AI systems by risk level. Most companies using Claude or other LLMs for document analysis, decision support, or process automation fall into the limited-risk category. But the line is thin.
If the AI system is used to assess a client's creditworthiness, screen job candidates, or make decisions that impact people's rights, you enter high-risk territory. And there the obligations become heavy: conformity assessments, detailed technical documentation, human oversight, risk management, and registration in a European database.
Even for limited-risk uses, the AI Act requires transparency: people must know when they are interacting with an AI system, and the company must be able to document how the system works and what data it uses.
For general-purpose AI models like Claude, the AI Act imposes specific obligations on providers: technical documentation, copyright policies, transparency about training data. Anthropic, as the provider, must comply with these requirements. But the enterprise integrating Claude into its own processes has its own responsibilities: it must assess the risk of its specific use case and document the measures adopted.
The key point is that signing a contract with the provider is not enough. The enterprise is responsible for how it uses the system, not just for who supplies it.
Concrete options: from DPAs to private inference
European enterprises have several options for adopting generative AI while respecting the regulatory framework. They can be organised on an ascending scale of protection.
The baseline is API access with a DPA and SCCs. Anthropic offers a GDPR-compliant Data Processing Agreement for API customers. Data is processed on Anthropic's servers, but with contractual guarantees of no-training and limited retention. This is sufficient for many use cases, but data still crosses the Atlantic.
The second level is Claude enterprise plans with enhanced contractual guarantees: SSO, audit logs, customisable retention policies, dedicated support for legal teams. For a complete overview of integration options, see our guide on how to integrate Claude.
The third level is EU cloud deployment. Through Google Cloud Vertex AI (EU region) or AWS Bedrock (Frankfurt), you can use Claude models with the guarantee that data never leaves European territory. The infrastructure belongs to the cloud provider, but data residency is European. For many companies in regulated sectors, this is the optimal solution.
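To make the EU-region option concrete, here is a minimal sketch of what a call to Claude through AWS Bedrock in Frankfurt might look like. The model ID is a placeholder (check your provider console for the exact value available in your region), and the live call, shown commented out, requires AWS credentials and Bedrock model access.

```python
import json

# Placeholder model ID -- verify the exact identifier in your AWS console.
MODEL_ID = "anthropic.claude-sonnet-4-20250514-v1:0"
EU_REGION = "eu-central-1"  # Frankfurt: requests stay on EU infrastructure

def build_bedrock_request(prompt: str) -> dict:
    """Build the payload for a Bedrock invoke_model call to a Claude model.

    The body follows the Anthropic Messages API format that Bedrock
    expects for Claude models.
    """
    return {
        "modelId": MODEL_ID,
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# To actually send the request (needs AWS credentials and model access):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name=EU_REGION)
#   response = client.invoke_model(**build_bedrock_request("Summarise this contract."))
```

The point of the sketch is the `region_name` parameter: pinning the runtime client to `eu-central-1` is what keeps inference inside European territory, with no change to the rest of the application code.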
The fourth level is Cowork 3P: inference runs through the company's own cloud provider, conversations stay on the local device, no data passes through Anthropic. This option offers maximum control over data sovereignty, because it combines the power of the model with an architecture where the company retains complete ownership of its data.
Five things to do today, without waiting for full enforcement
Full AI Act enforcement for high-risk systems arrives in August 2026. But waiting until the last moment is a losing strategy, for at least two reasons: first, the provisions on GPAI models and prohibited practices are already in effect; second, compliance requires time, expertise, and organisational changes that cannot be improvised.
Here is what to do now.
Map AI usage across your organisation. Conduct a census of all generative AI tools in use: by whom, with what data, and for what purposes. You cannot manage what you do not know.
Classify the risk of each use case. Use the AI Act categories: unacceptable, high, limited, minimal risk. For every high-risk use, start the conformity assessment.
Review contracts with AI providers. Do you have a DPA? Does it include SCCs? Does the provider commit to not training on your data? If you use Claude via API, these guarantees exist. If you use consumer tools, they probably do not.
Evaluate your data residency architecture. For the most sensitive data, consider EU cloud deployment or private inference. The additional cost is modest compared to the regulatory and reputational risk.
Document everything. The AI Act requires technical documentation, usage records, and risk assessments. Start creating these documents now, even in simplified form. When enforcement arrives, you will already have the structure in place.
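The mapping and classification steps above can be sketched as a simple inventory. The four tiers are the AI Act's own; the purposes and the keyword matching are illustrative only (real risk classification requires legal review of each use case), and every name here is hypothetical.

```python
from dataclasses import dataclass

# The AI Act's four risk tiers, from most to least restricted.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Illustrative high-risk purposes, echoing the examples in this article
# (creditworthiness assessment, candidate screening). Not a legal test.
HIGH_RISK_PURPOSES = {"credit scoring", "candidate screening"}

@dataclass
class AIUseCase:
    tool: str
    purpose: str
    personal_data: bool

    def risk_tier(self) -> str:
        # Naive rule of thumb: named high-risk purposes are "high";
        # otherwise, personal data implies transparency ("limited").
        if self.purpose in HIGH_RISK_PURPOSES:
            return "high"
        return "limited" if self.personal_data else "minimal"

# Census of tools in use: who uses what, with which data, for what purpose.
inventory = [
    AIUseCase("Claude API", "contract summarisation", personal_data=True),
    AIUseCase("Claude API", "candidate screening", personal_data=True),
]

# High-risk uses are the ones that need a conformity assessment first.
high_risk = [u for u in inventory if u.risk_tier() == "high"]
```

Even a spreadsheet with these three columns is enough to start: the value is in having the census at all, so that each use case can be assigned a tier and documented before enforcement arrives.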
Data sovereignty is a competitive advantage
There is a wrong way to approach this: seeing it as a cost, a bureaucratic obstacle, yet another European requirement that slows down innovation.
And there is the right way: understanding that data sovereignty is a competitive advantage. European enterprises that adopt generative AI with compliant architectures can do so with confidence, scale without legal risk, and differentiate themselves from competitors who built in haste without thinking about sustainability.
Clients, especially in finance, healthcare, legal, and public administration, are starting to demand guarantees on data handling even when AI tools are involved. Companies that can demonstrate a solid architecture have a tangible commercial advantage.
Maverick AI works with Italian and European enterprises on exactly this: we help choose the right architecture, from enterprise plans to EU cloud deployment to private inference, based on the specific risk profile and operational needs.
Let's talk. A thirty-minute call is enough to understand which option is right for your situation.