Cloud & AI Development Act: European cloud leaders call for executable AI sovereignty

Article created on 21 March 2026 · Source analyzed: 17 March 2026 · Source: CISPE

A coalition of European cloud providers, acting through the trade association CISPE, published a joint letter on 17 March 2026 addressing the upcoming Cloud & AI Development Act (CADA). Their core argument is practical: without strict operational criteria, "sovereign AI" risks remaining marketing language rather than an auditable European capability.

1. What is officially announced

The joint letter asks the European Commission to include enforceable safeguards against "sovereignty washing" in CADA: effective governance control, operational independence, data residency guarantees, and transparency on non-EU dependencies. The goal is to separate declared compliance from real, enforceable sovereignty.

2. Why this is major sovereign AI news

This position shifts the debate from branding to execution conditions for AI systems: who operates, who administers, who can enforce legal access, and where critical control points sit. For enterprises, this directly impacts platform choices, contracting models, and continuity architecture for sensitive AI workloads.

3. Operational reading for organizations

IT, security, and legal teams should audit AI/cloud services using a "real control vs claimed control" framework: admin model, subcontracting chain, key management, logs, support operations, and extraterritorial clauses. Sovereignty becomes a measurable architecture and governance discipline, not only a communication claim.

Build an executable sovereignty matrix for critical AI workloads and prioritize remediation gaps within the next 90 days.
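As a starting point, such a matrix can be as simple as a per-workload table of claimed versus verified control across the audit axes named above. The sketch below is a hypothetical illustration, not a standard or a CISPE artifact: the criterion names, the `WorkloadAssessment` structure, and the gap-scoring logic are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

# Audit axes from the "real control vs claimed control" framework:
# admin model, subcontracting chain, key management, logs,
# support operations, extraterritorial clauses.
CRITERIA = [
    "admin_model",
    "subcontracting_chain",
    "key_management",
    "logging",
    "support_operations",
    "extraterritorial_clauses",
]

@dataclass
class WorkloadAssessment:
    """Claimed vs. verified control for one AI workload, per criterion."""
    workload: str
    claimed: dict    # criterion -> True if the provider claims control
    verified: dict   # criterion -> True if control was verified in audit

    def gaps(self) -> list:
        """Criteria where control is claimed but was not verified."""
        return [c for c in CRITERIA
                if self.claimed.get(c) and not self.verified.get(c)]

def remediation_backlog(assessments) -> list:
    """Flat, sorted (workload, criterion) list of gaps to prioritize."""
    return sorted((a.workload, c) for a in assessments for c in a.gaps())
```

A team could populate one `WorkloadAssessment` per critical AI workload during the audit, then drive the 90-day remediation plan from `remediation_backlog`, which surfaces every criterion where the sovereignty claim is not backed by verified control.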
