“Sovereign AI” is moving from conferences into budgets. At its simplest, sovereign AI means a nation can develop and run AI using its own infrastructure and data, under its own governance. The term gained mainstream attention in early 2024 as leaders and vendors began framing AI as a national capability, not just software.
In Europe, the conversation is now tied to concrete programs. AI Factories are being rolled out through the EuroHPC ecosystem, and the EU is also pushing toward much larger AI gigafactories through InvestAI.
InvestAI was launched with the stated aim of mobilising €200 billion for AI investment, including a €20 billion fund for AI gigafactories. In mid-2025, Reuters reported strong market interest in the gigafactory push, with dozens of bids.
In December 2025, the European Commission published a Memorandum of Understanding on AI Gigafactories, and the EIB outlined its role in providing financing structures and advisory support.
This is why “sovereign AI” is no longer just language. It is becoming architecture, funding, and vendor selection.
The common starting point is infrastructure: where compute sits, which jurisdiction applies, who operates the stack. That starting point matters. In production, it is rarely the deciding factor.
Sovereignty is tested when teams monitor production, debug failures, investigate incidents, integrate third-party services, and move fast under pressure. That is when sensitive data is most likely to appear in plaintext, even if residency rules are followed.
This view is shaped by our work at Wodan AI, where we focus on keeping sensitive data protected during computation, because that’s where governance usually gets tested.
This is not a moral argument about good or bad practices. It is about operational reality.
If a system needs plaintext to compute, plaintext will spread. Not because teams are careless, but because modern stacks include many tools that implicitly assume visibility.
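A hypothetical sketch makes the mechanism concrete. The service, field names, and values below are invented for illustration; the point is that a routine debug line, not a breach, is what moves plaintext into logging pipelines, dashboards, and alerting tools that were never scoped for it.

```python
# Hypothetical example: an ordinary debugging habit that spreads plaintext.
# Nothing here is malicious; the logging stack simply assumes it may see
# whatever it is handed.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

def score_transaction(payload: dict) -> float:
    """Placeholder for a model call that needs the transaction details."""
    return 0.42

def handle_request(payload: dict) -> float:
    # A well-meaning debug line: the full payload, including the customer
    # name and IBAN, now lives in log storage, log shipping, and every
    # dashboard or alerting tool that reads from them.
    log.info("scoring transaction: %s", json.dumps(payload))
    return score_transaction(payload)

handle_request({"customer": "Jane Doe", "iban": "DE89 3704 0044 0532 0130 00", "amount": 129.90})
```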
This is the gap many sovereign AI programs still under-specify: what happens to sensitive data during processing.
AI makes the distinction between holding data and using it unavoidable, because the value is created when data is used. Usage expands the number of systems involved, the number of integrations, and the number of people who can affect exposure.
For business leaders, the symptoms are familiar. Compliance and legal reviews get slower because boundaries are hard to explain end-to-end. Vendor risk becomes harder to manage because the real system includes tooling outside the core platform. Production rollouts stall because exceptions multiply.
This is the point where sovereignty moves from policy to operating model.
Confidential computing is commonly described as protecting data during processing, typically using hardware-based trusted execution environments.
Fully homomorphic encryption (FHE) is another path, allowing computation over encrypted data without decrypting it first.
These are not interchangeable approaches, and a business audience does not need a deep technical comparison to understand the key point: both aim to reduce how often sensitive data must be exposed in plaintext to make systems work.
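To make the FHE side concrete, here is a minimal sketch using the open-source TenSEAL library. The library choice, parameters, and payroll figures are illustrative assumptions, not a reference to any specific programme or product; the point is only that the processing side works on ciphertext.

```python
# Minimal "compute on encrypted data" sketch with TenSEAL (CKKS scheme).
# Parameters are illustrative, not production-grade.
import tenseal as ts

# The data owner creates the encryption context and keeps the secret key.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Sensitive values are encrypted before they leave the owner's boundary.
salaries = [52_000.0, 61_500.0, 48_250.0]
encrypted = ts.ckks_vector(context, salaries)

# A processing service applies a 3% adjustment directly on the ciphertext,
# without ever seeing the plaintext values.
adjusted = encrypted * [1.03, 1.03, 1.03]

# Only the holder of the secret key can read the result.
print(adjusted.decrypt())  # ≈ [53560.0, 63345.0, 49697.5]
```

A confidential-computing design achieves a similar effect by a different route: the plaintext computation runs inside an attested trusted execution environment instead of on ciphertext. In both cases, fewer tools and roles ever see raw values.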
That reduction has direct executive value. It shrinks the trust boundary. It reduces the number of tools and roles that need raw access. It makes governance more durable when teams are under operational pressure.
In a sovereign AI context, that is not a nice-to-have. It is the difference between “sovereign on paper” and “sovereign in production.”
The next step is to treat runtime data protection as a first-class requirement, not a technical footnote.
If plaintext remains the default, “sovereign” becomes harder to defend the moment systems go live.
EuroHPC JU, “The EuroHPC JU Selects Additional AI Factories…” (Mar 12, 2025)
NIST, “Fully-Homomorphic Encryption (FHE)” (Privacy-Enhancing Cryptography project page)
Ready to see encrypted-in-use AI in action? Book a demo of the Wodan AI solution today.

