Tag: european-tech

  • Brussels Blinked: The EU AI Act’s High-Risk Deadline Just Moved, but the Compliance Clock Has Not Stopped

    When maddaisy examined the shift from AI principles to penalties in February, the EU AI Act’s August 2026 deadline for high-risk AI systems sat at the centre of the analysis. That date — 2 August 2026 — was the moment when compliance stopped being theoretical and started carrying fines of up to seven per cent of global turnover.

    Four weeks later, Brussels blinked.

    On 13 March, the EU Council agreed its position on the Digital Omnibus package, pushing back the application of high-risk AI rules to December 2027 for standalone systems and August 2028 for those embedded in products. The proposal still requires negotiation with the European Parliament, but the direction is clear: the EU’s regulatory infrastructure was not ready for its own deadline.

    What Actually Changed

    The delay is narrower than the headlines suggest. The EU AI Act’s prohibited practices — social scoring, manipulative AI targeting vulnerable groups, unauthorised real-time biometric surveillance — have been in force since February 2025 and remain untouched. Obligations for general-purpose AI model providers, including transparency and copyright requirements, still apply from August 2025. The Code of Practice requiring machine-readable detection techniques for AI-generated content is already published.

    What shifted is specifically the high-risk classification regime: the rules governing AI systems used in employment decisions, credit scoring, healthcare, education, law enforcement, and critical infrastructure. These are the provisions that demand conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. They are also the provisions that most enterprises have been scrambling to prepare for.

    The Council’s rationale is pragmatic rather than political. The European Commission missed its own February 2026 deadline for publishing the guidance and harmonised standards that enterprises need to demonstrate compliance. Without those standards, companies were being asked to hit a target that the regulator had not yet fully defined. As the Cypriot presidency put it, the goal is “greater legal certainty” and “more proportionate” implementation — diplomatic language for acknowledging that the implementation machinery was not keeping pace with the legislative ambition.

    The Compliance Paradox

    For enterprises that have spent the past 18 months building AI governance programmes, risk inventories, and compliance frameworks, the delay creates an awkward question: should they slow down?

    The short answer is no — and the reasoning matters more than the conclusion.

    First, the delay is conditional. The Council’s position sets fixed dates — December 2027 and August 2028 — but the Commission retains the ability to confirm earlier application if standards become available sooner. Organisations that pause their compliance programmes risk finding themselves back under pressure with less runway than they had before.

    Second, the regulatory landscape extends well beyond Brussels. As maddaisy has previously examined, the United States is building its own patchwork of state-level AI laws. Colorado’s AI Act takes effect in June 2026. California’s transparency requirements are already live. The EU delay does not change these timelines. An enterprise operating across both markets still faces near-term obligations.

    Third, and perhaps most importantly, the governance work itself has value beyond regulatory compliance. Organisations that have inventoried their AI systems, established accountability structures, and implemented monitoring processes are better positioned to manage operational risk, regardless of when a specific regulation takes effect. As the Ethyca governance framework notes, the shift from policy documentation to continuous operational evidence is happening independently of any single regulatory deadline.

    What the Delay Reveals

    The Digital Omnibus is not just a timeline adjustment. It is a signal about the structural challenges of regulating AI at the pace at which the technology is evolving.

    The EU built the world’s most comprehensive AI regulation. It classified systems by risk tier, defined obligations for providers and deployers, established penalties that exceed GDPR maximums, and applied the rules extraterritorially. What it did not build quickly enough was the operational layer: the harmonised standards, the conformity assessment procedures, the guidance documents that translate legal text into practical compliance steps.

    This mirrors a pattern maddaisy has observed across multiple regulatory domains. Europe’s cloud sovereignty push encountered similar friction — ambitious policy goals meeting incomplete implementation frameworks. The gap between legislative intent and operational readiness is becoming a recurring theme in European technology regulation.

    The Council’s position does include substantive additions alongside the delay. A new prohibition on AI-generated non-consensual intimate content and child sexual abuse material was introduced. Regulatory exemptions previously limited to SMEs were extended to small mid-cap companies. The AI Office’s enforcement powers were reinforced. These are not trivial changes — they show that the regulation is still being actively shaped even as its core provisions await full application.

    The ISO 42001 Factor

    One development running parallel to the regulatory delay is the accelerating adoption of ISO/IEC 42001, the international standard for AI management systems. Enterprise buyers are increasingly adding it to vendor procurement requirements, and AI liability insurers are beginning to factor governance certifications into risk assessments.

    For organisations uncertain about how to structure their compliance programmes during the delay, ISO 42001 offers a practical framework. It maps to the EU AI Act’s requirements without being dependent on them, meaning that compliance work done under the standard retains its value regardless of how regulatory timelines shift. Pega’s recent certification is one example of vendors using the standard to demonstrate governance readiness to enterprise clients.

    What Practitioners Should Do Now

    The EU AI Act delay changes timelines, not trajectories. The practical recommendations remain consistent with what maddaisy outlined in February, with one important addition:

    • Continue AI system inventories. Understanding what AI is deployed, where, and at what risk level is foundational work that no regulatory timeline change invalidates.
    • Monitor the Parliament negotiations. The Council position must be reconciled with the European Parliament before becoming final. The dates could shift again — in either direction.
    • Use the extra time for standards alignment. With harmonised standards still being developed, organisations now have an opportunity to align with ISO 42001 or the NIST AI Risk Management Framework before mandatory compliance begins.
    • Do not treat the delay as permission to deprioritise. Colorado, California, and other US state deadlines remain unchanged. Enterprise clients and procurement teams are not waiting for regulators — they are setting their own governance expectations now.
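    The shifting, multi-jurisdiction deadlines above are easier to reason about as data than as prose. A minimal sketch of a deadline tracker follows — the dates reflect the timelines cited in this article, but the exact days are placeholders where the article gives only a month, and the `status` labels are this sketch's own invention, not official terminology:

    ```python
    from datetime import date

    # Illustrative tracker for the timelines discussed above.
    # Days are placeholders where the article names only a month;
    # "proposed" entries may shift again during Parliament negotiations.
    DEADLINES = [
        {"jurisdiction": "EU", "rule": "Prohibited AI practices",
         "applies": date(2025, 2, 2), "status": "in force"},
        {"jurisdiction": "EU", "rule": "GPAI provider obligations",
         "applies": date(2025, 8, 2), "status": "in force"},
        {"jurisdiction": "US-CO", "rule": "Colorado AI Act",
         "applies": date(2026, 6, 1), "status": "enacted"},  # article: "June 2026"
        {"jurisdiction": "EU", "rule": "High-risk (standalone)",
         "applies": date(2027, 12, 1), "status": "proposed"},  # article: "December 2027"
        {"jurisdiction": "EU", "rule": "High-risk (embedded)",
         "applies": date(2028, 8, 1), "status": "proposed"},  # article: "August 2028"
    ]

    def obligations_by(cutoff: date) -> list[str]:
        """Return the rules an organisation must already comply with at `cutoff`."""
        return [d["rule"] for d in DEADLINES if d["applies"] <= cutoff]
    ```

    Queried for early 2026, such a tracker would surface the prohibited-practices and GPAI obligations as already binding — the point of the bullet list above — while the delayed high-risk rules remain future-dated.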

    The EU built the most ambitious AI regulation in the world, then discovered that ambition requires infrastructure. The delay is a concession to reality, not a retreat from intent. For enterprises, the message is straightforward: the destination has not changed, only the speed limit on the road to it.

  • Europe’s Cloud Sovereignty Rush Meets Its Regulatory Reality Check

    European sovereign cloud spending is set to nearly double in 2026, from $6.9 billion to $12.6 billion, according to Gartner’s latest forecast. Every major US hyperscaler now has a European sovereignty answer. AWS launched its European Sovereign Cloud from Germany in January, backed by a €7.8 billion investment. Google operates through S3NS, a French joint venture with Thales that holds SecNumCloud certification. Microsoft has Delos Cloud in Germany and Bleu in France.

    Yet beneath the flood of partnership announcements and sovereign cloud launches sits a less comfortable truth: the regulatory framework driving all of this activity is still incomplete, sometimes contradictory, and in certain critical areas, stalled entirely. For organisations trying to build disaster plans around Europe’s digital infrastructure, the ground has not stopped moving.

    The regulatory pile-up

    Three major pieces of European regulation now intersect on questions of cloud resilience and digital sovereignty — and none of them align neatly.

    The Digital Operational Resilience Act (DORA), enforceable since January 2025, requires financial institutions to implement comprehensive ICT risk management frameworks, including detailed third-party risk assessments for cloud providers. DORA is specific, prescriptive, and already creating compliance pressure across European banking and insurance.

    The NIS2 directive, enforceable since October 2024, extends similar resilience requirements to a much broader set of critical infrastructure operators — energy, transport, health, and digital infrastructure itself. Where DORA targets financial services, NIS2 casts a wider net but leaves more room for national interpretation, creating an uneven patchwork across EU member states.

    Then there is the European Cybersecurity Certification Scheme for Cloud Services (EUCS), which was supposed to provide a unified standard for assessing cloud security across the EU — including, controversially, sovereignty requirements that would have effectively barred non-EU cloud providers from the highest certification tier. That sovereignty clause was stripped from the latest drafts under intense lobbying pressure. The scheme itself remains unadopted. In January 2026, the European Commission proposed a revised Cybersecurity Act that would overhaul the entire certification framework — effectively resetting the process while organisations wait for clarity that may not arrive before 2027.

    Disaster planning in a regulatory fog

    The practical consequence for enterprises is an uncomfortable paradox. Regulations now require detailed disaster recovery and business continuity plans for cloud-dependent operations. But the certification framework that would define what “sovereign” or “resilient” actually means in practice remains unfinished.

    As maddaisy examined last week, the SAP-Microsoft “break glass” contingency plan illustrates the tension. It offers a theoretical failover for European Azure workloads in a crisis scenario, but analysts questioned whether a disconnected copy of Azure could remain operationally viable beyond a few weeks. The plan satisfies a political need — demonstrating that contingency planning exists — without resolving the deeper technical question of what happens when a severed cloud stops receiving updates.

    Capgemini’s CEO Aiman Ezzat has framed this pragmatically, arguing that Europe has meaningful sovereignty over data, operations, and regulation — but not over the underlying technology stack. The four-layer model he has described reflects the reality most enterprises face: sovereign in governance, dependent on US technology, and now required by law to plan for scenarios where that dependency becomes a liability.

    The hyperscaler response: sovereignty as a service

    The US cloud providers have responded to the regulatory and political pressure with significant investment. AWS’s European Sovereign Cloud, operating from Brandenburg, is architecturally separated from other AWS Regions — a genuine sovereign partition with EU-resident leadership and local operational control. AWS CEO Matt Garman called it a “big bet”, with expansion planned for Belgium, the Netherlands, and Portugal.

    Google’s approach in France, through S3NS (a joint venture where Google holds a minority stake under French law), has achieved SecNumCloud 3.2 qualification — the most demanding sovereignty standard currently in force in Europe. Microsoft’s structure routes through nationally controlled entities: Delos Cloud in Germany and Bleu (co-owned by Capgemini and Orange) in France.

    The pattern across all three is consistent: legal and operational separation, EU-resident personnel, local data residency, and contingency plans for geopolitical disruption. What differs is the depth of that separation. A fully air-gapped partition like Google’s Distributed Cloud offering for defence clients sits at one end of the spectrum. A contractual failover arrangement like the SAP-Microsoft deal sits at the other. Most enterprise workloads will land somewhere in between — and DORA and NIS2 require organisations to understand precisely where.

    What practitioners need to do now

    For consultants and technology leaders navigating this landscape, three priorities stand out.

    First, classify workloads by sovereignty sensitivity before choosing infrastructure. Not every application needs the highest tier of sovereign protection. DORA’s third-party risk requirements are prescriptive but risk-proportionate — a core banking system and an internal collaboration tool do not demand the same level of contingency planning. The trap is treating sovereignty as a binary choice rather than a spectrum.
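    The spectrum point can be made concrete with a toy tiering function. The tier names and criteria below are invented for illustration — they are not drawn from DORA, NIS2, EUCS, or any certification scheme — but they show the shape of a proportionate classification, as opposed to a binary sovereign/not-sovereign flag:

    ```python
    # Toy sovereignty-sensitivity tiering. Tier names and thresholds are
    # illustrative assumptions, not regulatory categories.
    def sovereignty_tier(handles_regulated_data: bool,
                         is_core_operation: bool,
                         crisis_tolerance_days: int) -> str:
        """Place a workload on a sovereignty spectrum rather than a binary."""
        # Core regulated systems that cannot tolerate a week of disruption
        # sit at the most demanding end of the spectrum.
        if handles_regulated_data and is_core_operation and crisis_tolerance_days < 7:
            return "air-gapped / sovereign partition"
        # Systems touching regulated data or core operations warrant
        # EU-resident operations plus a tested contingency plan.
        if handles_regulated_data or is_core_operation:
            return "EU-resident operations with contingency plan"
        # Everything else: standard cloud with data residency controls.
        return "standard cloud with data residency"
    ```

    A core banking system and an internal collaboration tool would land in different tiers under even this crude model — which is the article's point about proportionality.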

    Second, build disaster plans around regulatory timelines, not vendor announcements. DORA enforcement is live. NIS2 implementation varies by member state but is progressing. The EUCS framework is stalled, but the underlying requirements it was meant to codify — around data residency, operational control, and access restrictions — are already being enforced through sector-specific regulation and national certification schemes like France’s SecNumCloud. Waiting for a pan-European standard before acting is not a viable compliance strategy.

    Third, pressure-test vendor contingency claims. The proliferation of sovereign cloud offerings and disaster recovery partnerships creates an illusion of completeness. But as Forrester analyst Dario Maisto noted of the SAP-Microsoft plan, many of these arrangements remain untested and legally unproven. “This is not compliance as much as risk management,” he said. Organisations should ask pointed questions about update cycles, hardware dependencies, and the operational lifespan of any disconnected cloud environment.

    The long view

    European digital sovereignty has moved from policy aspiration to market reality faster than the regulatory framework can keep pace. The investment figures are significant — AWS alone is committing €7.8 billion. The compliance deadlines are real. The contingency plans exist, at least on paper.

    But the gap between what regulations require and what certification frameworks define remains open. For organisations building disaster plans today, the most honest assessment is that they are planning against a moving target, using vendor solutions that have never been tested in the crisis scenarios they are designed for. That is not a reason to delay — DORA and NIS2 make delay legally untenable. It is a reason to plan with humility, build in flexibility, and avoid treating any single vendor’s sovereignty narrative as a finished answer.

  • SAP’s “Break Glass” Cloud Plan Exposes the Limits of European Digital Sovereignty

    SAP, Microsoft, Capgemini, and Orange have announced a joint contingency plan for European cloud services — a “break glass” option in case US hyperscalers are legally blocked from operating in Europe. The partnership, routed through SAP’s German subsidiary Delos Cloud and the French entity Bleu (co-owned by Capgemini and Orange), promises business continuity in crisis scenarios ranging from sanctions to military conflict.

    It is a notable development, and it connects directly to the sovereignty narrative maddaisy.com has been tracking. But before treating it as a solution, it is worth examining what the plan actually offers — and what analysts say it cannot.

    The deal in context

    When maddaisy examined Capgemini’s sovereignty strategy earlier this month, the picture was clear: European digital sovereignty is converging on a pragmatic middle ground. Rather than building independent infrastructure from scratch, European firms are positioning themselves as trusted operators running workloads on American hyperscaler platforms — sovereign in governance and operations, reliant on US technology underneath.

    The SAP-Microsoft-Capgemini-Orange agreement is the logical extension of that approach. SAP’s announcement describes a mutual assistance framework where Delos Cloud and Bleu would cooperate on cross-border crisis response, including “early detection, analysis, defence, and remediation of cyber incidents.” Separately, Delos Cloud and Microsoft signed a business continuity agreement allowing Delos to access source code and maintain operations if sanctions restrict Microsoft’s European services.

    In other words: if the worst happens, European operators would run a local copy of Azure, disconnected from Microsoft’s global network.

    The wildcard is Washington, not Brussels

    Analysts are broadly aligned on one point: the EU itself is highly unlikely to block American cloud providers. Some 75% of the European cloud market sits with US hyperscalers, according to Forrester senior analyst Dario Maisto. Cutting off that access would amount to economic self-harm on a significant scale.

    The real concern is the reverse scenario — the US government using its leverage over hyperscalers to pressure European governments. As Maisto put it to CIO: “What if the US administration pulls the kill switch? It would be the weaponisation of IT, because the US knows about this dependency.”

    Danilo Kirschner, managing director at European cloud consulting firm Zoi, was blunter: “There have been non-logical, nonsensical decisions in the past year. From a European perspective, we need to prepare for anything.”

    The likelihood of such a scenario remains low. But the fact that SAP and Microsoft are publicly planning for it signals that enterprise customers are asking uncomfortable questions — and expect answers.

    A lifeboat, not a luxury liner

    The technical reality is where the plan runs into difficulty. Running a severed version of Azure in a European data centre sounds feasible in a press release. In practice, as Kirschner explained, Azure is millions of lines of code updated daily. Disconnected from Microsoft’s global security intelligence, engineering updates, and optimisation pipelines, a local copy would degrade rapidly.

    “This is a lifeboat, not a luxury liner,” Kirschner said. “Your disaster recovery plans must account for the fact that a sovereign cloud in crisis mode will likely be a static, maintenance-only environment.”

    The hardware question compounds the problem. Azure runs on proprietary, custom-designed server infrastructure. If geopolitical tensions are severe enough to block software access, sourcing replacement hardware under the same sanctions regime becomes equally difficult. And if a crisis lasts months rather than weeks, the global Azure platform will have evolved while the European fork remains frozen — creating what Kirschner described as “a technological dead end that requires a total rebuild to reconnect.”

    Even the legal framework is untested. “This agreement will have to be tested in court once the problem happens, when it could be too late,” Maisto noted. “This is not compliance as much as risk management.”

    The sovereignty paradox deepens

    There is an irony at the heart of this deal that Kirschner identified clearly: by offering a break-glass option for European sovereignty, Microsoft has paradoxically strengthened its own position. The single biggest political risk to using American hyperscalers in the European public sector — the theoretical possibility of a forced disconnection — has been partially neutralised. European governments and enterprises can now point to a contingency plan, however imperfect, and continue building on US infrastructure.

    As maddaisy’s earlier analysis of Capgemini’s sovereignty framework noted, Capgemini CEO Aiman Ezzat has been candid that “there is no such thing as absolute sovereignty” because no entity controls the entire value chain. The SAP deal underscores that position. Europe is not building an alternative to American cloud infrastructure. It is building contingency plans that assume American cloud infrastructure remains the default.

    For hardliners in France and elsewhere who want European-built alternatives at the highest sovereign classification levels, this approach will be unsatisfying. But the practical question — what is the alternative? — remains unanswered. The European Cybersecurity Certification Scheme continues to evolve, yet the gap between regulatory ambition and infrastructure reality shows no sign of closing.

    What practitioners should take from this

    For enterprise architects and CIOs managing European workloads, the SAP-Microsoft-Capgemini deal changes the conversation without changing the underlying calculus. It provides a political answer to a political risk — a contingency plan that reassures procurement committees and satisfies sovereignty checkboxes. It does not, however, solve the fundamental dependency.

    The practical takeaway is threefold. First, organisations should treat this as risk management, not a guarantee — the plan’s viability in a real crisis remains unproven and potentially short-lived. Second, workload portability and multi-cloud strategies become more important, not less, in a world where even the contingency plans assume degraded service. Third, the sovereignty requirements that Capgemini estimated would feature in over 50% of European service contracts by 2029 are becoming embedded in how deals are structured — and this agreement is part of that shift.

    Europe’s cloud sovereignty story is not moving toward independence. It is moving toward managed dependency, with increasingly elaborate safety nets. Whether those nets would hold under real stress is a question no one can answer yet — and the honest participants in this deal are not pretending otherwise.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. Non-compliance carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher. For context, that exceeds the maximum GDPR fine by a significant margin.
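    The penalty ceiling described above is simply the greater of two quantities, which makes the scale easy to check for any given firm:

    ```python
    def max_eu_ai_act_penalty(global_annual_turnover_eur: float) -> float:
        """Upper bound on fines per the figures cited above:
        EUR 35 million or 7% of global annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # A firm with EUR 1 billion in turnover faces a ceiling of EUR 70 million;
    # below EUR 500 million in turnover, the EUR 35 million floor dominates.
    ```

    The crossover sits at €500 million in turnover — above that, the percentage term governs, which is why the exposure for large multinationals dwarfs GDPR's 4% maximum.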

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like; far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI event hosted in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed.
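    An inventory of this kind can start very simply. The sketch below shows one possible record shape — the field names and values are illustrative assumptions, not a regulatory schema — along with the first query a regulator is likely to make:

    ```python
    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        """One row in a hypothetical AI inventory; all fields are illustrative."""
        name: str
        use_case: str           # e.g. "credit scoring", "CV screening"
        decision_impact: str    # "makes" vs "influences" consequential decisions
        risk_tier: str          # e.g. "high-risk" in the EU AI Act's taxonomy
        jurisdictions: list     # markets where the system is deployed
        owner: str              # accountable team or individual

    def high_risk_systems(inventory: list) -> list:
        """The subset a regulator would ask about first: high-risk systems."""
        return [r.name for r in inventory if r.risk_tier == "high-risk"]
    ```

    Even this crude structure answers the two questions the paragraph above says most companies cannot: what AI is running, and who is accountable for it.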

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.
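    What "an audit trail of model decisions" means in practice can be sketched in a few lines. The record fields below are assumptions for illustration — real logging requirements will be defined by the applicable regulation and harmonised standards, not by this shape:

    ```python
    import json
    import time
    from typing import Optional

    def log_model_decision(model_id: str, input_ref: str, output: str,
                           explanation: str, reviewer: Optional[str]) -> str:
        """Serialise one automated decision as an append-only audit record.
        Field names are illustrative, not a regulatory schema."""
        record = {
            "ts": time.time(),           # when the decision was made
            "model_id": model_id,        # which model and version produced it
            "input_ref": input_ref,      # pointer to the input data, not the data itself
            "output": output,            # the decision or score
            "explanation": explanation,  # human-readable rationale
            "human_reviewer": reviewer,  # None if the decision was fully automated
        }
        return json.dumps(record)
    ```

    The design choice worth noting is `input_ref`: logging a reference rather than the raw input keeps the audit trail from becoming a second copy of personal data, which would create its own GDPR exposure.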

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.

  • Capgemini’s Sovereignty Playbook: Bridging Europe’s AI Ambitions and American Infrastructure

    In the space of a single week in early February, Capgemini signed sovereignty-focused partnerships with all three major US hyperscalers — Google Cloud, AWS, and Microsoft. Days later, CEO Aiman Ezzat used the company’s full-year results presentation to publicly dismiss calls for complete European tech autonomy.

    The juxtaposition was deliberate, and it tells a more interesting story than the headline financials. Capgemini is not just adding AI capabilities to its consulting portfolio. It is building a distinct commercial proposition around one of Europe’s most politically charged technology questions: who controls the infrastructure that enterprises depend on?

    The gap between rhetoric and reality

    European digital sovereignty has been a policy preoccupation for several years now, accelerated by concerns over US government data access, the dominance of American cloud providers, and the growing strategic importance of AI infrastructure. The European Commission has pushed for greater technological independence. Member states have launched sovereign cloud initiatives. The language of autonomy is everywhere.

    The reality, as Ezzat put it bluntly during the post-earnings call, is more complicated. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.”

    This is not a controversial claim among practitioners, but it is a notable one for a CEO whose company is headquartered in Paris and whose chairman also leads the digital working group at the European Round Table for Industry. Ezzat has been discussing sovereignty with the European Commission in Brussels and at Davos. His position is informed, not casual.

    A four-layer framework

    Ezzat outlined what amounts to a practical sovereignty framework built around four layers: data, operations, regulation, and technology. His argument is that Europe has meaningful independence on the first three — data residency and governance, operational control over services, and regulatory authority through instruments like GDPR and the AI Act. The fourth layer, the underlying technology stack, is where US Big Tech dominance means full independence is neither achievable nor, in his view, desirable.

    Rather than pursuing autonomy at every layer, Capgemini’s approach is to offer clients “the right sovereignty solution based on the use case, the client environment, the government.” In practice, this means European-managed services running on American infrastructure — sovereign in governance and operations, pragmatic on technology.

    As maddaisy noted earlier this week in examining Capgemini’s full-year results, the company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. That trajectory, if it holds, represents a structural change in how enterprise IT contracts are drawn up across Europe.
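    To get a feel for how aggressive that forecast is, a quick back-of-envelope calculation (a sketch, assuming the share of sovereignty-linked contracts grows at a constant annual rate between 2025 and 2029; Capgemini has published no such breakdown):

    ```python
    # Implied annual growth rate if the sovereignty-linked contract share
    # rises from 5% in 2025 to 50% in 2029 at a constant compound rate.
    start_share, end_share = 0.05, 0.50
    years = 2029 - 2025  # 4 years

    cagr = (end_share / start_share) ** (1 / years) - 1
    print(f"Implied annual growth in sovereignty share: {cagr:.1%}")  # ~77.8%
    ```

    A tenfold jump in four years implies the sovereignty share would need to nearly double every year, which underlines why the figure deserves scrutiny rather than acceptance at face value.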

    Three partnerships, one message

    The timing of Capgemini’s hyperscaler announcements was no coincidence. On 6 February, the company expanded its partnership with Google Cloud, establishing a Sovereign Cloud Delivery Practice and Centre of Excellence. Capgemini will operate as a Google Distributed Cloud air-gapped operator — meaning it can deliver fully managed services with total isolation from the public internet, suited to defence, intelligence, and critical infrastructure clients.

    On 9 February, a similar announcement followed with AWS, focused on sovereign-ready cloud and AI capabilities. Two days later, Capgemini formalised integrated sovereignty solutions with Microsoft. Three announcements in five days, each offering variations on the same theme: Capgemini as the European operator sitting between the client and the American cloud.

    This is a positioning play with genuine commercial substance. For European enterprises navigating tightening regulation — particularly public sector organisations, financial institutions, and healthcare providers — the question is not whether to use cloud services but how to use them in ways that satisfy increasingly specific sovereignty requirements. Capgemini is betting it can be the answer to that question.

    Where AI and sovereignty converge

    The sovereignty proposition becomes more compelling when combined with Capgemini’s broader AI pivot. Generative and agentic AI bookings exceeded 10% of group bookings in Q4 2025, and the company has trained 310,000 employees on generative AI and 194,000 on agentic AI — systems designed to take autonomous actions rather than simply generate content.

    AI workloads are particularly sensitive from a sovereignty perspective. They involve large volumes of proprietary data, often require access to regulated information, and increasingly touch decision-making processes that organisations want to keep within controlled environments. A sovereign AI solution — where the model runs on infrastructure governed under European jurisdiction, operated by a European firm, but built on the technical capabilities of a US hyperscaler — addresses a specific and growing need.

    Ezzat framed AI itself with characteristic pragmatism in a separate interview with Fortune. “AI is a business. It is not a technology,” he said, warning leaders against treating it as a “black box being managed separately.” His caution against AI FOMO — “You don’t want to be too ahead of the learning curve. If you are, you’re investing and building capabilities that nobody wants” — suggests a company that has learned from watching the metaverse hype cycle play out.

    What to watch

    Capgemini’s sovereignty strategy raises several questions worth tracking. First, whether the 50%-by-2029 estimate for sovereignty-embedded contracts proves accurate, or whether it reflects the kind of optimistic forecasting that consulting firms are prone to when promoting a new service line. Second, how European competitors — particularly Atos, which has its own sovereignty ambitions, and smaller European cloud providers — respond to Capgemini’s hyperscaler-partnered model. Third, whether the European Commission’s own stance on sovereignty tilts toward the pragmatic Capgemini position or toward more aggressive technological independence.

    For consultants and practitioners, the practical takeaway is straightforward: sovereignty is moving from a compliance checkbox to a structural feature of European enterprise contracts. The firms that build credible delivery capabilities around it now — not just policy positions, but operational partnerships and trained workforces — will have a meaningful advantage as regulation tightens. Capgemini has placed its bet. The question is whether the market follows.

  • Capgemini’s AI Pivot: From Hype to Realism, with €22.5 Billion at Stake

    Capgemini’s full-year 2025 results, released on 13 February, offered one of the clearest snapshots yet of how a major IT consultancy is repositioning itself around artificial intelligence — and the distance still to travel.

    The French group reported revenues of €22.5 billion, up 3.4% at constant currency. Growth was modest by historical standards, weighed down by cautious enterprise spending across Continental Europe. But the fourth quarter told a different story: 10.6% growth, suggesting that client hesitancy may be starting to ease.

    From hype to realism

    CEO Aiman Ezzat framed the company’s direction as a shift “from AI hype to AI realism” — a phrase that carries weight given how frequently the consultancy sector has leaned on AI as a growth narrative without always delivering measurable outcomes.

    The numbers behind the claim are worth examining. Generative AI bookings exceeded 8% of Capgemini’s total for the year, rising above 10% in Q4. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI — the emerging category of AI systems designed to take autonomous actions rather than simply generate content.

    These are significant investments in capability. Whether they translate into proportional revenue growth remains the open question. Capgemini’s 2026 guidance of 6.5% to 8.5% revenue growth set a lower bound beneath the 7.2% analyst consensus, suggesting the market expected more from a company positioning AI at the centre of its strategy.

    The sovereignty opportunity

    Beyond AI, Capgemini is betting on a second structural trend: digital sovereignty. The company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. For a European-headquartered firm, this represents a genuine competitive advantage over US-based rivals.

    Ezzat was notably pragmatic on the sovereignty question, dismissing calls for full European tech autonomy. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.” Instead, he advocated for finding “the right sovereignty solution based on the use case, the client environment, the government.”

    This positions Capgemini as a bridge between European regulatory ambitions and the practical reality that most enterprise infrastructure still runs on AWS, Google Cloud, and Microsoft Azure. The company has signed partnerships with all three US hyperscalers to deliver what it calls “sovereign” AI solutions — European-managed services running on American infrastructure.

    The uncomfortable parts

    Not everything in the results paints a straightforward growth picture. Net income declined 4.2% year-on-year to €1.6 billion. The stock has fallen approximately 29% so far in 2026, caught in a broader selloff of companies perceived as vulnerable to AI-driven disruption — an ironic position for a firm making AI the centrepiece of its strategy.

    There is also the matter of Capgemini Government Solutions, the US subsidiary the company announced it would sell following public backlash over a $4.8 million contract with US Immigration and Customs Enforcement. It is a reminder that the consulting business operates in political as well as commercial environments, and reputational risk can force strategic decisions that have little to do with market fundamentals.

    France, Capgemini’s home market, contracted by 4.1% — a concern for a company headquartered in Paris. The bright spots were North America (7.3% growth), the UK and Ireland (10.5%), and Asia Pacific and Latin America (13.8%). Financial Services led sectors with 9.2% growth, accelerating sharply to 20.4% in Q4.

    What to watch

    Capgemini’s results matter beyond its own balance sheet because they reflect broader dynamics affecting the consultancy and IT services sector. The shift from selling AI as a concept to delivering AI as an operational capability is where the next wave of value — and differentiation — will come from.

    Three things are worth tracking. First, whether GenAI bookings continue to accelerate past the 10% threshold reached in Q4, or whether that was an end-of-year surge. Second, how the sovereignty proposition develops as European regulation tightens — this could become a meaningful differentiator for European-headquartered firms. Third, whether the “Fit-for-growth” restructuring programme, which will require an additional €200 million of cash outflow, delivers the operational efficiency the company is banking on.

    The consultancy sector has spent the better part of two years talking about AI. Capgemini’s latest results suggest it is now moving into the phase of proving it can deliver.