Category: Technology

  • SAP’s “Break Glass” Cloud Plan Exposes the Limits of European Digital Sovereignty

    SAP, Microsoft, Capgemini, and Orange have announced a joint contingency plan for European cloud services — a “break glass” option in case US hyperscalers are legally blocked from operating in Europe. The partnership, routed through SAP’s German subsidiary Delos Cloud and the French entity Bleu (co-owned by Capgemini and Orange), promises business continuity in crisis scenarios ranging from sanctions to military conflict.

    It is a notable development, and it connects directly to the sovereignty narrative maddaisy.com has been tracking. But before treating it as a solution, it is worth examining what the plan actually offers — and what analysts say it cannot.

    The deal in context

    When maddaisy examined Capgemini’s sovereignty strategy earlier this month, the picture was clear: European digital sovereignty is converging on a pragmatic middle ground. Rather than building independent infrastructure from scratch, European firms are positioning themselves as trusted operators running workloads on American hyperscaler platforms — sovereign in governance and operations, reliant on US technology underneath.

    The SAP-Microsoft-Capgemini-Orange agreement is the logical extension of that approach. SAP’s announcement describes a mutual assistance framework where Delos Cloud and Bleu would cooperate on cross-border crisis response, including “early detection, analysis, defence, and remediation of cyber incidents.” Separately, Delos Cloud and Microsoft signed a business continuity agreement allowing Delos to access source code and maintain operations if sanctions restrict Microsoft’s European services.

    In other words: if the worst happens, European operators would run a local copy of Azure, disconnected from Microsoft’s global network.

    The wildcard is Washington, not Brussels

    Analysts are broadly aligned on one point: the EU itself is highly unlikely to block American cloud providers. Some 75% of the European cloud market sits with US hyperscalers, according to Forrester senior analyst Dario Maisto. Cutting off that access would amount to economic self-harm on a significant scale.

    The real concern is the reverse scenario — the US government using its leverage over hyperscalers to pressure European governments. As Maisto put it to CIO: “What if the US administration pulls the kill switch? It would be the weaponisation of IT, because the US knows about this dependency.”

    Danilo Kirschner, managing director at European cloud consulting firm Zoi, was blunter: “There have been non-logical, nonsensical decisions in the past year. From a European perspective, we need to prepare for anything.”

    The likelihood of such a scenario remains low. But the fact that SAP and Microsoft are publicly planning for it signals that enterprise customers are asking uncomfortable questions — and expect answers.

    A lifeboat, not a luxury liner

    The technical reality is where the plan runs into difficulty. Running a severed version of Azure in a European data centre sounds feasible in a press release. In practice, as Kirschner explained, Azure is millions of lines of code updated daily. Disconnected from Microsoft’s global security intelligence, engineering updates, and optimisation pipelines, a local copy would degrade rapidly.

    “This is a lifeboat, not a luxury liner,” Kirschner said. “Your disaster recovery plans must account for the fact that a sovereign cloud in crisis mode will likely be a static, maintenance-only environment.”

    The hardware question compounds the problem. Azure runs on proprietary, custom-designed server infrastructure. If geopolitical tensions are severe enough to block software access, sourcing replacement hardware under the same sanctions regime becomes equally difficult. And if a crisis lasts months rather than weeks, the global Azure platform will have evolved while the European fork remains frozen — creating what Kirschner described as “a technological dead end that requires a total rebuild to reconnect.”
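    Kirschner's warnings translate into concrete disaster-recovery planning: a crisis-mode sovereign cloud should be modelled as a frozen, maintenance-only environment with finite hardware, not a live platform. The sketch below is purely illustrative — all class names, fields, and thresholds are assumptions, not anything published by SAP, Microsoft, or Delos Cloud:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical model of a sovereign-cloud contingency posture.
# "Crisis mode" assumes no vendor updates, no global security
# intelligence, and no hardware resupply -- a static environment.

@dataclass
class SovereignCloudState:
    disconnected_since: date      # when the fork lost vendor connectivity
    security_feed_live: bool      # is global threat intelligence still flowing?
    hardware_spares_days: int     # estimated days of spare-part coverage

def crisis_posture(state: SovereignCloudState, today: date) -> list[str]:
    """Return the DR actions implied by a disconnected sovereign fork."""
    actions = ["freeze feature deployments",
               "restrict to maintenance-only changes"]
    if not state.security_feed_live:
        actions.append("fall back to local-only threat detection")
    drift_days = (today - state.disconnected_since).days
    if drift_days > 90:
        # Long outages mean the global platform has moved on:
        # plan for a full rebuild rather than a reconnect.
        actions.append("plan total rebuild to reconnect")
    if state.hardware_spares_days < drift_days:
        actions.append("ration hardware: defer non-critical capacity")
    return actions
```

    The point of encoding it this way is that the degraded state becomes something a continuity plan can test against, rather than an assumption discovered mid-crisis.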

    Even the legal framework is untested. “This agreement will have to be tested in court once the problem happens, when it could be too late,” Maisto noted. “This is not compliance as much as risk management.”

    The sovereignty paradox deepens

    There is an irony at the heart of this deal that Kirschner identified clearly: by offering a break-glass option for European sovereignty, Microsoft has paradoxically strengthened its own position. The single biggest political risk to using American hyperscalers in the European public sector — the theoretical possibility of a forced disconnection — has been partially neutralised. European governments and enterprises can now point to a contingency plan, however imperfect, and continue building on US infrastructure.

    As maddaisy’s earlier analysis of Capgemini’s sovereignty framework noted, Capgemini CEO Aiman Ezzat has been candid that “there is no such thing as absolute sovereignty” because no entity controls the entire value chain. The SAP deal underscores that position. Europe is not building an alternative to American cloud infrastructure. It is building contingency plans that assume American cloud infrastructure remains the default.

    For hardliners in France and elsewhere who want European-built alternatives at the highest sovereign classification levels, this approach will be unsatisfying. But the practical question — what is the alternative? — remains unanswered. The European Cybersecurity Certification Scheme continues to evolve, yet the gap between regulatory ambition and infrastructure reality shows no sign of closing.

    What practitioners should take from this

    For enterprise architects and CIOs managing European workloads, the SAP-Microsoft-Capgemini deal changes the conversation without changing the underlying calculus. It provides a political answer to a political risk — a contingency plan that reassures procurement committees and satisfies sovereignty checkboxes. It does not, however, solve the fundamental dependency.

    The practical takeaway is threefold. First, organisations should treat this as risk management, not a guarantee — the plan’s viability in a real crisis remains unproven and potentially short-lived. Second, workload portability and multi-cloud strategies become more important, not less, in a world where even the contingency plans assume degraded service. Third, the sovereignty requirements that Capgemini estimated would feature in over 50% of European service contracts by 2029 are becoming a structural feature of how deals are written — and this agreement is part of that shift.
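    The second point — portability — is ultimately an architecture decision. One minimal sketch, with entirely hypothetical names, is to keep application code behind a thin provider interface so that swapping a hyperscaler (or falling back to a sovereign operator) is a configuration change rather than a rewrite:

```python
from abc import ABC, abstractmethod

# Illustrative provider abstraction: workloads depend on this
# interface, never on a vendor SDK directly. All class and method
# names here are hypothetical.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; real deployments would wrap S3, Azure Blob, etc."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application logic stays provider-neutral.
    store.put(f"reports/{name}", body)
```

    The abstraction is never free — it forgoes provider-specific features — but it is the precondition for any contingency plan that involves moving workloads under duress.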

    Europe’s cloud sovereignty story is not moving toward independence. It is moving toward managed dependency, with increasingly elaborate safety nets. Whether those nets would hold under real stress is a question no one can answer yet — and the honest participants in this deal are not pretending otherwise.

  • From Stablecoins to Smart Settlement: How Distributed Ledger Technology Is Quietly Rebuilding Financial Plumbing

    For years, distributed ledger technology sat in the awkward space between genuine innovation and inflated expectations. Blockchain was going to revolutionise everything from supply chains to voting systems, or so the pitch decks claimed. Most of those promises went nowhere. But while the hype cycle burned itself out, something quieter and more consequential was happening: major financial institutions started rebuilding their core infrastructure around the technology, and regulators began writing the rules to let them do it at scale.

    The result, now visible across multiple industry reports and institutional moves in early 2026, is that distributed ledger technology has become financial plumbing — not a speculative asset class, not a retail phenomenon, but the infrastructure layer that processes, settles, and records transactions between institutions. The shift is less dramatic than the headlines of 2021 suggested, but arguably more significant.

    The Regulatory Unlock

    The single biggest catalyst has been regulatory clarity. In the United States, the GENIUS Act, signed into law in July 2025, established the first comprehensive framework for stablecoins. It requires permitted payment stablecoin issuers to maintain 100% reserve backing with liquid assets, submit to federal or state supervision, and implement anti-money laundering programmes equivalent to those of traditional banks.
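    The 100% reserve rule is simple enough to state in code. The sketch below illustrates the arithmetic of full backing — the asset categories and figures are illustrative, not statutory language from the GENIUS Act:

```python
# Hedged sketch of the full-backing requirement for permitted payment
# stablecoin issuers: outstanding tokens must be covered 1:1 by
# liquid reserve assets. Categories and numbers are illustrative.

def reserve_ratio(liquid_reserves: dict[str, float],
                  tokens_outstanding: float) -> float:
    """Total liquid reserves divided by stablecoins in circulation."""
    return sum(liquid_reserves.values()) / tokens_outstanding

def is_fully_backed(liquid_reserves: dict[str, float],
                    tokens_outstanding: float) -> bool:
    # The GENIUS Act's headline requirement: ratio of at least 1.0.
    return reserve_ratio(liquid_reserves, tokens_outstanding) >= 1.0
```

    In practice the hard part is not the ratio but the definition of "liquid" and the supervision and audit regime around it — which is exactly what the legislation supplies.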

    The effect was immediate. According to BDO’s 2026 fintech predictions, stablecoin transaction volumes surged from $6 billion in February 2025 to $10 billion by August, a direct consequence of regulatory certainty reducing institutional hesitancy. In Europe, the Markets in Crypto-Assets regulation (MiCA) added a parallel framework, creating a passportable licensing regime for crypto-asset service providers across the EU.

    This matters because the absence of clear rules was the primary barrier to institutional adoption. Banks and asset managers were not sceptical of the technology itself — they were sceptical of deploying it without knowing which regulations would apply. That uncertainty is now largely resolved in the two largest financial markets.

    Institutional Moves, Not Startup Promises

    The institutions acting on this clarity are not fintech startups chasing venture capital. They are some of the largest names in global finance.

    JPMorgan Chase launched its JPMD deposit token on Coinbase’s Base blockchain in 2025, representing US dollar deposits on a distributed ledger — a significant step toward tokenised banking. Visa piloted stablecoin payouts through Visa Direct, enabling businesses to send payments directly to stablecoin wallets. Mastercard has been investing in crypto transaction processing infrastructure, including its reported interest in acquiring Zero Hash.

    These are not experiments. They are strategic commitments backed by significant capital. When a bank the size of JPMorgan puts deposit tokens on a public blockchain, it signals that the technology has passed internal risk assessments, compliance reviews, and board-level scrutiny. The pilot phase, for these institutions at least, is over.

    What the Technology Actually Changes

    The practical impact centres on three areas: settlement speed, asset accessibility, and cross-border payments.

    Settlement is the most consequential. Traditional securities settlement operates on a T+1 or T+2 cycle — transactions take one to two business days to finalise. Distributed ledger technology enables near-instant settlement, reducing counterparty risk and freeing up capital that would otherwise be locked during the settlement window. For large institutions managing billions in daily transactions, even marginal improvements in settlement efficiency translate to meaningful cost savings.
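    The capital effect is easy to see with a back-of-envelope model: money tied up in the settlement window is roughly daily volume multiplied by settlement days. The figures below are assumed for illustration, not drawn from any institution:

```python
# Back-of-envelope illustration of settlement-window capital.
# All volumes are assumed figures, not real institutional data.

def capital_locked(daily_volume: float, settlement_days: float) -> float:
    """Capital trapped in transit: daily volume x settlement window."""
    return daily_volume * settlement_days

daily_volume = 5e9                                   # $5B settled per day (assumed)
t_plus_2 = capital_locked(daily_volume, 2.0)         # $10B locked under T+2
near_instant = capital_locked(daily_volume, 1 / (24 * 60))  # ~1-minute window
freed = t_plus_2 - near_instant                      # capital released
```

    Even if the true figures differ by an order of magnitude, the structure of the gain is the same: shrinking the window to near zero releases almost the entire locked balance for other use.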

    Tokenisation is extending access to asset classes that were previously illiquid or inaccessible to smaller investors. Real estate, private equity, commodities, and corporate bonds are being represented as digital tokens on distributed ledgers, enabling fractional ownership. BDO notes that analysts estimate more than $30 billion of assets are now tokenised globally, with Standard Chartered projecting a multi-trillion-dollar market as the technology matures. The practical effect is that an investor can now purchase a fraction of a commercial property or a private equity fund for as little as $1,000, something that was structurally impossible under traditional ownership models.
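    The mechanics of fractional ownership are straightforward: split the asset into equal tokens and let the minimum ticket be one token. A minimal sketch, using illustrative figures rather than any real offering:

```python
# Minimal sketch of fractional ownership via tokenisation.
# Asset values and the token price are assumed for illustration.

def tokenise(asset_value: float, token_price: float) -> int:
    """Number of whole tokens an asset of this value divides into."""
    return int(asset_value // token_price)

def tokens_for(investment: float, token_price: float) -> int:
    """Whole tokens a given investment buys."""
    return int(investment // token_price)

building_value = 25_000_000.0   # commercial property (assumed)
token_price = 1_000.0           # $1,000 minimum ticket, per the article
supply = tokenise(building_value, token_price)   # 25,000 tokens
stake = tokens_for(5_000.0, token_price)         # a $5,000 investor
ownership_pct = 100 * stake / supply             # their share of the asset
```

    What the ledger adds on top of this arithmetic is transferability and a shared record of who holds which fraction — the part traditional ownership structures make expensive.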

    Cross-border payments stand to gain the most from stablecoin infrastructure. Transfers that previously took days and incurred significant fees through correspondent banking networks can now settle in seconds at a fraction of the cost. For businesses operating across multiple jurisdictions, particularly in emerging markets with currency volatility, this is a material operational improvement.

    The Barriers That Remain

    None of this means the transition is complete or without friction. Several significant challenges persist.

    Accenture’s 2026 banking trends report estimates that $13 trillion in transaction value could shift to alternative payment methods by 2030, putting approximately $13 billion in payment fees at risk for traditional banks. That creates a powerful incentive for incumbents to adopt the technology, but also a defensive posture that can slow genuine transformation. Seventy-six per cent of financial institutions surveyed reported they still have work to do to enable smart money capabilities.

    Regulatory fragmentation is another concern. While the US and EU have made progress, global alignment is far from complete. Different jurisdictions impose different AML and KYC requirements, creating compliance complexity for institutions operating across borders. BDO’s analysis highlights that sponsor banks are now demanding detailed AML compliance specifications from fintech partners before deals can proceed — a sign that the industry is maturing, but also that the compliance burden is substantial.

    Cybersecurity risk is also evolving alongside the technology. As more value moves onto distributed ledgers, the attack surface expands. Financial services is already the target of 33% of all AI-powered cyberattacks, and blockchain systems introduce additional vectors around consensus protocols and smart contract vulnerabilities. The security infrastructure needs to mature at the same pace as the financial infrastructure.

    What Practitioners Should Watch

    For consultants and technology leaders advising financial services clients, the key shift is one of framing. Distributed ledger technology is no longer a conversation about cryptocurrency adoption. It is a conversation about infrastructure modernisation — how institutions settle trades, manage assets, process payments, and comply with regulation.

    The practical questions for 2026 are specific: Should a bank position itself as a stablecoin issuer, custodian, or facilitator? How does tokenisation change the custody and compliance requirements for an asset management firm? What does near-instant settlement mean for treasury operations and capital allocation?

    These are not speculative questions. They are operational ones, driven by technology that is already deployed and regulation that is already in force. The organisations that treat distributed ledger technology as infrastructure — rather than innovation theatre — will be the ones that capture the efficiency gains and competitive advantages it offers. The rest will find themselves paying someone else’s transaction fees.

  • Deploy First, Fix Later: How Poor AI Rollouts Are Engineering Burnout Into the System

    Over the past week, maddaisy has examined two dimensions of the AI-burnout nexus: the work intensification mechanisms that make employees busier rather than better, and the organisational frameworks that might address the people side of the equation. But there is a third dimension that sits upstream of both: the technology deployment itself.

    Before burnout becomes a management problem, it is often an implementation problem. And a growing body of evidence suggests that the way organisations roll out AI tools — rushed timelines, inadequate capability-building, and a persistent belief that go-live equals readiness — is baking unsustainable working conditions into the system from day one.

    The capability gap that no one budgets for

    A detailed analysis published by CIO.com this month draws on workforce upskilling data from large-scale enterprise rollouts to make a blunt assessment: systems rarely fail because the technology does not work. They fail because the organisation has not built the infrastructure to support the people using them.

    The numbers are sobering. Organisations routinely experience productivity drops of 30 to 40% within the first 90 days of a major technology go-live when workforce capability has not been adequately addressed. Support tickets triple. Workaround behaviours — offline spreadsheets, manual reconciliations, shadow systems — proliferate as employees revert to what feels safe and controllable. Post-go-live support costs run 40 to 60% over budget. ROI timelines slip by six to 12 months.
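    Those percentages become budget lines once you attach assumed headcounts and costs. The sketch below turns the cited 30–40% dip over 90 days into a rough lost-output figure — every input is an assumption for illustration only:

```python
# Rough cost model for the post-go-live capability dip the article
# cites (a 30-40% productivity drop over ~90 days). Headcount and
# loaded cost are assumed figures, not from the CIO.com analysis.

def dip_cost(headcount: int, loaded_daily_cost: float,
             dip_fraction: float, dip_days: int) -> float:
    """Lost-output cost of a temporary productivity drop."""
    return headcount * loaded_daily_cost * dip_fraction * dip_days

cost_low = dip_cost(500, 400.0, 0.30, 90)    # 30% dip across 500 users
cost_high = dip_cost(500, 400.0, 0.40, 90)   # 40% dip, same population
```

    Under these assumptions the dip alone costs several million dollars — which is why budgeting for it explicitly, rather than being surprised by it, is the rational move.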

    None of this is caused by employee resistance or technological failure. It is the predictable consequence of treating capability-building as a training event rather than a systemic requirement.

    The 10% problem

    Perhaps the most striking data point concerns training transfer. Research on technology implementation training suggests that only 10 to 20% of skills learned in formal programmes translate into sustained on-the-job performance. The issue is not poor course design. It is the assumption that a three-day training session two weeks before go-live can substitute for the repeated practice, pattern recognition, and feedback loops that genuine capability requires.

    This matters directly for the burnout conversation. When employees lack the capability to operate new systems confidently, every minor issue becomes a support ticket. Every exception becomes an escalation. The cognitive load of navigating unfamiliar tools while maintaining output expectations creates exactly the kind of sustained pressure that Harvard Business Review’s recent research identified as driving work intensification. The tools are not the problem — the deployment is.

    Where AI deployments differ from traditional IT rollouts

    Enterprise technology deployments have always carried capability risks. But AI introduces specific complications that amplify them.

    First, AI tools change the scope of work, not just the process. As maddaisy’s earlier analysis noted, employees using AI do not simply do the same tasks faster — they absorb new responsibilities, blur role boundaries, and take on work that previously justified additional headcount. A traditional ERP deployment changes how someone does their job. An AI deployment can change what their job is. Upskilling programmes designed around process training cannot address a shift that is fundamentally about role redesign.

    Second, AI output is probabilistic, not deterministic. An ERP system produces the same result given the same inputs. An AI tool might produce different outputs each time, requiring users to exercise judgement about quality, accuracy, and appropriateness. That judgement cannot be trained in a classroom — it is built through experience, and it demands a kind of cognitive engagement that is qualitatively different from following a process manual.

    Third, AI raises the visibility of individual output. When an AI tool enables someone to produce a first draft in minutes rather than hours, the speed becomes the new baseline expectation. Employees who are still building capability with the tool face pressure to match output norms set by early adopters or by the tool’s theoretical capacity. The result is the self-reinforcing acceleration cycle that researchers have now documented repeatedly.

    Governance as deployment infrastructure

    The CIO.com analysis proposes treating upskilling as a governance concern rather than a training administration task — embedding capability-building into the transformation workstream with the same rigour as data migration or integration testing. This means defining capability in behavioural terms tied to business processes, starting practice in sandbox environments as soon as process designs stabilise, and measuring performance data rather than training completion rates.

    One practical recommendation stands out: establishing a “performance council” distinct from the training team. This group — composed of process owners, frontline managers, and high-performing end users — meets weekly to review whether people are performing reliably in live operations. They examine error rates, support ticket patterns, workaround behaviours, and time-to-competency. Critically, they have the authority to pause rollouts when the data shows capability is not sticking.
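    The council's weekly review can be sketched as a simple decision gate over live performance data. The metric names and thresholds below are hypothetical — each organisation would calibrate its own — but the shape of the mechanism is the one the article describes:

```python
from dataclasses import dataclass

# Sketch of a "performance council" rollout gate: weekly metrics
# reviewed against thresholds, with authority to pause when
# capability is not sticking. All thresholds are hypothetical.

@dataclass
class WeeklyMetrics:
    error_rate: float          # share of live transactions with errors
    ticket_growth: float       # support tickets vs pre-go-live baseline
    workaround_reports: int    # shadow-system / offline-spreadsheet sightings
    pct_at_competency: float   # users meeting the behavioural benchmark

def rollout_decision(m: WeeklyMetrics) -> str:
    """Pause when live performance data shows capability is not sticking."""
    if m.error_rate > 0.05 or m.ticket_growth > 3.0 or m.workaround_reports > 10:
        return "pause"
    if m.pct_at_competency < 0.8:
        return "hold-and-coach"   # keep scope flat, invest in practice
    return "proceed"
```

    The significant design choice is not the thresholds but the output: the gate can return "pause", and someone has the authority to act on it.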

    This is not a radical proposal. It is the kind of operational discipline that well-run technology programmes have always applied to infrastructure and integration. The fact that it needs to be articulated separately for workforce capability suggests how often organisations skip the people side of deployment planning.

    Connecting the implementation layer to the burnout evidence

    The burnout data that has emerged over recent weeks tells a consistent story. DHR Global reports that 83% of workers experience some degree of burnout, with engagement dropping 24 percentage points in a single year. Only 34% of employees say their organisation has communicated AI’s workplace impact clearly. Among entry-level staff, that figure falls to 12%.

    These are not just management failures. They are deployment failures. When organisations ship AI tools without building the capability infrastructure to support them — without adequate training transfer, without performance monitoring, without role redesign — they are not just creating a change management problem. They are creating a technical debt of human capability that compounds over time, manifesting as burnout, disengagement, and quiet resistance.

    As Deloitte’s 2026 AI report made clear, the gap between strategic confidence and operational readiness remains the defining feature of enterprise AI. The implementation evidence suggests that closing this gap requires treating workforce capability not as a soft skill or an HR deliverable, but as core deployment infrastructure — as essential as the data pipeline and as measurable as system uptime.

    What this means for the next phase

    The organisations that avoid engineering burnout into their AI deployments will share a common trait: they will treat go-live as the beginning of capability-building, not the end. They will budget for the 90-day productivity dip and plan for it rather than being surprised by it. They will measure whether people can perform reliably at scale under real business pressure, not whether they completed a training module.

    Most importantly, they will recognise that the fastest way to undermine an AI investment is not a technical failure — it is deploying capable technology to an unprepared workforce and then measuring success by adoption rates alone. The tools work. The question, as it has always been with technology transformations, is whether the organisation around them does too.