Tag: regulation

  • Brussels Blinked: The EU AI Act’s High-Risk Deadline Just Moved, but the Compliance Clock Has Not Stopped

    When maddaisy examined the shift from AI principles to penalties in February, the EU AI Act’s August 2026 deadline for high-risk AI systems sat at the centre of the analysis. That date — 2 August 2026 — was the moment when compliance stopped being theoretical and started carrying fines of up to seven per cent of global turnover.

    Four weeks later, Brussels blinked.

    On 13 March, the EU Council agreed its position on the Digital Omnibus package, pushing back the application of high-risk AI rules to December 2027 for standalone systems and August 2028 for those embedded in products. The proposal still requires negotiation with the European Parliament, but the direction is clear: the EU’s own regulatory infrastructure was not ready for its own deadline.

    What Actually Changed

    The delay is narrower than the headlines suggest. The EU AI Act’s prohibited practices — social scoring, manipulative AI targeting vulnerable groups, unauthorised real-time biometric surveillance — have been in force since February 2025 and remain untouched. Obligations for general-purpose AI model providers, including transparency and copyright requirements, still apply from August 2025. The Code of Practice requiring machine-readable detection techniques for AI-generated content is already published.

    What shifted is specifically the high-risk classification regime: the rules governing AI systems used in employment decisions, credit scoring, healthcare, education, law enforcement, and critical infrastructure. These are the provisions that demand conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. They are also the provisions that most enterprises have been scrambling to prepare for.

    The Council’s rationale is pragmatic rather than political. The European Commission missed its own February 2026 deadline for publishing the guidance and harmonised standards that enterprises need to demonstrate compliance. Without those standards, companies were being asked to hit a target that the regulator had not yet fully defined. As the Cypriot presidency put it, the goal is “greater legal certainty” and “more proportionate” implementation — diplomatic language for acknowledging that the implementation machinery was not keeping pace with the legislative ambition.

    The Compliance Paradox

    For enterprises that have spent the past 18 months building AI governance programmes, risk inventories, and compliance frameworks, the delay creates an awkward question: should they slow down?

    The short answer is no — and the reasoning matters more than the conclusion.

    First, the delay is conditional. The Council’s position sets fixed dates — December 2027 and August 2028 — but the Commission retains the ability to confirm earlier application if standards become available sooner. Organisations that pause their compliance programmes risk finding themselves back under pressure with less runway than they had before.

    Second, the regulatory landscape extends well beyond Brussels. As maddaisy has previously examined, the United States is building its own patchwork of state-level AI laws. Colorado’s AI Act takes effect in June 2026. California’s transparency requirements are already live. The EU delay does not change these timelines. An enterprise operating across both markets still faces near-term obligations.

    Third, and perhaps most importantly, the governance work itself has value beyond regulatory compliance. Organisations that have inventoried their AI systems, established accountability structures, and implemented monitoring processes are better positioned to manage operational risk, regardless of when a specific regulation takes effect. As the Ethyca governance framework notes, the shift from policy documentation to continuous operational evidence is happening independently of any single regulatory deadline.

    What the Delay Reveals

    The Digital Omnibus is not just a timeline adjustment. It is a signal about the structural challenge of regulating AI at the pace at which the technology is evolving.

    The EU built the world’s most comprehensive AI regulation. It classified systems by risk tier, defined obligations for providers and deployers, established penalties that exceed GDPR maximums, and applied the rules extraterritorially. What it did not build quickly enough was the operational layer: the harmonised standards, the conformity assessment procedures, the guidance documents that translate legal text into practical compliance steps.

    This mirrors a pattern maddaisy has observed across multiple regulatory domains. Europe’s cloud sovereignty push encountered similar friction — ambitious policy goals meeting incomplete implementation frameworks. The gap between legislative intent and operational readiness is becoming a recurring theme in European technology regulation.

    The Council’s position does include substantive additions alongside the delay. A new prohibition on AI-generated non-consensual intimate content and child sexual abuse material was introduced. Regulatory exemptions previously limited to SMEs were extended to small mid-cap companies. The AI Office’s enforcement powers were reinforced. These are not trivial changes — they show that the regulation is still being actively shaped even as its core provisions await full application.

    The ISO 42001 Factor

    One development running parallel to the regulatory delay is the accelerating adoption of ISO/IEC 42001, the international standard for AI management systems. Enterprise buyers are increasingly adding it to vendor procurement requirements, and AI liability insurers are beginning to factor governance certifications into risk assessments.

    For organisations uncertain about how to structure their compliance programmes during the delay, ISO 42001 offers a practical framework. It maps to the EU AI Act’s requirements without being dependent on them, meaning that compliance work done under the standard retains its value regardless of how regulatory timelines shift. Pega’s recent certification is one example of vendors using the standard to demonstrate governance readiness to enterprise clients.

    What Practitioners Should Do Now

    The EU AI Act delay changes timelines, not trajectories. The practical recommendations remain consistent with what maddaisy outlined in February, with one important addition:

    • Continue AI system inventories. Understanding what AI is deployed, where, and at what risk level is foundational work that no regulatory timeline change invalidates.
    • Monitor the Parliament negotiations. The Council position must be reconciled with the European Parliament before becoming final. The dates could shift again — in either direction.
    • Use the extra time for standards alignment. With harmonised standards still being developed, organisations now have an opportunity to align with ISO 42001 or the NIST AI Risk Management Framework before mandatory compliance begins.
    • Do not treat the delay as permission to deprioritise. Colorado, California, and other US state deadlines remain unchanged. Enterprise clients and procurement teams are not waiting for regulators — they are setting their own governance expectations now.

    The EU built the most ambitious AI regulation in the world, then discovered that ambition requires infrastructure. The delay is a concession to reality, not a retreat from intent. For enterprises, the message is straightforward: the destination has not changed, only the speed limit on the road getting there.

  • The Rules for Public Data Are Quietly Reshaping Who Gets to Build AI

    The question of who gets to train AI on publicly available data is quietly becoming one of the most consequential regulatory battles in technology. A new report from the Information Technology and Innovation Foundation (ITIF), published this week, lays out the stakes clearly: the jurisdictions that permit responsible access to public web data will lead in AI development, while those that restrict it risk falling permanently behind.

    This is not abstract. In 2025, US-based organisations produced 40 notable foundation models. China produced 15. The European Union managed three. The gap is driven by many factors — investment, talent, compute infrastructure — but the rules governing access to training data are an increasingly significant one.

    The Transatlantic Divide

    The US and EU have taken fundamentally different approaches to how publicly available data can be used for AI training.

    The US operates what the ITIF describes as a “gates up” framework. Publicly accessible web data is generally available for automated collection unless a site owner implements technical barriers — robots.txt files, authentication walls, or rate-limiting mechanisms. This permissive posture has given American AI labs broad access to the digital commons as training material.

    The EU, by contrast, applies GDPR protections to personal data regardless of whether it appears on a public website. Even a name and job title scraped from a company’s “About Us” page may require a lawful processing basis under European law. The EU AI Act adds a further layer: Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, and rights holders can opt out of their content being used. The European Commission’s November 2025 Digital Omnibus proposal aims to simplify some of this regulatory burden, but the fundamental constraints on data use remain.

    The result is that AI development gravitates toward more permissive jurisdictions. This is not a theoretical concern — it is visible in where companies locate their model training infrastructure and where they hire.

    The US Is Not Unified Either

    As maddaisy examined in February, the United States has its own regulatory fragmentation problem. California’s AB 2013, which took effect on 1 January 2026, requires developers of publicly available generative AI systems to disclose detailed information about their training data — including the sources, whether the data contains copyrighted material, whether it includes personal information, and when it was collected. That transparency obligation applies retrospectively, meaning developers must document historical training practices.

    Colorado’s AI Act addresses the deployment side, with impact assessments and discrimination safeguards for high-risk systems due to take effect in June 2026. Illinois, New York City, and Texas each have their own targeted requirements.

    The federal government wants to consolidate this into a single framework, but as maddaisy noted when AI governance entered its enforcement era, the White House’s December 2025 executive order is a statement of intent, not a statute. State laws remain in force, and the compliance burden is cumulative.

    Technical Governance Is Filling the Gap

    Where regulation is fragmented or slow, technical standards are emerging to manage access to public data for AI training. The ITIF report identifies several mechanisms that are gaining traction:

    Machine-readable opt-out signals extend beyond the familiar robots.txt protocol. New standards like LLMs.txt allow website operators to provide curated, machine-readable summaries of their content specifically for AI systems — a more nuanced approach than a binary allow/block decision.
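
    The familiar end of that spectrum is straightforward to honour in code. The sketch below is a minimal Python example, with an invented crawler name and an invented robots.txt policy, that checks a publisher's directives before fetching anything; llms.txt works differently, as a curated content summary rather than an access rule, so it is not parsed here.

    ```python
    # Minimal sketch: checking a robots.txt policy before collecting pages for AI training.
    # The user agent "ExampleAIBot" and the rules below are illustrative only.
    from urllib.robotparser import RobotFileParser

    robots_lines = [
        "User-agent: ExampleAIBot",
        "Disallow: /research/",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    parser = RobotFileParser()
    parser.parse(robots_lines)

    for url in ("https://example.com/blog/post-1", "https://example.com/research/paper-3"):
        allowed = parser.can_fetch("ExampleAIBot", url)
        print(f"{url} -> {'fetch' if allowed else 'skip (publisher opted out)'}")
    ```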

    Cryptographic bot authentication using HTTP message signatures allows site operators to verify the identity of AI crawlers and grant or restrict access based on who is asking, not just what they are requesting.
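
    The sketch below illustrates the underlying idea only: the crawler signs selected request fields with a key whose public half its operator publishes, and the site verifies the signature before deciding what to serve. It uses the Python cryptography package with Ed25519 keys and is deliberately simplified, not a compliant implementation of the HTTP message signatures standard the emerging proposals build on.

    ```python
    # Simplified illustration of signed crawler requests (not spec-compliant):
    # the crawler signs a few request fields; the site verifies against a
    # published public key before applying its access policy.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Crawler side: a key pair whose public half the operator publishes somewhere verifiable.
    crawler_key = Ed25519PrivateKey.generate()
    public_key = crawler_key.public_key()

    signature_base = b"GET /articles/42\nhost: example.com\nuser-agent: ExampleAIBot/1.0"
    signature = crawler_key.sign(signature_base)

    # Site side: verify that the request really comes from the claimed crawler.
    try:
        public_key.verify(signature, signature_base)
        print("verified: apply the access policy registered for ExampleAIBot")
    except InvalidSignature:
        print("unverified crawler: fall back to default robots.txt handling")
    ```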

    Automated licensing frameworks are experimenting with HTTP 402 (“Payment Required”) signals, creating the technical infrastructure for content owners to set terms for AI training use — including compensation.
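
    What that could look like in practice is sketched below: a toy server that answers recognised AI crawlers with a 402 status pointing at licensing terms. The header name, the terms URL, and the user-agent list are hypothetical, since no common convention has settled yet.

    ```python
    # Sketch of a server answering AI crawlers with HTTP 402 and pointing at licensing
    # terms. The "X-AI-License" header and the terms URL are hypothetical, not a standard.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    AI_USER_AGENTS = ("ExampleAIBot", "OtherTrainingBot")

    class LicensingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            agent = self.headers.get("User-Agent", "")
            if any(bot in agent for bot in AI_USER_AGENTS):
                # Signal that training use requires a licence before content is served.
                self.send_response(402)
                self.send_header("X-AI-License", "https://example.com/ai-licensing-terms")
                self.end_headers()
                self.wfile.write(b"Payment or licence required for AI training use.\n")
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"Regular page content.\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8402), LicensingHandler).serve_forever()
    ```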

    PII filtering tools such as Microsoft’s open-source Presidio project allow developers to detect and remove sensitive personal information during data preparation, addressing privacy concerns at the technical rather than legal level.
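
    A minimal sketch of that workflow, assuming the presidio-analyzer and presidio-anonymizer packages (and a spaCy English model) are installed, might look like this; the sample text is invented.

    ```python
    # Sketch: detecting and masking personal data in a training snippet with Microsoft's
    # Presidio. Assumes presidio-analyzer, presidio-anonymizer and a spaCy English model
    # such as en_core_web_lg are installed.
    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine

    text = "Contact Jane Smith at jane.smith@example.com or +44 20 7946 0123."

    analyzer = AnalyzerEngine()
    findings = analyzer.analyze(text=text, language="en")

    anonymizer = AnonymizerEngine()
    cleaned = anonymizer.anonymize(text=text, analyzer_results=findings)

    for f in findings:
        print(f.entity_type, text[f.start:f.end])
    print(cleaned.text)  # e.g. "Contact <PERSON> at <EMAIL_ADDRESS> or <PHONE_NUMBER>."
    ```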

    These mechanisms are not yet standardised or universally adopted. But they point toward a model where access to public data is governed by a combination of technical protocols and market-based agreements, rather than solely by regulation.

    The Agentic Wrinkle

    The data access question becomes more complex as AI systems shift from static model training to live, agentic operations. When maddaisy examined the governance challenges for AI agents earlier this week, the focus was on operational controls — monitoring, auditing, and accountability chains. The ITIF report adds a further dimension: data that is technically accessible (visible through a browser or available via API) is not necessarily intended for AI consumption.

    Consider an AI agent authorised to access a company’s customer relationship management system. The data it encounters is not public, but it is available to the agent through delegated credentials. Current regulatory frameworks are largely silent on this category of “private-but-available” data, and the risks compound when agents combine information from multiple sources to surface connections that no individual source intended to reveal.

    What Practitioners Should Watch

    The ITIF report recommends that policymakers focus on three priorities: regulating AI outputs rather than training inputs, encouraging transparency norms for AI agents, and creating safe harbour protections for developers who respect machine-readable opt-out signals and filter sensitive data.

    For consultants and practitioners advising organisations on AI strategy, the practical implications are more immediate. Enterprises deploying AI — whether training proprietary models, fine-tuning foundation models, or deploying agentic systems — need to map their data supply chain with the same rigour they apply to physical procurement. That means understanding where training data originates, what rights framework governs its use, whether it contains personal information subject to GDPR or state privacy laws, and whether the technical mechanisms exist to honour opt-out requests.
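
    One way to make that mapping concrete is a provenance record per dataset, so each of those questions becomes a recorded answer rather than tribal knowledge. The sketch below is illustrative; the field names are hypothetical and not drawn from any regulation.

    ```python
    # Illustrative shape of a per-dataset provenance record for a training data inventory.
    # Field names are hypothetical; the point is that origin, rights basis, personal data
    # and opt-out handling each become a recorded fact.
    from dataclasses import dataclass, field

    @dataclass
    class DatasetProvenance:
        name: str
        source: str                     # where the data originates (URL, vendor, internal system)
        collection_date: str            # when it was collected or licensed
        rights_basis: str               # e.g. "licensed", "public web", "first party"
        contains_personal_data: bool    # triggers GDPR / state privacy law review
        opt_out_signals_honoured: bool  # robots.txt, llms.txt or equivalent respected at collection time
        notes: list[str] = field(default_factory=list)

    crm_notes = DatasetProvenance(
        name="support-ticket-corpus-2025",
        source="internal CRM export",
        collection_date="2025-11-30",
        rights_basis="first party",
        contains_personal_data=True,
        opt_out_signals_honoured=True,
        notes=["PII filtered before fine-tuning"],
    )
    ```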

    The organisations that treat training data governance as a compliance afterthought will find themselves exposed — not just to regulatory penalties, but to reputational risk and potential litigation. Those that build responsible data practices into their AI development lifecycle will have a genuine competitive advantage, particularly as transparency requirements tighten across jurisdictions.

    The rules for public data are not a peripheral regulatory detail. They are becoming one of the defining factors in who builds the next generation of AI systems, and where.

  • The Governance Frameworks for AI Agents Exist. The Hard Part Is Making Them Work.

    The governance playbook for autonomous AI agents is no longer a blank page. Regulatory bodies have published frameworks. Law firms have issued guidance. Industry coalitions have identified priorities. The principles – least privilege, human checkpoints, real-time monitoring, value-chain accountability – are converging across jurisdictions. And yet, Gartner predicts that 40 per cent of agentic AI projects will be cancelled by the end of 2027, citing escalating costs, unclear business value, and inadequate risk controls.

    The problem is not that enterprises lack governance policies. It is that they lack governance infrastructure – the operational machinery to translate principles into practice across live, autonomous systems operating at scale.

    When maddaisy examined the emerging governance playbook last week, the direction of travel was clear: regulators and advisors were converging on what good governance should look like. The question that follows is more difficult. What does it take to actually run that governance, day after day, across agents that plan, execute, and adapt autonomously?

    The Gap Between Policy and Operations

    The most revealing data point in recent weeks comes not from a governance report but from Logicalis’s 2026 CIO Report. Among 1,000 chief information officers surveyed globally, 89 per cent described their AI governance approach as “learning as we go.” That is not experimentation. That is the absence of operational governance.

    The skills gap compounds the problem. Nearly nine in 10 organisations cite a lack of internal technical capability as their primary constraint on AI deployment. For governance specifically, the deficit is acute. Monitoring agent behaviour in production, auditing multi-step reasoning chains, and interpreting regulatory requirements across jurisdictions all demand expertise that most enterprises have not yet hired for – and in many cases, cannot find.

    A PwC survey found that 79 per cent of companies have adopted agents in some capacity. But when enterprise search firm Lucidworks assessed over 1,100 organisations, only 6 per cent had deployed more than one agentic solution. The implication is significant: most enterprises are governing a single, contained pilot. The governance challenge changes materially when agents multiply, interact, and share data across business functions.

    Regulations Are Arriving – Unevenly

    The regulatory landscape is not waiting for enterprises to catch up. The EU AI Act’s obligations for general-purpose AI models have applied since August 2025, and its high-risk requirements were scheduled to follow from August 2026, applying globally to any organisation whose systems affect EU residents. In the United States, the picture is more fragmented. President Trump’s December 2025 Executive Order signalled federal intent to consolidate AI oversight, but as legal analysis from Gunderson Dettmer makes clear, it does not preempt existing state laws.

    California, Colorado, and Texas have each enacted comprehensive AI governance statutes with distinct requirements for high-risk systems. New York’s RAISE Act imposes transparency obligations that do not apply elsewhere. For multinational enterprises deploying autonomous agents, the compliance surface is not one framework – it is dozens, with different definitions of high-risk, different disclosure requirements, and different enforcement timelines.

    This is where governance-as-policy meets governance-as-operations. A well-crafted internal policy cannot resolve the question of whether an agent deployed in London, which processes data from a New York customer and executes a transaction through a Singapore-based system, complies with three different regulatory regimes simultaneously. That requires technical infrastructure: jurisdictional routing, dynamic compliance rules, and audit trails that satisfy multiple authorities.
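
    A minimal sketch of that machinery is shown below: resolve every regime an action touches, combine their obligations by taking the strictest value of each, and write an audit entry. The rule table, jurisdiction codes, and thresholds are illustrative, not a statement of what any statute actually requires.

    ```python
    # Sketch: resolving which obligations apply when one agent action touches several
    # jurisdictions, then recording an audit entry. The rule table is illustrative.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    RULES = {
        "EU":    {"human_review_required": True,  "retain_logs_days": 180},
        "US-NY": {"human_review_required": True,  "retain_logs_days": 90},
        "SG":    {"human_review_required": False, "retain_logs_days": 90},
    }

    @dataclass
    class AuditEntry:
        action: str
        jurisdictions: list[str]
        applied_rules: dict
        timestamp: str

    def resolve_obligations(jurisdictions: list[str]) -> dict:
        """Combine obligations across regimes by taking the strictest value of each."""
        applicable = [RULES[j] for j in jurisdictions if j in RULES]
        return {
            "human_review_required": any(r["human_review_required"] for r in applicable),
            "retain_logs_days": max(r["retain_logs_days"] for r in applicable),
        }

    def execute_with_governance(action: str, jurisdictions: list[str]) -> AuditEntry:
        rules = resolve_obligations(jurisdictions)
        if rules["human_review_required"]:
            print(f"routing '{action}' to a human checkpoint before execution")
        return AuditEntry(action, jurisdictions, rules, datetime.now(timezone.utc).isoformat())

    entry = execute_with_governance("refund customer order", ["EU", "US-NY", "SG"])
    print(entry.applied_rules)  # {'human_review_required': True, 'retain_logs_days': 180}
    ```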

    What Operational Governance Actually Requires

    Several CIOs interviewed by CIO.com this month offered a consistent message: governance cannot be separated from workflow design.

    Don Schuerman, CTO at Pega, put it directly: the expectation that thousands of agents can be deployed randomly across a business and left to operate is a myth. Successful deployments anchor agents in well-defined business processes with prescribed steps, high predictability, and clear audit requirements. The governance is not a layer added afterwards – it is embedded in how the agent’s workflow is designed.

    IBM CIO Matt Lyteson echoed the point, stressing that organisations need to understand the outcomes they are targeting, the data agents will require, and the controls needed to manage them before deployment – not after. Salesforce CIO Dan Shmitt added that without high-quality data and a unified governance model, agents produce unreliable results regardless of the policy framework around them.

    The emerging consensus among practitioners, distinct from the framework-level guidance, centres on three operational requirements.

    First, governance must be embedded in agent design, not bolted on. Decision boundaries, escalation rules, and compliance checks need to be part of the agent’s workflow architecture. Retrofitting governance onto an agent already in production is significantly harder and more expensive.
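
    In code, that embedding can be as simple as a guard every proposed action must pass before anything executes. The sketch below is illustrative; the tool names and the escalation threshold are hypothetical.

    ```python
    # Sketch of "governance in the workflow": each proposed agent action passes through
    # decision boundaries and escalation rules before execution. Names and the threshold
    # are hypothetical.
    ALLOWED_TOOLS = {"lookup_invoice", "issue_refund", "send_status_email"}
    REFUND_ESCALATION_THRESHOLD = 500.00  # amounts above this go to a human

    def guard(action: dict) -> str:
        """Return 'execute', 'escalate' or 'block' for a proposed agent action."""
        if action["tool"] not in ALLOWED_TOOLS:
            return "block"      # outside the agent's defined decision boundary
        if action["tool"] == "issue_refund" and action["amount"] > REFUND_ESCALATION_THRESHOLD:
            return "escalate"   # human checkpoint built into the workflow
        return "execute"

    print(guard({"tool": "issue_refund", "amount": 120.0}))        # execute
    print(guard({"tool": "issue_refund", "amount": 4200.0}))       # escalate
    print(guard({"tool": "delete_customer_record", "amount": 0}))  # block
    ```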

    Second, observability infrastructure is non-negotiable. As maddaisy has previously reported on agentic drift, agents that pass review at launch can behave differently months later. Continuous monitoring of reasoning chains, action sequences, and decision outcomes is the minimum viable governance stack – not periodic audits.
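
    In practice, the minimum is one structured event per agent step, capturing the reasoning summary, the action taken, and the outcome, so later behaviour can be compared against the launch baseline. A sketch with an illustrative schema:

    ```python
    # Sketch: emitting one structured event per agent step so reasoning chains and action
    # sequences can be monitored continuously rather than audited periodically.
    # Field names are illustrative, not a standard schema.
    import json, logging, uuid
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("agent.trace")

    def record_step(run_id: str, step: int, reasoning: str, action: str, outcome: str) -> None:
        log.info(json.dumps({
            "run_id": run_id,
            "step": step,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reasoning_summary": reasoning,
            "action": action,
            "outcome": outcome,
        }))

    run_id = str(uuid.uuid4())
    record_step(run_id, 1, "Customer asked for order status", "lookup_order(48213)", "found: shipped")
    record_step(run_id, 2, "Order shipped, notify customer", "send_status_email", "sent")
    ```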

    Third, governance requires dedicated roles, not committees. The Mayer Brown framework identified four governance functions: policy-setters, product teams, cybersecurity integration, and frontline escalation. Most enterprises have distributed these responsibilities informally. As agents scale beyond pilot stage, informal arrangements become liabilities.

    The Trajectory Ahead

    The governance conversation has moved faster than most observers expected. Twelve months ago, agentic AI governance was a theoretical concern. Today, it has dedicated regulatory guidance, published legal frameworks, and named positions on practitioners’ organisational charts. That is genuine progress.

    But the distance between knowing what governance should look like and operating it reliably is where the next phase of difficulty lies. The 40 per cent cancellation rate Gartner projects is not primarily a technology failure – it is a governance and operational maturity failure. The organisations that succeed with autonomous agents will not be those with the most sophisticated AI models. They will be the ones that built the operational infrastructure to govern them before they scaled.

    For consultants advising enterprise clients on agentic AI, the message has shifted. The question is no longer whether governance frameworks exist. It is whether the organisation has the skills, tooling, and organisational design to make those frameworks operational. That is a harder conversation, but it is now the one that matters.

  • America’s Fragmented State-by-State AI Regulation Is Creating a Compliance Labyrinth

    When maddaisy examined the shift from AI principles to penalties earlier this month, the United States’ growing patchwork of state-level AI laws featured as one of three converging forces. That brief mention understated the scale of what is now unfolding. With more than 1,000 AI-related bills introduced across all 50 states in 2025 alone, the US is rapidly building the most fragmented AI regulatory landscape of any major economy, under a federal government that wants to stop the fragmentation but may lack the tools to do so.

    The federal-state standoff

    The tension is now explicit. In December 2025, the White House released a framework titled “Ensuring a National Policy Framework for Artificial Intelligence”, arguing that diverging state AI rules risk creating a harmful compliance patchwork that undermines American competitiveness. An accompanying fact sheet went further, suggesting the administration may consider litigation and funding mechanisms to counter state AI regimes it considers obstructive.

    This follows Executive Order 14179, issued in January 2025, which rescinded the Biden administration’s 2023 AI order and established a competitiveness-first federal posture. The July 2025 AI Action Plan outlined more than 90 measures to support AI infrastructure and innovation. The message from Washington is consistent: the federal government wants to set the rules, and it wants states to fall in line.

    The states, however, are not listening.

    Four approaches, one country

    What makes the US situation distinctive is not simply that states are regulating AI — it is that they are doing so in fundamentally different ways. Four models are emerging, each with different implications for organisations operating across state lines.

    Colorado: the comprehensive framework. Colorado’s AI Act (SB 24-205) remains the most ambitious state-level attempt at broad AI governance. It requires deployers of high-risk AI systems to conduct impact assessments, provide consumer transparency, and exercise reasonable care against algorithmic discrimination. Although enforcement was delayed to June 2026 through SB25B-004, the statutory framework is intact. Legal analysts note that the delay is strategic, not a retreat — Colorado is buying time to refine its approach while preserving regulatory leverage.

    California: regulation through existing powers. Rather than passing a single comprehensive AI act, California is weaving AI governance into its existing regulatory apparatus. The California Privacy Protection Agency finalised new regulations in September 2025 covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), with phased effective dates running through January 2027. The ADMT obligations are particularly significant: they convert what many organisations treat as aspirational governance practices — impact assessments, decision transparency, audit trails — into enforceable system design requirements.

    Washington: the multi-bill expansion. Washington has taken a fragmented-by-design approach, advancing separate bills for high-risk AI (HB 2157), transparency (HB 1168), training data regulation (HB 2503), and consumer protection in automated systems (SB 6284). The logic is pragmatic: AI risk does not fit into a single statutory box, so Washington is addressing it through multiple, targeted legislative vehicles. For compliance teams, this means tracking not one law but several, each with its own scope and requirements.

    Texas: government-first regulation. The Responsible AI Governance Act (HB 149), effective since January 2026, governs AI use within state government, prohibits certain high-risk applications, and establishes an advisory council. It is narrower than Colorado’s or California’s approach but signals that even politically conservative states see a role for AI regulation — just a different one.

    The preemption problem

    The White House clearly wants federal preemption — a single national framework that would override state laws. But there are reasons to doubt this will materialise quickly, if at all.

    First, there is no comprehensive federal AI legislation on the table. The administration’s tools are executive orders and agency guidance, which carry less legal weight than statute and are vulnerable to court challenges. Second, the pattern from privacy law is instructive: the US still lacks a federal privacy law, and state laws like the California Consumer Privacy Act have effectively set national standards by default. AI regulation appears to be following the same trajectory.

    Third, states have institutional momentum. According to the National Conference of State Legislatures, all 50 states, Washington DC, and US territories introduced AI legislation in 2025. That level of legislative activity does not simply stop because the federal government asks it to. As one legal analysis from The Beckage Firm observes, states are not ignoring the federal signal — they are interpreting it differently, rolling out requirements more slowly or in smaller pieces, but maintaining control rather than ceding it.

    What this means for organisations

    For any company deploying AI across multiple US states — which is to say, most enterprises of any scale — the practical implications are significant and immediate.

    Multi-jurisdictional compliance is now unavoidable. Organisations cannot build a single AI governance programme around one state’s requirements and assume it covers the rest. Colorado’s impact assessment obligations differ from California’s ADMT rules, which differ from New York City’s bias audit requirements for automated employment tools. Each demands different documentation, different processes, and in some cases, different technical controls.

    Effective dates are stacking up. Colorado’s enforcement begins June 2026. California’s ADMT obligations take full effect January 2027. New York City’s AEDT rules are already being actively enforced. Texas’s government-focused requirements are live now. Organisations treating these as isolated events rather than overlapping compliance waves risk being caught unprepared.

    The privacy playbook applies. Companies that built adaptable privacy programmes — mapping data flows, documenting processing purposes, conducting impact assessments — when GDPR and the CCPA emerged are better positioned than those that treated each regulation as a separate exercise. The same principle holds for AI governance. The organisations that will manage this landscape most effectively are those building flexible frameworks capable of mapping risk assessments, accountability structures, and documentation across multiple jurisdictions simultaneously.

    The consulting opportunity — and obligation

    For consultants and practitioners, the US regulatory fragmentation creates both demand and responsibility. Demand, because multi-state AI compliance is genuinely complex and most organisations lack in-house expertise to navigate it. Responsibility, because the temptation to oversimplify — to sell a single “AI compliance solution” that claims to cover all jurisdictions — is real and should be resisted. The details matter, and they vary state by state.

    As maddaisy has noted in covering shadow AI governance challenges and the agentic AI governance gap, the operational machinery for AI oversight is still immature in most enterprises. Adding a fragmented regulatory landscape on top of that immaturity does not just increase cost — it increases the likelihood that organisations will get something materially wrong.

    The US may eventually get a federal AI framework. But the states are not waiting, and neither should the organisations that operate in them. The compliance labyrinth is already being built, one statehouse at a time.

  • Europe’s Cloud Sovereignty Rush Meets Its Regulatory Reality Check

    European sovereign cloud spending is set to nearly double in 2026, from $6.9 billion to $12.6 billion according to Gartner’s latest forecast. Every major US hyperscaler now has a European sovereignty answer. AWS launched its European Sovereign Cloud from Germany in January, backed by a €7.8 billion investment. Google operates through S3NS, a French joint venture with Thales that holds SecNumCloud certification. Microsoft has Delos Cloud in Germany and Bleu in France.

    Yet beneath the flood of partnership announcements and sovereign cloud launches sits a less comfortable truth: the regulatory framework driving all of this activity is still incomplete, sometimes contradictory, and in certain critical areas, stalled entirely. For organisations trying to build disaster plans around Europe’s digital infrastructure, the ground has not stopped moving.

    The regulatory pile-up

    Three major pieces of European regulation now intersect on questions of cloud resilience and digital sovereignty — and none of them align neatly.

    The Digital Operational Resilience Act (DORA), enforceable since January 2025, requires financial institutions to implement comprehensive ICT risk management frameworks, including detailed third-party risk assessments for cloud providers. DORA is specific, prescriptive, and already creating compliance pressure across European banking and insurance.

    The NIS2 directive, enforceable since October 2024, extends similar resilience requirements to a much broader set of critical infrastructure operators — energy, transport, health, and digital infrastructure itself. Where DORA targets financial services, NIS2 casts a wider net but leaves more room for national interpretation, creating an uneven patchwork across EU member states.

    Then there is the European Cybersecurity Certification Scheme for Cloud Services (EUCS), which was supposed to provide a unified standard for assessing cloud security across the EU — including, controversially, sovereignty requirements that would have effectively barred non-EU cloud providers from the highest certification tier. That sovereignty clause was stripped from the latest drafts under intense lobbying pressure. The scheme itself remains unadopted. In January 2026, the European Commission proposed a revised Cybersecurity Act that would overhaul the entire certification framework — effectively resetting the process while organisations wait for clarity that may not arrive before 2027.

    Disaster planning in a regulatory fog

    The practical consequence for enterprises is an uncomfortable paradox. Regulations now require detailed disaster recovery and business continuity plans for cloud-dependent operations. But the certification framework that would define what “sovereign” or “resilient” actually means in practice remains unfinished.

    As maddaisy examined last week, the SAP-Microsoft “break glass” contingency plan illustrates the tension. It offers a theoretical failover for European Azure workloads in a crisis scenario, but analysts questioned whether a disconnected copy of Azure could remain operationally viable beyond a few weeks. The plan satisfies a political need — demonstrating that contingency planning exists — without resolving the deeper technical question of what happens when a severed cloud stops receiving updates.

    Capgemini’s CEO Aiman Ezzat has framed this pragmatically, arguing that Europe has meaningful sovereignty over data, operations, and regulation — but not over the underlying technology stack. The four-layer model he has described reflects the reality most enterprises face: sovereign in governance, dependent on US technology, and now required by law to plan for scenarios where that dependency becomes a liability.

    The hyperscaler response: sovereignty as a service

    The US cloud providers have responded to the regulatory and political pressure with significant investment. AWS’s European Sovereign Cloud, operating from Brandenburg, is architecturally separated from other AWS Regions — a genuine sovereign partition with EU-resident leadership and local operational control. AWS CEO Matt Garman called it a “big bet”, with expansion planned for Belgium, the Netherlands, and Portugal.

    Google’s approach in France, through S3NS (a joint venture where Google holds a minority stake under French law), has achieved SecNumCloud 3.2 qualification — the most demanding sovereignty standard currently in force in Europe. Microsoft’s structure routes through nationally controlled entities: Delos Cloud in Germany and Bleu (co-owned by Capgemini and Orange) in France.

    The pattern across all three is consistent: legal and operational separation, EU-resident personnel, local data residency, and contingency plans for geopolitical disruption. What differs is the depth of that separation. A fully air-gapped partition like Google’s Distributed Cloud offering for defence clients sits at one end of the spectrum. A contractual failover arrangement like the SAP-Microsoft deal sits at the other. Most enterprise workloads will land somewhere in between — and DORA and NIS2 require organisations to understand precisely where.

    What practitioners need to do now

    For consultants and technology leaders navigating this landscape, three priorities stand out.

    First, classify workloads by sovereignty sensitivity before choosing infrastructure. Not every application needs the highest tier of sovereign protection. DORA’s third-party risk requirements are prescriptive but risk-proportionate — a core banking system and an internal collaboration tool do not demand the same level of contingency planning. The trap is treating sovereignty as a binary choice rather than a spectrum.
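
    A simple way to operationalise that spectrum is a tiering rule applied to every workload before infrastructure is chosen. The sketch below is illustrative; the tier names and classification questions are not drawn from DORA, NIS2, or any certification scheme.

    ```python
    # Sketch: treating sovereignty as a spectrum by assigning workloads to tiers.
    # The questions and tier labels are illustrative, not regulatory categories.
    def sovereignty_tier(handles_regulated_data: bool,
                         critical_to_operations: bool,
                         must_survive_provider_disruption: bool) -> str:
        if handles_regulated_data and must_survive_provider_disruption:
            return "tier-1: sovereign partition or qualified national cloud (e.g. SecNumCloud)"
        if handles_regulated_data or critical_to_operations:
            return "tier-2: EU-resident region with contractual failover and a documented exit plan"
        return "tier-3: standard public cloud region"

    print(sovereignty_tier(True, True, True))     # core banking system
    print(sovereignty_tier(False, True, False))   # customer-facing web front end
    print(sovereignty_tier(False, False, False))  # internal collaboration tool
    ```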

    Second, build disaster plans around regulatory timelines, not vendor announcements. DORA enforcement is live. NIS2 implementation varies by member state but is progressing. The EUCS framework is stalled, but the underlying requirements it was meant to codify — around data residency, operational control, and access restrictions — are already being enforced through sector-specific regulation and national certification schemes like France’s SecNumCloud. Waiting for a pan-European standard before acting is not a viable compliance strategy.

    Third, pressure-test vendor contingency claims. The proliferation of sovereign cloud offerings and disaster recovery partnerships creates an illusion of completeness. But as Forrester analyst Dario Maisto noted of the SAP-Microsoft plan, many of these arrangements remain untested and legally unproven. “This is not compliance as much as risk management,” he said. Organisations should ask pointed questions about update cycles, hardware dependencies, and the operational lifespan of any disconnected cloud environment.

    The long view

    European digital sovereignty has moved from policy aspiration to market reality faster than the regulatory framework can keep pace. The investment figures are significant — AWS alone is committing €7.8 billion. The compliance deadlines are real. The contingency plans exist, at least on paper.

    But the gap between what regulations require and what certification frameworks define remains open. For organisations building disaster plans today, the most honest assessment is that they are planning against a moving target, using vendor solutions that have never been tested in the crisis scenarios they are designed for. That is not a reason to delay — DORA and NIS2 make delay legally untenable. It is a reason to plan with humility, build in flexibility, and avoid treating any single vendor’s sovereignty narrative as a finished answer.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. Non-compliance carries penalties of up to €35 million or 7 per cent of global annual turnover, whichever is higher. For context, that exceeds the maximum GDPR fine by a significant margin.

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like, far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI event hosted in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed.
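
    A minimal shape for one inventory entry might look like the sketch below. The field names are illustrative rather than taken from any regulation, but each answers a question a regulator will eventually ask.

    ```python
    # Illustrative shape for one AI inventory entry: enough to answer where a system runs,
    # what decisions it touches, and which regimes apply. Field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        name: str
        owner: str                      # accountable business owner
        purpose: str
        decision_impact: str            # e.g. "influences hiring shortlists"
        risk_tier: str                  # internal tier, e.g. "high" for Annex III-style use cases
        jurisdictions: tuple[str, ...]  # where outputs are used
        last_reviewed: str

    resume_screener = AISystemRecord(
        name="resume-screening-model-v3",
        owner="HR Operations",
        purpose="rank inbound applications",
        decision_impact="influences hiring shortlists",
        risk_tier="high",
        jurisdictions=("EU", "US-CO", "US-NYC"),
        last_reviewed="2026-01-15",
    )
    ```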

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.
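
    One concrete piece of that evidence is a bias audit. A common screening heuristic is the ratio of selection rates between groups, the same style of impact ratio New York City’s AEDT rules require employers to report; the counts in the sketch below are invented.

    ```python
    # Sketch of one bias-audit calculation: the ratio of selection rates between groups,
    # screened here against the 0.8 "four-fifths" threshold used in US employment contexts.
    # The counts are made up for illustration.
    def selection_rate(selected: int, total: int) -> float:
        return selected / total

    group_a = selection_rate(selected=90, total=300)   # 0.30
    group_b = selection_rate(selected=45, total=250)   # 0.18

    impact_ratio = min(group_a, group_b) / max(group_a, group_b)
    print(f"impact ratio: {impact_ratio:.2f}")  # 0.60, below the 0.8 screening threshold

    if impact_ratio < 0.8:
        print("flag: document the investigation and mitigation as part of the audit trail")
    ```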

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.