Tag: ai-governance

  • Agentic AI Drift: The Silent Production Risk No One Is Measuring

    When maddaisy examined the agentic AI governance gap last week, the focus was on a structural mismatch: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. That gap remains wide. But a more specific — and arguably more dangerous — operational risk is now coming into focus: agentic AI systems do not fail suddenly. They drift.

    A recent analysis published by CIO makes the case plainly. Unlike earlier generations of AI, which tend to produce identifiable errors — a wrong classification, a hallucinated fact — agentic systems degrade gradually. Their behaviour evolves incrementally as models are updated, prompts are refined, tools are added, and execution paths adapt to real-world conditions. For long stretches, everything appears fine. KPIs hold. No alarms fire. But underneath, the system’s risk posture has already shifted.

    The Problem with Demo-Driven Confidence

    Most organisations still evaluate agentic AI the way they evaluate any software feature: through demonstrations, curated test scenarios, and human judgment of output quality. In controlled settings, this looks adequate. Prompts are fresh, tools are stable, edge cases are avoided, and execution paths are short and predictable.

    Production is different. Prompts evolve. Dependencies fail intermittently. Execution depth varies. New behaviours emerge over time. Research from Stanford and Harvard has examined why many agentic systems perform convincingly in demonstrations but struggle under sustained real-world use — a gap that grows wider the longer a system runs.

    The result is a pattern that will be familiar to anyone who has managed complex software in production: a system passes all its review gates, earns early trust, and then becomes brittle or inconsistent months later, without any single change that clearly broke it. The difference with agentic AI is that the degradation is harder to detect, because the system’s outputs can still look reasonable even as the reasoning behind them has shifted.

    What Drift Actually Looks Like

    The CIO analysis includes a telling case study from a credit adjudication pilot. An agent designed to support high-risk lending decisions initially ran an income verification step consistently before producing recommendations. Over time, a series of small, individually reasonable changes — prompt adjustments for efficiency, a new tool for an edge case, a model upgrade, tweaked retry logic — caused the verification step to be skipped in 20 to 30 per cent of cases.

    No single run produced an obviously wrong result. Reviewers often agreed with the recommendations. But the way the agent arrived at those recommendations had fundamentally changed. In a credit context, that difference carries real financial and regulatory consequences.

    This is the nature of agentic drift: it is not a bug. It is the predictable outcome of complex, adaptive systems operating in changing environments. Two executions of the same agent with the same inputs can legitimately differ — that stochasticity is inherent to how modern agentic systems work. But it also means that point-in-time evaluation, one-off tests, and spot checks are structurally insufficient for production risk management.
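    What does measuring this look like in practice? Below is a minimal sketch in Python, assuming execution traces are already being captured as ordered lists of step or tool names; the step name, baseline, and tolerance are illustrative placeholders, not values from the CIO case study.

    ```python
    # Illustrative sketch: measure how often a required step appears in agent traces.
    # Assumes each trace is a list of step/tool names recorded per execution; the
    # step name, baseline rate, and tolerance are hypothetical.

    from typing import Iterable, List


    def step_coverage(traces: Iterable[List[str]], step: str) -> float:
        """Fraction of executions in which `step` was run at least once."""
        traces = list(traces)
        if not traces:
            return 0.0
        return sum(1 for trace in traces if step in trace) / len(traces)


    def coverage_drift(recent: Iterable[List[str]], baseline_rate: float,
                       step: str = "income_verification",
                       tolerance: float = 0.05) -> bool:
        """Flag a drop in step coverage relative to an established behavioural baseline."""
        return (baseline_rate - step_coverage(recent, step)) > tolerance
    ```

    The exact threshold matters less than the change in posture: the coverage figure is computed continuously from production traces, which is how a 20 to 30 per cent skip rate becomes visible long before an incident does.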

    From Policy to Diagnostics

    When maddaisy covered the shadow AI governance challenge earlier this month, one theme was clear: governance frameworks are necessary but not sufficient. They define ownership, policies, escalation paths, and controls. What they often lack is an operational mechanism to answer a deceptively simple question: has the agent’s behaviour actually changed?

    Without that evidence, governance operates in the dark. Policy defines what should happen. Diagnostics establish what is actually happening. When measurement is absent, controls develop blind spots in precisely the live systems where agentic risk tends to accumulate.

    The Cloud Security Alliance has begun framing this as “cognitive degradation” — a systemic risk that emerges gradually rather than through sudden failure. Carnegie Mellon’s Software Engineering Institute has similarly emphasised the need for continuous testing and evaluation discipline in complex AI-enabled systems, drawing parallels to how other high-risk software domains manage operational risk.

    What Practitioners Should Watch For

    The emerging consensus points toward several operational principles for managing agentic drift:

    Behavioural baselines over output checks. No single execution is representative. What matters is how behaviour shows up across repeated runs under similar conditions. Organisations need to establish baselines — not for what an agent should do in the abstract, but for how it has actually behaved under known conditions — and then monitor for sustained deviations.

    Separate configuration changes from behavioural evidence. Prompt updates, tool additions, and model upgrades are important signals, but they are not evidence of drift on their own. What matters is persistence: transient deviations are often noise in stochastic systems, while sustained behavioural shifts across time and conditions are where risk begins to emerge. A short sketch after this list shows one way to operationalise that persistence check.

    Treat agent behaviour as an operational signal. Internal audit teams are asking new questions about control and traceability. Regulators are paying closer attention to AI system behaviour. Platform teams are under growing pressure to demonstrate stability in live environments. “It looked fine in testing” is no longer a defensible operational posture, particularly in sectors — financial services, healthcare, compliance — where subtle behavioural changes carry real consequences.
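    To make the persistence point concrete, the sketch below assumes a behavioural metric, such as the step-coverage rate discussed earlier, is already computed for each time window; the baseline, tolerance, and window count are arbitrary illustrative values.

    ```python
    # Illustrative sketch: treat a deviation as drift only when it persists.
    # Consumes a time-ordered series of per-window behavioural metrics (e.g. weekly
    # step-coverage rates); baseline, tolerance, and window count are hypothetical.

    from collections import deque


    class PersistenceMonitor:
        """Alert only when a metric stays outside its baseline band for k consecutive windows."""

        def __init__(self, baseline: float, tolerance: float = 0.05, k: int = 3):
            self.baseline = baseline
            self.tolerance = tolerance
            self.window = deque(maxlen=k)

        def observe(self, value: float) -> bool:
            self.window.append(abs(value - self.baseline) > self.tolerance)
            # Short-lived blips are treated as noise; only a full run of deviations alerts.
            return len(self.window) == self.window.maxlen and all(self.window)


    monitor = PersistenceMonitor(baseline=0.99)
    alerts = [monitor.observe(rate) for rate in (0.98, 0.91, 0.90, 0.88)]
    # -> [False, False, False, True]: the shift only registers once it has persisted.
    ```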

    The Observability Gap

    This is, ultimately, the next chapter in the governance story maddaisy has been tracking. The first chapter — covered in the enforcement era analysis — was about moving from principles to rules. The second, examined through Deloitte’s enterprise data, was the gap between strategic confidence and operational readiness. This third chapter is more specific and more technical: the gap between having governance frameworks and having the observability infrastructure to make them work.

    The goal is not to eliminate drift. Drift is inevitable in adaptive systems. The goal is to detect it early — while it is still measurable, explainable, and correctable — rather than discovering it through incidents, audits, or post-mortems. Organisations that build this capability will be better positioned to deploy agentic AI at scale with confidence. Those that do not will continue to be surprised by systems that appeared stable, until they were not.

    For consultants advising on enterprise AI deployments, the implication is practical: governance reviews that stop at policy documentation are incomplete. The question to ask is not just whether a client has an AI governance framework, but whether they can tell you how their agents are behaving today compared to three months ago. If the answer is silence, that is where the work begins.

  • Shadow AI and the Year-Two Governance Challenge

    In more than a third of organisations, employees have already used AI tools without the organisation’s knowledge or permission. As enforcement deadlines tighten, the gap between what staff are doing and what leadership has sanctioned is becoming one of enterprise AI’s most urgent — and most overlooked — governance problems.

    The term “shadow AI” describes AI tools adopted by employees outside official channels — free-tier large language models used to draft client communications, image generators repurposed for internal presentations, coding assistants plugged into development environments without security review. It is the AI equivalent of shadow IT, but with two critical differences: the data exposure risks are significantly higher, and adoption is moving faster than anything IT departments have previously managed.

    According to the State of Information Security Report 2025 from ISMS.online, 37% of organisations surveyed said employees had already used generative AI tools without organisational permission or guidance. A further 34% identified shadow AI as a top emerging threat for the next 12 months. These are not projections about some distant risk — they describe what is happening now, in organisations that thought they had AI under control.

    The year-two reckoning

    The pattern is familiar to anyone who lived through the early cloud adoption cycle, but compressed into a fraction of the time. Year one — roughly 2024 into early 2025 — was characterised by experimentation. Departments trialled AI tools, leaders encouraged innovation, and governance was deferred in favour of speed. The implicit message in many organisations was: try things, move fast, we will sort out the rules later.

    Year two is when “later” arrives. And the numbers suggest most organisations are not ready for it. The same ISMS.online report found that 54% of respondents admitted their business had adopted AI technology too quickly and was now facing challenges in scaling it back or implementing it more responsibly. That figure represents a majority of enterprises acknowledging, in effect, that they have accumulated governance debt they do not yet know how to service.

    This aligns with a pattern maddaisy has been tracking across several dimensions. Deloitte’s 2026 State of AI report revealed that organisations report growing strategic confidence in AI but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute responsibly. Shadow AI is the ground-level expression of that paradox: strategy says “adopt AI,” but operations never built the guardrails to manage what adoption actually looks like across thousands of employees making independent tool choices every day.

    Why traditional IT governance falls short

    The instinct in many organisations is to treat shadow AI like shadow IT — block unsanctioned tools, enforce approved vendor lists, and route everything through procurement. That approach, while understandable, misses what makes AI different.

    When an employee uses an unapproved project management tool, the risk is primarily operational: data silos, integration headaches, wasted licences. When an employee pastes client data, financial projections, or intellectual property into a free-tier language model, the risk is fundamentally different. That data may be used for model training, stored in jurisdictions with different privacy regimes, or exposed through security vulnerabilities the organisation has no visibility into.

    As CIO.com recently noted, boards are increasingly recognising that AI is not waiting for permission — it is already shaping decisions through vendor systems, employee-adopted tools, and embedded algorithms that grow more powerful without explicit organisational consent. The question for governance teams is not whether employees are using unsanctioned AI. It is how much sensitive data has already left the building.

    The regulatory dimension

    Shadow AI also creates a specific compliance exposure that many organisations have not fully mapped. As maddaisy examined last week, the EU AI Act reaches its most consequential enforcement milestone in August 2026, with requirements for high-risk AI systems carrying penalties of up to €35 million or 7% of global annual turnover. Colorado’s AI Act takes effect in June 2026 with its own set of requirements around algorithmic discrimination and impact assessments.

    The compliance challenge with shadow AI is that an organisation cannot demonstrate responsible use of systems it does not know exist. If an employee in a hiring function uses an unsanctioned AI tool to screen CVs, or a financial analyst uses a free language model to generate risk assessments, the organisation may be deploying high-risk AI — as defined by regulators — without any of the documentation, monitoring, or impact assessment that compliance requires.

    This is not a theoretical concern. It is a direct consequence of the gap between adoption speed and governance maturity that maddaisy has documented across the enterprise AI landscape.

    What a pragmatic response looks like

    The organisations handling shadow AI most effectively are not the ones deploying the heaviest restrictions. They are the ones that recognised early that prohibition does not work when the tools are free, browser-based, and genuinely useful.

    A pragmatic governance approach has several components. First, visibility: understanding what AI tools employees are actually using, through network monitoring, surveys, and — critically — creating an environment where people feel safe disclosing their usage rather than hiding it. Second, clear usage policies that distinguish between acceptable and unacceptable use cases, specifying what data categories must never enter external AI systems regardless of the tool’s provenance. A rough sketch of what the visibility step can look like in practice appears after this list.

    Third, an approved toolset that is genuinely competitive with the free alternatives. One of the most common drivers of shadow AI is that official enterprise AI tools are slower, more restricted, or simply worse than what employees can access on their own. If the sanctioned option requires a three-week procurement process while ChatGPT is a browser tab away, governance has already lost.

    Fourth, an established management framework. ISO 42001 provides a structured approach to establishing an AI management system, covering policy, roles, impact assessment, and continuous improvement through the familiar Plan-Do-Check-Act cycle. It is not a silver bullet, but it offers a starting point for organisations that currently have no systematic approach to AI governance beyond hoping the problem does not escalate.
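    On the first component, visibility, even a crude pass over existing network telemetry is better than nothing. The sketch below is illustrative only: it assumes web proxy logs with one user and domain per line, and the domain list is a short, hypothetical stand-in for the longer, maintained list a real deployment would need.

    ```python
    # Illustrative sketch: a first pass at shadow AI visibility from web proxy logs.
    # Log format (one "user domain" pair per line) and the domain list are assumptions.

    from collections import defaultdict

    AI_TOOL_DOMAINS = {
        "chat.openai.com": "ChatGPT",
        "gemini.google.com": "Gemini",
        "claude.ai": "Claude",
    }


    def shadow_ai_usage(log_lines):
        """Count distinct users seen reaching known AI tool domains."""
        users_by_tool = defaultdict(set)
        for line in log_lines:
            user, _, domain = line.strip().partition(" ")
            tool = AI_TOOL_DOMAINS.get(domain)
            if tool:
                users_by_tool[tool].add(user)
        return {tool: len(users) for tool, users in users_by_tool.items()}


    sample = ["alice chat.openai.com", "bob claude.ai", "alice claude.ai"]
    print(shadow_ai_usage(sample))  # {'ChatGPT': 1, 'Claude': 2}
    ```

    The output is an input to the disclosure conversation described above, not a blocklist: the aim is to know what is actually in use, not to drive it further underground.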

    The window is narrowing

    The uncomfortable reality for many enterprises is that shadow AI has already created facts on the ground. Data has been shared with external models. Workflows have been built around unsanctioned tools. Employees have integrated AI into their daily routines in ways that would be disruptive to simply switch off.

    The year-two governance challenge is not about preventing AI adoption — that ship has sailed. It is about catching up with what has already happened, building the visibility and policy infrastructure to manage it going forward, and doing so before enforcement deadlines turn a governance gap into a compliance crisis.

    For consultants and practitioners advising organisations through this transition, the message is straightforward: audit first, policy second, technology third. The organisations that will navigate this best are not the ones with the most sophisticated AI strategies. They are the ones that know, concretely and completely, what AI is actually being used within their walls — and by whom.

  • The Governance Gap No One Is Closing: Why Agentic AI Is Outrunning Enterprise Oversight

    Three-quarters of enterprises plan to deploy agentic AI within two years. Only one in five has a mature governance model for it. That arithmetic should concern anyone responsible for enterprise technology strategy.

    The figures come from Deloitte’s 2026 State of AI in the Enterprise report, and they represent something more specific than the familiar story of AI adoption outpacing regulation. The challenge with agentic AI is not that rules do not exist — as maddaisy recently examined, the EU AI Act, Colorado’s AI Act, and a growing patchwork of global regulations are creating real enforcement deadlines. The challenge is that agentic systems demand a fundamentally different kind of oversight, and most organisations have not built the operational machinery to provide it.

    What makes agentic AI different

    Conventional AI systems — including the generative AI tools that have dominated enterprise adoption over the past two years — operate in an advisory mode. They suggest, summarise, draft, and classify. A human reviews the output and decides what to do with it. Governance for these systems, while imperfect, fits within existing frameworks: you audit the model, monitor outputs, and maintain a human in the decision loop.

    Agentic AI breaks that model. These systems are designed to plan, execute, and adjust autonomously — booking flights, approving procurement decisions, triaging customer complaints, or managing collections workflows without waiting for human sign-off. Oracle’s new agentic banking platform, launched in February 2026, illustrates the trajectory: domain-specific agents handle loan originations, credit decisioning, and compliance checks, with human oversight positioned as a “human-in-the-loop” role rather than a gatekeeping one.

    The distinction matters because it changes where governance must operate. With advisory AI, oversight happens after the model produces an output and before a human acts on it. With agentic AI, the system is the actor. Governance must be embedded in real time — monitoring agent behaviour as it happens, enforcing boundaries on what an agent can and cannot do, and maintaining audit trails that capture not just decisions but the full chain of reasoning and actions that led to them.

    The 21% problem

    Deloitte’s finding that only 21% of companies have a mature agentic AI governance model is striking, but the detail beneath it is more revealing. In Singapore, where deployment ambitions are among the highest globally — 72% of businesses plan to deploy agentic AI across multiple operational areas within two years, up from 15% today — the mature governance figure drops to just 14%.

    As maddaisy noted in its analysis of the broader Deloitte report, this fits a wider pattern: organisations are increasingly confident in their AI strategy but declining in readiness on the operational foundations needed to execute it. The agentic governance gap is perhaps the sharpest expression of this paradox — a technology that is advancing from pilot to production while the controls needed to run it safely remain in early stages.

    Half of Singapore respondents reported using a patchwork of public and internal proprietary frameworks to assess agent risk and performance. That is not a governance model — it is improvisation.

    Why existing frameworks fall short

    The AI Trends Report 2026, published by statworx and AI Hub Frankfurt, identifies three operational disciplines that are becoming foundational for reliable agentic AI: AI governance, DataOps, and what the report terms AgentOps — the operational layer for managing autonomous AI agents in production.

    AgentOps is a useful concept because it captures what most enterprise governance frameworks currently lack. Traditional AI governance focuses on model development: training data quality, bias testing, documentation, and approval workflows before deployment. That is necessary but insufficient for systems that learn, adapt, and take actions in production environments.

    Agentic systems require runtime governance: clear boundaries on agent autonomy (what decisions can the agent make independently, and which require escalation?), real-time monitoring of agent behaviour against expected parameters, kill switches for when agents drift outside acceptable bounds, and comprehensive audit trails that regulators can inspect after the fact.
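    What that runtime layer could look like is easier to see in code. The sketch below is a rough illustration under assumed names: the action lists, the approval callback, and the audit file path are hypothetical, not any particular platform’s API. It shows the shape of the controls: an allowlist of autonomous actions, human approval for sensitive ones, a kill switch, and an append-only record of every attempt.

    ```python
    # Illustrative sketch of runtime guardrails around agent actions. Action names,
    # the approval callback, and the audit file are hypothetical placeholders.

    import datetime
    import json

    ALLOWED_ACTIONS = {"lookup_customer", "draft_email"}          # agent may act alone
    ESCALATE_ACTIONS = {"approve_credit", "commit_procurement"}   # require human sign-off


    class AgentGuardrail:
        def __init__(self, audit_log_path="agent_audit.jsonl"):
            self.audit_log_path = audit_log_path
            self.killed = False  # kill switch: set True to halt all agent actions

        def execute(self, action, payload, run_action, request_approval):
            decision = "blocked"
            if self.killed:
                decision = "halted_by_kill_switch"
            elif action in ALLOWED_ACTIONS:
                decision = "autonomous"
                run_action(action, payload)
            elif action in ESCALATE_ACTIONS and request_approval(action, payload):
                decision = "approved_by_human"
                run_action(action, payload)

            # Append-only audit record of what was attempted and how it was decided.
            record = {
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "action": action,
                "payload": payload,
                "decision": decision,
            }
            with open(self.audit_log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return decision
    ```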

    The EU AI Act’s requirements for high-risk systems — documented risk management, technical logging, human oversight mechanisms, and conformity assessments — implicitly assume this kind of operational infrastructure. But most organisations have not yet translated those requirements into engineering reality.

    Deployment is not waiting for governance

    The uncomfortable truth is that agentic AI is entering production regardless of whether governance is ready. Oracle is shipping banking agents now. Companies like AMD and Heathrow Airport are deploying autonomous agents in customer experience roles. Gartner predicts agentic systems will autonomously resolve 80% of customer service issues by 2028.

    Constellation Research offers a useful counterweight to the hype, arguing that agentic AI is “more of a feature than a revolution” and that the real measure of value is decision velocity — how quickly smaller decision trees and processes can be automated at scale. This framing is helpful because it reduces the abstraction. An AI agent rebooking a flight is not a paradigm shift; it is a process automation with a more sophisticated reasoning layer. But that reasoning layer is precisely what makes governance harder. The agent is not following a static script — it is making contextual judgements, and those judgements need oversight.

    What the governance gap actually costs

    The business case for closing the governance gap is not primarily about regulatory fines, though those are real. It is about operational risk. When an agentic system autonomously commits to a procurement decision, misprices a financial product, or gives a customer incorrect information with real-world consequences, the liability question is immediate and the reputational exposure is direct.

    It is also about scaling. Deloitte’s data shows that companies with stronger governance foundations are deploying agentic AI more successfully — they start with lower-risk use cases, build governance capabilities alongside deployment, and scale deliberately. Organisations that skip the governance step find themselves either slowing down when something goes wrong or, worse, not knowing that something has gone wrong until a regulator or customer tells them.

    What needs to happen

    The gap between agentic AI deployment and agentic AI governance is not going to close on its own. Three practical steps can narrow it.

    Define agent autonomy boundaries explicitly. For every agentic AI deployment, organisations need a clear specification of what the agent can do independently, what requires human approval, and what is prohibited. These boundaries should be codified in the system, not just written in a policy document. The Oracle banking platform’s “human-in-the-loop” architecture is one model, but even that needs specificity about when and how the loop engages.

    Invest in runtime monitoring, not just pre-deployment testing. The governance challenge with agentic AI is that it operates continuously and adapts to context. Pre-deployment audits are necessary but not sufficient. Organisations need real-time monitoring that tracks agent decisions against expected parameters and flags anomalies before they compound.

    Build audit trails as engineering infrastructure. When a regulator asks how an agent arrived at a specific decision — and under the EU AI Act, they will — the organisation needs to produce a complete chain of the agent’s reasoning, data inputs, and actions. This is not a reporting challenge; it is an engineering one that needs to be designed into the system from the start, not retrofitted after deployment.
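    As a rough illustration of what “designed in from the start” can mean, the sketch below writes the trail as the run executes rather than reconstructing it afterwards. The field names, the JSON Lines format, and the credit-review example are assumptions made for the sketch.

    ```python
    # Illustrative sketch: record the chain of reasoning, inputs, and actions while
    # the agent runs. Field names and the JSONL format are hypothetical choices.

    import datetime
    import json
    import uuid


    class RunTrail:
        def __init__(self, agent, model_version, path="runs.jsonl"):
            self.path = path
            self.record = {
                "run_id": str(uuid.uuid4()),
                "agent": agent,
                "model_version": model_version,
                "started": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "steps": [],
            }

        def log_step(self, reasoning, data_inputs, action, result):
            self.record["steps"].append({
                "reasoning": reasoning,      # why the agent chose this step
                "data_inputs": data_inputs,  # provenance of what it consulted
                "action": action,
                "result": result,
            })

        def close(self):
            with open(self.path, "a") as f:
                f.write(json.dumps(self.record) + "\n")


    trail = RunTrail(agent="credit_review", model_version="2026-02-01")
    trail.log_step("Income not yet verified for this applicant", {"source": "payroll_api"},
                   "verify_income", "verified")
    trail.close()
    ```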

    The agentic AI governance gap is not a future problem. It is a present one, widening with every new deployment. The organisations that treat governance as a technical discipline — building it into the engineering of their agentic systems rather than bolting it on as a compliance afterthought — will have a structural advantage as the technology matures. Those that do not will discover, as many enterprises have with earlier waves of technology adoption, that the cost of retrofitting oversight always exceeds the cost of building it in.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. Non-compliance carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher. For context, that exceeds the maximum GDPR fine by a significant margin.

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like, far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI event hosted in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed. A minimal sketch of what an inventory record might capture follows this list.

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.
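    On the first of these steps, the inventory, even a spreadsheet-grade record goes a long way. The sketch below suggests the minimum fields such a record might capture; the field names and the provisional risk classification are illustrative, not a legal mapping of the EU AI Act or any state statute.

    ```python
    # Illustrative sketch of a minimal AI system inventory record. Fields and the
    # example entry are hypothetical, not a compliance template.

    from dataclasses import dataclass, asdict
    from typing import List


    @dataclass
    class AISystemRecord:
        name: str
        business_owner: str
        purpose: str
        decision_impact: str        # e.g. "influences hiring", "prices credit"
        jurisdictions: List[str]
        data_categories: List[str]  # personal or sensitive data the system touches
        risk_class: str             # the organisation's own provisional classification
        vendor_or_internal: str
        last_reviewed: str = "never"


    inventory = [
        AISystemRecord(
            name="cv-screening-assistant",
            business_owner="HR Director",
            purpose="Shortlists candidates from submitted CVs",
            decision_impact="influences hiring",
            jurisdictions=["EU", "US-CO"],
            data_categories=["personal data", "employment history"],
            risk_class="high (provisional)",
            vendor_or_internal="vendor",
        )
    ]
    print([asdict(record) for record in inventory])
    ```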

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.

  • Strategically Ready, Operationally Stuck: What Deloitte’s 2026 AI Report Reveals About the Enterprise Gap

    Deloitte’s 2026 State of AI in the Enterprise report, based on a survey of 3,235 senior leaders across 24 countries, delivers a finding that should give every consulting practitioner pause: more companies than ever believe their AI strategy is sound, but fewer feel ready to actually execute it.

    The gap between strategic confidence and operational readiness is the defining tension in enterprise AI right now — and it has implications for how consultancies sell, deliver, and staff their AI practices.

    The preparedness paradox

    According to Deloitte’s survey, 42% of companies now consider their strategy highly prepared for AI adoption, up from the previous year. That sounds encouraging until you read the next line: those same organisations report feeling less prepared than before on infrastructure, data, risk management, and talent.

    This is not a minor statistical wrinkle. It suggests that enterprises have become more fluent in talking about AI — they can articulate a vision, identify use cases, perhaps even secure board-level backing — but the operational foundations needed to move from pilot to production remain weak. Worker access to AI rose by 50% in 2025, and the number of organisations with 40% or more of their AI projects in production is expected to double within six months. Whether the infrastructure exists to support that ambition is another matter entirely.

    The skills picture reinforces the point. Insufficient worker skills were identified as the biggest barrier to integrating AI into existing workflows. The most common response — educating the broader workforce to raise AI fluency (53%) — is necessary but not sufficient. Far fewer organisations are redesigning roles, workflows, or career paths around AI. Education tells people what AI can do. Restructuring tells the organisation how to use it.

    Sovereign AI moves from policy to procurement

    One of the report’s more significant findings is the emergence of sovereign AI as a practical enterprise concern, not just a political talking point. In Singapore, 77% of businesses surveyed said that data residency and in-country or in-region compute considerations are now important to their strategic planning.

    This echoes a dynamic that maddaisy recently examined in the European context, where Capgemini’s CEO Aiman Ezzat outlined a pragmatic four-layer sovereignty framework while signing partnerships with all three major US hyperscalers. Deloitte’s data suggests that the same tension — between the desire for local control and the reality of global infrastructure — is playing out across Asia Pacific and the Middle East, not just Europe.

    As Computer Weekly reported, organisations in the Middle East are approaching a similar inflection point, with sovereign and agentic AI expected to define the next phase of digital transformation. The pattern is consistent: governments want control, enterprises want capability, and the consulting industry is positioning itself to broker the compromise.

    Agentic AI outpaces its guardrails

    Perhaps the most striking data point in the Deloitte report concerns agentic AI — systems designed to plan, execute, and optimise tasks with minimal human oversight. Usage is set to rise sharply over the next two years, but only one in five companies has a mature governance model for autonomous AI agents.

    In Singapore, the numbers are particularly stark: 72% of businesses plan to deploy agentic AI across several operational areas within two years, up from just 15% today. That is a nearly fivefold increase in deployment with governance frameworks that are, by the report’s own assessment, not yet fit for purpose.

    The use cases are real enough. Deloitte cites financial services firms using AI agents to capture meeting actions and track follow-through, airlines deploying agents for common customer transactions, and manufacturers using autonomous systems to optimise product development trade-offs. These are not speculative applications — they are in production. But the governance question is not academic either. When an AI agent autonomously rebooks a flight or commits to a procurement decision, the question of accountability, audit trails, and regulatory compliance becomes urgent.

    Accenture stops counting

    A useful counterpoint to Deloitte’s enterprise survey comes from the supply side. In December, Accenture announced that it would stop separately reporting its advanced AI bookings — a category covering generative, agentic, and physical AI — because the technology had become “so pervasive” it was embedded across nearly everything the firm delivers.

    CEO Julie Sweet framed this as a sign of maturity: AI is no longer a distinct workstream but a feature of all client engagements. Advanced AI bookings hit $2.2 billion in the first quarter of fiscal 2026, double the prior year. The company has reached its target of 80,000 AI and data professionals.

    There is a less generous reading, of course. Stopping disclosure also makes it harder for investors and analysts to track whether AI is generating new revenue or simply being relabelled within existing services. But taken alongside Accenture’s acquisition of Faculty, a UK-based AI firm, in February 2026, the direction is clear: the major consultancies are absorbing AI into their core delivery model rather than treating it as a separate practice.

    What practitioners should watch

    The Deloitte report’s most useful contribution is not its optimism about AI’s potential — the industry has no shortage of that — but its honest accounting of where enterprises actually stand. Three signals are worth tracking.

    First, the strategy-execution gap will drive consulting demand, but not the kind that involves slide decks and maturity assessments. Enterprises need help with the operational plumbing: data architecture, infrastructure modernisation, and workflow redesign. The consultancies that can deliver engineering alongside strategy will win the next phase.

    Second, sovereign AI is becoming a procurement criterion, not just a policy aspiration. For firms operating across multiple jurisdictions — which includes most enterprise consulting clients — this means every AI deployment now carries a compliance dimension that did not exist two years ago.

    Third, the governance gap around agentic AI is a genuine risk, not a theoretical concern. As autonomous systems move from pilots to production, the organisations that invested early in oversight frameworks will have a structural advantage. Those that did not will find themselves either slowing down or taking on liability they have not fully priced.

    The AI story in 2026 is less about whether the technology works — increasingly, it does — and more about whether organisations can build the operational, regulatory, and governance foundations to use it responsibly at scale. The Deloitte data suggests most are not there yet, but they think they are. That gap is where the real work begins.