Tag: deloitte

  • The Strategy-Bandwidth Gap: Why CSOs Are Being Sidelined on Their Own AI Agendas

    Ninety-five per cent of chief strategy officers expect AI and technology-led disruption to materially reshape their strategic priorities this year. Yet only 28 per cent co-lead their organisation’s AI-related decisions. That gap — between expecting AI to change everything and actually having the authority to steer it — is the central finding of Deloitte’s 2026 Chief Strategy Officer Survey, and it has implications well beyond the strategy function.

    For maddaisy.com’s consulting audience, the data confirms a pattern that has been building for months. The executives whose job description is literally to chart the company’s future are being outpaced by the very transformation they are supposed to lead.

    The bandwidth problem is structural, not personal

    Deloitte’s survey paints a picture of a role under strain. More than half of CSOs report managing too many priorities with too little time. Approximately half have five or fewer direct reports. Many rely on rotating external support — often consultants — to advance critical initiatives, which ironically consumes more coordination time and leaves less room for the strategic thinking the role demands.

    Despite this, expectations continue to expand. Nearly two-thirds of CSOs now lead cross-functional transformation efforts, and more than half drive enterprise-wide agendas that stretch well beyond the role’s traditional remit. The mandate is growing; the resources are not.

    Only 35 per cent of CSOs say they co-lead or fully own decision-making for their organisation’s top priorities. As Gagan Chawla, Deloitte’s US Business Strategy Practice leader, puts it: strategy leaders should “reimagine their mandate and champion governance that aligns decision-making power with enterprise priorities to help ensure strategy drives value, not just insight.”

    That is a polite way of saying many CSOs are producing analysis that nobody with execution authority is acting on.

    The AI authority gap

    The most striking disconnect is on AI specifically. Nearly every CSO surveyed expects AI to reshape competitive dynamics. Yet the survey finds that only 28 per cent co-lead enterprise AI decisions, and just 16 per cent say their organisation uses AI to fundamentally reimagine lines of business or create new competitive advantages.

    This sits uncomfortably alongside data maddaisy.com has been tracking. PwC’s 2026 CEO Survey found that 56 per cent of chief executives cannot point to measurable revenue gains from AI. If the executives responsible for enterprise strategy are not materially involved in AI decisions, the ROI gap starts to look less like a technology problem and more like an organisational design failure. AI investments are being made by technology teams, approved by finance, and executed by operations — while the people tasked with ensuring these investments align with long-term competitive positioning watch from the sidelines.

    The Deloitte data suggests momentum is building — 51 per cent of CSOs now view AI as a strategic partner that enhances insight and accelerates execution, and 61 per cent report investing in AI literacy. But viewing AI as useful and having authority over its deployment are very different things.

    Where this connects to consulting

    The CSO bandwidth gap has direct consequences for the consulting industry. Strategy firms have traditionally sold to the strategy function — providing the research, analysis, and frameworks that CSOs use to set direction. If those CSOs lack the authority to act on strategic recommendations, the value proposition of strategy consulting shifts.

    This helps explain the trend maddaisy.com examined last week in the shift from billable hours to outcome-based consulting. When McKinsey ties a third of its revenue to measurable business outcomes rather than advisory engagements, it is partly responding to a reality where clients’ strategy functions cannot translate advice into action without hands-on support. The consulting engagement is moving from “here is what you should do” to “let us do it with you” — and that shift is driven not just by AI capabilities but by the operational reality that internal strategy teams are stretched too thin to execute.

    Accenture’s recent disclosure reinforces the point from the technology side. The firm announced in December that it would stop separately reporting advanced AI bookings because AI is now “embedded in some way across nearly everything we do.” With $11.5 billion in AI bookings across 11,000 projects and revenue of $4.8 billion, Accenture has reached a point where AI is infrastructure, not a standalone initiative. For CSOs trying to “own” the AI strategy, this creates an additional challenge: when AI permeates every function, there is no single strategy to own.

    Confidence without control

    Perhaps the most telling number in the Deloitte survey is the confidence gap. Seventy-two per cent of CSOs are optimistic about their organisation’s prospects, compared with just 24 per cent who feel the same about the global economy. Strategy leaders believe their companies can win despite external uncertainty — but this confidence sits alongside an admission that they lack the bandwidth and authority to ensure it happens.

    Adam Giblin, Deloitte’s US Chief Strategy Officer Programme lead, frames it directly: “CSOs are navigating a world where uncertainty isn’t episodic, it’s the status quo.” The implication is that strategy can no longer be a periodic exercise — an annual planning cycle punctuated by quarterly reviews. It needs to be continuous, embedded, and authoritative.

    For organisations that have not yet reconciled the CSO’s expanding mandate with actual decision-making power, the path forward is not complicated, but it is uncomfortable. It means giving strategy leaders a seat at the AI governance table — not as observers, but as co-owners. It means aligning resource allocation with strategic priorities rather than expecting a team of five to drive enterprise-wide transformation. And it means recognising that the AI ROI gap is, in significant part, a strategy execution gap.

    The companies that close it will not be the ones with the most advanced AI models. They will be the ones where the person responsible for competitive positioning actually has the authority to shape how AI is deployed.

  • The Governance Gap No One Is Closing: Why Agentic AI Is Outrunning Enterprise Oversight

    Three-quarters of enterprises plan to deploy agentic AI within two years. Only one in five has a mature governance model for it. That arithmetic should concern anyone responsible for enterprise technology strategy.

    The figures come from Deloitte’s 2026 State of AI in the Enterprise report, and they represent something more specific than the familiar story of AI adoption outpacing regulation. The challenge with agentic AI is not that rules do not exist — as maddaisy.com recently examined, the EU AI Act, Colorado’s AI Act, and a growing patchwork of global regulations are creating real enforcement deadlines. The challenge is that agentic systems demand a fundamentally different kind of oversight, and most organisations have not built the operational machinery to provide it.

    What makes agentic AI different

    Conventional AI systems — including the generative AI tools that have dominated enterprise adoption over the past two years — operate in an advisory mode. They suggest, summarise, draft, and classify. A human reviews the output and decides what to do with it. Governance for these systems, while imperfect, fits within existing frameworks: you audit the model, monitor outputs, and maintain a human in the decision loop.

    Agentic AI breaks that model. These systems are designed to plan, execute, and adjust autonomously — booking flights, approving procurement decisions, triaging customer complaints, or managing collections workflows without waiting for human sign-off. Oracle’s new agentic banking platform, launched in February 2026, illustrates the trajectory: domain-specific agents handle loan originations, credit decisioning, and compliance checks, with human oversight positioned as a “human-in-the-loop” role rather than a gatekeeping one.

    The distinction matters because it changes where governance must operate. With advisory AI, oversight happens after the model produces an output and before a human acts on it. With agentic AI, the system is the actor. Governance must be embedded in real time — monitoring agent behaviour as it happens, enforcing boundaries on what an agent can and cannot do, and maintaining audit trails that capture not just decisions but the full chain of reasoning and actions that led to them.

    The 21% problem

    Deloitte’s finding that only 21% of companies have a mature agentic AI governance model is striking, but the detail beneath it is more revealing. In Singapore, where deployment ambitions are among the highest globally — 72% of businesses plan to deploy agentic AI across multiple operational areas within two years, up from 15% today — the mature governance figure drops to just 14%.

    As maddaisy.com noted in its analysis of the broader Deloitte report, this fits a wider pattern: organisations are increasingly confident in their AI strategy but report declining readiness on the operational foundations needed to execute it. The agentic governance gap is perhaps the sharpest expression of this paradox — a technology that is advancing from pilot to production while the controls needed to run it safely remain in early stages.

    Half of Singapore respondents reported using a patchwork of public and internal proprietary frameworks to assess agent risk and performance. That is not a governance model — it is improvisation.

    Why existing frameworks fall short

    The AI Trends Report 2026, published by statworx and AI Hub Frankfurt, identifies three operational disciplines that are becoming foundational for reliable agentic AI: AI governance, DataOps, and what the report terms AgentOps — the operational layer for managing autonomous AI agents in production.

    AgentOps is a useful concept because it captures what most enterprise governance frameworks currently lack. Traditional AI governance focuses on model development: training data quality, bias testing, documentation, and approval workflows before deployment. That is necessary but insufficient for systems that learn, adapt, and take actions in production environments.

    Agentic systems require runtime governance: clear boundaries on agent autonomy (what decisions can the agent make independently, and which require escalation?), real-time monitoring of agent behaviour against expected parameters, kill switches for when agents drift outside acceptable bounds, and comprehensive audit trails that regulators can inspect after the fact.
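    To make one of those controls concrete, a kill switch tied to explicit operating bounds can be sketched in a few lines. This is an illustrative sketch only: the `BoundedAgentRunner` wrapper, its `step_fn` callable, and the spend cap are invented for the example, not taken from any vendor framework.

```python
class KillSwitchTripped(Exception):
    """Raised when an agent exceeds its declared operating bounds."""

class BoundedAgentRunner:
    """Run an agent step by step, halting it if it drifts outside bounds."""

    def __init__(self, step_fn, max_steps: int, max_total_spend: float):
        self.step_fn = step_fn              # one agent step -> (action, cost)
        self.max_steps = max_steps          # hard cap on autonomous iterations
        self.max_total_spend = max_total_spend
        self.halted = False

    def run(self):
        total_spend, history = 0.0, []
        for step in range(self.max_steps):
            action, cost = self.step_fn(step)
            total_spend += cost
            if total_spend > self.max_total_spend:
                self.halted = True          # trip the kill switch
                raise KillSwitchTripped(
                    f"spend {total_spend} exceeded cap {self.max_total_spend}")
            history.append(action)
            if action == "done":
                break
        return history

# A toy agent that spends 40 per step: the runner stops it on step 3.
runner = BoundedAgentRunner(lambda s: (f"act-{s}", 40.0),
                            max_steps=10, max_total_spend=100.0)
try:
    runner.run()
except KillSwitchTripped:
    pass
assert runner.halted
```

    The point of the sketch is structural: the bound lives in the execution path, not in a policy document, so the agent physically cannot continue past it.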

    The EU AI Act’s requirements for high-risk systems — documented risk management, technical logging, human oversight mechanisms, and conformity assessments — implicitly assume this kind of operational infrastructure. But most organisations have not yet translated those requirements into engineering reality.

    Deployment is not waiting for governance

    The uncomfortable truth is that agentic AI is entering production regardless of whether governance is ready. Oracle is shipping banking agents now. Organisations such as AMD and Heathrow Airport are deploying autonomous agents in customer experience roles. Gartner predicts agentic systems will autonomously resolve 80% of customer service issues by 2028.

    Constellation Research offers a useful counterweight to the hype, arguing that agentic AI is “more of a feature than a revolution” and that the real measure of value is decision velocity — how quickly smaller decision trees and processes can be automated at scale. This framing is helpful because it reduces the abstraction. An AI agent rebooking a flight is not a paradigm shift; it is a process automation with a more sophisticated reasoning layer. But that reasoning layer is precisely what makes governance harder. The agent is not following a static script — it is making contextual judgements, and those judgements need oversight.

    What the governance gap actually costs

    The business case for closing the governance gap is not primarily about regulatory fines, though those are real. It is about operational risk. When an agentic system autonomously commits to a procurement decision, misprices a financial product, or gives a customer incorrect information with real-world consequences, the liability question is immediate and the reputational exposure is direct.

    It is also about scaling. Deloitte’s data shows that companies with stronger governance foundations are deploying agentic AI more successfully — they start with lower-risk use cases, build governance capabilities alongside deployment, and scale deliberately. Organisations that skip the governance step find themselves either slowing down when something goes wrong or, worse, not knowing that something has gone wrong until a regulator or customer tells them.

    What needs to happen

    The gap between agentic AI deployment and agentic AI governance is not going to close on its own. Three practical steps can narrow it.

    Define agent autonomy boundaries explicitly. For every agentic AI deployment, organisations need a clear specification of what the agent can do independently, what requires human approval, and what is prohibited. These boundaries should be codified in the system, not just written in a policy document. The Oracle banking platform’s “human-in-the-loop” architecture is one model, but even that needs specificity about when and how the loop engages.
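    Codifying such a boundary need not be elaborate. The sketch below shows one possible shape, a small policy object that classifies every proposed action as allowed, escalated, or denied; the `ActionPolicy` class, its thresholds, and the action names are all hypothetical.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"         # agent may act on its own
    ESCALATE = "escalate"   # requires human approval
    DENY = "deny"           # prohibited outright

class ActionPolicy:
    """Codified autonomy boundary: what an agent may do independently."""

    def __init__(self, auto_limit: float, escalate_limit: float,
                 prohibited: set):
        self.auto_limit = auto_limit          # act alone at or below this amount
        self.escalate_limit = escalate_limit  # human sign-off up to this amount
        self.prohibited = prohibited          # action types the agent may never take

    def evaluate(self, action_type: str, amount: float) -> Verdict:
        if action_type in self.prohibited:
            return Verdict.DENY
        if amount <= self.auto_limit:
            return Verdict.ALLOW
        if amount <= self.escalate_limit:
            return Verdict.ESCALATE
        return Verdict.DENY

policy = ActionPolicy(auto_limit=1_000, escalate_limit=50_000,
                      prohibited={"close_account"})
assert policy.evaluate("refund", 250) is Verdict.ALLOW
assert policy.evaluate("purchase_order", 20_000) is Verdict.ESCALATE
assert policy.evaluate("close_account", 10) is Verdict.DENY
```

    Because the boundary is an object the system consults on every action, it can also be versioned, tested, and audited like any other piece of infrastructure.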

    Invest in runtime monitoring, not just pre-deployment testing. The governance challenge with agentic AI is that it operates continuously and adapts to context. Pre-deployment audits are necessary but not sufficient. Organisations need real-time monitoring that tracks agent decisions against expected parameters and flags anomalies before they compound.
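    As a minimal illustration of what "tracking decisions against expected parameters" can mean in practice, the sketch below flags decision values that jump far above a rolling baseline of recent behaviour; the `AgentMonitor` class and its thresholds are invented for the example, and a production system would use a richer anomaly model.

```python
from collections import deque

class AgentMonitor:
    """Track a numeric signal from agent decisions (e.g. approval amounts)
    and flag values that deviate sharply from the recent baseline."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.values = deque(maxlen=window)  # rolling baseline of recent decisions
        self.threshold = threshold          # multiples of the mean that count as anomalous

    def observe(self, value: float) -> bool:
        """Record a decision value; return True if it should be flagged."""
        baseline = sum(self.values) / len(self.values) if self.values else None
        self.values.append(value)
        # Flag values far above the rolling mean of prior decisions.
        return baseline is not None and value > self.threshold * baseline

monitor = AgentMonitor(window=10, threshold=3.0)
for amount in [100, 120, 95, 110, 105]:
    assert not monitor.observe(amount)   # normal traffic, no flags
assert monitor.observe(2_000)            # sudden outlier is flagged
```

    The flag does not have to block the agent; routing the anomaly to a human reviewer before it compounds is the point.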

    Build audit trails as engineering infrastructure. When a regulator asks how an agent arrived at a specific decision — and under the EU AI Act, they will — the organisation needs to produce a complete chain of the agent’s reasoning, data inputs, and actions. This is not a reporting challenge; it is an engineering one that needs to be designed into the system from the start, not retrofitted after deployment.
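    One common engineering pattern for tamper-evident audit trails is a hash chain, where each entry commits to the hash of the entry before it, so deleting or altering history becomes detectable. The sketch below is a toy version under that assumption; the `AuditTrail` class and its field names are hypothetical, not drawn from any regulation or vendor product.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log of an agent's inputs, reasoning, and
    actions. Each entry embeds the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, inputs: dict, reasoning: str, action: str) -> dict:
        entry = {
            "ts": time.time(),
            "inputs": inputs,
            "reasoning": reasoning,
            "action": action,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False means an entry was altered or removed."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record({"customer": "C-123"}, "fare rule applies", "rebook_flight")
trail.record({"po": "P-9"}, "under auto-approval cap", "approve_purchase")
assert trail.verify()
trail.entries[0]["action"] = "refund_everything"   # tamper with history
assert not trail.verify()
```

    Capturing reasoning and inputs alongside the action at write time is what makes the trail useful to a regulator later; none of it can be reconstructed retrospectively.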

    The agentic AI governance gap is not a future problem. It is a present one, widening with every new deployment. The organisations that treat governance as a technical discipline — building it into the engineering of their agentic systems rather than bolting it on as a compliance afterthought — will have a structural advantage as the technology matures. Those that do not will discover, as many enterprises have with earlier waves of technology adoption, that the cost of retrofitting oversight always exceeds the cost of building it in.

  • Strategically Ready, Operationally Stuck: What Deloitte’s 2026 AI Report Reveals About the Enterprise Gap

    Deloitte’s 2026 State of AI in the Enterprise report, based on a survey of 3,235 senior leaders across 24 countries, delivers a finding that should give every consulting practitioner pause: more companies than ever believe their AI strategy is sound, but fewer feel ready to actually execute it.

    The gap between strategic confidence and operational readiness is the defining tension in enterprise AI right now — and it has implications for how consultancies sell, deliver, and staff their AI practices.

    The preparedness paradox

    According to Deloitte’s survey, 42% of companies now consider their strategy highly prepared for AI adoption, an increase on the previous year. That sounds encouraging until you read the next line: those same organisations report feeling less prepared than before on infrastructure, data, risk management, and talent.

    This is not a minor statistical wrinkle. It suggests that enterprises have become more fluent in talking about AI — they can articulate a vision, identify use cases, perhaps even secure board-level backing — but the operational foundations needed to move from pilot to production remain weak. Worker access to AI rose by 50% in 2025, and the share of organisations with 40% or more of their AI projects in production is expected to double within six months. Whether the infrastructure exists to support that ambition is another matter entirely.

    The skills picture reinforces the point. Insufficient worker skills were identified as the biggest barrier to integrating AI into existing workflows. The most common response — educating the broader workforce to raise AI fluency (53%) — is necessary but not sufficient. Far fewer organisations are redesigning roles, workflows, or career paths around AI. Education tells people what AI can do. Restructuring tells the organisation how to use it.

    Sovereign AI moves from policy to procurement

    One of the report’s more significant findings is the emergence of sovereign AI as a practical enterprise concern, not just a political talking point. In Singapore, 77% of businesses surveyed said that data residency and in-country or in-region compute considerations are now important to their strategic planning.

    This echoes a dynamic that maddaisy.com recently examined in the European context, where Capgemini’s CEO Aiman Ezzat outlined a pragmatic four-layer sovereignty framework while signing partnerships with all three major US hyperscalers. Deloitte’s data suggests that the same tension — between the desire for local control and the reality of global infrastructure — is playing out across Asia Pacific and the Middle East, not just Europe.

    As Computer Weekly reported, organisations in the Middle East are approaching a similar inflection point, with sovereign and agentic AI expected to define the next phase of digital transformation. The pattern is consistent: governments want control, enterprises want capability, and the consulting industry is positioning itself to broker the compromise.

    Agentic AI outpaces its guardrails

    Perhaps the most striking data point in the Deloitte report concerns agentic AI — systems designed to plan, execute, and optimise tasks with minimal human oversight. Usage is set to rise sharply over the next two years, but only one in five companies has a mature governance model for autonomous AI agents.

    In Singapore, the numbers are particularly stark: 72% of businesses plan to deploy agentic AI across several operational areas within two years, up from just 15% today. That is a nearly fivefold increase in deployment with governance frameworks that are, by the report’s own assessment, not yet fit for purpose.

    The use cases are real enough. Deloitte cites financial services firms using AI agents to capture meeting actions and track follow-through, airlines deploying agents for common customer transactions, and manufacturers using autonomous systems to optimise product development trade-offs. These are not speculative applications — they are in production. But the governance question is not academic either. When an AI agent autonomously rebooks a flight or commits to a procurement decision, the question of accountability, audit trails, and regulatory compliance becomes urgent.

    Accenture stops counting

    A useful counterpoint to Deloitte’s enterprise survey comes from the supply side. In December, Accenture announced that it would stop separately reporting its advanced AI bookings — a category covering generative, agentic, and physical AI — because the technology had become “so pervasive” it was embedded across nearly everything the firm delivers.

    CEO Julie Sweet framed this as a sign of maturity: AI is no longer a distinct workstream but a feature of all client engagements. Advanced AI bookings hit $2.2 billion in the first quarter of fiscal 2026, double the prior year. The company has reached its target of 80,000 AI and data professionals.

    There is a less generous reading, of course. Stopping disclosure also makes it harder for investors and analysts to track whether AI is generating new revenue or simply being relabelled within existing services. But taken alongside Accenture’s acquisition of Faculty, a UK-based AI firm, in February 2026, the direction is clear: the major consultancies are absorbing AI into their core delivery model rather than treating it as a separate practice.

    What practitioners should watch

    The Deloitte report’s most useful contribution is not its optimism about AI’s potential — the industry has no shortage of that — but its honest accounting of where enterprises actually stand. Three signals are worth tracking.

    First, the strategy-execution gap will drive consulting demand, but not the kind that involves slide decks and maturity assessments. Enterprises need help with the operational plumbing: data architecture, infrastructure modernisation, and workflow redesign. The consultancies that can deliver engineering alongside strategy will win the next phase.

    Second, sovereign AI is becoming a procurement criterion, not just a policy aspiration. For firms operating across multiple jurisdictions — which includes most enterprise consulting clients — this means every AI deployment now carries a compliance dimension that did not exist two years ago.

    Third, the governance gap around agentic AI is a genuine risk, not a theoretical concern. As autonomous systems move from pilots to production, the organisations that invested early in oversight frameworks will have a structural advantage. Those that did not will find themselves either slowing down or taking on liability they have not fully priced.

    The AI story in 2026 is less about whether the technology works — increasingly, it does — and more about whether organisations can build the operational, regulatory, and governance foundations to use it responsibly at scale. The Deloitte data suggests most are not there yet, but they think they are. That gap is where the real work begins.