Tag: enterprise-ai

  • The Strategy-Bandwidth Gap: Why CSOs Are Being Sidelined on Their Own AI Agendas

    Ninety-five per cent of chief strategy officers expect AI and technology-led disruption to materially reshape their strategic priorities this year. Yet only 28 per cent co-lead their organisation’s AI-related decisions. That gap — between expecting AI to change everything and actually having the authority to steer it — is the central finding of Deloitte’s 2026 Chief Strategy Officer Survey, and it has implications well beyond the strategy function.

    For maddaisy.com’s consulting audience, the data confirms a pattern that has been building for months. The executives whose job description is literally to chart the company’s future are being outpaced by the very transformation they are supposed to lead.

    The bandwidth problem is structural, not personal

    Deloitte’s survey paints a picture of a role under strain. More than half of CSOs report managing too many priorities with too little time. Approximately half have five or fewer direct reports. Many rely on rotating external support — often consultants — to advance critical initiatives, which ironically consumes more coordination time and leaves less room for the strategic thinking the role demands.

    Despite this, expectations continue to expand. Nearly two-thirds of CSOs now lead cross-functional transformation efforts, and more than half drive enterprise-wide agendas that stretch well beyond the role’s traditional remit. The mandate is growing; the resources are not.

    Only 35 per cent of CSOs say they co-lead or fully own decision-making for their organisation’s top priorities. As Gagan Chawla, Deloitte’s US Business Strategy Practice leader, puts it: strategy leaders should “reimagine their mandate and champion governance that aligns decision-making power with enterprise priorities to help ensure strategy drives value, not just insight.”

    That is a polite way of saying many CSOs are producing analysis that nobody with execution authority is acting on.

    The AI authority gap

    The most striking disconnect is on AI specifically. Nearly every CSO surveyed expects AI to reshape competitive dynamics. Yet the survey finds that only 28 per cent co-lead enterprise AI decisions, and just 16 per cent say their organisation uses AI to fundamentally reimagine lines of business or create new competitive advantages.

    This sits uncomfortably alongside data maddaisy.com has been tracking. PwC’s 2026 CEO Survey found that 56 per cent of chief executives cannot point to measurable revenue gains from AI. If the executives responsible for enterprise strategy are not materially involved in AI decisions, the ROI gap starts to look less like a technology problem and more like an organisational design failure. AI investments are being made by technology teams, approved by finance, and executed by operations — while the people tasked with ensuring these investments align with long-term competitive positioning watch from the sidelines.

    The Deloitte data suggests momentum is building — 51 per cent of CSOs now view AI as a strategic partner that enhances insight and accelerates execution, and 61 per cent report investing in AI literacy. But viewing AI as useful and having authority over its deployment are very different things.

    Where this connects to consulting

    The CSO bandwidth gap has direct consequences for the consulting industry. Strategy firms have traditionally sold to the strategy function — providing the research, analysis, and frameworks that CSOs use to set direction. If those CSOs lack the authority to act on strategic recommendations, the value proposition of strategy consulting shifts.

    This helps explain the trend maddaisy.com examined last week in the shift from billable hours to outcome-based consulting. When McKinsey ties a third of its revenue to measurable business outcomes rather than advisory engagements, it is partly responding to a reality where clients’ strategy functions cannot translate advice into action without hands-on support. The consulting engagement is moving from “here is what you should do” to “let us do it with you” — and that shift is driven not just by AI capabilities but by the operational reality that internal strategy teams are stretched too thin to execute.

    Accenture’s recent disclosure reinforces the point from the technology side. The firm announced in December that it would stop separately reporting advanced AI bookings because AI is now “embedded in some way across nearly everything we do.” With $11.5 billion in AI bookings across 11,000 projects and $4.8 billion in AI revenue, Accenture has reached a point where AI is infrastructure, not a standalone initiative. For CSOs trying to “own” the AI strategy, this creates an additional challenge: when AI permeates every function, there is no single strategy to own.

    Confidence without control

    Perhaps the most telling number in the Deloitte survey is the confidence gap. Seventy-two per cent of CSOs are optimistic about their organisation’s prospects, compared with just 24 per cent who feel the same about the global economy. Strategy leaders believe their companies can win despite external uncertainty — but this confidence sits alongside an admission that they lack the bandwidth and authority to ensure it happens.

    Adam Giblin, Deloitte’s US Chief Strategy Officer Programme lead, frames it directly: “CSOs are navigating a world where uncertainty isn’t episodic, it’s the status quo.” The implication is that strategy can no longer be a periodic exercise — an annual planning cycle punctuated by quarterly reviews. It needs to be continuous, embedded, and authoritative.

    For organisations that have not yet reconciled the CSO’s expanding mandate with actual decision-making power, the path forward is not complicated but it is uncomfortable. It means giving strategy leaders a seat at the AI governance table — not as observers, but as co-owners. It means aligning resource allocation with strategic priorities rather than expecting a team of five to drive enterprise-wide transformation. And it means recognising that the AI ROI gap is, in significant part, a strategy execution gap.

    The companies that close it will not be the ones with the most advanced AI models. They will be the ones where the person responsible for competitive positioning actually has the authority to shape how AI is deployed.

  • From Billable Hours to Shared Risk: Consulting’s AI-Driven Business Model Shift

    McKinsey’s 25,000 AI agents grabbed the headlines, but the more consequential number is the roughly one-third of the firm’s revenue now tied to outcome-based engagements. Across the industry, AI is not just changing how consultants work – it is rewriting how they get paid.

    When maddaisy.com examined McKinsey’s 25,000 AI agent deployment last week, the focus was on scale: was deploying one digital agent for every 1.6 human employees a first-mover advantage or an expensive experiment? The answer may depend less on the agent count and more on a quieter transformation happening in parallel – the shift from selling time to selling outcomes.

    The Model That Built an Industry Is Under Pressure

    For decades, consulting has run on a straightforward exchange: expertise for time, billed by the hour or the project. It is a model that has produced extraordinary margins, but it carries an inherent misalignment. Consultants profit from the complexity of a problem, not necessarily from solving it quickly.

    AI agents threaten that dynamic directly. When an algorithm can synthesise research in minutes that previously took analysts weeks, the hours-based model starts to look exposed. McKinsey’s own data – 1.5 million hours saved on search and synthesis work – quantifies exactly how much billable time AI has already removed from the equation.

    Rather than watching margins erode, McKinsey is pivoting. Speaking on the HBR IdeaCast in February, CEO Bob Sternfels confirmed that outcome-based engagements – where McKinsey co-invests alongside clients and ties fees to measurable business results – now account for roughly a third of the firm’s revenue. Two years ago, that figure was negligible.

    The Economics Only Work with AI

    This is where the 25,000 agents become strategically coherent. Outcome-based consulting is inherently riskier than fee-for-service; the firm only earns if the client succeeds. To make the economics work, you need two things: lower delivery costs and higher confidence in results.

    AI agents address both. QuantumBlack, McKinsey’s 1,700-person AI division, now drives 40% of the firm’s total work. Non-client-facing headcount has fallen 25%, while output from those teams has risen 10% – the “25 squared” model. The savings create the margin headroom needed to absorb the risk of outcome-based pricing.
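    The risk calculus described above can be made concrete with a toy expected-value comparison. All figures below are invented for illustration and bear no relation to McKinsey’s actual economics; the point is only that lower delivery cost and higher confidence in results both push the outcome model’s expected margin up.

    ```python
    # Toy comparison of fee-for-service vs outcome-based engagement economics.
    # Every number here is an illustrative assumption, not a disclosed figure.

    def fee_for_service(hours, rate, cost_per_hour):
        """Revenue is certain; margin depends only on delivery cost."""
        return hours * rate - hours * cost_per_hour

    def outcome_based(success_fee, p_success, delivery_cost):
        """Fees are earned only if the client outcome lands."""
        return success_fee * p_success - delivery_cost

    # Classic model: 5,000 hours at $500/hr, $200/hr fully loaded cost.
    classic = fee_for_service(5_000, 500, 200)              # $1.5m certain margin

    # Outcome model: a $4m success fee, first at 50-50 odds with conventional
    # delivery costs, then with AI cutting cost and lifting confidence.
    without_ai = outcome_based(4_000_000, 0.50, 1_500_000)  # $0.5m expected
    with_ai    = outcome_based(4_000_000, 0.75, 1_000_000)  # $2.0m expected

    print(classic, without_ai, with_ai)
    ```

    On these made-up numbers, outcome-based pricing is a losing trade against billable hours until AI improves both levers at once, which is the strategic coherence the article describes.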

    It is not just McKinsey making this calculation. BCG has deployed “forward-deployed consultants” who build AI tools directly within client organisations, effectively embedding methodology as software rather than slides. Capgemini has trained 310,000 employees on generative AI, though its agentic AI bookings only reached 10% of quarterly pipeline by Q4 2025. Accenture has stopped reporting AI bookings separately because, as the firm noted in its Q1 fiscal 2026 results, AI is now embedded in virtually every engagement.

    The Client Side of the Equation

    The timing is not coincidental. As maddaisy.com reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot demonstrate revenue gains from AI. When the client cannot prove value, the consultancy offering to underwrite outcomes holds a powerful negotiating position – essentially saying, “we believe in this enough to stake our fees on it.”

    The irony is that consultancies are asking clients to trust their AI capabilities while the industry’s own track record on AI delivery remains uneven. Deloitte’s own 2026 State of AI report found that 42% of organisations rate themselves “highly prepared” on AI strategy, yet feel markedly less ready on infrastructure, data governance, and talent.

    McKinsey faces this credibility gap from a different direction. The firm’s State of AI research found that only 5% of companies globally see AI hitting their bottom line. Positioning itself as the firm that can deliver measurable outcomes means McKinsey is, in effect, claiming to solve a problem that its own research says almost no one has solved.

    What Changes for Practitioners

    For consultants and the organisations that hire them, three practical implications stand out.

    Procurement shifts. If outcome-based pricing becomes the norm, procurement teams will need to evaluate consulting engagements more like joint ventures than service contracts. That means assessing the firm’s AI capabilities, data infrastructure, and delivery methodology – not just the partner’s credentials and the day rate.

    The talent model is splitting. Sternfels has been explicit that McKinsey wants “great consultants and/or great technologists, groomed to be both.” The traditional path – from analyst to associate to engagement manager – now runs alongside a technical track where consultants build and deploy AI systems. BCG’s vibe-coding consultants are an early version of this hybrid role.

    Governance becomes shared. When a consultancy co-invests in outcomes and deploys AI agents within a client’s operations, the governance question becomes bilateral. As maddaisy.com has covered extensively, the gap between AI deployment speed and governance maturity is already the defining risk of 2026. Outcome-based models widen this gap further, because neither party has clear precedent for who owns the risk when an AI agent produces flawed analysis that drives a business decision.

    The Bigger Picture

    The consulting industry has weathered previous disruptions – offshoring, automation, the rise of in-house strategy teams – by evolving its value proposition. This time, the change is structural. AI agents do not just reduce the cost of delivering advice; they make it possible to charge for results instead.

    McKinsey’s 25,000 agents are best understood not as a technology deployment but as a financial instrument – the infrastructure that underwrites a new revenue model. Whether the model works depends on something no agent can automate: whether clients actually achieve the outcomes both parties are betting on.

  • Shadow AI and the Year-Two Governance Challenge

    In more than a third of organisations, employees have already used AI tools without leadership’s knowledge or permission. As enforcement deadlines tighten, the gap between what staff are doing and what leadership has sanctioned is becoming one of enterprise AI’s most urgent — and most overlooked — governance problems.

    The term “shadow AI” describes AI tools adopted by employees outside official channels — free-tier large language models used to draft client communications, image generators repurposed for internal presentations, coding assistants plugged into development environments without security review. It is the AI equivalent of shadow IT, but with a critical difference: the data exposure risks are significantly higher and the speed of adoption is faster than anything IT departments have previously managed.

    According to the State of Information Security Report 2025 from ISMS.online, 37% of organisations surveyed said employees had already used generative AI tools without organisational permission or guidance. A further 34% identified shadow AI as a top emerging threat for the next 12 months. These are not projections about some distant risk — they describe what is happening now, in organisations that thought they had AI under control.

    The year-two reckoning

    The pattern is familiar to anyone who lived through the early cloud adoption cycle, but compressed into a fraction of the time. Year one — roughly 2024 into early 2025 — was characterised by experimentation. Departments trialled AI tools, leaders encouraged innovation, and governance was deferred in favour of speed. The implicit message in many organisations was: try things, move fast, we will sort out the rules later.

    Year two is when “later” arrives. And the numbers suggest most organisations are not ready for it. The same ISMS.online report found that 54% of respondents admitted their business had adopted AI technology too quickly and was now facing challenges in scaling it back or implementing it more responsibly. That figure represents a majority of enterprises acknowledging, in effect, that they have accumulated governance debt they do not yet know how to service.

    This aligns with a pattern maddaisy has been tracking across several dimensions. Deloitte’s 2026 State of AI report revealed that organisations report growing strategic confidence in AI but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute responsibly. Shadow AI is the ground-level expression of that paradox: strategy says “adopt AI,” but operations never built the guardrails to manage what adoption actually looks like across thousands of employees making independent tool choices every day.

    Why traditional IT governance falls short

    The instinct in many organisations is to treat shadow AI like shadow IT — block unsanctioned tools, enforce approved vendor lists, and route everything through procurement. That approach, while understandable, misses what makes AI different.

    When an employee uses an unapproved project management tool, the risk is primarily operational: data silos, integration headaches, wasted licences. When an employee pastes client data, financial projections, or intellectual property into a free-tier language model, the risk is fundamentally different. That data may be used for model training, stored in jurisdictions with different privacy regimes, or exposed through security vulnerabilities the organisation has no visibility into.

    As CIO.com recently noted, boards are increasingly recognising that AI is not waiting for permission — it is already shaping decisions through vendor systems, employee-adopted tools, and embedded algorithms that grow more powerful without explicit organisational consent. The question for governance teams is not whether employees are using unsanctioned AI. It is how much sensitive data has already left the building.

    The regulatory dimension

    Shadow AI also creates a specific compliance exposure that many organisations have not fully mapped. As maddaisy examined last week, the EU AI Act reaches its most consequential enforcement milestone in August 2026, with requirements for high-risk AI systems carrying penalties of up to €35 million or 7% of global annual turnover. Colorado’s AI Act takes effect in June 2026 with its own set of requirements around algorithmic discrimination and impact assessments.
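    The headline penalty is worth pausing on, because it is the higher of a fixed sum and a turnover share, not a flat cap. A quick illustration, with hypothetical turnover figures:

    ```python
    # The EU AI Act's headline penalty for the most serious breaches is the
    # higher of a fixed amount and a share of global annual turnover.
    # Turnover figures below are hypothetical.

    def max_penalty(turnover_eur: float) -> float:
        return max(35_000_000.0, 0.07 * turnover_eur)

    print(max_penalty(200_000_000))    # fixed EUR 35m floor dominates
    print(max_penalty(2_000_000_000))  # 7% of turnover dominates (~EUR 140m)
    ```

    For any enterprise with turnover above €500 million, the 7% branch is the binding one — which is why shadow AI exposure scales with company size.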

    The compliance challenge with shadow AI is that an organisation cannot demonstrate responsible use of systems it does not know exist. If an employee in a hiring function uses an unsanctioned AI tool to screen CVs, or a financial analyst uses a free language model to generate risk assessments, the organisation may be deploying high-risk AI — as defined by regulators — without any of the documentation, monitoring, or impact assessment that compliance requires.

    This is not a theoretical concern. It is a direct consequence of the gap between adoption speed and governance maturity that maddaisy has documented across the enterprise AI landscape.

    What a pragmatic response looks like

    The organisations handling shadow AI most effectively are not the ones deploying the heaviest restrictions. They are the ones that recognised early that prohibition does not work when the tools are free, browser-based, and genuinely useful.

    A pragmatic governance approach has several components. First, visibility: understanding what AI tools employees are actually using, through network monitoring, surveys, and — critically — creating an environment where people feel safe disclosing their usage rather than hiding it. Second, clear usage policies that distinguish between acceptable and unacceptable use cases, specifying what data categories must never enter external AI systems regardless of the tool’s provenance.

    Third, an approved toolset that is genuinely competitive with the free alternatives. One of the most common drivers of shadow AI is that official enterprise AI tools are slower, more restricted, or simply worse than what employees can access on their own. If the sanctioned option requires a three-week procurement process while ChatGPT is a browser tab away, governance has already lost.

    Fourth, adoption of an established framework. ISO 42001 provides a structured approach to establishing an AI management system, covering policy, roles, impact assessment, and continuous improvement through the familiar Plan-Do-Check-Act cycle. It is not a silver bullet, but it offers a starting point for organisations that currently have no systematic approach to AI governance beyond hoping the problem does not escalate.
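    The “clear usage policies” component can be more mechanical than it sounds. A minimal sketch of a policy gate that screens prompts for barred data categories before they reach an external AI tool might look like the following — category names and regex detectors here are invented for illustration, and any real deployment would lean on proper data-loss-prevention classifiers rather than naive patterns:

    ```python
    import re

    # Illustrative data categories a usage policy might bar from external AI
    # tools. The detectors are deliberately naive stand-ins for real DLP rules.
    BLOCKED_PATTERNS = {
        "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "client_marker": re.compile(r"\bclient[-_ ]confidential\b", re.IGNORECASE),
    }

    def check_prompt(text: str) -> list[str]:
        """Return the policy categories a prompt would violate, if any."""
        return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

    violations = check_prompt(
        "Summarise this client-confidential forecast for jane@corp.example"
    )
    print(violations)  # ['email_address', 'client_marker']
    ```

    A gate like this only works alongside the other three components: without visibility it never sees the traffic, and without a competitive sanctioned tool employees simply route around it.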

    The window is narrowing

    The uncomfortable reality for many enterprises is that shadow AI has already created facts on the ground. Data has been shared with external models. Workflows have been built around unsanctioned tools. Employees have integrated AI into their daily routines in ways that would be disruptive to simply switch off.

    The year-two governance challenge is not about preventing AI adoption — that ship has sailed. It is about catching up with what has already happened, building the visibility and policy infrastructure to manage it going forward, and doing so before enforcement deadlines turn a governance gap into a compliance crisis.

    For consultants and practitioners advising organisations through this transition, the message is straightforward: audit first, policy second, technology third. The organisations that will navigate this best are not the ones with the most sophisticated AI strategies. They are the ones that know, concretely and completely, which AI tools are actually in use within their walls — and by whom.

  • Lloyds Banking Group’s £100 Million AI Bet: What the UK’s First Agentic Financial Assistant Means for Enterprise AI

    Lloyds Banking Group expects its artificial intelligence programme to deliver more than £100 million in value this year — double the £50 million it attributes to generative AI in 2025. The figures, disclosed alongside the group’s annual results in January 2026, represent one of the more concrete attempts by a major financial institution to attach a number to what AI is actually worth.

    That specificity matters. As maddaisy examined last week, PwC’s 2026 Global CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. Lloyds is not claiming to have solved the ROI puzzle entirely, but it is doing something most enterprises have not: publishing the numbers and tying them to specific operational improvements rather than vague promises of transformation.

    The financial assistant: what it actually does

    The headline initiative is a customer-facing AI financial assistant, which Lloyds describes as the first agentic AI tool of its kind offered by a UK bank. Announced in November 2025 and scheduled for public rollout in early 2026, the assistant sits within the Lloyds mobile app and is designed to help customers manage spending, savings, and investments through natural conversation.

    The system uses a combination of generative AI for its conversational interface and agentic AI to process requests and execute actions. In practical terms, a customer can query a payment, ask for a spending breakdown, or request guidance on savings options — and the assistant will interpret the request, plan the necessary steps, and carry them out. Where it reaches the limits of what automated support can handle, it refers users to human specialists.
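    Lloyds has not published implementation detail, but the interpret-plan-execute-escalate loop described above is a recognisable agentic pattern. A generic sketch — with intents, confidence scores, and the human-handoff threshold all invented for illustration, and no claim that this reflects Lloyds’ actual system — might look like:

    ```python
    # Generic sketch of an interpret -> plan/execute -> escalate loop.
    # Intents, scores, and the threshold are illustrative assumptions only.

    CONFIDENCE_FLOOR = 0.8  # below this, refer the user to a human specialist

    def classify(request: str) -> tuple[str, float]:
        """Stand-in for an intent model: returns (intent, confidence)."""
        rules = {
            "spending": ("spending_breakdown", 0.95),
            "payment": ("query_payment", 0.90),
            "mortgage": ("mortgage_advice", 0.40),  # out of automated scope
        }
        for keyword, result in rules.items():
            if keyword in request.lower():
                return result
        return ("unknown", 0.0)

    def handle(request: str) -> str:
        intent, confidence = classify(request)
        if confidence < CONFIDENCE_FLOOR:
            return f"escalate:{intent}"  # hand off to a human specialist
        return f"execute:{intent}"      # plan the steps and carry them out

    print(handle("Show my spending this month"))  # execute:spending_breakdown
    print(handle("Can I afford a mortgage?"))     # escalate:mortgage_advice
    ```

    The design point the sketch illustrates is the escalation path itself: the regulatory questions discussed later in the piece turn largely on where that confidence floor sits and who audits it.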

    The scope is intended to expand. Lloyds has said the assistant will eventually cover its full product suite, from mortgages to car finance to protection products, serving its 28 million customer accounts across the Lloyds, Halifax, Bank of Scotland, and Scottish Widows brands.

    Testing at scale, not in a lab

    Before public launch, Lloyds tested the assistant with approximately 7,000 employees, who collectively completed around 12,000 trials. That is a meaningful pilot — large enough to surface edge cases and failure modes that a controlled lab environment would miss, and conducted with users who understand the bank’s products well enough to stress-test the system’s accuracy.

    The employee testing sits alongside a broader internal AI deployment that has already delivered measurable results. Athena, the group’s AI-powered internal search assistant, is used by 20,000 colleagues and has reduced information search times by 66%. GitHub Copilot, deployed to 5,000 engineers, has driven a 50% improvement in code conversion for legacy systems. An AI-powered HR assistant resolves 90% of queries correctly on the first attempt.

    These are not experimental pilots. They are production tools used at scale, and the fact that Lloyds is willing to attach specific performance metrics to each one distinguishes its approach from the many enterprises that describe AI impact in qualitative terms only.

    The ROI question: credible or convenient?

    The £100 million figure invites scrutiny, and it should. “Value” in corporate AI disclosures is notoriously slippery — it can mean cost savings, time savings converted to a monetary equivalent, revenue uplift, or some combination of all three. Lloyds has not published a detailed methodology for how it arrived at the £50 million figure for 2025 or how it projects the 2026 target.

    That said, the bank’s approach has features that lend it more credibility than many comparable claims. The internal tools have named user populations and specific performance benchmarks. The customer-facing assistant was tested with thousands of employees before launch, not unveiled as a concept. And the 2025 figure is presented as a delivered outcome, not a forecast — a distinction that matters when most enterprises are still struggling to prove any return at all.

    Lloyds also rose 12 places in the Evident AI Global Index last year — the strongest improvement of any UK bank — suggesting that external assessors see substance behind the claims.

    Agentic AI in financial services: the governance dimension

    The move to customer-facing agentic AI in banking raises governance questions that go beyond what internal productivity tools require. As maddaisy explored earlier this week, Deloitte’s 2026 AI report found that only one in five enterprises has a mature governance model for agentic systems. When those systems move from internal search assistants to customer-facing financial advice, the stakes escalate considerably.

    A banking AI that can execute transactions, provide savings guidance, and eventually handle mortgage queries operates in regulated territory. The Financial Conduct Authority’s expectations around suitability, fair treatment, and clear communication apply regardless of whether the advice comes from a human or an algorithm. Lloyds has acknowledged this by building in human referral pathways, but the real test will come at scale — when millions of customers interact with the system simultaneously, and edge cases multiply.

    Ron van Kemenade, the group’s chief operating officer, has framed the launch as “a pivotal step in our strategy as we continue to reimagine the Group for our customers and colleagues.” Ranil Boteju, chief data and analytics officer, has positioned it as a demonstration of responsible deployment, noting that the assistant “can understand and respond to specific, hyper-personalised customer requests and retains memory to offer a more holistic experience, ensuring the generated answer is safe to present to customers.”

    What this signals for the sector

    Lloyds is not the first bank to deploy AI, nor the first to make bold claims about its value. What distinguishes this move is the combination of a concrete financial baseline (£50 million delivered), a named and tested product (the financial assistant), a clear expansion roadmap (full product suite), and an institutional commitment to upskilling (a new AI Academy for its 67,000 employees).

    For practitioners watching the enterprise AI landscape, the Lloyds case offers a useful reference point against the prevailing narrative of deployment fatigue and unproven returns. It does not resolve the broader ROI question — one bank’s results do not establish an industry pattern — but it does suggest that organisations which invest in specific, measurable use cases and test rigorously before launch can move beyond the proof-of-concept purgatory that still traps most enterprises.

    The harder question is what happens next. An AI assistant that helps customers check spending patterns is useful. One that advises on mortgages and investment products enters a different category of risk and regulatory complexity. How Lloyds navigates that expansion — and whether the £100 million value target holds up under the scrutiny of real-world deployment — will be worth watching over the months ahead.

  • Capgemini Added 82,300 Offshore Workers Last Year. So Much for AI Replacing Everyone.

    If artificial intelligence is about to make IT services firms obsolete, someone forgot to tell Capgemini. The French consultancy ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. Its offshore workforce now stands at 279,200, a 42% increase, representing two-thirds of total headcount.

    At the same time, the company is planning €700 million in restructuring charges to reshape its workforce for AI. It is positioning itself as, in CEO Aiman Ezzat’s words, “the catalyst for enterprise-wide AI adoption.”

    The contradiction is only apparent. What Capgemini’s numbers actually reveal is the gap between the market’s AI narrative and the operational reality of large-scale enterprise transformation.

    The WNS factor

    The headline headcount surge requires context. The bulk of those 82,300 new offshore employees came from Capgemini’s acquisition of WNS, the India-headquartered business process services firm, completed in late 2025. Cloud4C, another acquisition, contributed further. This was not organic hiring — it was a deliberate strategic bet on scaling offshore delivery capacity at precisely the moment investors were questioning whether such capacity has a future.

    The acquisition arithmetic is revealing. Capgemini’s 2026 revenue growth guidance of 6.5% to 8.5% includes 4.5 to 5 percentage points from acquisitions, primarily WNS. Strip out the acquired growth, and organic expansion looks more like 2% to 3.5%. The company needed WNS to hit its targets.
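    The decomposition above is simple subtraction, but worth making explicit, since it is the arithmetic investors are reacting to:

    ```python
    # Decomposing Capgemini's 2026 guidance into acquired vs organic growth,
    # using the figures quoted in the article.
    guidance = (6.5, 8.5)   # total revenue growth range, %
    acquired = (4.5, 5.0)   # contribution from acquisitions (mainly WNS), pp

    organic_low = guidance[0] - acquired[0]    # 2.0%
    organic_high = guidance[1] - acquired[1]   # 3.5%
    print(f"implied organic growth: {organic_low}% to {organic_high}%")

    # The offshore share quoted as "two-thirds of total headcount":
    offshore_share = 279_200 / 423_400  # roughly 0.66
    ```

    Pairing the low ends and high ends of the two ranges is a simplification — acquired and organic contributions need not move together — but the order of magnitude is the point: the underlying business is growing in low single digits.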

    WNS brings something specific: expertise in AI-powered business process services, particularly in financial services, insurance, and healthcare. The company had already identified around 100 cross-selling opportunities and signed an intelligent operations contract worth more than €600 million. This is not a firm buying bodies for the sake of scale. It is buying the delivery infrastructure needed to operationalise AI at enterprise level.

    The restructuring counterweight

    The other side of the equation is the €700 million restructuring programme, most of which will land in 2026. Capgemini describes this as adapting “workforce and skills” to align with demand for AI-driven services. In plainer terms: some roles are being eliminated or relocated, while others — particularly those requiring AI engineering, data science, and agentic AI expertise — are being created or upskilled.

    As maddaisy noted when examining Capgemini’s full-year results last week, generative and agentic AI accounted for more than 10% of group bookings in Q4, up from around 5% earlier in the year. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI. The restructuring is not a contradiction of the hiring — it is the other half of the same workforce transformation.

    The pattern is expand first, optimise second. Acquire the delivery capacity, then reshape it. It is a playbook that makes more commercial sense than the market’s preferred narrative of AI simply deleting headcount.

    What the market gets wrong

    Capgemini’s share price has fallen roughly 26% in 2026, driven largely by investor anxiety that AI will cannibalise the IT services business model. The logic runs: if AI can automate code generation, testing, and business process management, why would enterprises pay consultancies to do it?

    Ezzat addressed this directly in the post-earnings call. “I don’t think clients are thinking this way about reduction,” he told MarketWatch. “Clients are looking at how critical it is for them to adopt AI and where it can have an impact.”

    The distinction matters. Enterprises are not replacing their consulting relationships with AI tools. They are asking their consultants to help them implement AI — which, for now, requires more people, not fewer. The skills are different. The delivery models are changing. But the demand for hands-on expertise in making AI work within complex organisational environments is, if anything, increasing.

    This aligns with what Ezzat told Fortune separately: “AI is a business. It is not a technology. It cannot just be used to keep the house running.” His argument is that CEOs who treat AI as an efficiency tool for individual departments are missing the larger opportunity — and the larger threat — of enterprise-wide transformation.

    The offshore model evolves, but does not disappear

    The shift to 66% offshore headcount is not a return to the labour arbitrage model of the 2000s. Capgemini’s onshore workforce held steady at 144,200. What changed is the nature of offshore work. WNS’s strength is in intelligent operations — business process services enhanced by AI, automation, and analytics. These are not call centres or basic coding shops. They are delivery centres where AI tools augment human workers on complex processes.

    This is broadly consistent with what the wider consulting industry is signalling. McKinsey’s deployment of 25,000 AI agents across its workforce, as maddaisy reported this week, points in the same direction: AI as workforce augmentation, not replacement. The difference is that McKinsey is building internally, while Capgemini is acquiring externally. Both are betting that the future of professional services involves more AI and more people, deployed differently.

    What practitioners should watch

    For consultants and enterprise leaders, Capgemini’s workforce data offers a useful reality check against the AI replacement narrative. Three signals are worth tracking.

    First, the restructuring outcomes. If the €700 million programme results in meaningful upskilling rather than simple headcount reduction, it will validate the “expand and reshape” model. If it quietly becomes a cost-cutting exercise, the AI transformation story weakens.

    Second, the organic growth rate. With acquisitions contributing nearly five percentage points of 2026 growth, the underlying business needs to demonstrate it can grow on its own merits. The Q4 acceleration to over 10% AI bookings was promising, but one quarter does not make a trend.

    Third, the onshore-offshore ratio over time. If AI genuinely transforms delivery models, the 66% offshore share should eventually stabilise or even decline as automation reduces the need for large delivery teams. If it keeps rising, the industry is still in the scale-up phase, and the productivity gains from AI remain further away than the marketing suggests.

    Capgemini’s apparent paradox — adding 82,300 people while betting its future on AI — is not a paradox at all. It is the messy, expensive reality of what enterprise AI transformation actually looks like on the ground: more complexity before less, more people before fewer, more investment before returns. The firms that navigate this transition phase successfully will define the next era of professional services. The market just needs to be patient enough to let them.

  • The Non-AI Agenda: What CIOs Are Actually Prioritising Beyond Artificial Intelligence in 2026

    Global IT spending will hit $6.15 trillion in 2026, a 10.8 per cent rise on the previous year. But dig beneath that headline and the distribution is strikingly lopsided. Gartner’s latest forecast projects AI spending to surge 80.8 per cent and data centre outlays to climb 31.7 per cent, while communications services and device budgets limp along at low single digits. The message is clear: AI is swallowing the budget. The question is what happens to everything else.

    That question matters because, as maddaisy has documented over recent weeks, the AI investment thesis is under strain. PwC’s 2026 CEO Survey found that 56 per cent of executives cannot point to measurable revenue gains from their AI programmes, and governance frameworks are lagging well behind deployment ambitions. If the technology absorbing most of the budget has yet to prove its return, the priorities being squeezed to fund it deserve closer scrutiny.

    The squeeze is real — and deliberate

    John-David Lovelock, a vice president analyst at Gartner, describes a dynamic in which runaway AI spending is forcing CIOs to find savings elsewhere — and IT services providers are bearing the brunt. The logic is straightforward: buyers expect their vendors to be using AI internally, and they want those efficiency gains reflected in lower fees.

    “CIOs need to find somewhere that they have control of their budget, and they can pick on the services companies because they’re using AI,” Lovelock told CIO.com.

    But not every non-AI line item can be trimmed without consequences. Several priorities are growing more urgent precisely because of AI adoption, not despite it.

    Cybersecurity: AI’s expanding attack surface

    The most immediate non-AI priority is, paradoxically, driven by AI itself. Dmitry Nazarevich, CTO at software firm Innowise, notes that his company’s security spending increase “is directly related to the increase in exposure and risk to data associated with the increased attack surface resulting from the introduction of generative AI.”

    This is not a theoretical concern. Every new AI model integrated into enterprise workflows introduces new vectors — data exfiltration through prompt injection, model poisoning through compromised training data, and the simple reality that agentic systems with write access to business processes can do real damage if they malfunction or are manipulated. As enterprises move from experimental AI pilots to production deployments, security spending is not optional — it is the prerequisite.

    Data foundations: the unsexy precondition

    Salesforce CIO Dan Shmitt offered a telling anecdote about an AI agent on the company’s help site that surfaced two conflicting answers to the same question. “Our first reaction was to assume the model was wrong,” he said. “The truth was that our data and content needed more consistency.”

    This pattern — blaming the model when the real problem is the data — is remarkably common. Capgemini’s TechnoVision 2026 framework places “thriving on data” as one of its nine foundational technology domains, emphasising data sharing, AI-driven insights, and sustainable data practices. The message is consistent across vendors and analysts: AI systems are only as reliable as the data infrastructure beneath them.

    For CIOs who have spent two years funding AI pilots, investing in data quality, master data management, and integration architecture may feel like a step backwards. It is not. It is the work that determines whether those pilots ever graduate to production.

    Technical debt and modernisation

    Legacy systems are not merely an annoyance in an AI-first world — they are a bottleneck. Nazarevich highlights that increased modernisation spending at Innowise is “partly a result of the fact that legacy or outdated systems limit the effectiveness of AI technology and delay deployment schedules.”

    This creates a compounding problem. Organisations that deferred modernisation to fund AI experiments now find that their AI initiatives are underperforming because the underlying platforms cannot support them. The CIO.com analysis of strategic imperatives for 2026 makes the point explicitly: if eight out of 10 strategic priorities relate to AI, “you’re likely missing some critical emerging technologies and trends.”

    Edge computing, digital twins, and platform modernisation may not generate the boardroom excitement of a generative AI demo, but they are the infrastructure on which AI capabilities depend.

    FinOps: managing the unpredictable bill

    AI workloads have introduced a new kind of cost unpredictability into IT budgets. Unlike traditional cloud computing, where usage patterns are relatively stable and forecastable, AI inference costs can spike with demand in ways that are difficult to model in advance. Innowise reports increased FinOps spending specifically because of “the unpredictability of computing bills created by AI workloads.”

    FinOps — the practice of bringing financial accountability to cloud spending — is no longer a niche discipline for cloud-native firms. For any organisation running AI at scale, it has become an essential management capability. Without it, the 80 per cent surge in AI spending that Gartner forecasts could easily overshoot, consuming budget earmarked for the very modernisation and security initiatives that AI requires to succeed.
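    The mechanics can be made concrete. Below is a minimal sketch, with hypothetical names, prices, and thresholds, of how a FinOps guard might track AI inference spend against a monthly cap and flag overshoot before the bill compounds:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class InferenceBudget:
        """Hypothetical FinOps guard: track AI inference spend against a monthly cap."""
        monthly_cap_usd: float
        alert_threshold: float = 0.8          # warn at 80% of the cap
        spend_usd: float = 0.0
        events: list = field(default_factory=list)

        def record(self, workload: str, tokens: int, usd_per_1k_tokens: float) -> str:
            cost = tokens / 1000 * usd_per_1k_tokens
            self.spend_usd += cost
            self.events.append((workload, cost))
            if self.spend_usd >= self.monthly_cap_usd:
                return "over_budget"          # escalate: throttle or re-route workloads
            if self.spend_usd >= self.alert_threshold * self.monthly_cap_usd:
                return "warning"              # notify FinOps before costs compound
            return "ok"

    budget = InferenceBudget(monthly_cap_usd=10_000)
    status = budget.record("support-chat", tokens=2_000_000, usd_per_1k_tokens=0.002)
    # 2M tokens at $0.002 per 1k tokens adds $4.00 of spend, well inside the cap
    ```

    The point of the sketch is the shape, not the numbers: the discipline lies in attributing every inference call to a workload and checking it against a budget in real time, rather than discovering the overshoot on the monthly invoice.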

    Workforce fluency: the persistent gap

    Technology investment alone solves nothing if the people using it are not equipped to do so effectively. Rebecca Gasser, global CIO at FGS Global, frames this as a literacy challenge: building digital and AI fluency across the organisation so that workers “can be more agile and adaptable to the ongoing changes.”

    Pat Lawicki, CIO of TruStage, puts it more directly: “We’re committed to balancing innovation with humanity: leveraging digital tools where they add real value while preserving the human connection that defines trust and empathy.”

    This is consistent with the pattern maddaisy has tracked in recent coverage of AI-driven burnout and the organisational failures behind poor AI rollouts. The technology works best when employees understand it, trust it, and can exercise judgement about when to rely on it and when to intervene. That requires sustained investment in training, change management, and communication — none of which appear in an AI spending forecast but all of which determine whether the forecast delivers value.

    The convergence argument

    The most sophisticated framing of the CIO’s 2026 agenda comes not from treating AI and non-AI priorities as competitors for budget, but from recognising their interdependence. Capgemini’s TechnoVision 2026 describes a shift toward “synchronicity at scale” — the idea that boundaries between digital, physical, and biological innovation are dissolving, and that the CIO’s job is to orchestrate across all of them simultaneously.

    In practice, this means cybersecurity investment protects AI deployments. Data foundation work makes AI outputs reliable. Modernisation enables AI to reach production. FinOps keeps AI costs sustainable. Workforce fluency ensures AI adoption sticks.

    The CIOs who treat 2026 as an AI-only year will likely find themselves explaining, 12 months from now, why their AI investments still are not delivering returns. The ones who invest in the full stack — the security, the data, the infrastructure, the people — are building the conditions under which AI can actually work.

    That is not a story about choosing between AI and everything else. It is a story about understanding that AI does not operate in isolation, and neither should the budget that funds it.

  • McKinsey’s 25,000 AI Agents: First-Mover Advantage or the Industry’s Biggest Experiment?

    McKinsey now counts 25,000 AI agents among its workforce — roughly one for every 1.6 human employees. That ratio, disclosed by CEO Bob Sternfels at the Consumer Electronics Show and confirmed by the firm, makes the consultancy’s internal agentic build-out one of the most aggressive in professional services.

    The numbers have moved quickly. Eighteen months ago, McKinsey operated a few thousand agents. Today, AI-related work, run through the firm’s AI arm QuantumBlack, accounts for 40% of its output. The agents have saved an estimated 1.5 million hours on search and synthesis tasks. Non-client-facing headcount has fallen 25%, yet output from those teams has risen 10%.

    Sternfels’s stated ambition is to pair every one of McKinsey’s 40,000 employees with at least one AI agent within the next 18 months.

    Scale versus substance

    The scale is eye-catching. Whether it is meaningful depends on what you count as an agent and how you measure the return.

    McKinsey’s rivals are openly sceptical. EY’s global engineering chief has argued that “a handful of agents do the heavy lifting” and that value should be tracked through efficiency KPIs, not headcount. PwC’s chief AI officer has called agent count “probably the wrong measure”, advocating instead for quality and workflow optimisation. Their counterargument is clear: a smaller fleet of high-performing agents, rigorously measured, may deliver more than a vast deployment still being calibrated.

    The critique lands on familiar ground. As maddaisy examined earlier today, PwC’s 2026 Global CEO Survey found that 56% of chief executives still cannot point to revenue gains from their AI investments. The deployment-versus-outcomes gap is the central tension in enterprise AI right now, and McKinsey’s bet raises the question of whether the firm is racing ahead of the same problem — or solving it.

    From advisory to infrastructure

    The more consequential shift may be in McKinsey’s business model. Sternfels described a move away from the firm’s traditional fee-for-service approach toward a model where McKinsey works with clients to identify joint business cases and then helps underwrite the outcomes.

    This is a significant departure for a firm built on advisory fees and billable hours. It positions McKinsey less as a strategic counsellor and more as an infrastructure partner — one that brings its own AI workforce to bear on client problems and shares in the measurable results.

    QuantumBlack, with 1,700 people, now drives all of McKinsey’s AI initiatives. Alex Singla, the senior partner who co-leads the unit, has described the firm’s evolving recruitment profile: candidates who can move fluidly between traditional consulting and engineering, and who can work alongside AI rather than simply directing it.

    Boston Consulting Group is pursuing a similar direction, deploying “forward-deployed consultants” who build AI tools directly on client projects. But McKinsey’s scale of internal adoption — 25,000 agents embedded across the firm — gives it a data advantage that is harder to replicate. Every internal deployment generates operational insight into what works, what fails, and how agentic systems behave at enterprise scale.

    The governance question maddaisy has been tracking

    The timing of McKinsey’s announcement is worth noting against the backdrop of the agentic AI governance gap maddaisy covered earlier this week. Deloitte’s data showed that only 21% of companies have mature governance models for agentic AI, even as three-quarters plan to deploy it within two years. And a broader pattern has emerged across maddaisy’s recent coverage: enterprises are strategically confident about AI but operationally underprepared.

    McKinsey, as both a deployer and an adviser, sits at the intersection of this tension. If the firm can demonstrate that 25,000 agents operate reliably at scale — with governance, measurement, and accountability frameworks to match — it will have built the most persuasive case study in the industry. If the agents outrun oversight, the reputational exposure is equally significant. When an AI agent produces an analysis and the recommendation proves wrong, the liability question is not academic.

    What practitioners should watch

    For consulting professionals and enterprise leaders watching this play out, three things matter more than the headline number.

    First, the metric that matters is not agent count but outcome attribution. McKinsey’s 1.5 million hours saved is a process metric. The firm’s shift to underwriting client outcomes suggests it understands the need to move beyond efficiency and toward measurable business impact — the same gap that PwC’s CEO Survey identified industry-wide.

    Second, the talent model is changing faster than many firms acknowledge. McKinsey’s search for hybrid consultant-engineers, and BCG’s forward-deployed model, signal that the traditional consulting skill set is being augmented, not just supported, by AI fluency. Firms that treat AI as a productivity tool rather than a workforce design challenge will fall behind.

    Third, scale creates its own governance requirements. As McKinsey’s own Carolyn Dewar argued in Fortune, the real risk is not the technology but how leaders manage the fear and trust dynamics that surround it. Deploying 25,000 agents without the organisational infrastructure to govern them would validate every concern the firm’s rivals have raised.

    McKinsey’s wager is that first-mover scale in agentic AI creates a compounding advantage — more data, better workflows, stronger client proof points. The industry is about to find out whether volume leads to value, or whether a smaller, sharper approach gets there first.

  • The Governance Gap No One Is Closing: Why Agentic AI Is Outrunning Enterprise Oversight

    Three-quarters of enterprises plan to deploy agentic AI within two years. Only one in five has a mature governance model for it. That arithmetic should concern anyone responsible for enterprise technology strategy.

    The figures come from Deloitte’s 2026 State of AI in the Enterprise report, and they represent something more specific than the familiar story of AI adoption outpacing regulation. The challenge with agentic AI is not that rules do not exist — as maddaisy recently examined, the EU AI Act, Colorado’s AI Act, and a growing patchwork of global regulations are creating real enforcement deadlines. The challenge is that agentic systems demand a fundamentally different kind of oversight, and most organisations have not built the operational machinery to provide it.

    What makes agentic AI different

    Conventional AI systems — including the generative AI tools that have dominated enterprise adoption over the past two years — operate in an advisory mode. They suggest, summarise, draft, and classify. A human reviews the output and decides what to do with it. Governance for these systems, while imperfect, fits within existing frameworks: you audit the model, monitor outputs, and maintain a human in the decision loop.

    Agentic AI breaks that model. These systems are designed to plan, execute, and adjust autonomously — booking flights, approving procurement decisions, triaging customer complaints, or managing collections workflows without waiting for human sign-off. Oracle’s new agentic banking platform, launched in February 2026, illustrates the trajectory: domain-specific agents handle loan originations, credit decisioning, and compliance checks, with human oversight positioned as a “human-in-the-loop” role rather than a gatekeeping one.

    The distinction matters because it changes where governance must operate. With advisory AI, oversight happens after the model produces an output and before a human acts on it. With agentic AI, the system is the actor. Governance must be embedded in real time — monitoring agent behaviour as it happens, enforcing boundaries on what an agent can and cannot do, and maintaining audit trails that capture not just decisions but the full chain of reasoning and actions that led to them.

    The 21% problem

    Deloitte’s finding that only 21% of companies have a mature agentic AI governance model is striking, but the detail beneath it is more revealing. In Singapore, where deployment ambitions are among the highest globally — 72% of businesses plan to deploy agentic AI across multiple operational areas within two years, up from 15% today — the mature governance figure drops to just 14%.

    As maddaisy noted in its analysis of the broader Deloitte report, this fits a wider pattern: organisations are increasingly confident in their AI strategy but declining in readiness on the operational foundations needed to execute it. The agentic governance gap is perhaps the sharpest expression of this paradox — a technology that is advancing from pilot to production while the controls needed to run it safely remain in early stages.

    Half of Singapore respondents reported using a patchwork of public and internal proprietary frameworks to assess agent risk and performance. That is not a governance model — it is improvisation.

    Why existing frameworks fall short

    The AI Trends Report 2026, published by statworx and AI Hub Frankfurt, identifies three operational disciplines that are becoming foundational for reliable agentic AI: AI governance, DataOps, and what the report terms AgentOps — the operational layer for managing autonomous AI agents in production.

    AgentOps is a useful concept because it captures what most enterprise governance frameworks currently lack. Traditional AI governance focuses on model development: training data quality, bias testing, documentation, and approval workflows before deployment. That is necessary but insufficient for systems that learn, adapt, and take actions in production environments.

    Agentic systems require runtime governance: clear boundaries on agent autonomy (what decisions can the agent make independently, and which require escalation?), real-time monitoring of agent behaviour against expected parameters, kill switches for when agents drift outside acceptable bounds, and comprehensive audit trails that regulators can inspect after the fact.

    The EU AI Act’s requirements for high-risk systems — documented risk management, technical logging, human oversight mechanisms, and conformity assessments — implicitly assume this kind of operational infrastructure. But most organisations have not yet translated those requirements into engineering reality.

    Deployment is not waiting for governance

    The uncomfortable truth is that agentic AI is entering production regardless of whether governance is ready. Oracle is shipping banking agents now. Organisations such as AMD and Heathrow Airport are deploying autonomous agents in customer experience roles. Gartner predicts agentic systems will autonomously resolve 80% of customer service issues by 2028.

    Constellation Research offers a useful counterweight to the hype, arguing that agentic AI is “more of a feature than a revolution” and that the real measure of value is decision velocity — how quickly smaller decision trees and processes can be automated at scale. This framing is helpful because it reduces the abstraction. An AI agent rebooking a flight is not a paradigm shift; it is a process automation with a more sophisticated reasoning layer. But that reasoning layer is precisely what makes governance harder. The agent is not following a static script — it is making contextual judgements, and those judgements need oversight.

    What the governance gap actually costs

    The business case for closing the governance gap is not primarily about regulatory fines, though those are real. It is about operational risk. When an agentic system autonomously commits to a procurement decision, misprices a financial product, or gives a customer incorrect information with real-world consequences, the liability question is immediate and the reputational exposure is direct.

    It is also about scaling. Deloitte’s data shows that companies with stronger governance foundations are deploying agentic AI more successfully — they start with lower-risk use cases, build governance capabilities alongside deployment, and scale deliberately. Organisations that skip the governance step find themselves either slowing down when something goes wrong or, worse, not knowing that something has gone wrong until a regulator or customer tells them.

    What needs to happen

    The gap between agentic AI deployment and agentic AI governance is not going to close on its own. Three practical steps can narrow it.

    Define agent autonomy boundaries explicitly. For every agentic AI deployment, organisations need a clear specification of what the agent can do independently, what requires human approval, and what is prohibited. These boundaries should be codified in the system, not just written in a policy document. The Oracle banking platform’s “human-in-the-loop” architecture is one model, but even that needs specificity about when and how the loop engages.
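    What codifying boundaries in the system, rather than in a policy document, can mean in practice is a check that runs before every agent action. A minimal sketch, using hypothetical action types and dollar limits:

    ```python
    # Executable autonomy policy: every proposed agent action is classified as
    # allowed, escalate-to-human, or prohibited before anything happens.
    ALLOW, ESCALATE, PROHIBIT = "allow", "escalate_to_human", "prohibit"

    POLICY = {
        "refund":      {"max_autonomous_usd": 100, "max_escalated_usd": 5_000},
        "procurement": {"max_autonomous_usd": 0,   "max_escalated_usd": 50_000},
    }

    def check_action(action: str, amount_usd: float) -> str:
        """Return whether the agent may act alone, must escalate, or is blocked."""
        rule = POLICY.get(action)
        if rule is None:
            return PROHIBIT                   # anything not explicitly allowed is blocked
        if amount_usd <= rule["max_autonomous_usd"]:
            return ALLOW
        if amount_usd <= rule["max_escalated_usd"]:
            return ESCALATE                   # the human-in-the-loop engages here
        return PROHIBIT

    check_action("refund", 40)         # small refund: agent acts alone
    check_action("procurement", 900)   # any procurement spend requires human approval
    check_action("wire_transfer", 10)  # unlisted action: prohibited by default
    ```

    The deny-by-default branch is the design choice that matters: an agent can only do what the policy explicitly names, so new capabilities require a deliberate governance decision rather than a silent expansion of scope.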

    Invest in runtime monitoring, not just pre-deployment testing. The governance challenge with agentic AI is that it operates continuously and adapts to context. Pre-deployment audits are necessary but not sufficient. Organisations need real-time monitoring that tracks agent decisions against expected parameters and flags anomalies before they compound.
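    One illustrative shape for that monitoring, assuming a hypothetical approval-rate band as the expected parameter, is a rolling check of recent agent decisions that raises a flag when behaviour drifts out of bounds:

    ```python
    from collections import deque

    class DriftMonitor:
        """Sketch: flag an agent whose recent decisions drift outside an expected band."""
        def __init__(self, expected_rate=0.6, tolerance=0.15, window=200):
            self.expected_rate = expected_rate
            self.tolerance = tolerance
            self.decisions = deque(maxlen=window)   # rolling window of recent decisions

        def observe(self, approved: bool) -> bool:
            """Record one decision; return True if the agent has drifted out of bounds."""
            self.decisions.append(1 if approved else 0)
            if len(self.decisions) < 50:            # require a minimum sample before alerting
                return False
            rate = sum(self.decisions) / len(self.decisions)
            return abs(rate - self.expected_rate) > self.tolerance

    monitor = DriftMonitor()
    alert = False
    for _ in range(100):                            # an agent suddenly approving everything
        alert = monitor.observe(approved=True) or alert
    # a 100% approval rate sits far outside the expected 60% ± 15% band, so alert fires
    ```

    A production system would watch many such parameters at once, but the principle is the same: the anomaly is caught while the agent is running, not in a quarterly audit.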

    Build audit trails as engineering infrastructure. When a regulator asks how an agent arrived at a specific decision — and under the EU AI Act, they will — the organisation needs to produce a complete chain of the agent’s reasoning, data inputs, and actions. This is not a reporting challenge; it is an engineering one that needs to be designed into the system from the start, not retrofitted after deployment.
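    One way to design such a trail in from the start, sketched here with hypothetical field names, is an append-only record per decision that captures the reasoning chain and inputs, hash-chained so that tampering with an earlier entry is detectable:

    ```python
    import hashlib
    import json
    import time

    def audit_record(agent_id, decision, reasoning_steps, inputs, prev_hash=""):
        """Hypothetical audit-trail entry: capture the full chain of reasoning,
        the data inputs, and the action taken, linked to the previous entry."""
        entry = {
            "agent_id": agent_id,
            "timestamp": time.time(),
            "decision": decision,
            "reasoning": reasoning_steps,   # every intermediate step, not just the outcome
            "inputs": inputs,
            "prev_hash": prev_hash,         # chains this entry to the one before it
        }
        payload = json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        return entry

    e1 = audit_record("loan-agent-7", "approve", ["score=712", "dti=0.31"], {"applicant": "A-1"})
    e2 = audit_record("loan-agent-7", "refer", ["score=598"], {"applicant": "A-2"},
                      prev_hash=e1["hash"])
    ```

    Because each entry commits to its predecessor's hash, the trail can be handed to a regulator as a self-verifying sequence; none of this can be bolted on after the decisions have already happened.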

    The agentic AI governance gap is not a future problem. It is a present one, widening with every new deployment. The organisations that treat governance as a technical discipline — building it into the engineering of their agentic systems rather than bolting it on as a compliance afterthought — will have a structural advantage as the technology matures. Those that do not will discover, as many enterprises have with earlier waves of technology adoption, that the cost of retrofitting oversight always exceeds the cost of building it in.

  • The AI ROI Crisis: Why 56% of CEOs Still Cannot Prove Value From Their AI Investments

    More than half of the world’s chief executives cannot point to revenue gains from their AI investments. That is not a fringe finding from an alarmist report — it is the central conclusion of PwC’s 2026 Global CEO Survey, which polled 4,454 business leaders and was released at Davos in January.

    The figure — 56% reporting no measurable revenue uplift from AI — lands at a moment when enterprise AI spending continues to accelerate. Budgets are growing, headcounts in AI-adjacent roles are expanding, and the consulting industry’s order books are thick with transformation mandates. Yet the returns remain stubbornly elusive for the majority. Only 12% of CEOs surveyed reported achieving both revenue growth and cost reduction from their AI programmes.

    This is not, as some commentary has framed it, evidence that AI does not work. It is evidence that most organisations have not yet figured out how to make it work — a distinction that matters enormously for practitioners and consultants navigating the current landscape.

    The pattern maddaisy has been tracking

    PwC’s data confirms a dynamic that maddaisy has examined from several angles over the past fortnight. Deloitte’s 2026 State of AI report revealed that enterprises feel more strategically confident about AI than ever, yet less operationally ready — a paradox the report’s authors attributed to weak data infrastructure, insufficient talent, and immature governance. Research published in Harvard Business Review, which maddaisy covered last week, found that AI tools were making employees busier rather than more productive, with efficiency gains absorbed by task expansion rather than redirected toward higher-value work.

    The ROI gap is what happens when these operational failures compound. Organisations adopt AI tools without redesigning workflows. They measure deployment (how many teams have access) rather than outcomes (what changed as a result). They invest in the technology layer while underinvesting in the organisational layer — the process changes, role redesigns, and measurement frameworks that turn a pilot into a production capability.

    A measurement problem masquerading as a technology problem

    One of the more revealing aspects of the PwC data is what it implies about how enterprises are tracking AI value. CEO revenue confidence sits at a five-year low of 30%, and this pessimism correlates with the inability to demonstrate AI returns. But the question is whether the returns genuinely are not there, or whether organisations simply lack the instrumentation to detect them.

    The answer, for many enterprises, is likely both. Some AI deployments are genuinely failing to deliver — deployed in the wrong processes, aimed at the wrong problems, or undermined by poor data quality. But others may be generating real value that never surfaces in the metrics that CEOs review. Time saved in middle-office processes, reduced error rates in document handling, faster iteration cycles in product development — these are real gains, but they rarely appear on a revenue line unless someone has built the measurement architecture to capture them.

    This is a familiar pattern in enterprise technology adoption. The early years of cloud computing saw similar complaints: organisations spent heavily on migration but struggled to quantify the business impact beyond infrastructure cost reduction. The value was real — in agility, speed to market, and developer productivity — but it took years for finance teams to develop frameworks that could track it. AI is following the same trajectory, with the added complication that its benefits are often diffuse, spread across many small improvements rather than concentrated in a single, measurable outcome.

    What the 12% are doing differently

    PwC’s survey found that the minority of organisations achieving both revenue and cost benefits from AI share common characteristics. They have invested in data foundations — not just data lakes and pipelines, but governance structures that ensure data quality, accessibility, and appropriate use. They have moved beyond isolated pilots to embed AI into core business processes. And they treat AI adoption as an organisational change programme, not a technology deployment.

    None of this is conceptually new. Consultancies have been advising clients on change management, data governance, and process redesign for decades. What is new is the speed at which the gap between leaders and laggards is widening. The 12% who have cracked the ROI equation are pulling ahead, using AI-generated insights to inform strategy, AI-automated processes to reduce costs, and AI-enhanced products to capture new revenue. The 56% who have not are still running pilots, still debating governance frameworks, and still struggling to answer the board’s most basic question: what are we getting for this money?

    The consulting industry’s uncomfortable position

    For the consulting sector, PwC’s findings create an awkward tension. The firms advising enterprises on AI strategy are, in many cases, the same firms whose clients cannot demonstrate returns. This is not necessarily a reflection of poor advice — the operational barriers to AI value are genuine and deep — but it does raise questions about what consulting engagements are actually delivering.

    As maddaisy noted when examining Capgemini’s recent results, CEO Aiman Ezzat explicitly framed the company’s direction as a shift “from AI hype to AI realism.” Generative AI bookings exceeded 8% of Capgemini’s total for the year, but the company’s 2026 revenue guidance fell slightly below analyst expectations — a reminder that even firms positioning AI at the centre of their strategy face questions about whether the investment is translating into proportional growth.

    The risk for consultancies is that the ROI gap erodes client confidence in AI-related engagements. If more than half of CEOs see no revenue benefit, the appetite for further AI spending — and the advisory services that accompany it — may tighten. The counter-argument, which PwC’s own data supports, is that the solution to poor AI returns is not less AI but better AI implementation. That is a consulting engagement waiting to happen, provided firms can credibly demonstrate that they know how to close the gap.

    What practitioners should watch

    Three developments will shape whether the AI ROI picture improves or deteriorates over the coming quarters.

    First, the regulatory environment is tightening. As maddaisy recently examined, the EU AI Act’s high-risk system requirements become enforceable in August 2026, and state-level legislation in the US is creating a fragmented compliance landscape. Compliance costs will add to the total cost of AI ownership, making the ROI equation harder to balance for organisations that have not already built governance into their deployment model.

    Second, the rise of agentic AI — systems that plan and execute tasks with minimal human oversight — will test whether organisations can capture value from more autonomous AI without losing control. Deloitte’s data showed a nearly fivefold increase in planned agentic AI deployments over the next two years, but only one in five companies has a mature governance model for autonomous agents. The ROI potential is significant; so is the risk of expensive failures.

    Third, and perhaps most importantly, watch for a shift in how organisations measure AI value. The enterprises that move beyond “did revenue go up?” to more granular metrics — cycle time reduction, error rate improvement, employee capacity freed for strategic work — will be better positioned to demonstrate returns and justify continued investment. The measurement framework may matter as much as the technology itself.
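    The move from “did revenue go up?” to granular metrics can be made concrete. A minimal sketch, where all field names, units, and example figures are hypothetical and purely illustrative:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ProcessSnapshot:
        """Measurements for one business process (hypothetical units)."""
        cycle_time_hours: float        # average end-to-end cycle time
        error_rate: float              # errors per 100 transactions
        manual_hours_per_week: float   # hours of manual effort per week

    def granular_ai_metrics(before: ProcessSnapshot, after: ProcessSnapshot) -> dict:
        """Express AI value as the granular metrics discussed above:
        cycle time reduction, error rate improvement, and capacity freed."""
        return {
            "cycle_time_reduction_pct":
                100 * (before.cycle_time_hours - after.cycle_time_hours) / before.cycle_time_hours,
            "error_rate_improvement_pct":
                100 * (before.error_rate - after.error_rate) / before.error_rate,
            "capacity_freed_hours_per_week":
                before.manual_hours_per_week - after.manual_hours_per_week,
        }

    # Illustrative example: an invoice process before and after an AI-assisted workflow
    before = ProcessSnapshot(cycle_time_hours=8.0, error_rate=4.0, manual_hours_per_week=30.0)
    after = ProcessSnapshot(cycle_time_hours=5.0, error_rate=3.0, manual_hours_per_week=21.0)
    print(granular_ai_metrics(before, after))
    # → cycle time down 37.5%, errors down 25%, 9 hours/week freed
    ```

    None of these numbers proves revenue impact on their own, but tracked per process they give an organisation something defensible to report while the top-line effect is still emerging.
    
    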

    PwC’s 56% figure is striking, but it is a snapshot of a transition, not a verdict on AI’s potential. The technology is not the bottleneck. Execution is. And for consultants and practitioners, that distinction is where the real work — and the real opportunity — lies.

  • Deploy First, Fix Later: How Poor AI Rollouts Are Engineering Burnout Into the System

    Over the past week, maddaisy has examined two dimensions of the AI-burnout nexus: the work intensification mechanisms that make employees busier rather than better, and the organisational frameworks that might address the people side of the equation. But there is a third dimension that sits upstream of both: the technology deployment itself.

    Before burnout becomes a management problem, it is often an implementation problem. And a growing body of evidence suggests that the way organisations roll out AI tools — rushed timelines, inadequate capability-building, and a persistent belief that go-live equals readiness — is baking unsustainable working conditions into the system from day one.

    The capability gap that no one budgets for

    A detailed analysis published by CIO.com this month draws on workforce upskilling data from large-scale enterprise rollouts to make a blunt assessment: systems rarely fail because the technology does not work. They fail because the organisation has not built the infrastructure to support the people using them.

    The numbers are sobering. Organisations routinely experience productivity drops of 30 to 40% within the first 90 days of a major technology go-live when workforce capability has not been adequately addressed. Support tickets triple. Workaround behaviours — offline spreadsheets, manual reconciliations, shadow systems — proliferate as employees revert to what feels safe and controllable. Post-go-live support costs run 40 to 60% over budget. ROI timelines slip by six to 12 months.

    None of this is caused by employee resistance or technological failure. It is the predictable consequence of treating capability-building as a training event rather than a systemic requirement.
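    The scale of a 90-day productivity dip is easy to underestimate until it is priced. A back-of-envelope model, with every figure below hypothetical:

    ```python
    def dip_cost(team_size: int, loaded_cost_per_person_per_day: float,
                 dip_fraction: float, dip_days: int) -> float:
        """Rough cost of a post-go-live productivity dip.

        dip_fraction: average productivity lost over the period,
        e.g. 0.35 for the 30-40% range cited above.
        """
        return team_size * loaded_cost_per_person_per_day * dip_fraction * dip_days

    # 200 affected users, $400/day loaded cost, 35% average dip over 90 days
    print(f"${dip_cost(200, 400.0, 0.35, 90):,.0f}")  # → $2,520,000
    ```

    A figure of that order is why treating capability-building as a line item in the transformation budget, rather than a training afterthought, tends to pay for itself.
    
    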

    The 10% problem

    Perhaps the most striking data point concerns training transfer. Research on technology implementation training suggests that only 10 to 20% of skills learned in formal programmes translate into sustained on-the-job performance. The issue is not poor course design. It is the assumption that a three-day training session two weeks before go-live can substitute for the repeated practice, pattern recognition, and feedback loops that genuine capability requires.

    This matters directly for the burnout conversation. When employees lack the capability to operate new systems confidently, every minor issue becomes a support ticket. Every exception becomes an escalation. The cognitive load of navigating unfamiliar tools while maintaining output expectations creates exactly the kind of sustained pressure that Harvard Business Review’s recent research identified as driving work intensification. The tools are not the problem — the deployment is.

    Where AI deployments differ from traditional IT rollouts

    Enterprise technology deployments have always carried capability risks. But AI introduces specific complications that amplify them.

    First, AI tools change the scope of work, not just the process. As maddaisy’s earlier analysis noted, employees using AI do not simply do the same tasks faster — they absorb new responsibilities, blur role boundaries, and take on work that previously justified additional headcount. A traditional ERP deployment changes how someone does their job. An AI deployment can change what their job is. Upskilling programmes designed around process training cannot address a shift that is fundamentally about role redesign.

    Second, AI output is probabilistic, not deterministic. An ERP system produces the same result given the same inputs. An AI tool might produce different outputs each time, requiring users to exercise judgement about quality, accuracy, and appropriateness. That judgement cannot be trained in a classroom — it is built through experience, and it demands a kind of cognitive engagement that is qualitatively different from following a process manual.

    Third, AI raises the visibility of individual output. When an AI tool enables someone to produce a first draft in minutes rather than hours, the speed becomes the new baseline expectation. Employees who are still building capability with the tool face pressure to match output norms set by early adopters or by the tool’s theoretical capacity. The result is the self-reinforcing acceleration cycle that researchers have now documented repeatedly.

    Governance as deployment infrastructure

    The CIO.com analysis proposes treating upskilling as a governance concern rather than a training administration task — embedding capability-building into the transformation workstream with the same rigour as data migration or integration testing. This means defining capability in behavioural terms tied to business processes, starting practice in sandbox environments as soon as process designs stabilise, and measuring performance data rather than training completion rates.

    One practical recommendation stands out: establishing a “performance council” distinct from the training team. This group — composed of process owners, frontline managers, and high-performing end users — meets weekly to review whether people are performing reliably in live operations. They examine error rates, support ticket patterns, workaround behaviours, and time-to-competency. Critically, they have the authority to pause rollouts when the data shows capability is not sticking.
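    The council’s weekly review can be reduced to a simple decision rule. A hedged sketch, where the metric names, baseline comparisons, and thresholds are all invented for illustration rather than drawn from the CIO.com analysis:

    ```python
    def rollout_health_check(live: dict, baseline: dict) -> tuple[bool, list[str]]:
        """Compare live operational metrics to pre-go-live baselines and
        return (pause_recommended, reasons). Thresholds are illustrative only."""
        reasons = []
        if live["weekly_support_tickets"] > 2.0 * baseline["weekly_support_tickets"]:
            reasons.append("support ticket volume more than doubled")
        if live["error_rate"] > 1.5 * baseline["error_rate"]:
            reasons.append("error rate over 150% of baseline")
        if live["workaround_incidents"] > baseline["workaround_incidents"]:
            reasons.append("workaround behaviours increasing")
        if live["median_time_to_competency_days"] > 90:
            reasons.append("time-to-competency exceeds 90 days")
        # Recommend a pause only when multiple signals fire together
        return (len(reasons) >= 2, reasons)

    # Illustrative week-six review
    baseline = {"weekly_support_tickets": 120, "error_rate": 2.0, "workaround_incidents": 5}
    live = {"weekly_support_tickets": 380, "error_rate": 3.4,
            "workaround_incidents": 14, "median_time_to_competency_days": 60}
    pause, reasons = rollout_health_check(live, baseline)  # pause is True: three signals fire
    ```

    The point is not the specific thresholds but the mechanism: an agreed, data-backed trigger gives the council the standing to pause a rollout without relitigating the decision each week.
    
    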

    This is not a radical proposal. It is the kind of operational discipline that well-run technology programmes have always applied to infrastructure and integration. The fact that it needs to be articulated separately for workforce capability suggests how often organisations skip the people side of deployment planning.

    Connecting the implementation layer to the burnout evidence

    The burnout data that has emerged over recent weeks tells a consistent story. DHR Global reports that 83% of workers experience some degree of burnout, with engagement dropping 24 percentage points in a single year. Only 34% of employees say their organisation has communicated AI’s workplace impact clearly. Among entry-level staff, that figure falls to 12%.

    These are not just management failures. They are deployment failures. When organisations ship AI tools without building the capability infrastructure to support them — without adequate training transfer, without performance monitoring, without role redesign — they are not just creating a change management problem. They are creating a technical debt of human capability that compounds over time, manifesting as burnout, disengagement, and quiet resistance.

    As Deloitte’s 2026 AI report made clear, the gap between strategic confidence and operational readiness remains the defining feature of enterprise AI. The implementation evidence suggests that closing this gap requires treating workforce capability not as a soft skill or an HR deliverable, but as core deployment infrastructure — as essential as the data pipeline and as measurable as system uptime.

    What this means for the next phase

    The organisations that avoid engineering burnout into their AI deployments will share a common trait: they will treat go-live as the beginning of capability-building, not the end. They will budget for the 90-day productivity dip and plan for it rather than being surprised by it. They will measure whether people can perform reliably at scale under real business pressure, not whether they completed a training module.

    Most importantly, they will recognise that the fastest way to undermine an AI investment is not a technical failure — it is deploying capable technology to an unprepared workforce and then measuring success by adoption rates alone. The tools work. The question, as it has always been with technology transformations, is whether the organisation around them does too.