Author: Connie Botelli

  • PwC Built an AI That Can Actually Read Enterprise Spreadsheets. Here Is Why That Matters.

    Most enterprise AI demonstrations involve chatbots, code generation, or image synthesis — capabilities that are impressive but often disconnected from the workflows where organisations actually make decisions. PwC has taken a different approach. On 19 February, the firm announced a frontier AI agent that can reliably reason across complex, multi-sheet enterprise spreadsheets — the kind of messy, formula-dense workbooks that underpin deals, risk assessments, and financial modelling across virtually every large organisation.

    The announcement would be easy to dismiss as incremental. It is, in fact, one of the more practically significant AI developments of the year so far.

    The Spreadsheet Problem No One Talks About

    AI has made rapid progress with text, images, and code. But enterprise spreadsheets have remained stubbornly resistant. The reason is structural: a typical enterprise workbook is not a neatly formatted data table. It is a sprawling, multi-sheet artefact containing hundreds of thousands of rows, cross-sheet formulas, hidden dependencies, embedded charts, and formatting inconsistencies accumulated over years of manual editing by multiple authors.

    Conventional AI systems — including the most advanced large language models — struggle with this complexity. They can process a clean CSV file or answer questions about a simple table. But ask them to trace a formula chain across five sheets in a workbook with 200,000 rows and inconsistent column headers, and accuracy collapses. For regulated industries where precision is non-negotiable — auditing, tax, financial due diligence — this limitation has kept spreadsheet analysis firmly in the domain of human practitioners.

    PwC’s agent addresses this directly. Combining multimodal pattern recognition with a retrieval-augmented architecture, the system can process up to 30 workbooks containing nearly four million cells. In internal benchmarks, it achieved roughly three times the accuracy of previously published methods while using 50% fewer computational tokens — a meaningful efficiency gain that reduces both cost and energy consumption.

    How It Works, Without the Hype

    The technical approach mirrors how experienced analysts actually work. Rather than attempting to ingest an entire workbook at once — a strategy that overwhelms even million-token context windows — the agent scans, indexes, and selectively retrieves relevant sections. It can jump across tabs, trace logic through formula chains, integrate visual elements like charts, and explain its reasoning with what PwC describes as “defensible precision.”
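
    PwC has not published implementation details, so what follows is only a sketch of the general scan-index-retrieve pattern the announcement describes, in Python with openpyxl. The index fields, the header-matching heuristic, and the single hop of formula dependencies are assumptions for illustration, not PwC’s method.

    ```python
    # Sketch of index-then-retrieve over a workbook, instead of full ingestion.
    # Illustrative only: the index fields and matching heuristic are assumptions.
    import re
    from openpyxl import load_workbook

    # Captures sheet names referenced in formulas: Sheet2!A1 or 'My Sheet'!A1
    CROSS_SHEET = re.compile(r"(?:'([^']+)'|\b([A-Za-z_][A-Za-z0-9_]*))!")

    def index_workbook(path):
        """One scan pass, keeping only lightweight metadata per sheet."""
        wb = load_workbook(path, data_only=False)
        index = {}
        for ws in wb.worksheets:
            headers = [c.value for c in ws[1] if c.value is not None]
            refs = set()
            for row in ws.iter_rows():
                for cell in row:
                    if isinstance(cell.value, str) and cell.value.startswith("="):
                        refs.update(a or b for a, b in CROSS_SHEET.findall(cell.value))
            index[ws.title] = {"headers": headers, "rows": ws.max_row, "refs": refs}
        return wb, index

    def retrieve(wb, index, query_terms):
        """Pull only the sheets whose headers match the query, plus one hop of
        formula dependencies, so cross-sheet logic stays traceable."""
        selected = {
            title for title, meta in index.items()
            if any(t.lower() in str(h).lower()
                   for t in query_terms for h in meta["headers"])
        }
        for title in list(selected):
            selected |= index[title]["refs"] & set(index)
        return {t: wb[t] for t in sorted(selected)}
    ```

    The point of the pattern is that a model only ever sees the retrieved slice, which is what keeps token usage bounded as workbook and cell counts grow.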

    Two internal use cases illustrate the practical impact. In engagement documentation, PwC teams work with large, nominally standardised workbooks that document business processes and controls. In practice, these files vary significantly — column names shift, fields appear in different orders, structures change between engagements. The agent handles this in two stages: first mapping the workbook’s structure, then extracting specific details using targeted retrieval rather than brute-force ingestion.
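
    Stage one, structure mapping, can be pictured as reconciling each engagement’s drifting headers against a canonical schema before any extraction runs. A minimal sketch follows; the schema, synonym lists, and matching threshold are invented for illustration.

    ```python
    # Stage one of a two-stage approach: map variant column names onto a
    # canonical schema so stage-two retrieval can target stable field names.
    # The schema and synonym lists are invented examples.
    from difflib import get_close_matches

    CANONICAL = {
        "control_id":  ["control id", "ctrl ref", "control reference"],
        "owner":       ["owner", "process owner", "responsible party"],
        "test_result": ["test result", "outcome", "testing conclusion"],
    }

    def map_structure(observed_headers):
        """Map whatever headers this engagement's workbook uses onto the schema."""
        mapping = {}
        for field, synonyms in CANONICAL.items():
            matches = [h for h in observed_headers
                       if get_close_matches(h.lower(), synonyms, cutoff=0.6)]
            if matches:
                mapping[field] = matches[0]   # stage two extracts via this map
        return mapping

    # Column names and order differ between engagements; the mapping still holds.
    print(map_structure(["Ctrl Ref", "Responsible Party", "Testing Conclusion"]))
    # {'control_id': 'Ctrl Ref', 'owner': 'Responsible Party',
    #  'test_result': 'Testing Conclusion'}
    ```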

    In risk assessment, the agent replaces what was previously weeks of custom development work. Each new set of files could break existing programmatic approaches due to formatting variations. The agent indexes and extracts directly, regardless of these inconsistencies. PwC reports that what previously required weeks of configuration can now be completed in hours.

    The ROI Connection

    The timing of this announcement is worth noting. Earlier this month, maddaisy examined PwC’s own 2026 Global CEO Survey, which found that 56% of chief executives could not point to measurable revenue gains from their AI investments. Only 12% reported achieving both revenue growth and cost reduction from AI programmes.

    The spreadsheet agent is, in a sense, PwC’s answer to its own data. Rather than pursuing the kind of ambitious, organisation-wide AI transformation that the survey suggests most companies are failing at, this tool targets a specific, bounded problem: making AI useful where decisions actually get made. Spreadsheets are unglamorous, but they remain the substrate of enterprise decision-making across every industry. If AI cannot work reliably with them, the ROI gap that PwC’s own research documented will persist.

    Matt Wood, PwC’s Commercial Technology and Innovation Officer, was notably direct about the origin: “This didn’t start as a research project. It started because our teams were spending weeks manually tracing logic through workbooks that no existing tool could handle.”

    A Broader Pattern: Consulting Firms as Technology Builders

    This development fits a pattern that maddaisy has been tracking across the consulting industry. Firms are not merely advising clients on AI — they are building proprietary capabilities that change the economics of their own delivery. McKinsey’s 25,000 AI agents. Accenture’s ongoing automation of delivery operations. Now PwC, with a tool that converts weeks of manual work into hours.

    The competitive implications are significant. A firm that can process complex financial workbooks in hours rather than weeks can bid more aggressively on engagements, take on more work with the same headcount, and offer the outcome-based pricing models that clients increasingly prefer. The spreadsheet agent is not just a productivity tool — it is a structural advantage in the shifting economics of professional services.

    What Practitioners Should Watch

    For consultants and enterprise leaders, the PwC announcement carries a practical message: the AI value gap may start closing not through headline-grabbing deployments, but through targeted tools that tackle specific bottlenecks in existing workflows.

    The broader FP&A landscape is moving in the same direction. IBM’s 2026 analysis of financial planning trends highlights that 69% of CFOs now consider AI integral to their finance transformation strategy, with the primary applications centring on data ingestion, budget analysis, and narrative generation — precisely the kind of spreadsheet-adjacent work that PwC’s agent addresses.

    The question is no longer whether AI can handle enterprise data complexity. It is whether organisations will deploy these capabilities against the right problems — the mundane, time-intensive, precision-critical workflows where the return on investment is most measurable and most immediate.

    PwC appears to have started there. Given the firm’s own data on the AI ROI crisis, that is arguably the most credible place to begin.

  • America’s Fragmented State-by-State AI Regulation Is Creating a Compliance Labyrinth

    When maddaisy examined the shift from AI principles to penalties earlier this month, the United States’ growing patchwork of state-level AI laws featured as one of three converging forces. That brief mention understated the scale of what is now unfolding. With more than 1,000 AI-related bills introduced across all 50 states in 2025 alone, the US is rapidly building the most fragmented AI regulatory landscape of any major economy. The federal government wants to stop the fragmentation, but it may lack the tools to do so.

    The federal-state standoff

    The tension is now explicit. In December 2025, the White House released a framework titled “Ensuring a National Policy Framework for Artificial Intelligence”, arguing that diverging state AI rules risk creating a harmful compliance patchwork that undermines American competitiveness. An accompanying fact sheet went further, suggesting the administration may consider litigation and funding mechanisms to counter state AI regimes it considers obstructive.

    This follows Executive Order 14179, issued in January 2025, which rescinded the Biden administration’s 2023 AI order and established a competitiveness-first federal posture. The July 2025 AI Action Plan outlined more than 90 measures to support AI infrastructure and innovation. The message from Washington is consistent: the federal government wants to set the rules, and it wants states to fall in line.

    The states, however, are not listening.

    Four approaches, one country

    What makes the US situation distinctive is not simply that states are regulating AI — it is that they are doing so in fundamentally different ways. Four models are emerging, each with different implications for organisations operating across state lines.

    Colorado: the comprehensive framework. Colorado’s AI Act (SB 24-205) remains the most ambitious state-level attempt at broad AI governance. It requires deployers of high-risk AI systems to conduct impact assessments, provide consumer transparency, and exercise reasonable care against algorithmic discrimination. Although enforcement was delayed to June 2026 through SB25B-004, the statutory framework is intact. Legal analysts note that the delay is strategic, not a retreat — Colorado is buying time to refine its approach while preserving regulatory leverage.

    California: regulation through existing powers. Rather than passing a single comprehensive AI act, California is weaving AI governance into its existing regulatory apparatus. The California Privacy Protection Agency finalised new regulations in September 2025 covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), with phased effective dates running through January 2027. The ADMT obligations are particularly significant: they convert what many organisations treat as aspirational governance practices — impact assessments, decision transparency, audit trails — into enforceable system design requirements.

    Washington: the multi-bill expansion. Washington has taken a fragmented-by-design approach, advancing separate bills for high-risk AI (HB 2157), transparency (HB 1168), training data regulation (HB 2503), and consumer protection in automated systems (SB 6284). The logic is pragmatic: AI risk does not fit into a single statutory box, so Washington is addressing it through multiple, targeted legislative vehicles. For compliance teams, this means tracking not one law but several, each with its own scope and requirements.

    Texas: government-first regulation. The Responsible AI Governance Act (HB 149), effective since January 2026, governs AI use within state government, prohibits certain high-risk applications, and establishes an advisory council. It is narrower than the Colorado and California approaches but signals that even politically conservative states see a role for AI regulation — just a different one.

    The preemption problem

    The White House clearly wants federal preemption — a single national framework that would override state laws. But there are reasons to doubt this will materialise quickly, if at all.

    First, there is no comprehensive federal AI legislation on the table. The administration’s tools are executive orders and agency guidance, which carry less legal weight than statute and are vulnerable to court challenges. Second, the pattern from privacy law is instructive: the US still lacks a federal privacy law, and state laws like the California Consumer Privacy Act have effectively set national standards by default. AI regulation appears to be following the same trajectory.

    Third, states have institutional momentum. According to the National Conference of State Legislatures, all 50 states, Washington DC, and US territories introduced AI legislation in 2025. That level of legislative activity does not simply stop because the federal government asks it to. As one legal analysis from The Beckage Firm observes, states are not ignoring the federal signal — they are interpreting it differently, rolling out requirements more slowly or in smaller pieces, but maintaining control rather than ceding it.

    What this means for organisations

    For any company deploying AI across multiple US states — which is to say, most enterprises of any scale — the practical implications are significant and immediate.

    Multi-jurisdictional compliance is now unavoidable. Organisations cannot build a single AI governance programme around one state’s requirements and assume it covers the rest. Colorado’s impact assessment obligations differ from California’s ADMT rules, which differ from New York City’s bias audit requirements for automated employment decision tools (AEDTs). Each demands different documentation, different processes, and in some cases, different technical controls.

    Effective dates are stacking up. Colorado’s enforcement begins June 2026. California’s ADMT obligations take full effect January 2027. New York City’s AEDT rules are already being actively enforced. Texas’s government-focused requirements are live now. Organisations treating these as isolated events rather than overlapping compliance waves risk being caught unprepared. The sketch after these implications lays those dates out as a single timeline.

    The privacy playbook applies. Companies that built adaptable privacy programmes — mapping data flows, documenting processing purposes, conducting impact assessments — when GDPR and the CCPA emerged are better positioned than those that treated each regulation as a separate exercise. The same principle holds for AI governance. The organisations that will manage this landscape most effectively are those building flexible frameworks capable of mapping risk assessments, accountability structures, and documentation across multiple jurisdictions simultaneously.
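
    Those dates read very differently as one dataset than as separate calendar entries. A minimal sketch of such a timeline view, using the dates cited above (day-level precision here is illustrative; the statutes and rules specify the exact dates):

    ```python
    # Hypothetical deadline tracker over the compliance waves named above.
    # Dates are as cited in this article; exact statutory dates may differ.
    from datetime import date

    WAVES = [
        (date(2023, 7, 5),  "New York City", "AEDT bias audit enforcement"),
        (date(2026, 1, 1),  "Texas",         "HB 149 requirements live"),
        (date(2026, 6, 30), "Colorado",      "AI Act enforcement begins"),
        (date(2027, 1, 1),  "California",    "ADMT obligations in full effect"),
    ]

    def compliance_timeline(today: date = date(2026, 3, 1)):
        """Print every obligation on one timeline, flagging what is already live."""
        for effective, jurisdiction, obligation in sorted(WAVES):
            gap = (effective - today).days
            status = "LIVE" if gap <= 0 else f"in {gap} days"
            print(f"{effective}  {jurisdiction:<13}  {obligation}  [{status}]")

    compliance_timeline()
    ```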

    The consulting opportunity — and obligation

    For consultants and practitioners, the US regulatory fragmentation creates both demand and responsibility. Demand, because multi-state AI compliance is genuinely complex and most organisations lack in-house expertise to navigate it. Responsibility, because the temptation to oversimplify — to sell a single “AI compliance solution” that claims to cover all jurisdictions — is real and should be resisted. The details matter, and they vary state by state.

    As maddaisy has noted in covering shadow AI governance challenges and the agentic AI governance gap, the operational machinery for AI oversight is still immature in most enterprises. Adding a fragmented regulatory landscape on top of that immaturity does not just increase cost — it increases the likelihood that organisations will get something materially wrong.

    The US may eventually get a federal AI framework. But the states are not waiting, and neither should the organisations that operate in them. The compliance labyrinth is already being built, one statehouse at a time.

  • OpenAI’s Frontier Alliance Confirms What Consultants Already Knew: AI Vendors Cannot Scale Alone

    OpenAI announced on 23 February that it has formed multi-year “Frontier Alliances” with McKinsey, Boston Consulting Group, Accenture, and Capgemini. The four firms will help sell, implement, and scale OpenAI’s Frontier platform — an enterprise system for building, deploying, and governing AI agents across an organisation’s technology stack.

    For readers who have been following maddaisy’s coverage of the consulting industry’s AI pivot, this is not a surprise. It is the logical next step in a pattern that has been building for months — and it tells us more about the limits of AI vendors than about the ambitions of consulting firms.

    The vendor cannot scale alone

    The most revealing line in the announcement came from Capgemini’s chief strategy officer, Fernando Alvarez: “If it was a walk in the park, OpenAI would have done it by themselves, so it’s recognition that it takes a village.”

    That candour is worth pausing on. OpenAI’s enterprise business accounts for roughly 40% of revenue, with expectations of reaching 50% by the end of the year. The company has already signed enterprise deals with Snowflake and ServiceNow this year and appointed Barret Zoph to lead enterprise sales. Yet it still needs consulting firms — with their existing client relationships, implementation expertise, and organisational change capabilities — to get its technology into production at scale.

    This is not a story about OpenAI’s generosity in sharing the enterprise market. It is an admission that the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. As maddaisy reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. The technology is not the bottleneck. Integration, governance, and organisational readiness are.

    A clear division of labour

    The alliance structure reveals how OpenAI sees the enterprise AI value chain. McKinsey and BCG are positioned as strategy and operating model partners — helping leadership teams determine where agents should be deployed and how workflows need to be redesigned. BCG CEO Christoph Schweizer noted that AI must be “linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives.”

    Accenture and Capgemini take the systems integration role: data architecture, cloud infrastructure, security, and the unglamorous work of connecting Frontier to the CRM platforms, HR systems, and internal tools that enterprises actually run on. Each firm is building dedicated practice groups and certifying teams on OpenAI technology. OpenAI’s own forward-deployed engineers will sit alongside them in client engagements.

    This two-tier model — strategy at the top, integration at the bottom — maps neatly onto the consulting industry’s existing hierarchy. It also creates a clear dependency: OpenAI provides the platform, the consultancies provide the last mile.

    The maddaisy continuity thread

    This announcement intersects with several stories maddaisy has been tracking. When we examined McKinsey’s 25,000 AI agent deployment, the question was whether the firm’s aggressive internal build-out was a first-mover advantage or an expensive experiment. The Frontier Alliance suggests McKinsey is now positioning that internal capability as a credential — evidence that it can deploy agentic AI at scale, which it can now offer to clients through the OpenAI partnership.

    Similarly, when maddaisy covered the shift from billable hours to outcome-based consulting, the question was how firms would make the economics work. Vendor alliances like this provide part of the answer: the consulting firm brings the implementation expertise, the AI vendor provides the platform, and the client pays for outcomes rather than hours. The risk is shared across the chain.

    And Capgemini’s dual bet — adding 82,300 offshore workers while simultaneously investing in AI — now makes more strategic sense. The offshore delivery capacity is precisely what is needed to operationalise Frontier at enterprise scale. The bodies and the bots are not competing; they are complementary.

    The SaaS vendors should be nervous

    As Fortune noted, the Frontier Alliance creates a specific tension for established software-as-a-service vendors. Salesforce, Microsoft, Workday, and ServiceNow all depend on these same consulting firms to market and deploy their products. Now those consultants will also be actively promoting an alternative platform — one that positions itself as a “semantic layer” sitting above the traditional SaaS stack.

    The consulting firms are not choosing sides. They are hedging. Accenture, for instance, signed a multi-year partnership with Anthropic in December 2025 and is now a Frontier Alliance member. The firms will sell whichever platform best fits a given client’s needs, which gives them leverage over the AI vendors rather than the other way around.

    For the SaaS incumbents, however, having McKinsey and BCG actively evangelise an AI-native alternative to C-suite buyers is a development they will not welcome. Investor anxiety in this space is already elevated — shares of several enterprise software companies have been punished over concerns that customers will choose AI-native platforms over traditional offerings.

    What to watch

    The Frontier Alliance is a partnership announcement, not a set of outcomes. The real test is whether this model — AI vendor plus consulting firm — can close the deployment gap that has kept enterprise AI adoption stubbornly below expectations.

    Three things matter from here. First, whether the certified practice groups produce measurably better outcomes than the piecemeal implementations enterprises have been attempting on their own. Second, whether Frontier’s “semantic layer” architecture genuinely simplifies agent deployment or simply adds another platform layer to an already complex stack. And third, whether the consulting firms’ simultaneous alliances with competing AI vendors — OpenAI, Anthropic, Google — create genuine client value or just a more complicated sales cycle.

    For practitioners, the immediate signal is clear: the enterprise AI market is consolidating around a vendor-plus-integrator model. If your organisation is planning an agentic AI deployment, the question is no longer which model to use. It is which combination of platform, integrator, and operating model redesign will actually get agents into production — and keep them there.

  • Agentic AI Drift: The Silent Production Risk No One Is Measuring

    When maddaisy examined the agentic AI governance gap last week, the focus was on a structural mismatch: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. That gap remains wide. But a more specific — and arguably more dangerous — operational risk is now coming into focus: agentic AI systems do not fail suddenly. They drift.

    A recent analysis published by CIO makes the case plainly. Unlike earlier generations of AI, which tend to produce identifiable errors — a wrong classification, a hallucinated fact — agentic systems degrade gradually. Their behaviour evolves incrementally as models are updated, prompts are refined, tools are added, and execution paths adapt to real-world conditions. For long stretches, everything appears fine. KPIs hold. No alarms fire. But underneath, the system’s risk posture has already shifted.

    The Problem with Demo-Driven Confidence

    Most organisations still evaluate agentic AI the way they evaluate any software feature: through demonstrations, curated test scenarios, and human judgment of output quality. In controlled settings, this looks adequate. Prompts are fresh, tools are stable, edge cases are avoided, and execution paths are short and predictable.

    Production is different. Prompts evolve. Dependencies fail intermittently. Execution depth varies. New behaviours emerge over time. Research from Stanford and Harvard has examined why many agentic systems perform convincingly in demonstrations but struggle under sustained real-world use — a gap that grows wider the longer a system runs.

    The result is a pattern that will be familiar to anyone who has managed complex software in production: a system passes all its review gates, earns early trust, and then becomes brittle or inconsistent months later, without any single change that clearly broke it. The difference with agentic AI is that the degradation is harder to detect, because the system’s outputs can still look reasonable even as the reasoning behind them has shifted.

    What Drift Actually Looks Like

    The CIO analysis includes a telling case study from a credit adjudication pilot. An agent designed to support high-risk lending decisions initially ran an income verification step consistently before producing recommendations. Over time, a series of small, individually reasonable changes — prompt adjustments for efficiency, a new tool for an edge case, a model upgrade, tweaked retry logic — caused the verification step to be skipped in 20 to 30 per cent of cases.

    No single run produced an obviously wrong result. Reviewers often agreed with the recommendations. But the way the agent arrived at those recommendations had fundamentally changed. In a credit context, that difference carries real financial and regulatory consequences.

    This is the nature of agentic drift: it is not a bug. It is the predictable outcome of complex, adaptive systems operating in changing environments. Two executions of the same agent with the same inputs can legitimately differ — that stochasticity is inherent to how modern agentic systems work. But it also means that point-in-time evaluation, one-off tests, and spot checks are structurally insufficient for production risk management.
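
    What such a diagnostic looks like can be sketched. Assuming each agent run emits a structured trace (an ordered list of executed step names; the trace format, baseline, and thresholds below are all hypothetical, not taken from the CIO case study), a rolling check on how often a required step actually fires would surface this kind of drift long before an audit does:

    ```python
    # Sketch of a rolling behavioural check. Trace format, baseline, and
    # thresholds are hypothetical illustrations.
    from collections import deque

    REQUIRED_STEP = "income_verification"
    BASELINE_RATE = 0.99   # how often the step fired during the pilot
    TOLERANCE = 0.05       # allowable drop before flagging
    WINDOW = 200           # runs per rolling window

    recent = deque(maxlen=WINDOW)

    def record_run(trace):
        """trace: ordered list of step names the agent executed for one request."""
        recent.append(REQUIRED_STEP in trace)
        if len(recent) == WINDOW:
            rate = sum(recent) / WINDOW
            if rate < BASELINE_RATE - TOLERANCE:
                alert(rate)

    def alert(rate):
        print(f"DRIFT: {REQUIRED_STEP} ran in {rate:.0%} of the last {WINDOW} "
              f"runs (baseline {BASELINE_RATE:.0%})")
    ```

    A 20 to 30 per cent skip rate, as in the case above, would trip this check within a single window; the point is that nothing in the agent’s outputs would.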

    From Policy to Diagnostics

    When maddaisy covered the shadow AI governance challenge earlier this month, one theme was clear: governance frameworks are necessary but not sufficient. They define ownership, policies, escalation paths, and controls. What they often lack is an operational mechanism to answer a deceptively simple question: has the agent’s behaviour actually changed?

    Without that evidence, governance operates in the dark. Policy defines what should happen. Diagnostics establish what is actually happening. When measurement is absent, controls develop blind spots in precisely the live systems where agentic risk tends to accumulate.

    The Cloud Security Alliance has begun framing this as “cognitive degradation” — a systemic risk that emerges gradually rather than through sudden failure. Carnegie Mellon’s Software Engineering Institute has similarly emphasised the need for continuous testing and evaluation discipline in complex AI-enabled systems, drawing parallels to how other high-risk software domains manage operational risk.

    What Practitioners Should Watch For

    The emerging consensus points toward several operational principles for managing agentic drift:

    Behavioural baselines over output checks. No single execution is representative. What matters is how behaviour shows up across repeated runs under similar conditions. Organisations need to establish baselines — not for what an agent should do in the abstract, but for how it has actually behaved under known conditions — and then monitor for sustained deviations.

    Separate configuration changes from behavioural evidence. Prompt updates, tool additions, and model upgrades are important signals, but they are not evidence of drift on their own. What matters is persistence: transient deviations are often noise in stochastic systems, while sustained behavioural shifts across time and conditions are where risk begins to emerge (a minimal persistence rule is sketched after these principles).

    Treat agent behaviour as an operational signal. Internal audit teams are asking new questions about control and traceability. Regulators are paying closer attention to AI system behaviour. Platform teams are under growing pressure to demonstrate stability in live environments. “It looked fine in testing” is no longer a defensible operational posture, particularly in sectors — financial services, healthcare, compliance — where subtle behavioural changes carry real consequences.
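
    On the persistence point in the second principle above: a deviation only becomes evidence of drift once it holds across consecutive observation windows. A minimal rule, with illustrative window counts and thresholds:

    ```python
    # Sketch of a persistence rule: deviations count as drift only when they
    # hold across several consecutive windows, filtering one-off noise.
    def sustained_deviation(window_rates, baseline, tolerance=0.05, persistence=3):
        """True once the last `persistence` windows all sit below tolerance."""
        if len(window_rates) < persistence:
            return False
        return all(r < baseline - tolerance for r in window_rates[-persistence:])

    # A single noisy window (0.93) is not actionable; three low windows in a
    # row (0.90, 0.88, 0.91) are.
    rates = [0.99, 0.93, 0.99, 0.90, 0.88, 0.91]
    print(sustained_deviation(rates, baseline=0.99))  # True
    ```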

    The Observability Gap

    This is, ultimately, the next chapter in the governance story maddaisy has been tracking. The first chapter — covered in the enforcement era analysis — was about moving from principles to rules. The second, examined through Deloitte’s enterprise data, was the gap between strategic confidence and operational readiness. This third chapter is more specific and more technical: the gap between having governance frameworks and having the observability infrastructure to make them work.

    The goal is not to eliminate drift. Drift is inevitable in adaptive systems. The goal is to detect it early — while it is still measurable, explainable, and correctable — rather than discovering it through incidents, audits, or post-mortems. Organisations that build this capability will be better positioned to deploy agentic AI at scale with confidence. Those that do not will continue to be surprised by systems that appeared stable, until they were not.

    For consultants advising on enterprise AI deployments, the implication is practical: governance reviews that stop at policy documentation are incomplete. The question to ask is not just whether a client has an AI governance framework, but whether they can tell you how their agents are behaving today compared to three months ago. If the answer is silence, that is where the work begins.

  • The Strategy-Bandwidth Gap: Why CSOs Are Being Sidelined on Their Own AI Agendas

    Ninety-five per cent of chief strategy officers expect AI and technology-led disruption to materially reshape their strategic priorities this year. Yet only 28 per cent co-lead their organisation’s AI-related decisions. That gap — between expecting AI to change everything and actually having the authority to steer it — is the central finding of Deloitte’s 2026 Chief Strategy Officer Survey, and it has implications well beyond the strategy function.

    For maddaisy.com’s consulting audience, the data confirms a pattern that has been building for months. The executives whose job description is literally to chart the company’s future are being outpaced by the very transformation they are supposed to lead.

    The bandwidth problem is structural, not personal

    Deloitte’s survey paints a picture of a role under strain. More than half of CSOs report managing too many priorities with too little time. Approximately half have five or fewer direct reports. Many rely on rotating external support — often consultants — to advance critical initiatives, which ironically consumes more coordination time and leaves less room for the strategic thinking the role demands.

    Despite this, expectations continue to expand. Nearly two-thirds of CSOs now lead cross-functional transformation efforts, and more than half drive enterprise-wide agendas that stretch well beyond the role’s traditional remit. The mandate is growing; the resources are not.

    Only 35 per cent of CSOs say they co-lead or fully own decision-making for their organisation’s top priorities. As Gagan Chawla, Deloitte’s US Business Strategy Practice leader, puts it: strategy leaders should “reimagine their mandate and champion governance that aligns decision-making power with enterprise priorities to help ensure strategy drives value, not just insight.”

    That is a polite way of saying many CSOs are producing analysis that nobody with execution authority is acting on.

    The AI authority gap

    The most striking disconnect is on AI specifically. Nearly every CSO surveyed expects AI to reshape competitive dynamics. Yet the survey finds that only 28 per cent co-lead enterprise AI decisions, and just 16 per cent say their organisation uses AI to fundamentally reimagine lines of business or create new competitive advantages.

    This sits uncomfortably alongside data maddaisy.com has been tracking. PwC’s 2026 CEO Survey found that 56 per cent of chief executives cannot point to measurable revenue gains from AI. If the executives responsible for enterprise strategy are not materially involved in AI decisions, the ROI gap starts to look less like a technology problem and more like an organisational design failure. AI investments are being made by technology teams, approved by finance, and executed by operations — while the people tasked with ensuring these investments align with long-term competitive positioning watch from the sidelines.

    The Deloitte data suggests momentum is building — 51 per cent of CSOs now view AI as a strategic partner that enhances insight and accelerates execution, and 61 per cent report investing in AI literacy. But viewing AI as useful and having authority over its deployment are very different things.

    Where this connects to consulting

    The CSO bandwidth gap has direct consequences for the consulting industry. Strategy firms have traditionally sold to the strategy function — providing the research, analysis, and frameworks that CSOs use to set direction. If those CSOs lack the authority to act on strategic recommendations, the value proposition of strategy consulting shifts.

    This helps explain the trend maddaisy.com examined last week in the shift from billable hours to outcome-based consulting. When McKinsey ties a third of its revenue to measurable business outcomes rather than advisory engagements, it is partly responding to a reality where clients’ strategy functions cannot translate advice into action without hands-on support. The consulting engagement is moving from “here is what you should do” to “let us do it with you” — and that shift is driven not just by AI capabilities but by the operational reality that internal strategy teams are stretched too thin to execute.

    Accenture’s recent disclosure reinforces the point from the technology side. The firm announced in December that it would stop separately reporting advanced AI bookings because AI is now “embedded in some way across nearly everything we do.” With $11.5 billion in AI bookings across 11,000 projects and $4.8 billion in AI revenue, Accenture has reached a point where AI is infrastructure, not a standalone initiative. For CSOs trying to “own” the AI strategy, this creates an additional challenge: when AI permeates every function, there is no single strategy to own.

    Confidence without control

    Perhaps the most telling number in the Deloitte survey is the confidence gap. Seventy-two per cent of CSOs are optimistic about their organisation’s prospects, compared with just 24 per cent who feel the same about the global economy. Strategy leaders believe their companies can win despite external uncertainty — but this confidence sits alongside an admission that they lack the bandwidth and authority to ensure it happens.

    Adam Giblin, Deloitte’s US Chief Strategy Officer Programme lead, frames it directly: “CSOs are navigating a world where uncertainty isn’t episodic, it’s the status quo.” The implication is that strategy can no longer be a periodic exercise — an annual planning cycle punctuated by quarterly reviews. It needs to be continuous, embedded, and authoritative.

    For organisations that have not yet reconciled the CSO’s expanding mandate with actual decision-making power, the path forward is not complicated but it is uncomfortable. It means giving strategy leaders a seat at the AI governance table — not as observers, but as co-owners. It means aligning resource allocation with strategic priorities rather than expecting a team of five to drive enterprise-wide transformation. And it means recognising that the AI ROI gap is, in significant part, a strategy execution gap.

    The companies that close it will not be the ones with the most advanced AI models. They will be the ones where the person responsible for competitive positioning actually has the authority to shape how AI is deployed.

  • Europe’s Cloud Sovereignty Rush Meets Its Regulatory Reality Check

    European sovereign cloud spending is set to nearly double in 2026, from $6.9 billion to $12.6 billion according to Gartner’s latest forecast. Every major US hyperscaler now has a European sovereignty answer. AWS launched its European Sovereign Cloud from Germany in January, backed by a €7.8 billion investment. Google operates through S3NS, a French joint venture with Thales that holds SecNumCloud certification. Microsoft has Delos Cloud in Germany and Bleu in France.

    Yet beneath the flood of partnership announcements and sovereign cloud launches sits a less comfortable truth: the regulatory framework driving all of this activity is still incomplete, sometimes contradictory, and in certain critical areas, stalled entirely. For organisations trying to build disaster plans around Europe’s digital infrastructure, the ground has not stopped moving.

    The regulatory pile-up

    Three major pieces of European regulation now intersect on questions of cloud resilience and digital sovereignty — and none of them align neatly.

    The Digital Operational Resilience Act (DORA), enforceable since January 2025, requires financial institutions to implement comprehensive ICT risk management frameworks, including detailed third-party risk assessments for cloud providers. DORA is specific, prescriptive, and already creating compliance pressure across European banking and insurance.

    The NIS2 directive, enforceable since October 2024, extends similar resilience requirements to a much broader set of critical infrastructure operators — energy, transport, health, and digital infrastructure itself. Where DORA targets financial services, NIS2 casts a wider net but leaves more room for national interpretation, creating an uneven patchwork across EU member states.

    Then there is the European Cybersecurity Certification Scheme for Cloud Services (EUCS), which was supposed to provide a unified standard for assessing cloud security across the EU — including, controversially, sovereignty requirements that would have effectively barred non-EU cloud providers from the highest certification tier. That sovereignty clause was stripped from the latest drafts under intense lobbying pressure. The scheme itself remains unadopted. In January 2026, the European Commission proposed a revised Cybersecurity Act that would overhaul the entire certification framework — effectively resetting the process while organisations wait for clarity that may not arrive before 2027.

    Disaster planning in a regulatory fog

    The practical consequence for enterprises is an uncomfortable paradox. Regulations now require detailed disaster recovery and business continuity plans for cloud-dependent operations. But the certification framework that would define what “sovereign” or “resilient” actually means in practice remains unfinished.

    As maddaisy examined last week, the SAP-Microsoft “break glass” contingency plan illustrates the tension. It offers a theoretical failover for European Azure workloads in a crisis scenario, but analysts questioned whether a disconnected copy of Azure could remain operationally viable beyond a few weeks. The plan satisfies a political need — demonstrating that contingency planning exists — without resolving the deeper technical question of what happens when a severed cloud stops receiving updates.

    Capgemini’s CEO Aiman Ezzat has framed this pragmatically, arguing that Europe has meaningful sovereignty over data, operations, and regulation — but not over the underlying technology stack. The four-layer model he has described reflects the reality most enterprises face: sovereign in governance, dependent on US technology, and now required by law to plan for scenarios where that dependency becomes a liability.

    The hyperscaler response: sovereignty as a service

    The US cloud providers have responded to the regulatory and political pressure with significant investment. AWS’s European Sovereign Cloud, operating from Brandenburg, is architecturally separated from other AWS Regions — a genuine sovereign partition with EU-resident leadership and local operational control. AWS CEO Matt Garman called it a “big bet”, with expansion planned for Belgium, the Netherlands, and Portugal.

    Google’s approach in France, through S3NS (a joint venture where Google holds a minority stake under French law), has achieved SecNumCloud 3.2 qualification — the most demanding sovereignty standard currently in force in Europe. Microsoft’s structure routes through nationally controlled entities: Delos Cloud in Germany and Bleu (co-owned by Capgemini and Orange) in France.

    The pattern across all three is consistent: legal and operational separation, EU-resident personnel, local data residency, and contingency plans for geopolitical disruption. What differs is the depth of that separation. A fully air-gapped partition like Google’s Distributed Cloud offering for defence clients sits at one end of the spectrum. A contractual failover arrangement like the SAP-Microsoft deal sits at the other. Most enterprise workloads will land somewhere in between — and DORA and NIS2 require organisations to understand precisely where.

    What practitioners need to do now

    For consultants and technology leaders navigating this landscape, three priorities stand out.

    First, classify workloads by sovereignty sensitivity before choosing infrastructure. Not every application needs the highest tier of sovereign protection. DORA’s third-party risk requirements are prescriptive but risk-proportionate — a core banking system and an internal collaboration tool do not demand the same level of contingency planning. The trap is treating sovereignty as a binary choice rather than a spectrum.

    Second, build disaster plans around regulatory timelines, not vendor announcements. DORA enforcement is live. NIS2 implementation varies by member state but is progressing. The EUCS framework is stalled, but the underlying requirements it was meant to codify — around data residency, operational control, and access restrictions — are already being enforced through sector-specific regulation and national certification schemes like France’s SecNumCloud. Waiting for a pan-European standard before acting is not a viable compliance strategy.

    Third, pressure-test vendor contingency claims. The proliferation of sovereign cloud offerings and disaster recovery partnerships creates an illusion of completeness. But as Forrester analyst Dario Maisto noted of the SAP-Microsoft plan, many of these arrangements remain untested and legally unproven. “This is not compliance as much as risk management,” he said. Organisations should ask pointed questions about update cycles, hardware dependencies, and the operational lifespan of any disconnected cloud environment.

    The long view

    European digital sovereignty has moved from policy aspiration to market reality faster than the regulatory framework can keep pace. The investment figures are significant — AWS alone is committing €7.8 billion. The compliance deadlines are real. The contingency plans exist, at least on paper.

    But the gap between what regulations require and what certification frameworks define remains open. For organisations building disaster plans today, the most honest assessment is that they are planning against a moving target, using vendor solutions that have never been tested in the crisis scenarios they are designed for. That is not a reason to delay — DORA and NIS2 make delay legally untenable. It is a reason to plan with humility, build in flexibility, and avoid treating any single vendor’s sovereignty narrative as a finished answer.

  • From Billable Hours to Shared Risk: Consulting’s AI-Driven Business Model Shift

    McKinsey’s 25,000 AI agents grabbed the headlines, but the more consequential number is the roughly one-third of the firm’s revenue now tied to outcome-based engagements. Across the industry, AI is not just changing how consultants work – it is rewriting how they get paid.

    When maddaisy.com examined McKinsey’s 25,000 AI agent deployment last week, the focus was on scale: was deploying one digital agent for every 1.6 human employees a first-mover advantage or an expensive experiment? The answer may depend less on the agent count and more on a quieter transformation happening in parallel – the shift from selling time to selling outcomes.

    The Model That Built an Industry Is Under Pressure

    For decades, consulting has run on a straightforward exchange: expertise for time, billed by the hour or the project. It is a model that has produced extraordinary margins, but it carries an inherent misalignment. Consultants profit from the complexity of a problem, not necessarily from solving it quickly.

    AI agents threaten that dynamic directly. When an algorithm can synthesise research in minutes that previously took analysts weeks, the hours-based model starts to look exposed. McKinsey’s own data – 1.5 million hours saved on search and synthesis work – quantifies exactly how much billable time AI has already removed from the equation.

    Rather than watching margins erode, McKinsey is pivoting. Speaking on the HBR IdeaCast in February, CEO Bob Sternfels confirmed that outcome-based engagements – where McKinsey co-invests alongside clients and ties fees to measurable business results – now account for roughly a third of the firm’s revenue. Two years ago, that figure was negligible.

    The Economics Only Work with AI

    This is where the 25,000 agents become strategically coherent. Outcome-based consulting is inherently riskier than fee-for-service; the firm only earns if the client succeeds. To make the economics work, you need two things: lower delivery costs and higher confidence in results.

    AI agents address both. QuantumBlack, McKinsey’s 1,700-person AI division, now drives 40% of the firm’s total work. Non-client-facing headcount has fallen 25%, while output from those teams has risen 10% – the “25 squared” model. The savings create the margin headroom needed to absorb the risk of outcome-based pricing.

    It is not just McKinsey making this calculation. BCG has deployed “forward-deployed consultants” who build AI tools directly within client organisations, effectively embedding methodology as software rather than slides. Capgemini has trained 310,000 employees on generative AI, though its agentic AI bookings only reached 10% of quarterly pipeline by Q4 2025. Accenture has stopped reporting AI bookings separately because, as the firm noted in its Q1 fiscal 2026 results, AI is now embedded in virtually every engagement.

    The Client Side of the Equation

    The timing is not coincidental. As maddaisy.com reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot demonstrate revenue gains from AI. When the client cannot prove value, the consultancy offering to underwrite outcomes holds a powerful negotiating position – essentially saying, “we believe in this enough to stake our fees on it.”

    The irony is that consultancies are asking clients to trust their AI capabilities while the industry’s own track record on AI delivery remains uneven. Deloitte’s own 2026 State of AI report found that 42% of organisations consider their AI strategy “highly prepared” but feel markedly less ready on infrastructure, data governance, and talent.

    McKinsey faces this credibility gap from a different direction. The firm’s State of AI research found that only 5% of companies globally see AI hitting their bottom line. Positioning itself as the firm that can deliver measurable outcomes means McKinsey is, in effect, claiming to solve a problem that its own research says almost no one has solved.

    What Changes for Practitioners

    For consultants and the organisations that hire them, three practical implications stand out.

    Procurement shifts. If outcome-based pricing becomes the norm, procurement teams will need to evaluate consulting engagements more like joint ventures than service contracts. That means assessing the firm’s AI capabilities, data infrastructure, and delivery methodology – not just the partner’s credentials and the day rate.

    The talent model is splitting. Sternfels has been explicit that McKinsey wants “great consultants and/or great technologists, groomed to be both.” The traditional path – from analyst to associate to engagement manager – now runs alongside a technical track where consultants build and deploy AI systems. BCG’s vibe-coding consultants are an early version of this hybrid role.

    Governance becomes shared. When a consultancy co-invests in outcomes and deploys AI agents within a client’s operations, the governance question becomes bilateral. As maddaisy.com has covered extensively, the gap between AI deployment speed and governance maturity is already the defining risk of 2026. Outcome-based models widen this gap further, because neither party has clear precedent for who owns the risk when an AI agent produces flawed analysis that drives a business decision.

    The Bigger Picture

    The consulting industry has weathered previous disruptions – offshoring, automation, the rise of in-house strategy teams – by evolving its value proposition. This time, the change is structural. AI agents do not just reduce the cost of delivering advice; they make it possible to charge for results instead.

    McKinsey’s 25,000 agents are best understood not as a technology deployment but as a financial instrument – the infrastructure that underwrites a new revenue model. Whether the model works depends on something no agent can automate: whether clients actually achieve the outcomes both parties are betting on.

  • Shadow AI and the Year-Two Governance Challenge

    In more than a third of organisations, employees have already used AI tools without the organisation’s knowledge or permission. As enforcement deadlines tighten, the gap between what staff are doing and what leadership has sanctioned is becoming one of enterprise AI’s most urgent — and most overlooked — governance problems.

    The term “shadow AI” describes AI tools adopted by employees outside official channels — free-tier large language models used to draft client communications, image generators repurposed for internal presentations, coding assistants plugged into development environments without security review. It is the AI equivalent of shadow IT, but with critical differences: the data exposure risks are significantly higher, and adoption is faster than anything IT departments have previously managed.

    According to the State of Information Security Report 2025 from ISMS.online, 37% of organisations surveyed said employees had already used generative AI tools without organisational permission or guidance. A further 34% identified shadow AI as a top emerging threat for the next 12 months. These are not projections about some distant risk — they describe what is happening now, in organisations that thought they had AI under control.

    The year-two reckoning

    The pattern is familiar to anyone who lived through the early cloud adoption cycle, but compressed into a fraction of the time. Year one — roughly 2024 into early 2025 — was characterised by experimentation. Departments trialled AI tools, leaders encouraged innovation, and governance was deferred in favour of speed. The implicit message in many organisations was: try things, move fast, we will sort out the rules later.

    Year two is when “later” arrives. And the numbers suggest most organisations are not ready for it. The same ISMS.online report found that 54% of respondents admitted their business had adopted AI technology too quickly and was now facing challenges in scaling it back or implementing it more responsibly. That figure represents a majority of enterprises acknowledging, in effect, that they have accumulated governance debt they do not yet know how to service.

    This aligns with a pattern maddaisy has been tracking across several dimensions. Deloitte’s 2026 State of AI report revealed that organisations report growing strategic confidence in AI but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute responsibly. Shadow AI is the ground-level expression of that paradox: strategy says “adopt AI,” but operations never built the guardrails to manage what adoption actually looks like across thousands of employees making independent tool choices every day.

    Why traditional IT governance falls short

    The instinct in many organisations is to treat shadow AI like shadow IT — block unsanctioned tools, enforce approved vendor lists, and route everything through procurement. That approach, while understandable, misses what makes AI different.

    When an employee uses an unapproved project management tool, the risk is primarily operational: data silos, integration headaches, wasted licences. When an employee pastes client data, financial projections, or intellectual property into a free-tier language model, the risk is fundamentally different. That data may be used for model training, stored in jurisdictions with different privacy regimes, or exposed through security vulnerabilities the organisation has no visibility into.

    As CIO.com recently noted, boards are increasingly recognising that AI is not waiting for permission — it is already shaping decisions through vendor systems, employee-adopted tools, and embedded algorithms that grow more powerful without explicit organisational consent. The question for governance teams is not whether employees are using unsanctioned AI. It is how much sensitive data has already left the building.

    The regulatory dimension

    Shadow AI also creates a specific compliance exposure that many organisations have not fully mapped. As maddaisy examined last week, the EU AI Act reaches its most consequential enforcement milestone in August 2026, with requirements for high-risk AI systems carrying penalties of up to €35 million or 7% of global annual turnover. Colorado’s AI Act takes effect in June 2026 with its own set of requirements around algorithmic discrimination and impact assessments.

    The compliance challenge with shadow AI is that an organisation cannot demonstrate responsible use of systems it does not know exist. If an employee in a hiring function uses an unsanctioned AI tool to screen CVs, or a financial analyst uses a free language model to generate risk assessments, the organisation may be deploying high-risk AI — as defined by regulators — without any of the documentation, monitoring, or impact assessment that compliance requires.

    This is not a theoretical concern. It is a direct consequence of the gap between adoption speed and governance maturity that maddaisy has documented across the enterprise AI landscape.

    What a pragmatic response looks like

    The organisations handling shadow AI most effectively are not the ones deploying the heaviest restrictions. They are the ones that recognised early that prohibition does not work when the tools are free, browser-based, and genuinely useful.

    A pragmatic governance approach has several components. First, visibility: understanding what AI tools employees are actually using, through network monitoring, surveys, and — critically — creating an environment where people feel safe disclosing their usage rather than hiding it. Second, clear usage policies that distinguish between acceptable and unacceptable use cases, specifying what data categories must never enter external AI systems regardless of the tool’s provenance (a minimal enforcement sketch follows this list).

    Third, an approved toolset that is genuinely competitive with the free alternatives. One of the most common drivers of shadow AI is that official enterprise AI tools are slower, more restricted, or simply worse than what employees can access on their own. If the sanctioned option requires a three-week procurement process while ChatGPT is a browser tab away, governance has already lost.

    Fourth, an established framework. ISO 42001 provides a structured approach to establishing an AI management system, covering policy, roles, impact assessment, and continuous improvement through the familiar Plan-Do-Check-Act cycle. It is not a silver bullet, but it offers a starting point for organisations that currently have no systematic approach to AI governance beyond hoping the problem does not escalate.
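
    The “never enter external AI systems” rule in the second component only bites if it is enforceable at the point of use. Below is a deliberately simplified sketch of such a gate; the patterns are toy examples, and production data loss prevention controls are far broader.

    ```python
    # Minimal sketch of a data-category gate applied before any prompt leaves
    # for an external model. Patterns are simplified examples only.
    import re

    BLOCKED = {
        "uk_account_number": re.compile(r"\b\d{8}\b"),
        "sort_code":         re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
        "email":             re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def gate(prompt):
        """Return the categories that make this prompt unsafe to send externally."""
        return [name for name, pattern in BLOCKED.items() if pattern.search(prompt)]

    violations = gate("Draft a reply to jo@example.com about sort code 12-34-56")
    if violations:
        print("Blocked before submission:", violations)  # ['sort_code', 'email']
    ```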

    The window is narrowing

    The uncomfortable reality for many enterprises is that shadow AI has already created facts on the ground. Data has been shared with external models. Workflows have been built around unsanctioned tools. Employees have integrated AI into their daily routines in ways that would be disruptive to simply switch off.

    The year-two governance challenge is not about preventing AI adoption — that ship has sailed. It is about catching up with what has already happened, building the visibility and policy infrastructure to manage it going forward, and doing so before enforcement deadlines turn a governance gap into a compliance crisis.

    For consultants and practitioners advising organisations through this transition, the message is straightforward: audit first, policy second, technology third. The organisations that will navigate this best are not the ones with the most sophisticated AI strategies. They are the ones that know, concretely and completely, what AI is actually being used within their walls — and by whom.

  • Lloyds Banking Group’s £100 Million AI Bet: What the UK’s First Agentic Financial Assistant Means for Enterprise AI

    Lloyds Banking Group expects its artificial intelligence programme to deliver more than £100 million in value this year — double the £50 million it attributes to generative AI in 2025. The figures, disclosed alongside the group’s annual results in January 2026, represent one of the more concrete attempts by a major financial institution to attach a number to what AI is actually worth.

    That specificity matters. As maddaisy examined last week, PwC’s 2026 Global CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. Lloyds is not claiming to have solved the ROI puzzle entirely, but it is doing something most enterprises have not: publishing the numbers and tying them to specific operational improvements rather than vague promises of transformation.

    The financial assistant: what it actually does

    The headline initiative is a customer-facing AI financial assistant, which Lloyds describes as the first agentic AI tool of its kind offered by a UK bank. Announced in November 2025 and scheduled for public rollout in early 2026, the assistant sits within the Lloyds mobile app and is designed to help customers manage spending, savings, and investments through natural conversation.

    The system uses a combination of generative AI for its conversational interface and agentic AI to process requests and execute actions. In practical terms, a customer can query a payment, ask for a spending breakdown, or request guidance on savings options — and the assistant will interpret the request, plan the necessary steps, and carry them out. Where it reaches the limits of what automated support can handle, it refers users to human specialists.
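    Lloyds has not published the assistant’s architecture, so the following is only a generic sketch of the pattern its description implies: a conversational layer classifies the request, an agent executes it through an allowlist of bank-side actions, and anything out of scope is referred to a human. Every name in it (classify_intent, TOOLS, the canned responses) is a hypothetical stand-in, not Lloyds’ API.

    ```python
    from typing import Callable

    # Hypothetical action registry: the agent may only execute intents
    # that appear here; everything else escalates to a human specialist.
    def spending_breakdown(request: str) -> str:
        return "This month: groceries 32%, transport 18%, subscriptions 9%."

    def query_payment(request: str) -> str:
        return "Your last card payment was £42.00 to ACME Ltd on 3 March."

    TOOLS: dict[str, Callable[[str], str]] = {
        "spending_breakdown": spending_breakdown,
        "query_payment": query_payment,
    }

    def classify_intent(request: str) -> str:
        """Stand-in for the generative model's intent classification."""
        text = request.lower()
        if "spending" in text:
            return "spending_breakdown"
        if "payment" in text:
            return "query_payment"
        return "unknown"

    def handle(request: str) -> str:
        tool = TOOLS.get(classify_intent(request))
        if tool is None:
            # The referral path Lloyds describes: out-of-scope requests go
            # to a human rather than being answered by the model anyway.
            return "Let me connect you to a specialist who can help with that."
        return tool(request)

    print(handle("Can I see a spending breakdown?"))
    print(handle("Help me choose a mortgage."))
    ```

    The design point worth noting is the explicit allowlist: because the agent can only act through registered tools, the human-referral boundary is enforced in code rather than left to model behaviour, which is the sort of control a regulator would expect to see documented.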

    The scope is intended to expand. Lloyds has said the assistant will eventually cover its full product suite, from mortgages to car finance to protection products, serving its 28 million customer accounts across the Lloyds, Halifax, Bank of Scotland, and Scottish Widows brands.

    Testing at scale, not in a lab

    Before public launch, Lloyds tested the assistant with approximately 7,000 employees, who collectively completed around 12,000 trials. That is a meaningful pilot — large enough to surface edge cases and failure modes that a controlled lab environment would miss, and conducted with users who understand the bank’s products well enough to stress-test the system’s accuracy.

    The employee testing sits alongside a broader internal AI deployment that has already delivered measurable results. Athena, the group’s AI-powered internal search assistant, is used by 20,000 colleagues and has reduced information search times by 66%. GitHub Copilot, deployed to 5,000 engineers, has driven a 50% improvement in code conversion for legacy systems. An AI-powered HR assistant resolves 90% of queries correctly on the first attempt.

    These are not experimental pilots. They are production tools used at scale, and the fact that Lloyds is willing to attach specific performance metrics to each one distinguishes its approach from the many enterprises that describe AI impact in qualitative terms only.

    The ROI question: credible or convenient?

    The £100 million figure invites scrutiny, and it should. “Value” in corporate AI disclosures is notoriously slippery — it can mean cost savings, time savings converted to a monetary equivalent, revenue uplift, or some combination of all three. Lloyds has not published a detailed methodology for how it arrived at the £50 million figure for 2025 or how it projects the 2026 target.

    That said, the bank’s approach has features that lend it more credibility than many comparable claims. The internal tools have named user populations and specific performance benchmarks. The customer-facing assistant was tested with thousands of employees before launch, not unveiled as a concept. And the 2025 figure is presented as a delivered outcome, not a forecast — a distinction that matters when most enterprises are still struggling to prove any return at all.

    Lloyds also rose 12 places in the Evident AI Global Index last year — the strongest improvement of any UK bank — suggesting that external assessors see substance behind the claims.

    Agentic AI in financial services: the governance dimension

    The move to customer-facing agentic AI in banking raises governance questions that go beyond what internal productivity tools require. As maddaisy explored earlier this week, Deloitte’s 2026 AI report found that only one in five enterprises has a mature governance model for agentic systems. When those systems move from internal search assistants to customer-facing financial advice, the stakes escalate considerably.

    A banking AI that can execute transactions, provide savings guidance, and eventually handle mortgage queries operates in regulated territory. The Financial Conduct Authority’s expectations around suitability, fair treatment, and clear communication apply regardless of whether the advice comes from a human or an algorithm. Lloyds has acknowledged this by building in human referral pathways, but the real test will come at scale — when millions of customers interact with the system simultaneously, and edge cases multiply.

    Ron van Kemenade, the group’s chief operating officer, has framed the launch as “a pivotal step in our strategy as we continue to reimagine the Group for our customers and colleagues.” Ranil Boteju, chief data and analytics officer, has positioned it as a demonstration of responsible deployment, noting that the assistant “can understand and respond to specific, hyper-personalised customer requests and retains memory to offer a more holistic experience, ensuring the generated answer is safe to present to customers.”

    What this signals for the sector

    Lloyds is not the first bank to deploy AI, nor the first to make bold claims about its value. What distinguishes this move is the combination of a concrete financial baseline (£50 million delivered), a named and tested product (the financial assistant), a clear expansion roadmap (full product suite), and an institutional commitment to upskilling (a new AI Academy for its 67,000 employees).

    For practitioners watching the enterprise AI landscape, the Lloyds case offers a useful reference point against the prevailing narrative of deployment fatigue and unproven returns. It does not resolve the broader ROI question — one bank’s results do not establish an industry pattern — but it does suggest that organisations which invest in specific, measurable use cases and test rigorously before launch can move beyond the proof-of-concept purgatory that still traps most enterprises.

    The harder question is what happens next. An AI assistant that helps customers check spending patterns is useful. One that advises on mortgages and investment products enters a different category of risk and regulatory complexity. How Lloyds navigates that expansion — and whether the £100 million value target holds up under the scrutiny of real-world deployment — will be worth watching over the months ahead.

  • Capgemini Added 82,300 Offshore Workers Last Year. So Much for AI Replacing Everyone.

    If artificial intelligence is about to make IT services firms obsolete, someone forgot to tell Capgemini. The French consultancy ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. Its offshore workforce now stands at 279,200, a 42% increase, representing two-thirds of total headcount.

    At the same time, the company is planning €700 million in restructuring charges to reshape its workforce for AI. It is positioning itself as, in CEO Aiman Ezzat’s words, “the catalyst for enterprise-wide AI adoption.”

    The contradiction is only apparent. What Capgemini’s numbers actually reveal is the gap between the market’s AI narrative and the operational reality of large-scale enterprise transformation.

    The WNS factor

    The headline headcount surge requires context. The bulk of those 82,300 new offshore employees came from Capgemini’s acquisition of WNS, the India-headquartered business process services firm, completed in late 2025. Cloud4C, another acquisition, contributed further. This was not organic hiring — it was a deliberate strategic bet on scaling offshore delivery capacity at precisely the moment investors were questioning whether such capacity has a future.

    The acquisition arithmetic is revealing. Capgemini’s 2026 revenue growth guidance of 6.5% to 8.5% includes 4.5 to 5 percentage points from acquisitions, primarily WNS. Strip out the acquired growth, and organic expansion looks more like 2% to 3.5%. The company needed WNS to hit its targets.
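    For clarity, the organic range above comes from pairing the low end of guidance with the low end of the acquisition contribution, and high with high; a quick sketch of that subtraction:

    ```python
    guidance = (6.5, 8.5)   # 2026 revenue growth guidance, %
    acquired = (4.5, 5.0)   # contribution from acquisitions, percentage points

    # Pairing low with low and high with high, as in the estimate above.
    organic = (guidance[0] - acquired[0], guidance[1] - acquired[1])
    print(f"Implied organic growth: {organic[0]}% to {organic[1]}%")  # 2.0% to 3.5%
    ```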

    WNS brings something specific: expertise in AI-powered business process services, particularly in financial services, insurance, and healthcare. The company had already identified around 100 cross-selling opportunities and signed an intelligent operations contract worth more than €600 million. This is not a firm buying bodies for the sake of scale. It is buying the delivery infrastructure needed to operationalise AI at enterprise level.

    The restructuring counterweight

    The other side of the equation is the €700 million restructuring programme, most of which will land in 2026. Capgemini describes this as adapting “workforce and skills” to align with demand for AI-driven services. In plainer terms: some roles are being eliminated or relocated, while others — particularly those requiring AI engineering, data science, and agentic AI expertise — are being created or upskilled.

    As maddaisy noted when examining Capgemini’s full-year results last week, generative and agentic AI accounted for more than 10% of group bookings in Q4, up from around 5% earlier in the year. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI. The restructuring is not a contradiction of the hiring — it is the other half of the same workforce transformation.

    The pattern is expand first, optimise second. Acquire the delivery capacity, then reshape it. It is a playbook that makes more commercial sense than the market’s preferred narrative of AI simply deleting headcount.

    What the market gets wrong

    Capgemini’s share price has fallen roughly 26% in 2026, driven largely by investor anxiety that AI will cannibalise the IT services business model. The logic runs: if AI can automate code generation, testing, and business process management, why would enterprises pay consultancies to do it?

    Ezzat addressed this directly in the post-earnings call. “I don’t think clients are thinking this way about reduction,” he told MarketWatch. “Clients are looking at how critical it is for them to adopt AI and where it can have an impact.”

    The distinction matters. Enterprises are not replacing their consulting relationships with AI tools. They are asking their consultants to help them implement AI — which, for now, requires more people, not fewer. The skills are different. The delivery models are changing. But the demand for hands-on expertise in making AI work within complex organisational environments is, if anything, increasing.

    This aligns with what Ezzat told Fortune separately: “AI is a business. It is not a technology. It cannot just be used to keep the house running.” His argument is that CEOs who treat AI as an efficiency tool for individual departments are missing the larger opportunity — and the larger threat — of enterprise-wide transformation.

    The offshore model evolves, but does not disappear

    The shift to 66% offshore headcount is not a return to the labour arbitrage model of the 2000s. Capgemini’s onshore workforce held steady at 144,200. What changed is the nature of offshore work. WNS’s strength is in intelligent operations — business process services enhanced by AI, automation, and analytics. These are not call centres or basic coding shops. They are delivery centres where AI tools augment human workers on complex processes.

    This is broadly consistent with what the wider consulting industry is signalling. McKinsey’s deployment of 25,000 AI agents across its workforce, as maddaisy reported this week, points in the same direction: AI as workforce augmentation, not replacement. The difference is that McKinsey is building internally, while Capgemini is acquiring externally. Both are betting that the future of professional services involves more AI and more people, deployed differently.

    What practitioners should watch

    For consultants and enterprise leaders, Capgemini’s workforce data offers a useful reality check against the AI replacement narrative. Three signals are worth tracking.

    First, the restructuring outcomes. If the €700 million programme results in meaningful upskilling rather than simple headcount reduction, it will validate the “expand and reshape” model. If it quietly becomes a cost-cutting exercise, the AI transformation story weakens.

    Second, the organic growth rate. With acquisitions contributing nearly five percentage points of 2026 growth, the underlying business needs to demonstrate it can grow on its own merits. The Q4 acceleration to over 10% AI bookings was promising, but one quarter does not make a trend.

    Third, the onshore-offshore ratio over time. If AI genuinely transforms delivery models, the 66% offshore share should eventually stabilise or even decline as automation reduces the need for large delivery teams. If it keeps rising, the industry is still in the scale-up phase, and the productivity gains from AI remain further away than the marketing suggests.

    Capgemini’s apparent paradox — adding 82,300 people while betting its future on AI — is not a paradox at all. It is the messy, expensive reality of what enterprise AI transformation actually looks like on the ground: more complexity before less, more people before fewer, more investment before returns. The firms that navigate this transition phase successfully will define the next era of professional services. The market just needs to be patient enough to let them.