Category: Consulting

  • Capgemini Added 82,300 Offshore Workers Last Year. So Much for AI Replacing Everyone.

    If artificial intelligence is about to make IT services firms obsolete, someone forgot to tell Capgemini. The French consultancy ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. Its offshore workforce now stands at 279,200, a 42% increase, representing two-thirds of total headcount.

    At the same time, the company is planning €700 million in restructuring charges to reshape its workforce for AI. It is positioning itself as, in CEO Aiman Ezzat’s words, “the catalyst for enterprise-wide AI adoption.”

    The contradiction is only apparent. What Capgemini’s numbers actually reveal is the gap between the market’s AI narrative and the operational reality of large-scale enterprise transformation.

    The WNS factor

    The headline headcount surge requires context. The bulk of those 82,300 new offshore employees came from Capgemini’s acquisition of WNS, the India-headquartered business process services firm, completed in late 2025. Cloud4C, another acquisition, contributed further. This was not organic hiring — it was a deliberate strategic bet on scaling offshore delivery capacity at precisely the moment investors were questioning whether such capacity has a future.

    The acquisition arithmetic is revealing. Capgemini’s 2026 revenue growth guidance of 6.5% to 8.5% includes 4.5 to 5 percentage points from acquisitions, primarily WNS. Strip out the acquired growth, and organic expansion looks more like 2% to 3.5%. The company needed WNS to hit its targets.
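    The subtraction behind those figures can be made explicit. A minimal sketch, pairing the low ends and the high ends of the two stated ranges, which is how the 2% to 3.5% organic figure falls out:

    ```python
    # Back-of-envelope check on the guidance arithmetic described above.
    # Figures come from Capgemini's stated 2026 guidance; pairing low-with-low
    # and high-with-high bounds is an illustrative assumption.
    total_growth = (6.5, 8.5)     # guided revenue growth range, %
    acquired_growth = (4.5, 5.0)  # contribution from acquisitions, percentage points

    organic_low = total_growth[0] - acquired_growth[0]   # 6.5 - 4.5
    organic_high = total_growth[1] - acquired_growth[1]  # 8.5 - 5.0

    print(f"implied organic growth: {organic_low}% to {organic_high}%")
    # → implied organic growth: 2.0% to 3.5%
    ```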

    WNS brings something specific: expertise in AI-powered business process services, particularly in financial services, insurance, and healthcare. Capgemini had already identified around 100 cross-selling opportunities and signed an intelligent operations contract worth more than €600 million. This is not a firm buying bodies for the sake of scale. It is buying the delivery infrastructure needed to operationalise AI at enterprise level.

    The restructuring counterweight

    The other side of the equation is the €700 million restructuring programme, most of which will land in 2026. Capgemini describes this as adapting “workforce and skills” to align with demand for AI-driven services. In plainer terms: some roles are being eliminated or relocated, while others — particularly those requiring AI engineering, data science, and agentic AI expertise — are being created or upskilled.

    As maddaisy noted when examining Capgemini’s full-year results last week, generative and agentic AI accounted for more than 10% of group bookings in Q4, up from around 5% earlier in the year. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI. The restructuring is not a contradiction of the hiring — it is the other half of the same workforce transformation.

    The pattern is expand first, optimise second. Acquire the delivery capacity, then reshape it. It is a playbook that makes more commercial sense than the market’s preferred narrative of AI simply deleting headcount.

    What the market gets wrong

    Capgemini’s share price has fallen roughly 26% in 2026, driven largely by investor anxiety that AI will cannibalise the IT services business model. The logic runs: if AI can automate code generation, testing, and business process management, why would enterprises pay consultancies to do it?

    Ezzat addressed this directly in the post-earnings call. “I don’t think clients are thinking this way about reduction,” he told MarketWatch. “Clients are looking at how critical it is for them to adopt AI and where it can have an impact.”

    The distinction matters. Enterprises are not replacing their consulting relationships with AI tools. They are asking their consultants to help them implement AI — which, for now, requires more people, not fewer. The skills are different. The delivery models are changing. But the demand for hands-on expertise in making AI work within complex organisational environments is, if anything, increasing.

    This aligns with what Ezzat told Fortune separately: “AI is a business. It is not a technology. It cannot just be used to keep the house running.” His argument is that CEOs who treat AI as an efficiency tool for individual departments are missing the larger opportunity — and the larger threat — of enterprise-wide transformation.

    The offshore model evolves, but does not disappear

    The shift to 66% offshore headcount is not a return to the labour arbitrage model of the 2000s. Capgemini’s onshore workforce held steady at 144,200. What changed is the nature of offshore work. WNS’s strength is in intelligent operations — business process services enhanced by AI, automation, and analytics. These are not call centres or basic coding shops. They are delivery centres where AI tools augment human workers on complex processes.

    This is broadly consistent with what the wider consulting industry is signalling. McKinsey’s deployment of 25,000 AI agents across its workforce, as maddaisy reported this week, points in the same direction: AI as workforce augmentation, not replacement. The difference is that McKinsey is building internally, while Capgemini is acquiring externally. Both are betting that the future of professional services involves more AI and more people, deployed differently.

    What practitioners should watch

    For consultants and enterprise leaders, Capgemini’s workforce data offers a useful reality check against the AI replacement narrative. Three signals are worth tracking.

    First, the restructuring outcomes. If the €700 million programme results in meaningful upskilling rather than simple headcount reduction, it will validate the “expand and reshape” model. If it quietly becomes a cost-cutting exercise, the AI transformation story weakens.

    Second, the organic growth rate. With acquisitions contributing nearly five percentage points of 2026 growth, the underlying business needs to demonstrate it can grow on its own merits. The Q4 acceleration to over 10% AI bookings was promising, but one quarter does not make a trend.

    Third, the onshore-offshore ratio over time. If AI genuinely transforms delivery models, the 66% offshore share should eventually stabilise or even decline as automation reduces the need for large delivery teams. If it keeps rising, the industry is still in the scale-up phase, and the productivity gains from AI remain further away than the marketing suggests.

    Capgemini’s apparent paradox — adding 82,300 people while betting its future on AI — is not a paradox at all. It is the messy, expensive reality of what enterprise AI transformation actually looks like on the ground: more complexity before less, more people before fewer, more investment before returns. The firms that navigate this transition phase successfully will define the next era of professional services. The market just needs to be patient enough to let them.

  • McKinsey’s 25,000 AI Agents: First-Mover Advantage or the Industry’s Biggest Experiment?

    McKinsey now counts 25,000 AI agents among its workforce — roughly one for every 1.6 human employees. That ratio, disclosed by CEO Bob Sternfels at the Consumer Electronics Show and confirmed by the firm, makes the consultancy’s internal agentic build-out one of the most aggressive in professional services.

    The numbers have moved quickly. Eighteen months ago, McKinsey operated a few thousand agents. Today, AI-related work, driven through its AI arm QuantumBlack, accounts for 40% of the firm’s output. The agents have saved an estimated 1.5 million hours on search and synthesis tasks. Non-client-facing headcount has fallen 25%, yet output from those teams has risen 10%.

    Sternfels’s stated ambition is to pair every one of McKinsey’s 40,000 employees with at least one AI agent within the next 18 months.
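    The headline ratios follow directly from the disclosed figures; note that the per-employee hours number below is a derived illustration, not something the firm has stated:

    ```python
    # Sanity check of the McKinsey figures quoted above.
    agents = 25_000          # AI agents reported by the firm
    employees = 40_000       # total headcount cited in the same disclosure
    hours_saved = 1_500_000  # estimated hours saved on search and synthesis

    print(f"humans per agent: {employees / agents:.1f}")               # → 1.6
    print(f"hours saved per employee: {hours_saved / employees:.1f}")  # → 37.5
    ```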

    Scale versus substance

    The scale is eye-catching. Whether it is meaningful depends on what you count as an agent and how you measure the return.

    McKinsey’s rivals are openly sceptical. EY’s global engineering chief has argued that “a handful of agents do the heavy lifting” and that value should be tracked through efficiency KPIs, not headcount. PwC’s chief AI officer has called agent count “probably the wrong measure”, advocating instead for quality and workflow optimisation. Their counterargument is clear: a smaller fleet of high-performing agents, rigorously measured, may deliver more than a vast deployment still being calibrated.

    The critique lands on familiar ground. As maddaisy examined earlier today, PwC’s 2026 Global CEO Survey found that 56% of chief executives still cannot point to revenue gains from their AI investments. The deployment-versus-outcomes gap is the central tension in enterprise AI right now, and McKinsey’s bet raises the question of whether the firm is racing ahead of the same problem — or solving it.

    From advisory to infrastructure

    The more consequential shift may be in McKinsey’s business model. Sternfels described a move away from the firm’s traditional fee-for-service approach toward a model where McKinsey works with clients to identify joint business cases and then helps underwrite the outcomes.

    This is a significant departure for a firm built on advisory fees and billable hours. It positions McKinsey less as a strategic counsellor and more as an infrastructure partner — one that brings its own AI workforce to bear on client problems and shares in the measurable results.

    QuantumBlack, with 1,700 people, now drives all of McKinsey’s AI initiatives. Alex Singla, the senior partner who co-leads the unit, has described the firm’s evolving recruitment profile: candidates who can move fluidly between traditional consulting and engineering, and who can work alongside AI rather than simply directing it.

    Boston Consulting Group is pursuing a similar direction, deploying “forward-deployed consultants” who build AI tools directly on client projects. But McKinsey’s scale of internal adoption — 25,000 agents embedded across the firm — gives it a data advantage that is harder to replicate. Every internal deployment generates operational insight into what works, what fails, and how agentic systems behave at enterprise scale.

    The governance question maddaisy has been tracking

    The timing of McKinsey’s announcement is worth noting against the backdrop of the agentic AI governance gap maddaisy covered earlier this week. Deloitte’s data showed that only 21% of companies have mature governance models for agentic AI, even as three-quarters plan to deploy it within two years. And a broader pattern has emerged across maddaisy’s recent coverage: enterprises are strategically confident about AI but operationally underprepared.

    McKinsey, as both a deployer and an adviser, sits at the intersection of this tension. If the firm can demonstrate that 25,000 agents operate reliably at scale — with governance, measurement, and accountability frameworks to match — it will have built the most persuasive case study in the industry. If the agents outrun oversight, the reputational exposure is equally significant. When an AI agent produces an analysis and the recommendation proves wrong, the liability question is not academic.

    What practitioners should watch

    For consulting professionals and enterprise leaders watching this play out, three things matter more than the headline number.

    First, the metric that matters is not agent count but outcome attribution. McKinsey’s 1.5 million hours saved is a process metric. The firm’s shift to underwriting client outcomes suggests it understands the need to move beyond efficiency and toward measurable business impact — the same gap that PwC’s CEO Survey identified industry-wide.

    Second, the talent model is changing faster than many firms acknowledge. McKinsey’s search for hybrid consultant-engineers, and BCG’s forward-deployed model, signal that the traditional consulting skill set is being augmented, not just supported, by AI fluency. Firms that treat AI as a productivity tool rather than a workforce design challenge will fall behind.

    Third, scale creates its own governance requirements. As McKinsey’s own Carolyn Dewar argued in Fortune, the real risk is not the technology but how leaders manage the fear and trust dynamics that surround it. Deploying 25,000 agents without the organisational infrastructure to govern them would validate every concern the firm’s rivals have raised.

    McKinsey’s wager is that first-mover scale in agentic AI creates a compounding advantage — more data, better workflows, stronger client proof points. The industry is about to find out whether volume leads to value, or whether a smaller, sharper approach gets there first.

  • The AI ROI Crisis: Why 56% of CEOs Still Cannot Prove Value From Their AI Investments

    More than half of the world’s chief executives cannot point to revenue gains from their AI investments. That is not a fringe finding from an alarmist report — it is the central conclusion of PwC’s 2026 Global CEO Survey, which polled 4,454 business leaders and was released at Davos in January.

    The figure — 56% reporting no measurable revenue uplift from AI — lands at a moment when enterprise AI spending continues to accelerate. Budgets are growing, headcounts in AI-adjacent roles are expanding, and the consulting industry’s order books are thick with transformation mandates. Yet the returns remain stubbornly elusive for the majority. Only 12% of CEOs surveyed reported achieving both revenue growth and cost reduction from their AI programmes.

    This is not, as some commentary has framed it, evidence that AI does not work. It is evidence that most organisations have not yet figured out how to make it work — a distinction that matters enormously for practitioners and consultants navigating the current landscape.

    The pattern maddaisy has been tracking

    PwC’s data confirms a dynamic that maddaisy has examined from several angles over the past fortnight. Deloitte’s 2026 State of AI report revealed that enterprises feel more strategically confident about AI than ever, yet less operationally ready — a paradox the report’s authors attributed to weak data infrastructure, insufficient talent, and immature governance. Research published in Harvard Business Review, which maddaisy covered last week, found that AI tools were making employees busier rather than more productive, with efficiency gains absorbed by task expansion rather than redirected toward higher-value work.

    The ROI gap is what happens when these operational failures compound. Organisations adopt AI tools without redesigning workflows. They measure deployment (how many teams have access) rather than outcomes (what changed as a result). They invest in the technology layer while underinvesting in the organisational layer — the process changes, role redesigns, and measurement frameworks that turn a pilot into a production capability.

    A measurement problem masquerading as a technology problem

    One of the more revealing aspects of the PwC data is what it implies about how enterprises are tracking AI value. CEO revenue confidence sits at a five-year low of 30%, and this pessimism correlates with the inability to demonstrate AI returns. But the question is whether the returns genuinely are not there, or whether organisations simply lack the instrumentation to detect them.

    The answer, for many enterprises, is likely both. Some AI deployments are genuinely failing to deliver — deployed in the wrong processes, aimed at the wrong problems, or undermined by poor data quality. But others may be generating real value that never surfaces in the metrics that CEOs review. Time saved in middle-office processes, reduced error rates in document handling, faster iteration cycles in product development — these are real gains, but they rarely appear on a revenue line unless someone has built the measurement architecture to capture them.

    This is a familiar pattern in enterprise technology adoption. The early years of cloud computing saw similar complaints: organisations spent heavily on migration but struggled to quantify the business impact beyond infrastructure cost reduction. The value was real — in agility, speed to market, and developer productivity — but it took years for finance teams to develop frameworks that could track it. AI is following the same trajectory, with the added complication that its benefits are often diffuse, spread across many small improvements rather than concentrated in a single, measurable outcome.

    What the 12% are doing differently

    PwC’s survey found that the minority of organisations achieving both revenue and cost benefits from AI share common characteristics. They have invested in data foundations — not just data lakes and pipelines, but governance structures that ensure data quality, accessibility, and appropriate use. They have moved beyond isolated pilots to embed AI into core business processes. And they treat AI adoption as an organisational change programme, not a technology deployment.

    None of this is conceptually new. Consultancies have been advising clients on change management, data governance, and process redesign for decades. What is new is the speed at which the gap between leaders and laggards is widening. The 12% who have cracked the ROI equation are pulling ahead, using AI-generated insights to inform strategy, AI-automated processes to reduce costs, and AI-enhanced products to capture new revenue. The 56% who have not are still running pilots, still debating governance frameworks, and still struggling to answer the board’s most basic question: what are we getting for this money?

    The consulting industry’s uncomfortable position

    For the consulting sector, PwC’s findings create an awkward tension. The firms advising enterprises on AI strategy are, in many cases, the same firms whose clients cannot demonstrate returns. This is not necessarily a reflection of poor advice — the operational barriers to AI value are genuine and deep — but it does raise questions about what consulting engagements are actually delivering.

    As maddaisy noted when examining Capgemini’s recent results, CEO Aiman Ezzat explicitly framed the company’s direction as a shift “from AI hype to AI realism.” Generative AI bookings exceeded 8% of Capgemini’s total for the year, but the company’s 2026 revenue guidance fell slightly below analyst expectations — a reminder that even firms positioning AI at the centre of their strategy face questions about whether the investment is translating into proportional growth.

    The risk for consultancies is that the ROI gap erodes client confidence in AI-related engagements. If more than half of CEOs see no revenue benefit, the appetite for further AI spending — and the advisory services that accompany it — may tighten. The counter-argument, which PwC’s own data supports, is that the solution to poor AI returns is not less AI but better AI implementation. That is a consulting engagement waiting to happen, provided firms can credibly demonstrate that they know how to close the gap.

    What practitioners should watch

    Three developments will shape whether the AI ROI picture improves or deteriorates over the coming quarters.

    First, the regulatory environment is tightening. As maddaisy recently examined, the EU AI Act’s high-risk system requirements become enforceable in August 2026, and state-level legislation in the US is creating a fragmented compliance landscape. Compliance costs will add to the total cost of AI ownership, making the ROI equation harder to balance for organisations that have not already built governance into their deployment model.

    Second, the rise of agentic AI — systems that plan and execute tasks with minimal human oversight — will test whether organisations can capture value from more autonomous AI without losing control. Deloitte’s data showed a nearly fivefold increase in planned agentic AI deployments over the next two years, but only one in five companies has a mature governance model for autonomous agents. The ROI potential is significant; so is the risk of expensive failures.

    Third, and perhaps most importantly, watch for a shift in how organisations measure AI value. The enterprises that move beyond “did revenue go up?” to more granular metrics — cycle time reduction, error rate improvement, employee capacity freed for strategic work — will be better positioned to demonstrate returns and justify continued investment. The measurement framework may matter as much as the technology itself.
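    One way to picture that measurement shift is a small scorecard that tracks before/after values for granular metrics instead of a single revenue line. The metric names and numbers below are entirely hypothetical assumptions, used only to illustrate the shape of such a framework:

    ```python
    # Hypothetical AI value scorecard: granular before/after metrics rather
    # than one revenue figure. All metric names and values are illustrative.
    baseline = {"cycle_time_days": 10.0, "error_rate_pct": 4.0, "routine_hours_per_week": 30.0}
    current  = {"cycle_time_days": 7.5,  "error_rate_pct": 3.0, "routine_hours_per_week": 24.0}

    def improvement(before: float, after: float) -> float:
        """Percent reduction relative to the baseline (positive = better)."""
        return round(100 * (before - after) / before, 1)

    scorecard = {metric: improvement(baseline[metric], current[metric]) for metric in baseline}
    print(scorecard)
    # → {'cycle_time_days': 25.0, 'error_rate_pct': 25.0, 'routine_hours_per_week': 20.0}
    ```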

    PwC’s 56% figure is striking, but it is a snapshot of a transition, not a verdict on AI’s potential. The technology is not the bottleneck. Execution is. And for consultants and practitioners, that distinction is where the real work — and the real opportunity — lies.

  • The Missing Half of AI Adoption: Why Organisations Are Failing the People Side of the Equation

    The evidence that AI tools can intensify work rather than lighten it has been building steadily. As maddaisy examined earlier this week, UC Berkeley researchers found that employees using AI worked faster but not better — absorbing more tasks, blurring work-life boundaries, and entering a self-reinforcing cycle of acceleration. The diagnosis is now well-documented. The question that remains largely unanswered is what organisations should actually do about it.

    New data suggests that most are doing very little — and that the gap between leadership confidence and frontline reality is wider than many boards realise.

    The engagement crisis hiding behind adoption metrics

    A 2026 workforce trends report from DHR Global paints a stark picture. Some 83% of workers report experiencing at least some degree of burnout, with the technology sector among the worst affected at 58%. That figure has held roughly steady since 2025 — but what has changed is burnout’s impact on engagement. More than half of employees (52%) now say burnout actively reduces their engagement at work, up from 34% just a year earlier.

    Meanwhile, overall engagement has dropped sharply. Only 64% of workers describe themselves as very or extremely engaged, down from 88% in 2025. That 24-percentage-point decline should concern any organisation that has been measuring AI success purely by deployment velocity.

    The report also reveals a telling asymmetry in how AI communication is handled. Only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly.” Among entry-level staff, that figure falls to just 12%. Among C-suite leaders, it rises to 69%. The people making decisions about AI adoption and the people living with its consequences occupy different informational worlds.

    Resistance is not the problem — silence is

    A common framing in boardrooms is that employees resist AI because they fear change. The reality, according to recent research published in Harvard Business Review, is more nuanced. Workers’ responses to AI map onto three core psychological needs: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues).

    When AI deployment satisfies these needs — by removing drudgery and enabling more skilled work — adoption tends to follow. When it frustrates them — by making expertise feel disposable, reducing worker agency, or replacing human collaboration with human-machine interaction — the response is not always visible protest. It is quiet disengagement. Some 31% of knowledge workers admit to actively working against their company’s AI initiatives. Among Gen Z workers, that figure rises to 41%.

    Perhaps more telling: 54% of workers say they would use unapproved AI tools, and 32% hide their AI use from employers entirely. The adoption numbers that leadership teams celebrate may obscure a more complicated picture of how AI is actually being used — and resisted — on the ground.

    From diagnosis to framework

    If the problem is well-documented, the response toolkit is still emerging. Two frameworks have gained traction in the consulting and HR strategy space this year, and both share a common starting point: organisations need to treat AI adoption as a change management challenge, not a technology deployment exercise.

    The first, proposed by Ranganathan and Ye in their HBR study on work intensification, is the concept of “AI practice” — deliberate organisational norms around how AI is used. This includes structured pauses before major decisions (to counteract the speed bias that AI creates), sequencing work to reduce context-switching, and protecting time for human connection. The researchers argue that without intentional guardrails, the default trajectory is escalation: faster output, rising expectations, and eventual burnout.

    The second is the AWARE framework, developed by researchers studying psychological resistance to AI. It stands for: Acknowledge employee concerns proactively; Watch for both adaptive and maladaptive coping behaviours; Align support systems with the psychological needs that AI may be disrupting; Redesign workflows around human-AI complementarities rather than simple substitution; and Empower workers through transparency and genuine participation in implementation decisions.

    Neither framework is complicated. Both amount to structured versions of what good management has always looked like: listen to your people, involve them in decisions that affect their work, and pay attention to unintended consequences. The fact that these principles need to be formalised into acronyms and published in academic journals suggests how far many organisations have drifted from applying them during AI rollouts.

    The consulting opportunity — and obligation

    For consultancies advising on AI transformation, this data represents both a market opportunity and a professional obligation. The firms that positioned AI adoption as primarily a technical challenge — choose the right model, integrate the right data pipeline, deploy at scale — are leaving the harder, more valuable work on the table.

    The harder work is organisational. It involves job redesign, not just process automation. It requires mapping which tasks benefit from AI augmentation and which need to remain human — not as a one-off exercise, but as an ongoing practice. It means building managerial capability in areas that technology deployments rarely prioritise: emotional intelligence, coaching, and the ability to recognise cognitive overload before it becomes a retention problem.

    Only 44% of business leaders currently involve workers in AI implementation decisions. That figure alone explains much of the resistance and burnout data. People who have no say in how their work changes will inevitably feel that change is happening to them rather than with them — regardless of whether the technology itself is genuinely useful.

    What comes next

    The most thoughtful organisations are beginning to reframe AI not as a productivity multiplier but as a capacity management tool. The question shifts from “how can AI help people do more?” to “how can AI help people think more clearly, recover more effectively, and focus on what actually matters?” It is a subtle but significant distinction — one that moves the conversation from output volume to work quality and sustainability.

    As maddaisy noted in its analysis of Deloitte’s 2026 AI report, the gap between strategic confidence and operational readiness remains one of the defining features of enterprise AI. The burnout and engagement data suggests that this gap is not just a technology integration problem. It is, at its core, a people management problem — and one that will not be solved by deploying more tools faster.

    The organisations that get this right will not be the ones with the most AI models in production. They will be the ones that treated adoption as a human process from the beginning.

  • The Productivity Trap: Why AI Tools Are Making Employees Busier, Not Better

    The promise of AI in the workplace has always rested on a simple equation: automate the routine, free up humans for higher-value work. It is the pitch that has launched a thousand consulting engagements and underpinned billions in enterprise software investment. But new research suggests the equation may be running in reverse.

    A study published in Harvard Business Review this month, based on eight months of ethnographic research at a US technology company with roughly 200 employees, found that AI tools did not reduce workload. They intensified it. Employees worked faster, took on more tasks, and felt busier than before — despite the efficiency gains that AI was supposed to deliver.

    The researchers, Aruna Ranganathan and Xingqi Maggie Ye from UC Berkeley’s Haas School of Business, identified three distinct mechanisms through which AI was quietly ratcheting up the pressure.

    Task expansion: doing more with less becomes doing more with the same

    The first mechanism was task expansion. When AI tools made certain tasks faster, employees did not use the freed-up time for strategic thinking or creative work. Instead, they absorbed responsibilities that had previously belonged to other roles. Product managers and designers started writing code. Researchers took on engineering tasks. Workers assumed duties that, in the researchers’ words, “might previously have justified additional help.”

    This is a pattern that will be familiar to anyone who has watched organisations respond to efficiency gains over the past two decades. The spreadsheet did not eliminate accounting departments — it gave accountants more to do. Email did not reduce communication overhead — it multiplied it. AI appears to be following the same trajectory, with one critical difference: the speed at which task expansion occurs is considerably faster.

    The study also identified a secondary burden. Engineers found themselves spending additional time reviewing their colleagues’ AI-assisted work, a form of quality assurance that had not existed before because the work itself had not existed before.

    The vanishing boundary between work and rest

    The second mechanism was the blurring of work-life boundaries. Employees began incorporating work into what had previously been downtime — lunch breaks, gaps between meetings, even waiting for files to load. Because interacting with an AI tool felt, as one participant put it, “closer to chatting than to undertaking a formal task,” the psychological barrier to picking up work during off moments effectively disappeared.

    This is a subtler form of intensification, and arguably a more dangerous one. The physical cues that traditionally separated work from rest — closing a document, leaving a desk, switching off a screen — lose their power when work can be initiated through a conversational prompt on any device. The distinction between being productive and being available collapses.

    The multitasking illusion

    The third mechanism was increased multitasking. With AI handling parts of each task, employees managed multiple active threads simultaneously, creating what the researchers described as “continual switching of attention.” Worse, because AI made fast output visible to colleagues, it raised speed expectations across teams. Employees felt pressure not just to work with AI, but to keep pace with the output norms that AI made possible.

    The result was a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed, which made workers more reliant on AI, which widened the scope of what they attempted. As one engineer told the researchers, they felt “busier than before” despite the supposed time savings.

    Not new, but newly documented

    The HBR findings are not without precedent. Research from the University of Chicago and the University of Copenhagen published last year found that AI chatbots saved workers only about an hour per week — and that the tools created enough new tasks to largely nullify even that modest gain. What Ranganathan and Ye have added is ethnographic depth: eight months of direct observation, 40 interviews, and a detailed account of the mechanisms through which intensification occurs.

    For consulting practitioners, this distinction matters. The earlier studies quantified the problem. This one explains the pathways. And that makes it actionable.

    Connecting the dots: from strategy gap to people gap

    The timing of this research is significant. As maddaisy noted last week, Deloitte’s 2026 State of AI in the Enterprise report revealed a widening gap between strategic confidence and operational readiness: 42 per cent of companies rate their AI strategy as highly prepared, yet fewer feel equipped to execute it.

    The burnout research suggests one reason that gap persists: organisations are measuring AI adoption by deployment metrics — tools rolled out, processes automated, tasks per hour — while ignoring the human cost of that adoption. The operational readiness problem is not just about data pipelines and integration architecture. It is about whether the people using these tools can sustain the pace that the tools enable.

    This also has implications for the AI governance frameworks now coming into force across Europe and beyond. Most governance discussions focus on algorithmic bias, data privacy, and transparency. Workforce wellbeing — whether AI deployment is creating sustainable working conditions — barely features. As enforcement mechanisms sharpen, that gap may become harder to defend.

    What practitioners should watch

    The HBR researchers recommend what they call “AI practice” — a set of organisational disciplines designed to counteract intensification. These include intentional pauses (structured breaks for assessment), sequencing (deliberate pacing rather than continuous output), and human grounding (protected time for dialogue and connection).

    These are not revolutionary ideas. They are, in essence, good management practices adapted for an AI-augmented workplace. But the fact that they need articulating at all shows how many organisations are deploying AI tools without thinking through the second-order effects on their people.

    For consultancies advising on AI transformation, this research is a prompt to broaden the conversation. Deployment is not the finish line. If the tools make employees faster but not better — busier but not more effective — then the productivity gains that justified the investment may prove temporary, eroded by turnover, cognitive fatigue, and declining work quality.

    The question, as the researchers put it, is not whether AI will change work, but whether organisations will actively shape that change — or let it quietly shape them.

  • Capgemini’s Sovereignty Playbook: Bridging Europe’s AI Ambitions and American Infrastructure

    In the space of a single week in early February, Capgemini signed sovereignty-focused partnerships with all three major US hyperscalers — Google Cloud, AWS, and Microsoft. Days later, CEO Aiman Ezzat used the company’s full-year results presentation to publicly dismiss calls for complete European tech autonomy.

    The juxtaposition was deliberate, and it tells a more interesting story than the headline financials. Capgemini is not just adding AI capabilities to its consulting portfolio. It is building a distinct commercial proposition around one of Europe’s most politically charged technology questions: who controls the infrastructure that enterprises depend on?

    The gap between rhetoric and reality

    European digital sovereignty has been a policy preoccupation for several years now, accelerated by concerns over US government data access, the dominance of American cloud providers, and the growing strategic importance of AI infrastructure. The European Commission has pushed for greater technological independence. Member states have launched sovereign cloud initiatives. The language of autonomy is everywhere.

    The reality, as Ezzat put it bluntly during the post-earnings call, is more complicated. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.”

    This is not a controversial claim among practitioners, but it is a notable one for a CEO whose company is headquartered in Paris and whose chairman also leads the digital working group at the European Round Table for Industry. Ezzat has been discussing sovereignty with the European Commission in Brussels and at Davos. His position is informed, not casual.

    A four-layer framework

    Ezzat outlined what amounts to a practical sovereignty framework built around four layers: data, operations, regulation, and technology. His argument is that Europe has meaningful independence on the first three — data residency and governance, operational control over services, and regulatory authority through instruments like GDPR and the AI Act. The fourth layer, the underlying technology stack, is where US Big Tech dominance means full independence is neither achievable nor, in his view, desirable.

    Rather than pursuing autonomy at every layer, Capgemini’s approach is to offer clients “the right sovereignty solution based on the use case, the client environment, the government.” In practice, this means European-managed services running on American infrastructure — sovereign in governance and operations, pragmatic on technology.

    As maddaisy noted earlier this week in examining Capgemini’s full-year results, the company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. That trajectory, if it holds, represents a structural shift in how enterprise IT contracts are structured across Europe.

    Three partnerships, one message

    The timing of Capgemini’s hyperscaler announcements was no coincidence. On 6 February, the company expanded its partnership with Google Cloud, establishing a Sovereign Cloud Delivery Practice and Centre of Excellence. Capgemini will operate as a Google Distributed Cloud air-gapped operator — meaning it can deliver fully managed services with total isolation from the public internet, suited to defence, intelligence, and critical infrastructure clients.

    On 9 February, a similar announcement followed with AWS, focused on sovereign-ready cloud and AI capabilities. Two days later, Capgemini formalised integrated sovereignty solutions with Microsoft. Three announcements in five days, each offering variations on the same theme: Capgemini as the European operator sitting between the client and the American cloud.

    This is a positioning play with genuine commercial substance. For European enterprises navigating tightening regulation — particularly public sector organisations, financial institutions, and healthcare providers — the question is not whether to use cloud services but how to use them in ways that satisfy increasingly specific sovereignty requirements. Capgemini is betting it can be the answer to that question.

    Where AI and sovereignty converge

    The sovereignty proposition becomes more compelling when combined with Capgemini’s broader AI pivot. Generative and agentic AI bookings exceeded 10% of group bookings in Q4 2025, and the company has trained 310,000 employees on generative AI and 194,000 on agentic AI — systems designed to take autonomous actions rather than simply generate content.

    AI workloads are particularly sensitive from a sovereignty perspective. They involve large volumes of proprietary data, often require access to regulated information, and increasingly touch decision-making processes that organisations want to keep within controlled environments. A sovereign AI solution — where the model runs on infrastructure governed under European jurisdiction, operated by a European firm, but built on the technical capabilities of a US hyperscaler — addresses a specific and growing need.

    Ezzat framed AI itself with characteristic pragmatism in a separate interview with Fortune. “AI is a business. It is not a technology,” he said, warning leaders against treating it as a “black box being managed separately.” His caution against AI FOMO — “You don’t want to be too ahead of the learning curve. If you are, you’re investing and building capabilities that nobody wants” — suggests a company that has learned from watching the metaverse hype cycle play out.

    What to watch

    Capgemini’s sovereignty strategy raises several questions worth tracking. First, whether the 50%-by-2029 estimate for sovereignty-embedded contracts proves accurate, or whether it reflects the kind of optimistic forecasting that consulting firms are prone to when promoting a new service line. Second, how European competitors — particularly Atos, which has its own sovereignty ambitions, and smaller European cloud providers — respond to Capgemini’s hyperscaler-partnered model. Third, whether the European Commission’s own stance on sovereignty tilts toward the pragmatic Capgemini position or toward more aggressive technological independence.

    For consultants and practitioners, the practical takeaway is straightforward: sovereignty is moving from a compliance checkbox to a structural feature of European enterprise contracts. The firms that build credible delivery capabilities around it now — not just policy positions, but operational partnerships and trained workforces — will have a meaningful advantage as regulation tightens. Capgemini has placed its bet. The question is whether the market follows.

  • Capgemini’s AI Pivot: From Hype to Realism, with €22.5 Billion at Stake

    Capgemini’s full-year 2025 results, released on 13 February, offered one of the clearest snapshots yet of how a major IT consultancy is repositioning itself around artificial intelligence — and the distance still to travel.

    The French group reported revenues of €22.5 billion, up 3.4% at constant currency. Growth was modest by historical standards, weighed down by cautious enterprise spending across Continental Europe. But the fourth quarter told a different story: 10.6% growth, suggesting that client hesitancy may be starting to ease.

    From hype to realism

    CEO Aiman Ezzat framed the company’s direction as a shift “from AI hype to AI realism” — a phrase that carries weight given how frequently the consultancy sector has leaned on AI as a growth narrative without always delivering measurable outcomes.

    The numbers behind the claim are worth examining. Generative AI bookings exceeded 8% of Capgemini’s total for the year, rising above 10% in Q4. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI — the emerging category of AI systems designed to take autonomous actions rather than simply generate content.

    These are significant investments in capability. Whether they translate into proportional revenue growth remains the open question. Capgemini’s 2026 guidance of 6.5% to 8.5% revenue growth fell slightly below the 7.2% analyst consensus, suggesting the market expected more from a company positioning AI at the centre of its strategy.

    The sovereignty opportunity

    Beyond AI, Capgemini is betting on a second structural trend: digital sovereignty. The company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. For a European-headquartered firm, this represents a genuine competitive advantage over US-based rivals.

    Ezzat was notably pragmatic on the sovereignty question, dismissing calls for full European tech autonomy. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.” Instead, he advocated for finding “the right sovereignty solution based on the use case, the client environment, the government.”

    This positions Capgemini as a bridge between European regulatory ambitions and the practical reality that most enterprise infrastructure still runs on AWS, Google Cloud, and Microsoft Azure. The company has signed partnerships with all three US hyperscalers to deliver what it calls “sovereign” AI solutions — European-managed services running on American infrastructure.

    The uncomfortable parts

    Not everything in the results paints a straightforward growth picture. Net income declined 4.2% year-on-year to €1.6 billion. The stock has fallen approximately 29% so far in 2026, caught in a broader selloff of companies perceived as vulnerable to AI-driven disruption — an ironic position for a firm making AI the centrepiece of its strategy.

    There is also the matter of Capgemini Government Solutions, the US subsidiary the company announced it would sell following public backlash over a $4.8 million contract with US Immigration and Customs Enforcement. It is a reminder that the consulting business operates in political as well as commercial environments, and reputational risk can force strategic decisions that have little to do with market fundamentals.

    France, Capgemini’s home market, contracted by 4.1% — a concern for a company headquartered in Paris. The bright spots were North America (7.3% growth), the UK and Ireland (10.5%), and Asia Pacific and Latin America (13.8%). Financial Services led sectors with 9.2% growth, accelerating sharply to 20.4% in Q4.

    What to watch

    Capgemini’s results matter beyond its own balance sheet because they reflect broader dynamics affecting the consultancy and IT services sector. The shift from selling AI as a concept to delivering AI as an operational capability is where the next wave of value — and differentiation — will come from.

    Three things are worth tracking. First, whether GenAI bookings continue to accelerate past the 10% threshold reached in Q4, or whether that was an end-of-year surge. Second, how the sovereignty proposition develops as European regulation tightens — this could become a meaningful differentiator for European-headquartered firms. Third, whether the “Fit-for-growth” restructuring programme, which will cost an additional €200 million in cash outflow, delivers the operational efficiency the company is banking on.

    The consultancy sector has spent the better part of two years talking about AI. Capgemini’s latest results suggest it is now moving into the phase of proving it can deliver.