Tag: enterprise-ai

  • The Productivity Trap: Why AI Tools Are Making Employees Busier, Not Better

    The promise of AI in the workplace has always rested on a simple equation: automate the routine, free up humans for higher-value work. It is the pitch that has launched a thousand consulting engagements and underpinned billions in enterprise software investment. But new research suggests the equation may be running in reverse.

    A study published in Harvard Business Review this month, based on eight months of ethnographic research at a US technology company with roughly 200 employees, found that AI tools did not reduce workload. They intensified it. Employees worked faster, took on more tasks, and felt busier than before — despite the efficiency gains that AI was supposed to deliver.

    The researchers, Aruna Ranganathan and Xingqi Maggie Ye from UC Berkeley’s Haas School of Business, identified three distinct mechanisms through which AI was quietly ratcheting up the pressure.

    Task expansion: doing more with less becomes doing more with the same

    The first mechanism was task expansion. When AI tools made certain tasks faster, employees did not use the freed-up time for strategic thinking or creative work. Instead, they absorbed responsibilities that had previously belonged to other roles. Product managers and designers started writing code. Researchers took on engineering tasks. Workers assumed duties that, in the researchers’ words, “might previously have justified additional help.”

    This is a pattern that will be familiar to anyone who has watched organisations respond to efficiency gains over the past two decades. The spreadsheet did not eliminate accounting departments — it gave accountants more to do. Email did not reduce communication overhead — it multiplied it. AI appears to be following the same trajectory, with one critical difference: task expansion now happens considerably faster.

    The study also identified a secondary burden. Engineers found themselves spending additional time reviewing their colleagues’ AI-assisted work, a form of quality assurance that had not existed before because the work itself had not existed before.

    The vanishing boundary between work and rest

    The second mechanism was the blurring of work-life boundaries. Employees began incorporating work into what had previously been downtime — lunch breaks, gaps between meetings, even waiting for files to load. Because interacting with an AI tool felt, as one participant put it, “closer to chatting than to undertaking a formal task,” the psychological barrier to picking up work during off moments effectively disappeared.

    This is a subtler form of intensification, and arguably a more dangerous one. The physical cues that traditionally separated work from rest — closing a document, leaving a desk, switching off a screen — lose their power when work can be initiated through a conversational prompt on any device. The distinction between being productive and being available collapses.

    The multitasking illusion

    The third mechanism was increased multitasking. With AI handling parts of each task, employees managed multiple active threads simultaneously, creating what the researchers described as “continual switching of attention.” Worse, because AI made fast output visible to colleagues, it raised speed expectations across teams. Employees felt pressure not just to work with AI, but to keep pace with the output norms that AI made possible.

    The result was a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed, which made workers more reliant on AI, which widened the scope of what they attempted. As one engineer told the researchers, they felt “busier than before” despite the supposed time savings.

    Not new, but newly documented

    It is worth noting that the HBR findings are not entirely without precedent. Research from the University of Chicago and the University of Copenhagen published last year found that AI chatbots saved workers only about an hour per week — and that the tools created enough new tasks to largely nullify even that modest gain. What Ranganathan and Ye have added is the ethnographic depth: eight months of direct observation, 40 interviews, and a detailed account of the mechanisms through which intensification occurs.

    For consulting practitioners, this distinction matters. The earlier studies quantified the problem. This one explains the pathways. And that makes it actionable.

    Connecting the dots: from strategy gap to people gap

    The timing of this research is significant. As maddaisy noted last week, Deloitte’s 2026 State of AI in the Enterprise report revealed a widening gap between strategic confidence and operational readiness. Forty-two per cent of companies consider their AI strategy highly prepared, yet fewer feel operationally equipped to execute it.

    The burnout research suggests one reason that gap persists: organisations are measuring AI adoption by deployment metrics — tools rolled out, processes automated, tasks per hour — while ignoring the human cost of that adoption. The operational readiness problem is not just about data pipelines and integration architecture. It is about whether the people using these tools can sustain the pace that the tools enable.

    This also has implications for the AI governance frameworks now coming into force across Europe and beyond. Most governance discussions focus on algorithmic bias, data privacy, and transparency. Workforce wellbeing — whether AI deployment is creating sustainable working conditions — barely features. As enforcement mechanisms sharpen, that gap may become harder to defend.

    What practitioners should watch

    The HBR researchers recommend what they call “AI practice” — a set of organisational disciplines designed to counteract intensification. These include intentional pauses (structured breaks for assessment), sequencing (deliberate pacing rather than continuous output), and human grounding (protected time for dialogue and connection).

    These are not revolutionary ideas. They are, in essence, good management practices adapted for an AI-augmented workplace. But the fact that they need to be articulated at all tells a story about how many organisations are deploying AI tools without thinking through the second-order effects on their people.

    For consultancies advising on AI transformation, this research is a prompt to broaden the conversation. Deployment is not the finish line. If the tools make employees faster but not better — busier but not more effective — then the productivity gains that justified the investment may prove temporary, eroded by turnover, cognitive fatigue, and declining work quality.

    The question, as the researchers put it, is not whether AI will change work, but whether organisations will actively shape that change — or let it quietly shape them.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. Penalties under the Act scale with the severity of the violation, topping out at €35 million or 7% of global annual turnover, whichever is higher. For context, that ceiling comfortably exceeds the maximum GDPR fine of €20 million or 4% of turnover.
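
    To put illustrative numbers on it: a company with €2 billion in global annual turnover would face exposure of up to €140 million at the AI Act’s 7% ceiling, against €80 million under GDPR’s 4%.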

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like; far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI event held in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed.
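
    What such an inventory needs to capture is less exotic than it sounds. As a purely illustrative sketch (the field names are assumptions for the example, not terms drawn from any regulation), a minimal record per system might look like this in Python:

      # Illustrative only: one possible shape for an AI system inventory record.
      from dataclasses import dataclass, field
      from datetime import date

      @dataclass
      class AISystemRecord:
          name: str                        # e.g. "credit-scoring-v3" (hypothetical)
          owner: str                       # the accountable business owner
          provenance: str                  # "internal", vendor name, or open-source base
          decision_impact: str             # "consequential" vs "advisory"
          risk_tier: str                   # e.g. an EU AI Act tier such as "high-risk"
          jurisdictions: list[str] = field(default_factory=list)
          last_assessed: date | None = None  # date of the most recent impact assessment

      inventory = [
          AISystemRecord(
              name="credit-scoring-v3",
              owner="Head of Retail Lending",
              provenance="internal",
              decision_impact="consequential",
              risk_tier="high-risk",
              jurisdictions=["EU", "UK"],
              last_assessed=date(2026, 1, 15),
          ),
      ]

      # The question a regulator asks first, answered in one expression:
      never_assessed = [r.name for r in inventory if r.last_assessed is None]

    The point is not the schema but the discipline: if a record along these lines cannot be produced for every system in production, the inventory conversation has not yet started.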

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.
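
    To make the logging point concrete, here is one way an audit trail of model decisions might be structured. This is a minimal sketch under assumed requirements, not a template from any regulator:

      # Illustrative decision logging: capture enough context to reconstruct
      # a model decision later. The record structure is an assumption.
      import hashlib
      import json
      from datetime import datetime, timezone

      def log_decision(sink, model_id: str, model_version: str,
                       inputs: dict, output: str,
                       human_reviewer: str | None = None) -> None:
          record = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "model_id": model_id,
              "model_version": model_version,
              # Hash the inputs rather than storing them raw, so the audit
              # trail does not become a second data-protection problem.
              "input_hash": hashlib.sha256(
                  json.dumps(inputs, sort_keys=True).encode()
              ).hexdigest(),
              "output": output,
              "human_reviewer": human_reviewer,  # None flags an unreviewed decision
          }
          sink.write(json.dumps(record) + "\n")

      # Usage: append-only JSON lines, one record per consequential decision.
      with open("decision_audit.jsonl", "a") as sink:
          log_decision(sink, "credit-scoring", "v3.2",
                       inputs={"applicant_id": "A-1041"},
                       output="declined", human_reviewer="j.smith")

    The engineering here is trivial. The organisational discipline of doing it for every consequential decision, and retaining the records in auditable form, is where most programmes fall short.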

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.

  • Strategically Ready, Operationally Stuck: What Deloitte’s 2026 AI Report Reveals About the Enterprise Gap

    Deloitte’s 2026 State of AI in the Enterprise report, based on a survey of 3,235 senior leaders across 24 countries, delivers a finding that should give every consulting practitioner pause: more companies than ever believe their AI strategy is sound, but fewer feel ready to actually execute it.

    The gap between strategic confidence and operational readiness is the defining tension in enterprise AI right now — and it has implications for how consultancies sell, deliver, and staff their AI practices.

    The preparedness paradox

    According to Deloitte’s survey, 42% of companies now consider their strategy highly prepared for AI adoption, up from the previous year. That sounds encouraging until you read the next line: those same organisations report feeling less prepared than before on infrastructure, data, risk management, and talent.

    This is not a minor statistical wrinkle. It suggests that enterprises have become more fluent in talking about AI — they can articulate a vision, identify use cases, perhaps even secure board-level backing — but the operational foundations needed to move from pilot to production remain weak. Worker access to AI rose by 50% in 2025, and the number of organisations with 40% or more of their AI projects in production is expected to double within six months. Whether the infrastructure exists to support that ambition is another matter entirely.

    The skills picture reinforces the point. Insufficient worker skills were identified as the biggest barrier to integrating AI into existing workflows. The most common response — educating the broader workforce to raise AI fluency (53%) — is necessary but not sufficient. Far fewer organisations are redesigning roles, workflows, or career paths around AI. Education tells people what AI can do. Restructuring tells the organisation how to use it.

    Sovereign AI moves from policy to procurement

    One of the report’s more significant findings is the emergence of sovereign AI as a practical enterprise concern, not just a political talking point. In Singapore, 77% of businesses surveyed said that data residency and in-country or in-region compute considerations are now important to their strategic planning.

    This echoes a dynamic that maddaisy recently examined in the European context, where Capgemini’s CEO Aiman Ezzat outlined a pragmatic four-layer sovereignty framework while signing partnerships with all three major US hyperscalers. Deloitte’s data suggests that the same tension — between the desire for local control and the reality of global infrastructure — is playing out across Asia Pacific and the Middle East, not just Europe.

    As Computer Weekly reported, organisations in the Middle East are approaching a similar inflection point, with sovereign and agentic AI expected to define the next phase of digital transformation. The pattern is consistent: governments want control, enterprises want capability, and the consulting industry is positioning itself to broker the compromise.

    Agentic AI outpaces its guardrails

    Perhaps the most striking data point in the Deloitte report concerns agentic AI — systems designed to plan, execute, and optimise tasks with minimal human oversight. Usage is set to rise sharply over the next two years, but only one in five companies has a mature governance model for autonomous AI agents.

    In Singapore, the numbers are particularly stark: 72% of businesses plan to deploy agentic AI across several operational areas within two years, up from just 15% today. That is a nearly fivefold increase in deployment with governance frameworks that are, by the report’s own assessment, not yet fit for purpose.

    The use cases are real enough. Deloitte cites financial services firms using AI agents to capture meeting actions and track follow-through, airlines deploying agents for common customer transactions, and manufacturers using autonomous systems to optimise product development trade-offs. These are not speculative applications — they are in production. But the governance question is not academic either. When an AI agent autonomously rebooks a flight or commits to a procurement decision, the question of accountability, audit trails, and regulatory compliance becomes urgent.

    Accenture stops counting

    A useful counterpoint to Deloitte’s enterprise survey comes from the supply side. In December, Accenture announced that it would stop separately reporting its advanced AI bookings — a category covering generative, agentic, and physical AI — because the technology had become “so pervasive” it was embedded across nearly everything the firm delivers.

    CEO Julie Sweet framed this as a sign of maturity: AI is no longer a distinct workstream but a feature of all client engagements. Advanced AI bookings hit $2.2 billion in the first quarter of fiscal 2026, double the prior year. The company has reached its target of 80,000 AI and data professionals.

    There is a less generous reading, of course. Stopping disclosure also makes it harder for investors and analysts to track whether AI is generating new revenue or simply being relabelled within existing services. But taken alongside Accenture’s acquisition of Faculty, a UK-based AI firm, in February 2026, the direction is clear: the major consultancies are absorbing AI into their core delivery model rather than treating it as a separate practice.

    What practitioners should watch

    The Deloitte report’s most useful contribution is not its optimism about AI’s potential — the industry has no shortage of that — but its honest accounting of where enterprises actually stand. Three signals are worth tracking.

    First, the strategy-execution gap will drive consulting demand, but not the kind that involves slide decks and maturity assessments. Enterprises need help with the operational plumbing: data architecture, infrastructure modernisation, and workflow redesign. The consultancies that can deliver engineering alongside strategy will win the next phase.

    Second, sovereign AI is becoming a procurement criterion, not just a policy aspiration. For firms operating across multiple jurisdictions — which includes most enterprise consulting clients — this means every AI deployment now carries a compliance dimension that did not exist two years ago.

    Third, the governance gap around agentic AI is a genuine risk, not a theoretical concern. As autonomous systems move from pilots to production, the organisations that invested early in oversight frameworks will have a structural advantage. Those that did not will find themselves either slowing down or taking on liability they have not fully priced.

    The AI story in 2026 is less about whether the technology works — increasingly, it does — and more about whether organisations can build the operational, regulatory, and governance foundations to use it responsibly at scale. The Deloitte data suggests most are not there yet, but they think they are. That gap is where the real work begins.

  • Capgemini’s Sovereignty Playbook: Bridging Europe’s AI Ambitions and American Infrastructure

    In the space of a single week in early February, Capgemini signed sovereignty-focused partnerships with all three major US hyperscalers — Google Cloud, AWS, and Microsoft. Days later, CEO Aiman Ezzat used the company’s full-year results presentation to publicly dismiss calls for complete European tech autonomy.

    The juxtaposition was deliberate, and it tells a more interesting story than the headline financials. Capgemini is not just adding AI capabilities to its consulting portfolio. It is building a distinct commercial proposition around one of Europe’s most politically charged technology questions: who controls the infrastructure that enterprises depend on?

    The gap between rhetoric and reality

    European digital sovereignty has been a policy preoccupation for several years now, accelerated by concerns over US government data access, the dominance of American cloud providers, and the growing strategic importance of AI infrastructure. The European Commission has pushed for greater technological independence. Member states have launched sovereign cloud initiatives. The language of autonomy is everywhere.

    The reality, as Ezzat put it bluntly during the post-earnings call, is more complicated. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.”

    This is not a controversial claim among practitioners, but it is a notable one for a CEO whose company is headquartered in Paris and whose chairman also leads the digital working group at the European Round Table for Industry. Ezzat has been discussing sovereignty with the European Commission in Brussels and at Davos. His position is informed, not casual.

    A four-layer framework

    Ezzat outlined what amounts to a practical sovereignty framework built around four layers: data, operations, regulation, and technology. His argument is that Europe has meaningful independence on the first three — data residency and governance, operational control over services, and regulatory authority through instruments like GDPR and the AI Act. The fourth layer, the underlying technology stack, is where US Big Tech dominance means full independence is neither achievable nor, in his view, desirable.

    Rather than pursuing autonomy at every layer, Capgemini’s approach is to offer clients “the right sovereignty solution based on the use case, the client environment, the government.” In practice, this means European-managed services running on American infrastructure — sovereign in governance and operations, pragmatic on technology.

    As maddaisy noted earlier this week in examining Capgemini’s full-year results, the company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. That trajectory, if it holds, represents a structural shift in how enterprise IT contracts are structured across Europe.

    Three partnerships, one message

    The timing of Capgemini’s hyperscaler announcements was no coincidence. On 6 February, the company expanded its partnership with Google Cloud, establishing a Sovereign Cloud Delivery Practice and Centre of Excellence. Capgemini will operate as a Google Distributed Cloud air-gapped operator — meaning it can deliver fully managed services with total isolation from the public internet, suited to defence, intelligence, and critical infrastructure clients.

    On 9 February, a similar announcement followed with AWS, focused on sovereign-ready cloud and AI capabilities. Two days later, Capgemini formalised integrated sovereignty solutions with Microsoft. Three announcements in five days, each offering variations on the same theme: Capgemini as the European operator sitting between the client and the American cloud.

    This is a positioning play with genuine commercial substance. For European enterprises navigating tightening regulation — particularly public sector organisations, financial institutions, and healthcare providers — the question is not whether to use cloud services but how to use them in ways that satisfy increasingly specific sovereignty requirements. Capgemini is betting it can be the answer to that question.

    Where AI and sovereignty converge

    The sovereignty proposition becomes more compelling when combined with Capgemini’s broader AI pivot. Generative and agentic AI bookings exceeded 10% of group bookings in Q4 2025, and the company has trained 310,000 employees on generative AI and 194,000 on agentic AI — systems designed to take autonomous actions rather than simply generate content.

    AI workloads are particularly sensitive from a sovereignty perspective. They involve large volumes of proprietary data, often require access to regulated information, and increasingly touch decision-making processes that organisations want to keep within controlled environments. A sovereign AI solution — where the model runs on infrastructure governed under European jurisdiction, operated by a European firm, but built on the technical capabilities of a US hyperscaler — addresses a specific and growing need.

    Ezzat framed AI itself with characteristic pragmatism in a separate interview with Fortune. “AI is a business. It is not a technology,” he said, warning leaders against treating it as a “black box being managed separately.” His caution against AI FOMO — “You don’t want to be too ahead of the learning curve. If you are, you’re investing and building capabilities that nobody wants” — suggests a company that has learned from watching the metaverse hype cycle play out.

    What to watch

    Capgemini’s sovereignty strategy raises several questions worth tracking. First, whether the 50%-by-2029 estimate for sovereignty-embedded contracts proves accurate, or whether it reflects the kind of optimistic forecasting that consulting firms are prone to when promoting a new service line. Second, how European competitors — particularly Atos, which has its own sovereignty ambitions, and smaller European cloud providers — respond to Capgemini’s hyperscaler-partnered model. Third, whether the European Commission’s own stance on sovereignty tilts toward the pragmatic Capgemini position or toward more aggressive technological independence.

    For consultants and practitioners, the practical takeaway is straightforward: sovereignty is moving from a compliance checkbox to a structural feature of European enterprise contracts. The firms that build credible delivery capabilities around it now — not just policy positions, but operational partnerships and trained workforces — will have a meaningful advantage as regulation tightens. Capgemini has placed its bet. The question is whether the market follows.

  • Capgemini’s AI Pivot: From Hype to Realism, with €22.5 Billion at Stake

    Capgemini’s full-year 2025 results, released on 13 February, offered one of the clearest snapshots yet of how a major IT consultancy is repositioning itself around artificial intelligence — and the distance still to travel.

    The French group reported revenues of €22.5 billion, up 3.4% at constant currency. Growth was modest by historical standards, weighed down by cautious enterprise spending across Continental Europe. But the fourth quarter told a different story: 10.6% growth, suggesting that client hesitancy may be starting to ease.

    From hype to realism

    CEO Aiman Ezzat framed the company’s direction as a shift “from AI hype to AI realism” — a phrase that carries weight given how frequently the consultancy sector has leaned on AI as a growth narrative without always delivering measurable outcomes.

    The numbers behind the claim are worth examining. Generative AI bookings exceeded 8% of Capgemini’s total for the year, rising above 10% in Q4. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI — the emerging category of AI systems designed to take autonomous actions rather than simply generate content.

    These are significant investments in capability. Whether they translate into proportional revenue growth remains the open question. Capgemini’s 2026 guidance of 6.5% to 8.5% revenue growth did not convincingly clear the 7.2% analyst consensus, suggesting the market expected more from a company positioning AI at the centre of its strategy.

    The sovereignty opportunity

    Beyond AI, Capgemini is betting on a second structural trend: digital sovereignty. The company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. For a European-headquartered firm, this represents a genuine competitive advantage over US-based rivals.

    Ezzat was notably pragmatic on the sovereignty question, dismissing calls for full European tech autonomy. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.” Instead, he advocated for finding “the right sovereignty solution based on the use case, the client environment, the government.”

    This positions Capgemini as a bridge between European regulatory ambitions and the practical reality that most enterprise infrastructure still runs on AWS, Google Cloud, and Microsoft Azure. The company has signed partnerships with all three US hyperscalers to deliver what it calls “sovereign” AI solutions — European-managed services running on American infrastructure.

    The uncomfortable parts

    Not everything in the results paints a straightforward growth picture. Net income declined 4.2% year-on-year to €1.6 billion. The stock has fallen approximately 29% so far in 2026, caught in a broader selloff of companies perceived as vulnerable to AI-driven disruption — an ironic position for a firm making AI the centrepiece of its strategy.

    There is also the matter of Capgemini Government Solutions, the US subsidiary the company announced it would sell following public backlash over a $4.8 million contract with US Immigration and Customs Enforcement. It is a reminder that the consulting business operates in political as well as commercial environments, and reputational risk can force strategic decisions that have little to do with market fundamentals.

    France, Capgemini’s home market, contracted by 4.1% — a concern for a company headquartered in Paris. The bright spots were North America (7.3% growth), the UK and Ireland (10.5%), and Asia Pacific and Latin America (13.8%). Financial Services led sectors with 9.2% growth, accelerating sharply to 20.4% in Q4.

    What to watch

    Capgemini’s results matter beyond its own balance sheet because they reflect broader dynamics affecting the consultancy and IT services sector. The shift from selling AI as a concept to delivering AI as an operational capability is where the next wave of value — and differentiation — will come from.

    Three things are worth tracking. First, whether GenAI bookings continue to accelerate past the 10% threshold reached in Q4, or whether that was an end-of-year surge. Second, how the sovereignty proposition develops as European regulation tightens — this could become a meaningful differentiator for European-headquartered firms. Third, whether the “Fit-for-growth” restructuring programme, which will cost an additional €200 million in cash outflow, delivers the operational efficiency the company is banking on.

    The consultancy sector has spent the better part of two years talking about AI. Capgemini’s latest results suggest it is now moving into the phase of proving it can deliver.