Capgemini’s CEO Makes the Unfashionable Case for Pacing Your AI Investment

There is a particular kind of courage in telling a room full of executives to slow down. Aiman Ezzat, CEO of Capgemini, has been doing exactly that – and his reasoning deserves more attention than the typical “move fast or die” narrative that dominates AI strategy discussions.

“You don’t want to be too ahead of the learning curve,” Ezzat told Fortune in February. “If you are, you’re investing and building capabilities that nobody wants.”

Coming from the head of a €22.5 billion consultancy that has trained 310,000 employees on generative AI and is actively building labs for quantum computing, 6G, and robotics, this is not a counsel of inaction. It is a strategic position on pacing – one that puts Ezzat at odds with much of the technology industry’s current mood.

The FOMO problem

The fear of missing out on AI has become a boardroom affliction. Boston Consulting Group reports that half of CEOs now believe their job is at risk if AI investments fail to deliver returns. That pressure creates a predictable dynamic: spend big, move fast, worry about outcomes later.

The data suggests the worry-later approach is not working. EY research shows that while 88% of employees report using AI at work, organisations are leaving as much as 40% of the potential benefit uncaptured. In the UK, only 21% of workers felt confident using AI as of January 2026. The tools are arriving faster than the capacity to use them well.

Ezzat’s argument is that this gap is not a technology problem. It is a pacing problem. Companies are deploying AI capabilities ahead of their organisation’s ability to absorb them – and ahead of genuine customer demand for the outcomes those capabilities promise.

AI is a business, not a technology

The more substantive part of Ezzat’s case is about framing. Too many leadership teams, he argues, treat AI as “a black box that’s being managed separately” – a technology initiative bolted onto the existing business rather than a force reshaping how the business operates.

“The question you have to focus on is: ‘How can your business be significantly disrupted by AI?’” Ezzat says. “Not ‘How is your finance team going to become more efficient?’ I’m sure your CFO will deal with that at the end of the day.”

The distinction matters. Departmental efficiency projects – automating invoice processing, summarising meeting notes, generating marketing copy – are the low-hanging fruit that most enterprises are picking right now. They deliver incremental gains but rarely transform a business model. The harder question, the one Ezzat wants CEOs to sit with, is whether AI fundamentally changes what a company sells, how it competes, or what its customers expect.

That question takes time to answer well. Rushing it produces expensive experiments that solve the wrong problems.

The trust deficit

Perhaps the most underexplored part of Ezzat’s argument is about human trust. “How do you get humans to trust the agent?” he asks. “The agent can trust the human, but the human doesn’t really trust the agent.”

This cuts to a practical reality that technology roadmaps tend to gloss over. Agentic AI – systems designed to take autonomous actions rather than simply generate content – is the next wave of enterprise deployment. As maddaisy.com noted when covering Capgemini’s role in the OpenAI Frontier Alliance, the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. Trust is a significant part of why.

Employees who do not trust AI agents will find ways to work around them. Managers who cannot explain AI-driven decisions to clients will revert to manual processes. Organisations that deploy autonomous systems faster than their culture can absorb them will create friction, not efficiency.

Ezzat draws an analogy to ergonomics – the mid-twentieth century discipline of designing tools for humans rather than forcing humans to adapt to tools. “Bad chairs lead to bad backs,” he observes. “Bad AI is likely to be far more consequential.”

Consistent with Capgemini’s own playbook

What makes Ezzat’s position credible is that Capgemini’s recent actions align with it: the company invests broadly but scales selectively.

As maddaisy.com’s analysis of the company’s 2025 results highlighted, generative AI bookings rose above 10% of total bookings in Q4 – meaningful but not yet dominant. The company maintains labs for emerging technologies including quantum and 6G, keeping a foot in multiple possible futures without betting the firm on any single one.

Meanwhile, the company added 82,300 offshore workers in 2025 – largely through the WNS acquisition – while simultaneously earmarking €700 million for workforce restructuring. The message is clear: AI changes the shape of the workforce, but it does not eliminate the need for one. Building the human infrastructure to deliver AI at scale takes as much investment as the technology itself.

The metaverse lesson

Ezzat’s most pointed comparison is to the metaverse – a technology that commanded billions in corporate investment before the market concluded that customer demand had been dramatically overstated. Capgemini itself experimented with a metaverse lab. Mark Zuckerberg renamed his company around it. As Ezzat puts it, “like air fryers, its time may now have passed.”

The parallel is not that AI will follow the metaverse into irrelevance – the use cases are far more concrete, and the enterprise adoption data is already stronger. The point is about the cost of overcommitment. Companies that invested heavily in metaverse capabilities before the market was ready wrote off those investments. The same risk exists with AI, particularly in areas like agentic systems where the technology’s capability is advancing faster than organisational readiness to use it.

Ezzat’s prescription is agility over ambition: small pilots, constant monitoring, and the willingness to scale rapidly when adoption genuinely accelerates. “We have to be investing – but not too much – to be able to be aware of the technology, following at the speed to make sure that we are ready to scale when the adoption starts to accelerate.”

What this means for practitioners

For consultants advising clients on AI strategy, Ezzat’s framework offers a useful counterweight to the prevailing urgency. The question is not whether to invest in AI – that debate is settled. The question is how to pace that investment so that capability, demand, and organisational readiness move roughly in step.

Companies that get the pacing right will avoid the twin traps of overinvestment (building capabilities nobody wants) and underinvestment (being caught flat-footed when adoption accelerates). In a market where half of CEOs fear for their jobs over AI outcomes, the discipline to move at the right speed – rather than the fastest speed – may prove to be the more valuable skill.