Category: AI

  • Lloyds Banking Group’s £100 Million AI Bet: What the UK’s First Agentic Financial Assistant Means for Enterprise AI

    Lloyds Banking Group expects its artificial intelligence programme to deliver more than £100 million in value this year — double the £50 million it attributes to generative AI in 2025. The figures, disclosed alongside the group’s annual results in January 2026, represent one of the more concrete attempts by a major financial institution to attach a number to what AI is actually worth.

    That specificity matters. As maddaisy examined last week, PwC’s 2026 Global CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. Lloyds is not claiming to have solved the ROI puzzle entirely, but it is doing something most enterprises have not: publishing the numbers and tying them to specific operational improvements rather than vague promises of transformation.

    The financial assistant: what it actually does

    The headline initiative is a customer-facing AI financial assistant, which Lloyds describes as the first agentic AI tool of its kind offered by a UK bank. Announced in November 2025 and scheduled for public rollout in early 2026, the assistant sits within the Lloyds mobile app and is designed to help customers manage spending, savings, and investments through natural conversation.

    The system uses a combination of generative AI for its conversational interface and agentic AI to process requests and execute actions. In practical terms, a customer can query a payment, ask for a spending breakdown, or request guidance on savings options — and the assistant will interpret the request, plan the necessary steps, and carry them out. Where it reaches the limits of what automated support can handle, it refers users to human specialists.

    The scope is intended to expand. Lloyds has said the assistant will eventually cover its full product suite, from mortgages to car finance to protection products, serving its 28 million customer accounts across the Lloyds, Halifax, Bank of Scotland, and Scottish Widows brands.

    Testing at scale, not in a lab

    Before public launch, Lloyds tested the assistant with approximately 7,000 employees, who collectively completed around 12,000 trials. That is a meaningful pilot — large enough to surface edge cases and failure modes that a controlled lab environment would miss, and conducted with users who understand the bank’s products well enough to stress-test the system’s accuracy.

    The employee testing sits alongside a broader internal AI deployment that has already delivered measurable results. Athena, the group’s AI-powered internal search assistant, is used by 20,000 colleagues and has reduced information search times by 66%. GitHub Copilot, deployed to 5,000 engineers, has driven a 50% improvement in code conversion for legacy systems. An AI-powered HR assistant resolves 90% of queries correctly on the first attempt.

    These are not experimental pilots. They are production tools used at scale, and the fact that Lloyds is willing to attach specific performance metrics to each one distinguishes its approach from the many enterprises that describe AI impact in qualitative terms only.

    The ROI question: credible or convenient?

    The £100 million figure invites scrutiny, and it should. “Value” in corporate AI disclosures is notoriously slippery — it can mean cost savings, time savings converted to a monetary equivalent, revenue uplift, or some combination of all three. Lloyds has not published a detailed methodology for how it arrived at the £50 million figure for 2025 or how it projects the 2026 target.

    That said, the bank’s approach has features that lend it more credibility than many comparable claims. The internal tools have named user populations and specific performance benchmarks. The customer-facing assistant was tested with thousands of employees before launch, not unveiled as a concept. And the 2025 figure is presented as a delivered outcome, not a forecast — a distinction that matters when most enterprises are still struggling to prove any return at all.

    Lloyds also rose 12 places in the Evident AI Global Index last year — the strongest improvement of any UK bank — suggesting that external assessors see substance behind the claims.

    Agentic AI in financial services: the governance dimension

    The move to customer-facing agentic AI in banking raises governance questions that go beyond what internal productivity tools require. As maddaisy explored earlier this week, Deloitte’s 2026 AI report found that only one in five enterprises has a mature governance model for agentic systems. When those systems move from internal search assistants to customer-facing financial advice, the stakes escalate considerably.

    A banking AI that can execute transactions, provide savings guidance, and eventually handle mortgage queries operates in regulated territory. The Financial Conduct Authority’s expectations around suitability, fair treatment, and clear communication apply regardless of whether the advice comes from a human or an algorithm. Lloyds has acknowledged this by building in human referral pathways, but the real test will come at scale — when millions of customers interact with the system simultaneously, and edge cases multiply.

    Ron van Kemenade, the group’s chief operating officer, has framed the launch as “a pivotal step in our strategy as we continue to reimagine the Group for our customers and colleagues.” Ranil Boteju, chief data and analytics officer, has positioned it as a demonstration of responsible deployment, noting that the assistant “can understand and respond to specific, hyper-personalised customer requests and retains memory to offer a more holistic experience, ensuring the generated answer is safe to present to customers.”

    What this signals for the sector

    Lloyds is not the first bank to deploy AI, nor the first to make bold claims about its value. What distinguishes this move is the combination of a concrete financial baseline (£50 million delivered), a named and tested product (the financial assistant), a clear expansion roadmap (full product suite), and an institutional commitment to upskilling (a new AI Academy for its 67,000 employees).

    For practitioners watching the enterprise AI landscape, the Lloyds case offers a useful reference point against the prevailing narrative of deployment fatigue and unproven returns. It does not resolve the broader ROI question — one bank’s results do not establish an industry pattern — but it does suggest that organisations which invest in specific, measurable use cases and test rigorously before launch can move beyond the proof-of-concept purgatory that still traps most enterprises.

    The harder question is what happens next. An AI assistant that helps customers check spending patterns is useful. One that advises on mortgages and investment products enters a different category of risk and regulatory complexity. How Lloyds navigates that expansion — and whether the £100 million value target holds up under the scrutiny of real-world deployment — will be worth watching over the months ahead.

  • The Governance Gap No One Is Closing: Why Agentic AI Is Outrunning Enterprise Oversight

    Three-quarters of enterprises plan to deploy agentic AI within two years. Only one in five has a mature governance model for it. That arithmetic should concern anyone responsible for enterprise technology strategy.

    The figures come from Deloitte’s 2026 State of AI in the Enterprise report, and they represent something more specific than the familiar story of AI adoption outpacing regulation. The challenge with agentic AI is not that rules do not exist — as maddaisy recently examined, the EU AI Act, Colorado’s AI Act, and a growing patchwork of global regulations are creating real enforcement deadlines. The challenge is that agentic systems demand a fundamentally different kind of oversight, and most organisations have not built the operational machinery to provide it.

    What makes agentic AI different

    Conventional AI systems — including the generative AI tools that have dominated enterprise adoption over the past two years — operate in an advisory mode. They suggest, summarise, draft, and classify. A human reviews the output and decides what to do with it. Governance for these systems, while imperfect, fits within existing frameworks: you audit the model, monitor outputs, and maintain a human in the decision loop.

    Agentic AI breaks that model. These systems are designed to plan, execute, and adjust autonomously — booking flights, approving procurement decisions, triaging customer complaints, or managing collections workflows without waiting for human sign-off. Oracle’s new agentic banking platform, launched in February 2026, illustrates the trajectory: domain-specific agents handle loan originations, credit decisioning, and compliance checks, with human oversight positioned as a “human-in-the-loop” role rather than a gatekeeping one.

    The distinction matters because it changes where governance must operate. With advisory AI, oversight happens after the model produces an output and before a human acts on it. With agentic AI, the system is the actor. Governance must be embedded in real time — monitoring agent behaviour as it happens, enforcing boundaries on what an agent can and cannot do, and maintaining audit trails that capture not just decisions but the full chain of reasoning and actions that led to them.

    The 21% problem

    Deloitte’s finding that only 21% of companies have a mature agentic AI governance model is striking, but the detail beneath it is more revealing. In Singapore, where deployment ambitions are among the highest globally — 72% of businesses plan to deploy agentic AI across multiple operational areas within two years, up from 15% today — the mature governance figure drops to just 14%.

    As maddaisy noted in its analysis of the broader Deloitte report, this fits a wider pattern: organisations report growing confidence in their AI strategy even as their readiness on the operational foundations needed to execute it declines. The agentic governance gap is perhaps the sharpest expression of this paradox: a technology that is advancing from pilot to production while the controls needed to run it safely remain in early stages.

    Half of Singapore respondents reported using a patchwork of public and internal proprietary frameworks to assess agent risk and performance. That is not a governance model — it is improvisation.

    Why existing frameworks fall short

    The AI Trends Report 2026, published by statworx and AI Hub Frankfurt, identifies three operational disciplines that are becoming foundational for reliable agentic AI: AI governance, DataOps, and what the report terms AgentOps — the operational layer for managing autonomous AI agents in production.

    AgentOps is a useful concept because it captures what most enterprise governance frameworks currently lack. Traditional AI governance focuses on model development: training data quality, bias testing, documentation, and approval workflows before deployment. That is necessary but insufficient for systems that learn, adapt, and take actions in production environments.

    Agentic systems require runtime governance: clear boundaries on agent autonomy (what decisions can the agent make independently, and which require escalation?), real-time monitoring of agent behaviour against expected parameters, kill switches for when agents drift outside acceptable bounds, and comprehensive audit trails that regulators can inspect after the fact.
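
    The boundary-and-kill-switch idea can be made concrete in a few lines. The sketch below is an illustrative toy, not a real framework; every name in it (ActionPolicy, AgentGuard, the action strings) is invented for this example:

```python
from dataclasses import dataclass, field

@dataclass
class ActionPolicy:
    """Declarative boundary on what an agent may do on its own.
    Anything not listed in either set is prohibited outright (default-deny)."""
    allowed: set = field(default_factory=set)    # autonomous actions
    escalate: set = field(default_factory=set)   # require human approval

class AgentGuard:
    """Runtime gate every proposed agent action must pass through."""
    def __init__(self, policy: ActionPolicy):
        self.policy = policy
        self.halted = False          # kill-switch state

    def kill(self):
        """Operator-triggered kill switch: no further actions pass."""
        self.halted = True

    def check(self, action: str) -> str:
        if self.halted:
            return "blocked"
        if action in self.policy.allowed:
            return "allow"
        if action in self.policy.escalate:
            return "escalate"        # route to a human specialist
        return "blocked"             # default-deny for anything unlisted

# Hypothetical banking-style policy for illustration only
policy = ActionPolicy(allowed={"spending_breakdown"},
                      escalate={"execute_payment"})
guard = AgentGuard(policy)
decision = guard.check("execute_payment")   # -> "escalate"
```

    The point of the sketch is that the boundary lives in code on the request path, not in a policy document: the default is denial, escalation is an explicit outcome, and the kill switch overrides everything.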

    The EU AI Act’s requirements for high-risk systems — documented risk management, technical logging, human oversight mechanisms, and conformity assessments — implicitly assume this kind of operational infrastructure. But most organisations have not yet translated those requirements into engineering reality.

    Deployment is not waiting for governance

    The uncomfortable truth is that agentic AI is entering production regardless of whether governance is ready. Oracle is shipping banking agents now. Companies like AMD and Heathrow Airport are deploying autonomous agents in customer experience roles. Gartner predicts agentic systems will autonomously resolve 80% of customer service issues by 2028.

    Constellation Research offers a useful counterweight to the hype, arguing that agentic AI is “more of a feature than a revolution” and that the real measure of value is decision velocity — how quickly smaller decision trees and processes can be automated at scale. This framing is helpful because it reduces the abstraction. An AI agent rebooking a flight is not a paradigm shift; it is a process automation with a more sophisticated reasoning layer. But that reasoning layer is precisely what makes governance harder. The agent is not following a static script — it is making contextual judgements, and those judgements need oversight.

    What the governance gap actually costs

    The business case for closing the governance gap is not primarily about regulatory fines, though those are real. It is about operational risk. When an agentic system autonomously commits to a procurement decision, misprices a financial product, or gives a customer incorrect information with real-world consequences, the liability question is immediate and the reputational exposure is direct.

    It is also about scaling. Deloitte’s data shows that companies with stronger governance foundations are deploying agentic AI more successfully — they start with lower-risk use cases, build governance capabilities alongside deployment, and scale deliberately. Organisations that skip the governance step find themselves either slowing down when something goes wrong or, worse, not knowing that something has gone wrong until a regulator or customer tells them.

    What needs to happen

    The gap between agentic AI deployment and agentic AI governance is not going to close on its own. Three practical steps can narrow it.

    Define agent autonomy boundaries explicitly. For every agentic AI deployment, organisations need a clear specification of what the agent can do independently, what requires human approval, and what is prohibited. These boundaries should be codified in the system, not just written in a policy document. The Oracle banking platform’s “human-in-the-loop” architecture is one model, but even that needs specificity about when and how the loop engages.

    Invest in runtime monitoring, not just pre-deployment testing. The governance challenge with agentic AI is that it operates continuously and adapts to context. Pre-deployment audits are necessary but not sufficient. Organisations need real-time monitoring that tracks agent decisions against expected parameters and flags anomalies before they compound.
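
    A minimal version of such monitoring is a rolling window of agent outcomes checked against a threshold. The class below is a hypothetical sketch; a real deployment would track many behavioural signals, not a single error rate:

```python
from collections import deque

class RuntimeMonitor:
    """Flags when an agent's recent behaviour drifts outside expected bounds.
    Toy sketch: one signal (error rate) over a fixed-size rolling window."""
    def __init__(self, window: int, max_error_rate: float):
        self.outcomes = deque(maxlen=window)   # rolling window of results
        self.max_error_rate = max_error_rate

    def record(self, ok: bool) -> bool:
        """Record one agent decision; return True if an anomaly is flagged."""
        self.outcomes.append(ok)
        error_rate = self.outcomes.count(False) / len(self.outcomes)
        # Only flag once the window is full, to avoid noisy early readings
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and error_rate > self.max_error_rate

mon = RuntimeMonitor(window=5, max_error_rate=0.4)
flags = [mon.record(ok) for ok in [True, True, False, False, False]]
# The fifth outcome pushes the windowed error rate to 0.6 and trips the flag
```

    The design choice worth noting is that the monitor sits beside the agent in production and evaluates every decision as it happens, which is precisely what pre-deployment audits cannot do.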

    Build audit trails as engineering infrastructure. When a regulator asks how an agent arrived at a specific decision — and under the EU AI Act, they will — the organisation needs to produce a complete chain of the agent’s reasoning, data inputs, and actions. This is not a reporting challenge; it is an engineering one that needs to be designed into the system from the start, not retrofitted after deployment.
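
    One common way to make such a trail tamper-evident is to hash-chain the records, so any retroactive edit breaks the chain. The function below is a hypothetical sketch of that idea, not a prescribed design; field names are illustrative:

```python
import hashlib
import json
import time

def append_audit_record(trail: list, agent_id: str, reasoning: str,
                        inputs: dict, action: str) -> dict:
    """Append one tamper-evident record. Each entry embeds the hash of its
    predecessor, so altering any earlier record invalidates every later one."""
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    record = {
        "agent_id": agent_id,
        "timestamp": time.time(),
        "reasoning": reasoning,   # the chain of reasoning, not just the outcome
        "inputs": inputs,
        "action": action,
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON form of the record (before the hash field exists)
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    trail.append(record)
    return record

trail = []
append_audit_record(trail, "credit-agent-01",
                    "income stable, utilisation low -> approve",
                    {"score": 712}, "approve_limit_increase")
```

    Capturing the reasoning and inputs alongside the action is what turns this from ordinary logging into the kind of inspectable chain a regulator can follow after the fact.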

    The agentic AI governance gap is not a future problem. It is a present one, widening with every new deployment. The organisations that treat governance as a technical discipline — building it into the engineering of their agentic systems rather than bolting it on as a compliance afterthought — will have a structural advantage as the technology matures. Those that do not will discover, as many enterprises have with earlier waves of technology adoption, that the cost of retrofitting oversight always exceeds the cost of building it in.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. Non-compliance carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher. For context, that exceeds the maximum GDPR fine by a significant margin.

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like; far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI event hosted in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed.
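
    As a starting point, an inventory entry can be as simple as a structured record per system. The fields below are illustrative, loosely echoing the kinds of facts regulations such as the EU AI Act expect organisations to document; this is not an official schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI inventory (illustrative fields, hypothetical schema)."""
    name: str
    owner: str                # accountable team or individual
    purpose: str
    decision_impact: str      # "advisory" | "influences" | "decides"
    risk_tier: str            # e.g. an EU AI Act-style tier such as "high"
    jurisdictions: list       # where the system operates
    last_assessment: str      # date of most recent impact assessment

inventory = [
    AISystemRecord("cv-screening", "HR", "shortlist applicants",
                   "influences", "high", ["EU", "UK"], "2026-01-15"),
    AISystemRecord("doc-summariser", "Ops", "summarise internal reports",
                   "advisory", "limited", ["UK"], "2025-11-02"),
]

# A basic compliance query: which systems fall in the high-risk tier?
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
```

    Even a flat list like this answers the first question a regulator will ask — what AI are you running, where, and with what impact — and gives later compliance work something concrete to attach to.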

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.

  • Capgemini’s Sovereignty Playbook: Bridging Europe’s AI Ambitions and American Infrastructure

    In the space of a single week in early February, Capgemini signed sovereignty-focused partnerships with all three major US hyperscalers — Google Cloud, AWS, and Microsoft. Days later, CEO Aiman Ezzat used the company’s full-year results presentation to publicly dismiss calls for complete European tech autonomy.

    The juxtaposition was deliberate, and it tells a more interesting story than the headline financials. Capgemini is not just adding AI capabilities to its consulting portfolio. It is building a distinct commercial proposition around one of Europe’s most politically charged technology questions: who controls the infrastructure that enterprises depend on?

    The gap between rhetoric and reality

    European digital sovereignty has been a policy preoccupation for several years now, accelerated by concerns over US government data access, the dominance of American cloud providers, and the growing strategic importance of AI infrastructure. The European Commission has pushed for greater technological independence. Member states have launched sovereign cloud initiatives. The language of autonomy is everywhere.

    The reality, as Ezzat put it bluntly during the post-earnings call, is more complicated. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.”

    This is not a controversial claim among practitioners, but it is a notable one for a CEO whose company is headquartered in Paris and whose chairman also leads the digital working group at the European Round Table for Industry. Ezzat has been discussing sovereignty with the European Commission in Brussels and at Davos. His position is informed, not casual.

    A four-layer framework

    Ezzat outlined what amounts to a practical sovereignty framework built around four layers: data, operations, regulation, and technology. His argument is that Europe has meaningful independence on the first three — data residency and governance, operational control over services, and regulatory authority through instruments like GDPR and the AI Act. The fourth layer, the underlying technology stack, is where US Big Tech dominance means full independence is neither achievable nor, in his view, desirable.

    Rather than pursuing autonomy at every layer, Capgemini’s approach is to offer clients “the right sovereignty solution based on the use case, the client environment, the government.” In practice, this means European-managed services running on American infrastructure — sovereign in governance and operations, pragmatic on technology.

    As maddaisy noted earlier this week in examining Capgemini’s full-year results, the company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. That trajectory, if it holds, represents a structural shift in how enterprise IT contracts are structured across Europe.

    Three partnerships, one message

    The timing of Capgemini’s hyperscaler announcements was no coincidence. On 6 February, the company expanded its partnership with Google Cloud, establishing a Sovereign Cloud Delivery Practice and Centre of Excellence. Capgemini will operate as a Google Distributed Cloud air-gapped operator — meaning it can deliver fully managed services with total isolation from the public internet, suited to defence, intelligence, and critical infrastructure clients.

    On 9 February, a similar announcement followed with AWS, focused on sovereign-ready cloud and AI capabilities. Two days later, Capgemini formalised integrated sovereignty solutions with Microsoft. Three announcements in five days, each offering variations on the same theme: Capgemini as the European operator sitting between the client and the American cloud.

    This is a positioning play with genuine commercial substance. For European enterprises navigating tightening regulation — particularly public sector organisations, financial institutions, and healthcare providers — the question is not whether to use cloud services but how to use them in ways that satisfy increasingly specific sovereignty requirements. Capgemini is betting it can be the answer to that question.

    Where AI and sovereignty converge

    The sovereignty proposition becomes more compelling when combined with Capgemini’s broader AI pivot. Generative and agentic AI bookings exceeded 10% of group bookings in Q4 2025, and the company has trained 310,000 employees on generative AI and 194,000 on agentic AI — systems designed to take autonomous actions rather than simply generate content.

    AI workloads are particularly sensitive from a sovereignty perspective. They involve large volumes of proprietary data, often require access to regulated information, and increasingly touch decision-making processes that organisations want to keep within controlled environments. A sovereign AI solution — where the model runs on infrastructure governed under European jurisdiction, operated by a European firm, but built on the technical capabilities of a US hyperscaler — addresses a specific and growing need.

    Ezzat framed AI itself with characteristic pragmatism in a separate interview with Fortune. “AI is a business. It is not a technology,” he said, warning leaders against treating it as a “black box being managed separately.” His caution against AI FOMO — “You don’t want to be too ahead of the learning curve. If you are, you’re investing and building capabilities that nobody wants” — suggests a company that has learned from watching the metaverse hype cycle play out.

    What to watch

    Capgemini’s sovereignty strategy raises several questions worth tracking. First, whether the 50%-by-2029 estimate for sovereignty-embedded contracts proves accurate, or whether it reflects the kind of optimistic forecasting that consulting firms are prone to when promoting a new service line. Second, how European competitors — particularly Atos, which has its own sovereignty ambitions, and smaller European cloud providers — respond to Capgemini’s hyperscaler-partnered model. Third, whether the European Commission’s own stance on sovereignty tilts toward the pragmatic Capgemini position or toward more aggressive technological independence.

    For consultants and practitioners, the practical takeaway is straightforward: sovereignty is moving from a compliance checkbox to a structural feature of European enterprise contracts. The firms that build credible delivery capabilities around it now — not just policy positions, but operational partnerships and trained workforces — will have a meaningful advantage as regulation tightens. Capgemini has placed its bet. The question is whether the market follows.

  • Capgemini’s AI Pivot: From Hype to Realism, with €22.5 Billion at Stake

    Capgemini’s full-year 2025 results, released on 13 February, offered one of the clearest snapshots yet of how a major IT consultancy is repositioning itself around artificial intelligence — and the distance still to travel.

    The French group reported revenues of €22.5 billion, up 3.4% at constant currency. Growth was modest by historical standards, weighed down by cautious enterprise spending across Continental Europe. But the fourth quarter told a different story: 10.6% growth, suggesting that client hesitancy may be starting to ease.

    From hype to realism

    CEO Aiman Ezzat framed the company’s direction as a shift “from AI hype to AI realism” — a phrase that carries weight given how frequently the consultancy sector has leaned on AI as a growth narrative without always delivering measurable outcomes.

    The numbers behind the claim are worth examining. Generative AI bookings exceeded 8% of Capgemini’s total for the year, rising above 10% in Q4. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI — the emerging category of AI systems designed to take autonomous actions rather than simply generate content.

    These are significant investments in capability. Whether they translate into proportional revenue growth remains the open question. Capgemini’s 2026 guidance of 6.5% to 8.5% revenue growth fell slightly below the 7.2% analyst consensus, suggesting the market expected more from a company positioning AI at the centre of its strategy.

    The sovereignty opportunity

    Beyond AI, Capgemini is betting on a second structural trend: digital sovereignty. The company estimates that over 50% of service contracts will include sovereignty requirements by 2029, up from just 5% in 2025. For a European-headquartered firm, this represents a genuine competitive advantage over US-based rivals.

    Ezzat was notably pragmatic on the sovereignty question, dismissing calls for full European tech autonomy. “There is no such thing as absolute sovereignty,” he told journalists. “Nobody has it, because no one has sovereignty over the entire value chain required to deliver services.” Instead, he advocated for finding “the right sovereignty solution based on the use case, the client environment, the government.”

    This positions Capgemini as a bridge between European regulatory ambitions and the practical reality that most enterprise infrastructure still runs on AWS, Google Cloud, and Microsoft Azure. The company has signed partnerships with all three US hyperscalers to deliver what it calls “sovereign” AI solutions — European-managed services running on American infrastructure.

    The uncomfortable parts

    Not everything in the results paints a straightforward growth picture. Net income declined 4.2% year-on-year to €1.6 billion. The stock has fallen approximately 29% so far in 2026, caught in a broader selloff of companies perceived as vulnerable to AI-driven disruption — an ironic position for a firm making AI the centrepiece of its strategy.

    There is also the matter of Capgemini Government Solutions, the US subsidiary the company announced it would sell following public backlash over a $4.8 million contract with US Immigration and Customs Enforcement. It is a reminder that the consulting business operates in political as well as commercial environments, and reputational risk can force strategic decisions that have little to do with market fundamentals.

    France, Capgemini’s home market, contracted by 4.1% — a concern for a company headquartered in Paris. The bright spots were North America (7.3% growth), the UK and Ireland (10.5%), and Asia Pacific and Latin America (13.8%). Financial Services led sectors with 9.2% growth, accelerating sharply to 20.4% in Q4.

    What to watch

    Capgemini’s results matter beyond its own balance sheet because they reflect broader dynamics affecting the consultancy and IT services sector. The shift from selling AI as a concept to delivering AI as an operational capability is where the next wave of value — and differentiation — will come from.

    Three things are worth tracking. First, whether GenAI bookings continue to accelerate past the 10% threshold reached in Q4, or whether that was an end-of-year surge. Second, how the sovereignty proposition develops as European regulation tightens — this could become a meaningful differentiator for European-headquartered firms. Third, whether the “Fit-for-growth” restructuring programme, which will cost an additional €200 million in cash outflow, delivers the operational efficiency the company is banking on.

    The consultancy sector has spent the better part of two years talking about AI. Capgemini’s latest results suggest it is now moving into the phase of proving it can deliver.