Tag: consulting-trends

  • CTOs and CHROs Are Climbing the Pay Table. That Tells You Where Boards Think Risk Lives Now.

For decades, the path to being among a public company’s five highest-paid executives ran through operations, sales, or leadership of a business unit. Technology leaders and HR chiefs were important, certainly — but they were support functions, not the positions that commanded top-tier compensation.

    That hierarchy is shifting. New research from The Conference Board, published this week, tracks the named executive officers (NEOs) — the five highest-paid executives — across the Russell 3000 from 2021 to 2025. The findings are striking: Chief Technology Officers appearing as NEOs increased by 61%, while Chief Human Resources Officers rose by 55%. In the same period, business unit leaders — historically the dominant non-CEO, non-CFO category — declined by 15%.

    The numbers are not subtle. They represent a structural revaluation of which functions boards consider most critical to company performance and risk.

    From support functions to enterprise risk owners

    The explanation is not difficult to locate. Two forces are converging: the AI boom is making technology capability an existential strategic question, and a persistent talent war is making the ability to attract, retain, and reskill workers a board-level concern rather than an HR department problem.

    “Growth in CHRO and CTO roles signals that talent, culture, and digital capability are now viewed as enterprise risks, not support functions,” said Andrew Jones, Principal Researcher at The Conference Board. “Boards are prioritising leaders who shape resilience and transformation across the organisation.”

    This tracks with what maddaisy has been reporting for months. The consulting pyramid piece in early March documented how firms are reshuffling roles rather than eliminating them — cutting some positions while creating others in AI engineering and data science. Accenture’s decision to track AI tool usage for promotions was another signal: when a firm ties career progression to technology adoption, the executive overseeing that technology becomes strategically indispensable.

    The specific numbers tell the story

    CTO NEO disclosures rose from 155 to 249 in the Russell 3000 between 2021 and 2025. CHRO disclosures went from 148 to 230. These are not marginal shifts — they represent boards deciding, through the bluntest mechanism available (compensation), that these roles belong at the top table.

    Meanwhile, business unit leaders fell from 1,734 to 1,475 disclosures. The decline suggests that boards are placing less emphasis on divisional performance and more on enterprise-wide capabilities: technology infrastructure, talent strategy, and the legal and regulatory architecture that governs both.

    That last point matters. Legal roles — including chief legal officers, corporate secretaries, and general counsels — saw the largest increase of any non-mandatory NEO category, rising 21% over the same period. As AI governance, data regulation, and compliance pressures mount, the lawyers are moving closer to the centre of power too.

    What this means for the talent market

    The compensation data carries implications well beyond boardroom politics. When CTOs and CHROs move into the highest-paid tier, it reshapes the talent pipeline for those roles. More ambitious executives will target those paths. Boards will demand different skill sets — not just technical competence for CTOs, but strategic vision for how AI and digital infrastructure create competitive advantage. Not just process management for CHROs, but the ability to navigate workforce transformation at scale.

    The Conference Board’s data also reveals an interesting gender dimension. Among S&P 500 NEOs, women CEOs earned 11% more than men in 2025. But male NEOs overall still earned 8% more in the S&P 500 and 12% more in the Russell 3000. The gap, as researcher Paul Hodgson noted, “largely reflects who holds which roles” — men remain more prevalent in higher-paid operational and commercial positions, with longer average tenure and concentration at larger firms.

    The COO plateau and what it signals

    One quieter finding deserves attention: COO representation rose just 6% over the period and has actually declined from a 2023 peak. The chief operating officer — once the natural second-in-command — appears to be losing ground to more specialised enterprise-wide roles. In an era where the critical operational questions are “how do we deploy AI safely” and “how do we retain the people who know how to do it,” a generalist operations mandate may no longer be enough to justify top-tier compensation.

    The board’s revealed preferences

    Compensation data has always been the most reliable indicator of what organisations actually value, as opposed to what they claim to value. Press releases can announce “people-first cultures” and “digital-first strategies” without consequence. Paying the CHRO and CTO as much as the head of your largest business unit is a commitment that shows up in proxy statements.

    The Conference Board’s findings confirm a pattern that has been building for several years: boards are redefining which risks are existential and which executives own them. The AI boom has made technology leadership a strategic imperative. The talent war — intensified by the very AI transformation companies are pursuing — has elevated workforce strategy from an administrative function to an enterprise risk.

    For consultants and practitioners, the practical implication is clear. The organisations they advise are restructuring their leadership hierarchies around technology and talent. Advisory work that once centred on operational efficiency and market strategy increasingly requires fluency in AI deployment, workforce transformation, and the regulatory landscape that governs both. The C-suite is telling you where the priorities are. The pay data just makes it impossible to ignore.

  • CX Metrics Are Broken and Enterprises Know It. The Question Is What Comes Next.

One in three customer experience professionals cannot connect their organisation’s Net Promoter Score to financial performance. That is not the conclusion of a contrarian think piece – it is a finding from Medallia’s 2026 State of Customer Experience Report, which surveyed more than 1,500 consumers and 550 CX practitioners globally. The metric that has anchored boardroom CX conversations for the better part of two decades cannot, for a significant share of the profession, be connected to the outcomes it is supposed to predict.

    And the profession knows it. According to the same report, 78 per cent of CX professionals plan to adopt at least one new metric in 2026.

    The perception gap that should worry everyone

    The most striking number in the Medallia research is not about NPS at all. It is the gap between how enterprises think they are performing and how their customers actually feel. Two-thirds of CX professionals believe their brand’s experience has improved over the past year. Only 16 per cent of consumers agree.

    That is not a minor calibration error. It is a 50-percentage-point disconnect between corporate self-assessment and market reality. And it has consequences: 40 per cent of consumers in the study reported switching brands within three months, often without the kind of dramatic service failure that would register on traditional satisfaction surveys. They simply drifted, quietly, toward something better – or at least less friction-filled.

    The implication is uncomfortable. Many enterprises are not just using the wrong metrics. They are using metrics that actively mislead them into believing things are going well when customers are already leaving.

    Too many metrics, too little insight

The problem is not a shortage of measurement. If anything, it is the opposite. Research published in MIT Sloan Management Review found that large organisations routinely track between 50 and 200 CX metrics across multiple channels and touchpoints. Some track even more. The result is not better decision-making but what Usan’s 2026 CX report calls “data paralysis” – mountains of signals with no clear path from insight to action.

    MIT Sloan’s researchers, working with 14 subscription-services companies, found that most organisations collect metrics because they are industry-standard, not because they have been validated as useful for their specific customer journey. The recommendation is counterintuitive for an era obsessed with data: collect fewer metrics, but align the ones you keep to specific stages of the customer lifecycle where they can actually inform decisions.

    Medallia’s own data supports this. Among CX professionals using 10 or more data sources, 92 per cent said they could demonstrate return on investment. Among those using five or fewer, the figure dropped to 73 per cent. More data sources help – but only when they are deliberately chosen and connected, not accumulated by default.

    The structural problem underneath

    Even when CX teams produce meaningful insights, the organisational infrastructure often fails to act on them. Medallia’s survey found that 41 per cent of CX professionals worry their teams are too siloed from the rest of the business. One-third said entire departments take no action on CX findings. Half said their organisation does not respond often enough or with enough impact.

    Budget control compounds the issue. Fifty-eight per cent of CX practitioners said their initiatives are funded from budgets they do not directly oversee – a structural dependency that limits their ability to move quickly or invest in new approaches.

    This echoes a pattern maddaisy.com has tracked across several domains. When this site examined how enterprises are designing digital experiences for both humans and AI agents, the core finding was similar: organisations recognise that the landscape has shifted, but their internal structures have not caught up. The CX measurement problem is another instance of the same gap – not a lack of awareness, but a failure of organisational plumbing.

    Where the replacement metrics are coming from

    The 78 per cent of CX professionals looking for new metrics are not starting from scratch. A CX Network analysis of customer listening trends suggests three directions gaining traction.

    First, Customer Effort Score – a measure of how easy or difficult a specific interaction was – is increasingly favoured over NPS for transactional touchpoints, because it maps directly to a friction point rather than measuring a generalised sentiment. Second, frontline employee input is being formalised as a data source, recognising that the people handling customer interactions daily often have a more accurate picture of experience quality than any survey. Third, AI-powered analysis of unstructured data – support transcripts, social mentions, behavioural signals – is enabling what practitioners call “continuous listening”, replacing periodic surveys with ongoing, passive measurement.

    None of these are new ideas. What is new is the urgency. Transcom’s 2026 CX paradoxes report makes a pointed observation: many organisations are layering AI-driven CX tools on top of broken measurement foundations. Automating a process that was never validated does not fix it. It scales the dysfunction.

    What practitioners should be watching

    The shift away from legacy CX metrics is not a technology problem waiting for a technology solution. It is a governance problem. Organisations that want their CX measurement to mean something will need to do three things that are simple to describe and difficult to execute.

    They will need to audit their existing metrics ruthlessly – not asking “is this metric popular?” but “does this metric predict a financial outcome we care about?” They will need to connect whatever survives that audit to specific journey stages, so that insights map to decisions rather than floating in quarterly reports. And they will need to give CX teams the budget authority and cross-functional access to act on what they find, rather than producing dashboards that nobody is structurally empowered to respond to.
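    The first of those steps can be sketched concretely. A minimal screen, in Python, correlates each tracked metric against a financial outcome and keeps only the metrics that clear a threshold. Every metric name, data point, and the 0.5 cut-off below is an illustrative assumption, not a figure from the research cited above.

```python
import numpy as np

# Hypothetical quarterly series: each tracked CX metric alongside a
# financial outcome (revenue retention). All numbers are invented
# for illustration, not taken from the Medallia report.
revenue_retention = np.array([0.91, 0.88, 0.93, 0.90, 0.86, 0.94, 0.92, 0.89])

metrics = {
    "nps":          np.array([42, 44, 41, 45, 43, 42, 44, 41]),
    "effort_score": np.array([2.1, 2.6, 1.8, 2.2, 2.9, 1.7, 2.0, 2.5]),
}

def audit(metrics, outcome, threshold=0.5):
    """Keep only metrics whose absolute correlation with the
    financial outcome clears the threshold."""
    kept = {}
    for name, series in metrics.items():
        r = np.corrcoef(series, outcome)[0, 1]
        if abs(r) >= threshold:
            kept[name] = round(r, 2)
    return kept

# With these invented numbers, effort score tracks retention closely
# (strong negative correlation: more friction, less retention) while
# NPS does not clear the bar.
print(audit(metrics, revenue_retention))
```

    In practice the outcome series would be retention, expansion revenue, or churn, and a real audit would test out-of-sample prediction rather than in-sample correlation; the point is simply that the question “does this metric predict a financial outcome we care about?” is testable, not rhetorical.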

    The 50-point perception gap in Medallia’s data is not a measurement artefact. It is a mirror. Enterprises that cannot close it will keep telling themselves that customer experience is improving, right up until the churn numbers say otherwise.

  • Anthropic’s Pentagon Walkaway and the Price of AI Principles

    When maddaisy examined the Pentagon-Anthropic standoff earlier this month, the focus was on a new category of vendor risk – the possibility that political pressure could turn a trusted AI supplier into a regulatory liability overnight. That analysis ended with a warning: organisations in the defence supply chain needed to start modelling political risk alongside technical capability.

    Nine days later, the story has taken a turn that few risk models would have predicted. The company the US government tried to destroy is winning the consumer market.

    The market’s unexpected verdict

    Claude, Anthropic’s AI assistant, surged to the number one spot on Apple’s App Store in the days following the ban. The #QuitGPT movement saw 2.5 million users cancel ChatGPT subscriptions or publicly pledge to boycott OpenAI. Protesters gathered outside OpenAI’s San Francisco headquarters. Reddit and X filled with migration guides.

    The paradox is striking. Anthropic refused to remove ethical guardrails from its Pentagon contract – specifically, restrictions on mass surveillance and fully autonomous weapons. The government designated it a supply-chain risk, a label previously reserved for foreign adversaries. And the market responded by rewarding Anthropic with the most effective brand-building campaign in AI history – one the company never planned and could not have bought.

    What OpenAI’s ‘win’ actually cost

    OpenAI moved quickly to fill the gap. Sam Altman announced a deal to deploy models on the Pentagon’s classified networks, claiming the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human oversight requirements. The critical difference, as maddaisy noted at the time, is that OpenAI permits “all lawful uses” and relies on existing Pentagon policies rather than contractual restrictions.

    Altman defended the decision in Fortune, arguing that “the absence of responsible AI companies in the military space” would be worse than engagement on imperfect terms. It is a reasonable argument in isolation.

    But the commercial consequences arrived faster than the contracts. OpenAI’s head of robotics resigned over ethical concerns related to the Pentagon deal. Eleven OpenAI employees signed an open letter protesting the government’s treatment of Anthropic – even as their employer stood to benefit from it. OpenAI rushed out GPT-5.4 within 48 hours of GPT-5.3 Instant, in what looked less like a planned release and more like a company trying to change the subject.

    The talent cost may prove more significant than the user exodus. In an industry where the top researchers can choose where they work, being seen as the company that took the contract Anthropic refused is not a recruitment advantage.

    The strategic calculus of principles

    For decades, ethical positioning in technology was treated as a cost centre – a brand exercise that might prevent negative press but would never drive revenue. The Anthropic-Pentagon episode is the first major test of whether that assumption holds in the AI era.

    The early data suggests it does not. Anthropic is being rewarded by consumers and punished by government. OpenAI is being rewarded by government and punished by consumers. Both companies are learning, in real time, that the AI market has bifurcated into constituencies with fundamentally different values – and that serving one may come at the expense of the other.

    This is new territory for the technology industry. Previous ethical standoffs – Apple versus the FBI over iPhone encryption in 2016, Google’s Project Maven withdrawal in 2018 – were significant but did not produce the same kind of mass consumer response. The difference is that AI tools are personal. People interact with ChatGPT and Claude daily, often for sensitive tasks. When those tools become entangled with military use and surveillance, the reaction is visceral in a way that an encryption debate never was.

    The $60 billion question

    None of this means Anthropic’s position is comfortable. The company has received more than $60 billion in total investment from over 200 venture capital funds. If the supply-chain risk designation holds, enterprise clients in the defence ecosystem will be contractually barred from using Anthropic’s products. Anthropic has announced it will challenge the designation in court, but legal processes move slowly and government procurement decisions move fast.

    The Pro-Human Declaration, a framework released by a bipartisan coalition of researchers and former officials, was finalised before the standoff but published in its aftermath. Its signatories include Steve Bannon and Susan Rice – a pairing that underlines just how broadly the anxiety about unchecked AI development has spread. Among its provisions: mandatory pre-deployment testing, prohibitions on fully autonomous weapons, and an outright moratorium on superintelligence development until scientific consensus on safety is established.

    Whether such frameworks gain legislative traction remains an open question. As maddaisy has previously noted, more than 1,000 AI-related bills were introduced across all 50 US states in 2025 alone, while federal action has been conspicuously absent. The Pentagon-Anthropic episode may be the catalyst that changes that – or it may simply add another chapter to the regulatory vacuum.

    What this means for practitioners

    For organisations evaluating AI vendors, the lesson is not that principles are good and pragmatism is bad. It is that ethical positioning has become a material factor in AI vendor strategy – one that affects talent retention, consumer trust, regulatory exposure, and government access in ways that are increasingly difficult to hedge.

    The vendor assessment frameworks that most enterprises use were not designed for this. They evaluate uptime, security certifications, data residency, and pricing. They do not evaluate whether a vendor’s ethical commitments might make it a target of government action, or whether a vendor’s willingness to accept government contracts without restrictions might trigger a consumer backlash that affects product development velocity.

    Both of those scenarios played out in the space of a single week. Any vendor strategy that does not account for them is incomplete.

    The AI industry is discovering what the defence, pharmaceutical, and energy industries learned long ago: when your products touch questions of public safety and national security, your commercial strategy and your ethical positioning become the same thing. The companies that figure this out first – not just as a brand exercise, but as a genuine constraint on what they will and will not do – will be the ones that retain the trust of all their constituencies, not just the most powerful one.

    Anthropic’s stand cost it a $200 million contract. It may also have bought something more durable: a market position built on trust at a time when trust in AI companies is in short supply.

  • OpenAI’s Frontier Alliance Is Not Just About Consulting. It Is a Bet Against the Enterprise Software Stack.

    When maddaisy examined OpenAI’s Frontier Alliance in February, the focus was on what it meant for the consulting firms — McKinsey, BCG, Accenture, and Capgemini — and the admission that AI vendors cannot scale enterprise deployments alone. That story was about the consulting industry. This one is about the companies the alliance is quietly aimed at: the enterprise software vendors that have built trillion-dollar businesses on per-seat licensing.

    The per-seat model under pressure

    OpenAI’s Frontier platform, launched in early February, is designed as an enterprise operating layer — a unified system where AI agents can log into applications, execute workflows, and make decisions across an organisation’s entire technology stack. CRM systems, HR platforms, ticketing tools, internal databases. The ambition is not to replace any single application but to sit above all of them.

    The threat to SaaS vendors is structural, not incremental. If AI agents execute the tasks that human employees currently perform inside Salesforce, ServiceNow, or Workday, the justification for per-seat licensing weakens. Fewer human users logging in means fewer seats to sell. And if agents can orchestrate workflows across multiple systems from a single platform, the case for buying specialised point solutions — each with its own subscription — becomes harder to make.

The market has not waited for proof. Roughly $2 trillion in market value was wiped from technology stocks in a single week over AI displacement concerns. ServiceNow shares fell more than 20% year-to-date by mid-February. IBM suffered its largest single-day decline in 25 years after Anthropic’s Claude demonstrated competence with legacy COBOL systems — the very maintenance work that underpins a significant portion of IBM’s consulting revenue.

    The consulting conduit

    What makes the Frontier Alliance specifically dangerous for SaaS incumbents is not the technology. It is the distribution channel.

    McKinsey, BCG, Accenture, and Capgemini are not just consulting firms. They are the primary implementation partners for the very software companies that Frontier could displace. When a Fortune 500 company deploys Salesforce, it typically hires one of these firms to manage the rollout. When it migrates to ServiceNow’s IT service management platform, the same consulting firms handle the integration. The relationships are deep, multi-year, and built on trust.

    OpenAI has effectively enlisted those relationships as a distribution network. Each of the four firms has established dedicated OpenAI practice groups, certified their teams on Frontier, and committed to multi-year alliances. OpenAI’s own forward-deployed engineers will sit alongside consulting teams in client engagements — a model borrowed from Palantir’s playbook for embedding in enterprise accounts.

    The result is a direct-to-enterprise pipeline that does not need SaaS vendors as intermediaries. A consulting firm advising a client on AI strategy can now recommend Frontier agents that orchestrate existing systems, rather than recommending new SaaS products that require their own implementation projects. The consulting firm earns either way. The SaaS vendor may not.

    SaaS is not dying. But the economics are shifting.

    The counterarguments deserve a hearing. Fortune 500 companies will not abandon decades of enterprise software investment overnight. Compliance requirements, audit trails, data sovereignty obligations, and the sheer operational complexity of large organisations create friction that no AI platform can simply wave away. As one analyst put it, “we are simply not going to see a complete unwinding of the past 50 years of enterprise software development.”

    The incumbents are also adapting. Salesforce has pioneered what it calls the “Agentic Enterprise Licence Agreement” — a fixed-price, consumption-based model designed to decouple revenue from headcount. ServiceNow and Microsoft are shifting toward outcome-based pricing. These moves acknowledge the threat and attempt to neutralise it by changing the unit of value from the human user to the business outcome.

    But adaptation comes at a cost. Per-seat licensing has been the engine of SaaS margins for two decades. Moving to consumption or outcome-based models compresses revenue predictability and margins in the short term, even if it preserves relevance in the long term. The transition is not painless, and investors know it.

    Where this connects

    This development sits at the intersection of several threads maddaisy has been tracking. Capgemini’s CEO, Aiman Ezzat, argued last week that organisations are deploying AI capabilities ahead of their ability to absorb them. He is right — but that does not mean the structural pressure on SaaS pricing will wait for organisations to catch up. The market reprices on expectations, not on deployment maturity.

    And the consulting pyramid piece from yesterday noted that the industry’s base is being reshaped as AI compresses the need for junior analytical work. A similar compression is now visible in enterprise software: the middle layer of the stack — the specialised tools that automate individual workflows — faces pressure from platforms that automate across workflows.

    The question for enterprise software vendors is not whether AI agents will change their businesses. It is whether they can shift their pricing, their value propositions, and their competitive moats fast enough to remain the platform of choice — rather than becoming the legacy infrastructure that sits beneath someone else’s agent layer.

    For practitioners evaluating their own technology stacks, the practical implication is this: the next software audit should not just ask what each tool costs per seat. It should ask what happens to that cost when half the seats belong to agents that do not need a licence.
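    That audit question reduces to simple arithmetic. A minimal sketch, under invented assumptions (the seat count, the $1,800 annual list price, and the 50% agent share are all hypothetical, not vendor pricing):

```python
# Hypothetical licence maths: what happens to per-seat spend when a
# share of the work moves to agents that do not consume a licence?
# All figures below are illustrative assumptions, not vendor pricing.

def annual_licence_cost(human_seats: int, cost_per_seat: float,
                        agent_share: float = 0.0) -> float:
    """Per-seat spend after a given share of seats is replaced by
    unlicensed agents."""
    remaining_seats = round(human_seats * (1 - agent_share))
    return remaining_seats * cost_per_seat

today = annual_licence_cost(2_000, 1_800)                   # 2,000 seats at $1,800/yr
after = annual_licence_cost(2_000, 1_800, agent_share=0.5)  # half the seats are agents

print(f"today: ${today:,.0f}")   # today: $3,600,000
print(f"after: ${after:,.0f}")   # after: $1,800,000
```

    The arithmetic is deliberately crude: it ignores the consumption or outcome-based fees a vendor would introduce to recapture that lost revenue, which is exactly the repricing negotiation the audit should anticipate.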

  • The Consulting Pyramid Is Not Collapsing. It Is Being Quietly Redesigned.

    The consulting industry’s foundational staffing model — a small number of partners supported by large cohorts of junior analysts — is facing its most serious structural challenge in decades. But the narrative that AI is “dismantling” the pyramid overstates the pace and understates the complexity of what is actually happening.

    Nick Pye, managing partner at Mangrove Consulting, argued in Consultancy.uk this month that the traditional pyramid is “becoming increasingly difficult to justify.” His core thesis is straightforward: AI can now perform the analytical heavy lifting — research, financial modelling, scenario analysis — that once required rooms of graduates billing at premium rates. Clients are noticing. They want senior judgement, not junior analysis. And they increasingly have their own data capabilities, reducing dependence on external advisers for the work that used to fill the pyramid’s base.

    The argument is sound in principle. Where it risks overreach is in assuming the transition is further along than the evidence suggests.

    The data tells a more complicated story

    If AI were genuinely dismantling the consulting pyramid, one would expect to see mass reductions in headcount at the bottom. The picture is more mixed than that.

    As maddaisy reported in February, Capgemini ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. The company simultaneously announced €700 million in restructuring charges. It is not shrinking. It is reshuffling: eliminating some roles while creating others, primarily in AI engineering, data science, and agentic AI delivery.

    McKinsey, meanwhile, has begun testing new recruits on AI capabilities, and more than half of graduate roles now reportedly require AI skills. The Big Four have reduced student intakes, but the UK consulting market’s difficulties in 2025 — its worst year since lockdown — owe as much to broader demand softness as to structural AI disruption.

    And the technology itself is not yet delivering at the scale the hype implies. Eden McCallum research published this year found that while excitement for generative AI remains high, revenue impact remains minimal. Ninety-five per cent of AI pilots have failed to deliver returns, according to industry data cited by Consultancy.uk. That is not a technology that has already displaced the analyst class.

    What is genuinely changing

    None of this means the pyramid is safe. The direction of travel is clear, even if the pace is slower than the most breathless accounts suggest.

    Three shifts are converging. First, clients increasingly own and understand their own data. The analytical monopoly that consulting firms once held — gathering, processing, and synthesising information that clients could not access themselves — has eroded as organisations have built internal data teams and deployed their own AI tools.

    Second, the economics of the base are deteriorating. When an AI system can produce a comparable market analysis in minutes rather than weeks, it becomes progressively harder to justify billing rates for the same work performed by junior staff. This does not eliminate the need for human analysis, but it compresses the time and headcount required.

    Third, client expectations have shifted from deliverables to outcomes. As Pye puts it: clients want “decisions and performance,” not “decks and processes.” That shift favours experienced practitioners who can navigate organisational politics and drive implementation — not the analysts who assemble the slides.

    The diamond, the inverted pyramid, and the graduate question

    The replacement models being discussed are instructive. The most conservative is the “diamond” — wider in the middle, thinner at the base, with fewer entry-level analysts but more mid-level orchestration roles. It preserves hierarchy while acknowledging that the bottom of the pyramid has less to do.

    The more radical option is what Pye calls “flipping the pyramid”: small teams of senior and mid-level consultants tackling specific challenges, supported by AI systems rather than junior staff. Boutique consultancies have operated variations of this model for years. What AI changes is the scale at which it becomes viable.

    But neither model addresses the question that should concern the industry most: if junior roles contract, where do future senior consultants come from?

    The traditional pyramid functioned as a training pipeline. Graduates entered, learned the craft through years of analytical work, and developed into the experienced practitioners clients now prize. Close that entry point, and the industry faces a slow-motion skills crisis — a generation of senior consultants with no successors trained in the discipline.

    Pye’s answer is that consulting firms will increasingly recruit mid-career professionals who have already developed sector expertise elsewhere. The career path inverts: specialise first in an industry, then move into consulting.

    This is plausible but raises its own problems. The consulting skill set — structured problem solving, client management, the ability to diagnose organisational dysfunction — is not the same as industry expertise. A decade in financial services does not automatically produce someone who can run a transformation programme. The two capabilities overlap, but they are not identical.

    The real risk is not speed — it is the talent pipeline

    The firms that are moving fastest on AI are not necessarily the ones best positioned for the long term. Accenture is tracking AI logins for promotion decisions. OpenAI has formed alliances with McKinsey, BCG, Accenture, and Capgemini to deploy its enterprise AI platform. Capgemini’s CEO is counselling patience, arguing that deploying AI ahead of organisational readiness wastes both money and credibility.

    Each response reflects a different bet on how quickly the pyramid will change — and how to navigate the transition without breaking the firm’s ability to develop talent.

    The consulting industry is not being dismantled by AI. It is being redesigned, unevenly, firm by firm, with no consensus on the target operating model. The firms that get the balance right — reducing the base without severing the pipeline that produces tomorrow’s senior partners — will define what the industry looks like in a decade. The ones that treat this as a simple cost-cutting exercise will find, in five years, that they have cut too deep in exactly the wrong place.

  • Capgemini’s CEO Makes the Unfashionable Case for Pacing Your AI Investment

    There is a particular kind of courage in telling a room full of executives to slow down. Aiman Ezzat, CEO of Capgemini, has been doing exactly that – and his reasoning deserves more attention than it gets in a discourse dominated by the “move fast or die” narrative that shapes most AI strategy discussions.

    “You don’t want to be too ahead of the learning curve,” Ezzat told Fortune in February. “If you are, you’re investing and building capabilities that nobody wants.”

    Coming from the head of a €22.5 billion consultancy that has trained 310,000 employees on generative AI and is actively building labs for quantum computing, 6G, and robotics, this is not a counsel of inaction. It is a strategic position on pacing – one that puts Ezzat at odds with much of the technology industry’s current mood.

    The FOMO problem

    The fear of missing out on AI has become a boardroom affliction. Boston Consulting Group reports that half of CEOs now believe their job is at risk if AI investments fail to deliver returns. That pressure creates a predictable dynamic: spend big, move fast, worry about outcomes later.

    The data suggests the worry-later approach is not working. EY research shows that while 88% of employees report using AI at work, organisations are failing to capture up to 40% of the potential benefits. In the UK, only 21% of workers felt confident using AI as of January 2026. The tools are arriving faster than the capacity to use them well.

    Ezzat’s argument is that this gap is not a technology problem. It is a pacing problem. Companies are deploying AI capabilities ahead of their organisation’s ability to absorb them – and ahead of genuine customer demand for the outcomes those capabilities promise.

    AI is a business, not a technology

    The more substantive part of Ezzat’s case is about framing. Too many leadership teams, he argues, treat AI as “a black box that’s being managed separately” – a technology initiative bolted onto the existing business rather than a force reshaping how the business operates.

    “The question you have to focus on is: ‘How can your business be significantly disrupted by AI?’” Ezzat says. “Not ‘How is your finance team going to become more efficient?’ I’m sure your CFO will deal with that at the end of the day.”

    The distinction matters. Departmental efficiency projects – automating invoice processing, summarising meeting notes, generating marketing copy – are the low-hanging fruit that most enterprises are picking right now. They deliver incremental gains but rarely transform a business model. The harder question, the one Ezzat wants CEOs to sit with, is whether AI fundamentally changes what a company sells, how it competes, or what its customers expect.

    That question takes time to answer well. Rushing it produces expensive experiments that solve the wrong problems.

    The trust deficit

    Perhaps the most underexplored part of Ezzat’s argument is about human trust. “How do you get humans to trust the agent?” he asks. “The agent can trust the human, but the human doesn’t really trust the agent.”

    This cuts to a practical reality that technology roadmaps tend to gloss over. Agentic AI – systems designed to take autonomous actions rather than simply generate content – is the next wave of enterprise deployment. As maddaisy.com noted when covering Capgemini’s role in the OpenAI Frontier Alliance, the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. Trust is a significant part of why.

    Employees who do not trust AI agents will find ways to work around them. Managers who cannot explain AI-driven decisions to clients will revert to manual processes. Organisations that deploy autonomous systems faster than their culture can absorb them will create friction, not efficiency.

    Ezzat draws an analogy to ergonomics – the mid-twentieth-century discipline of designing tools for humans rather than forcing humans to adapt to tools. “Bad chairs lead to bad backs,” he observes. “Bad AI is likely to be far more consequential.”

    Consistent with Capgemini’s own playbook

    What makes Ezzat’s position credible is that Capgemini’s recent actions align with it. The company’s approach has been to invest broadly but scale selectively.

    As maddaisy.com’s analysis of the company’s 2025 results highlighted, generative AI bookings rose above 10% of total bookings in Q4 – meaningful but not yet dominant. The company maintains labs for emerging technologies including quantum and 6G, keeping a foot in multiple possible futures without betting the firm on any single one.

    Meanwhile, the company added 82,300 offshore workers in 2025 – largely through the WNS acquisition – while simultaneously earmarking €700 million for workforce restructuring. The message is clear: AI changes the shape of the workforce, but it does not eliminate the need for one. Building the human infrastructure to deliver AI at scale takes as much investment as the technology itself.

    The metaverse lesson

    Ezzat’s most pointed comparison is to the metaverse – a technology that commanded billions in corporate investment before the market concluded that customer demand had been dramatically overstated. Capgemini itself experimented with a metaverse lab. Mark Zuckerberg renamed his company around it. Now, as Ezzat puts it, “like air fryers, its time may now have passed.”

    The parallel is not that AI will follow the metaverse into irrelevance – the use cases are far more concrete, and the enterprise adoption data is already stronger. The point is about the cost of overcommitment. Companies that invested heavily in metaverse capabilities before the market was ready wrote off those investments. The same risk exists with AI, particularly in areas like agentic systems where the technology’s capability is advancing faster than organisational readiness to use it.

    Ezzat’s prescription is agility over ambition: small pilots, constant monitoring, and the willingness to scale rapidly when adoption genuinely accelerates. “We have to be investing – but not too much – to be able to be aware of the technology, following at the speed to make sure that we are ready to scale when the adoption starts to accelerate.”

    What this means for practitioners

    For consultants advising clients on AI strategy, Ezzat’s framework offers a useful counterweight to the prevailing urgency. The question is not whether to invest in AI – that debate is settled. The question is how to pace that investment so that capability, demand, and organisational readiness move roughly in step.

    Companies that get the pacing right will avoid the twin traps of overinvestment (building capabilities nobody wants) and underinvestment (being caught flat-footed when adoption accelerates). In a market where half of CEOs fear for their jobs over AI outcomes, the discipline to move at the right speed – rather than the fastest speed – may prove to be the more valuable skill.

  • The Agentic AI Governance Playbook Is Taking Shape. Most Enterprises Are Not Ready for It.

    The governance playbook for agentic AI is starting to take shape — and it looks nothing like the frameworks most enterprises currently rely on.

    Over the past month, a cluster of regulatory bodies, law firms, and industry coalitions have published guidance specifically addressing agentic AI systems. The UK Information Commissioner’s Office released its first Tech Futures report on agentic AI in January. Mayer Brown published a comprehensive governance framework in February. The Partnership on AI identified agentic governance as the top priority among six for 2026. And FINRA’s 2026 oversight report now includes a dedicated section on AI agents as an emerging threat to financial markets.

    Taken individually, none of these publications is remarkable. Taken together, they represent something maddaisy has been tracking since mid-February: the governance conversation is finally catching up to the deployment reality.

    The problem these frameworks are trying to solve

    When maddaisy examined the agentic AI governance gap in February, the numbers were stark: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. Subsequent analysis of agentic drift — where systems degrade gradually without triggering alarms — and shadow AI adoption revealed that the risks are not hypothetical. They are already materialising in production environments.

    The core issue, as these new frameworks make explicit, is that agentic AI does not fit within existing oversight models. Traditional AI governance assumes a human reviews the output before acting on it. Agentic systems are the actor. They plan, execute, and adapt autonomously — booking appointments, approving procurement, triaging complaints, managing collections. Governance cannot happen after the fact. It must be embedded in real time.

    What the emerging frameworks have in common

    Despite coming from different jurisdictions and institutions, the recent guidance converges on several principles that practitioners should note.

    Least privilege by default. Mayer Brown’s framework emphasises restricting what an agent can access — not just what it can do. Agents should not have standing access to sensitive databases, trade secrets, or systems beyond their immediate task scope. This mirrors the zero-trust approach that cybersecurity teams have adopted over the past decade, now applied to autonomous software.

    Human checkpoints at decision boundaries, not everywhere. The emerging consensus rejects both extremes: fully autonomous operation with no oversight, and human approval for every action (which would negate the point of agentic AI). Instead, the frameworks advocate for defined boundaries — moments where human approval is required before the agent proceeds. These include irreversible actions, decisions in regulated domains such as healthcare or financial services, and any step that falls outside the agent’s defined scope.

    Real-time monitoring, not periodic audits. The ICO report and Mayer Brown both stress continuous behavioural monitoring after deployment. This addresses the drift problem directly: an agent that passed all review gates at launch may behave differently three months later after prompt adjustments, model updates, and tool changes. Logging the full chain of reasoning and actions — not just inputs and outputs — is becoming a baseline expectation.

    Transparency to users. Multiple frameworks now explicitly require that organisations disclose when a customer is interacting with an AI agent rather than a human. The ICO notes that agentic AI “magnifies existing risks from generative AI” because these systems can rapidly automate complex tasks and generate new personal information at scale. Users need to know what they are dealing with — and they need a route to a human at any point.

    Value-chain accountability. The Partnership on AI flags a gap that most enterprise governance programmes have not addressed: who is responsible when something goes wrong in a multi-agent system? If an agent calls another agent, which calls a third-party tool, which accesses a database — and the outcome is harmful — the liability chain is unclear. Their recommendation: establish an agreed taxonomy for the AI value chain before deployment, not after an incident.
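    The first three of these principles lend themselves to a concrete sketch. None of the published frameworks prescribes an implementation, so the following is a minimal, hypothetical illustration in Python of an action gate that combines a least-privilege tool allowlist, human checkpoints at defined decision boundaries, and full-chain logging of every decision (all names and fields are invented for illustration):

```python
from dataclasses import dataclass, field
import time

# Hypothetical illustration: every agent action passes a scope check
# (least privilege), may pause at a human checkpoint (decision boundary),
# and is logged with its reasoning either way (real-time auditability).

@dataclass
class AgentPolicy:
    allowed_tools: set           # least privilege: explicit tool allowlist
    checkpoint_actions: set      # actions requiring human sign-off
    audit_log: list = field(default_factory=list)

    def authorise(self, tool: str, action: str, reasoning: str,
                  human_approved: bool = False) -> bool:
        entry = {"ts": time.time(), "tool": tool,
                 "action": action, "reasoning": reasoning}
        if tool not in self.allowed_tools:
            entry["outcome"] = "denied: outside scope"
        elif action in self.checkpoint_actions and not human_approved:
            entry["outcome"] = "paused: awaiting human approval"
        else:
            entry["outcome"] = "allowed"
        self.audit_log.append(entry)   # log every decision, not just outputs
        return entry["outcome"] == "allowed"

policy = AgentPolicy(allowed_tools={"crm_read", "email_draft"},
                     checkpoint_actions={"send_refund"})
assert policy.authorise("crm_read", "lookup_customer", "triage complaint")
assert not policy.authorise("payments_db", "query", "check balance")   # outside scope
assert not policy.authorise("email_draft", "send_refund", "goodwill")  # needs approval
```

    The point of the sketch is structural rather than prescriptive: scope, checkpoints, and logging sit in the execution path itself, not in a periodic review after the fact.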

    Where the frameworks fall short

    For all the convergence, there are notable gaps. None of the published guidance adequately addresses the measurement problem. Adobe’s 2026 AI and Digital Trends Report found that only 31% of organisations have implemented a measurement framework for agentic AI. Without clear metrics for what “good governance” looks like in practice, the frameworks risk becoming compliance theatre — policies that exist on paper but do not change how agents actually operate.

    There is also limited guidance on cross-border deployment. The Partnership on AI calls for international coordination, but the practical reality — as maddaisy’s analysis of America’s state-by-state regulatory fragmentation highlighted — is that even within a single country, compliance requirements vary dramatically. An agent deployed in New York now faces transparency requirements under the RAISE Act that do not apply in Texas, where the Responsible AI Governance Act takes a different approach. For multinational enterprises, the compliance surface is formidable.

    What this means for practitioners

    The practical takeaway is straightforward, if demanding. Organisations deploying or planning to deploy agentic AI should be doing four things now.

    First, audit existing governance frameworks against the agentic-specific requirements these publications outline. Most enterprises have AI policies designed for advisory systems. Those policies almost certainly do not cover autonomous execution, multi-agent coordination, or real-time behavioural monitoring.

    Second, define decision boundaries before deployment. Which actions require human approval? What constitutes an irreversible decision in your context? Where are the regulatory tripwires? These questions are easier to answer before an agent is in production than after it has been running for six months.

    Third, invest in observability infrastructure. As maddaisy noted in the agentic drift analysis, the systems that fail most dangerously are the ones that appear to be working. Full execution logging, behavioural baselines, and anomaly detection are not optional extras — they are the minimum viable governance stack for agentic systems.

    Fourth, assign clear ownership. Mayer Brown’s framework identifies four distinct governance roles: decision-makers who set policy, product teams who implement it, cybersecurity teams who integrate agents into security procedures, and frontline employees who can identify and escalate issues. Most organisations have not mapped these responsibilities for their agentic deployments.
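    The third step, observability, is the least intuitive, so a toy example may help. The following is a hedged sketch of a behavioural baseline for drift detection: it flags days where an agent's action volume deviates sharply from a baseline window. The metric, window sizes, and threshold are illustrative assumptions; a production system would monitor far richer signals than a daily count.

```python
import statistics

# Toy drift detector: compare an agent's recent daily action counts
# against a baseline window; flag days deviating by more than
# k standard deviations from the baseline mean.

def drift_alert(daily_action_counts, baseline_days=14, k=3.0):
    baseline = daily_action_counts[:baseline_days]
    recent = daily_action_counts[baseline_days:]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0   # guard a perfectly flat baseline
    return [i for i, x in enumerate(recent) if abs(x - mu) > k * sigma]

history = [100, 103, 98, 101, 99, 102, 100, 97, 104, 100,
           101, 99, 102, 100,           # 14-day baseline, stable
           101, 100, 140, 155, 99]      # spike after a hypothetical model update
assert drift_alert(history) == [2, 3]   # days 3 and 4 after the baseline window
```

    The design choice worth noting is that the alert fires on behavioural change, not on errors: an agent can drift while every individual action still succeeds, which is precisely the failure mode the ICO and Mayer Brown guidance is concerned with.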

    The governance race is just starting

    The gap between agentic AI deployment and agentic AI governance remains wide. But the direction of travel is now clear: regulators, industry bodies, and legal advisors are converging on a set of principles that will become the baseline expectation. Organisations that build these capabilities now — real-time monitoring, defined decision boundaries, value-chain accountability, and clear ownership — will be better positioned than those scrambling to retrofit governance after their first incident.

    The frameworks are not perfect. They will evolve. But the era of governing agentic AI with policies designed for chatbots is ending. For consultants and technology leaders advising enterprises on AI deployment, that shift should be shaping every engagement.

  • Accenture Will Track AI Logins for Promotions. The Risk Is Measuring Compliance, Not Competence.

    Accenture has begun tracking how often senior employees log into its AI tools — and will factor that usage into promotion decisions. An internal email, reported by the Financial Times, put it plainly: “Use of our key tools will be a visible input to talent discussions.” For a firm that sells AI transformation to the world’s largest organisations, the message to its own workforce is unmistakable: adopt or stall.

    The policy is not without logic. Accenture has invested heavily in AI readiness — 550,000 employees trained in generative AI, up from just 30 in 2022, backed by $1 billion in annual learning and development spending. But training people and getting them to change how they work are two very different problems. The promotion-linked tracking is an attempt to close that gap by force.

    The credibility problem

    This move arrives at a moment when Accenture’s external positioning makes internal adoption a matter of commercial credibility. As maddaisy.com reported last week, Accenture is one of four firms named in OpenAI’s Frontier Alliance — tasked with building the data architecture, cloud infrastructure, and systems integration work needed to deploy AI agents at enterprise scale. It is difficult to sell that capability convincingly if your own senior managers are not using the tools.

    The policy applies specifically to senior managers and associate directors, with leadership roles now requiring what Accenture calls “regular adoption” of AI. The firm is tracking weekly logins to its AI platforms for certain senior staff, though employees in 12 European countries and those on US federal contracts are excluded — a pragmatic nod to varying data protection regimes and security requirements.

    The resistance is the interesting part

    What makes this story more than a policy announcement is the reaction. Some senior employees have questioned the value of the tools outright, with one describing them as “broken slop generators.” Another told the Financial Times they would “quit immediately” if the tracking applied to them.

    That resistance is worth taking seriously, not dismissing. It maps directly onto a pattern maddaisy.com has been tracking. Research published earlier this month found that only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly” — a figure that drops to 12% among non-senior staff. When people do not understand why they are being asked to use a tool, mandating its use tends to produce compliance rather than competence.

    Harvard Business Review research, cited in that same analysis, identified three psychological needs that determine whether employees embrace or resist AI: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues). A policy that monitors logins and ties them to career progression addresses none of these. It measures activity. It says nothing about whether that activity is useful.

    Logins are not outcomes

    This is the core tension. Accenture’s leadership knows that senior adoption is a bottleneck — industry observers note that older managers are often “less comfortable with technology and more wedded to established working methods.” CEO Julie Sweet has framed AI adoption as existential, telling analysts that the company is “exiting employees” in areas where reskilling is not possible. The 11,000 layoffs announced in September reinforced the point.

    But tracking logins conflates presence with productivity. A senior manager who logs in weekly to check a dashboard is counted the same as one who has genuinely integrated AI into client delivery. The metric captures the floor, not the ceiling.

    This echoes a broader concern maddaisy.com has documented. UC Berkeley researchers found that employees using AI tools worked faster but not necessarily better — absorbing more tasks, blurring work-life boundaries, and entering a cycle of acceleration that resembled productivity but often was not. If Accenture’s policy drives more tool usage without more thoughtful tool usage, it risks producing exactly this outcome at scale.

    What this tells the rest of the industry

    Accenture is not alone in struggling with senior AI adoption. The challenge is structural across professional services. Deloitte’s 2026 CSO Survey found that while 95% of chief strategy officers expect AI to reshape their priorities, only 28% co-lead their organisation’s AI decisions. The people with the authority to mandate change are often the furthest from understanding it.

    Accenture’s approach is at least direct. Rather than hoping adoption trickles up from junior staff — who typically adopt new tools faster — it is applying pressure from the top. And the numbers suggest some urgency: with 750,000 employees and $70 billion in revenue, Accenture has grown enormously from its 275,000-person, $29 billion base in 2013. Maintaining that trajectory while its competitors embed AI into delivery models requires its own workforce to be fluent, not just trained.

    The risk, though, is that the policy optimises for the wrong signal. Organisations that have navigated AI adoption most effectively — and maddaisy.com has covered several — tend to share a common trait: they measure what AI enables people to do differently, not how often people open the application. Accenture’s policy would be considerably more compelling if it tracked client outcomes improved through AI-assisted work, or time freed for higher-value tasks, rather than weekly platform logins.

    The precedent matters more than the policy

    Whatever one makes of the specifics, Accenture has done something that most large organisations have avoided: it has made AI adoption an explicit, measurable condition of career advancement for senior leaders. That is a significant signal. It tells clients that Accenture is serious about practising what it sells. It tells employees that AI fluency is no longer optional at the leadership level.

    Whether it works depends on what happens next. If the tracking evolves toward measuring genuine integration — how AI changes the quality of work, not just the frequency of logins — Accenture could set a useful template for the industry. If it remains a blunt instrument that rewards compliance over competence, it will likely produce exactly the kind of performative adoption that gives AI transformation programmes a bad name.

    For consultants and enterprise leaders watching from outside, the lesson is practical: mandating AI adoption is easy; mandating it well is the hard part. The metric you choose to track will shape the behaviour you get.

  • The $5 Trillion Business Transfer That Has Nothing to Do with AI

    Maddaisy has spent the past fortnight examining McKinsey through the lens of AI — its 25,000 AI agents, its shift to outcome-based pricing, its OpenAI Frontier Alliance. But the firm’s most consequential publication this month may have nothing to do with artificial intelligence at all.

    A new report from McKinsey’s Institute for Economic Mobility identifies what it calls the “Great Ownership Transfer” — roughly six million small and medium-sized American businesses facing ownership transitions by 2035, representing up to $5 trillion in enterprise value. The cause is straightforward demography: baby boomer business owners are retiring, and the systems for handing their companies to the next generation of owners barely exist.

    A 92% Closure Rate

    The headline statistic is stark. Today, 92% of small-business market exits in the United States result in closure. Just 5% are completed as sales, and another 3% are transferred to new owners through other mechanisms. The remaining 92% simply shut their doors.

    This is not a failure of entrepreneurial ambition. Small businesses account for 99% of all US companies and employ nearly half the national workforce. The problem, as the report’s authors put it, is structural: “Buying and selling a small business is often harder than starting one because the systems that support entrepreneurship in the United States are currently built for founding companies, not transferring them.”

    Viable firms are closing not because they lack customers or revenue, but because the pathways to succession are limited, opaque, and prohibitively expensive for the buyers who would keep them running.

    The Missing Middle

    The risk concentrates in what McKinsey terms the “missing middle.” Nearly 80% of projected exits involve micro and emerging middle-market businesses valued at less than $2 million. These firms sit in an awkward no-man’s-land: too small to attract private equity or institutional acquirers, but too large and complex for a typical first-time buyer to finance without significant support.

    The consequences are not evenly distributed. Rural communities, where these smaller firms often serve as anchor employers and the primary local tax base, face disproportionate exposure. Labour-intensive industries essential to daily life — retail, construction, food services — account for roughly one-third of the businesses caught in this gap. When a town’s only plumbing contractor or hardware shop closes because there is no viable buyer, the impact extends well beyond the balance sheet.

    A Financing System Built for the Wrong Problem

    The report is pointed about where the infrastructure fails. The SBA 7(a) loan — the primary federal instrument for small-business acquisition financing — requires high equity contributions and full personal guarantees. For first-time buyers, particularly those without existing assets or family wealth to pledge, the barrier is effectively insurmountable.

    The broader ecosystem compounds the problem. Unlike residential property, where standardised appraisals, mortgage products, and regulatory frameworks have created a liquid market, small-business transfers remain bespoke transactions. Each deal requires its own valuation, its own legal structure, its own financing arrangement. There is no multiple listing service (MLS) for Main Street businesses, no standardised underwriting for a family-owned electrical contractor with $1.5 million in revenue.

    McKinsey’s prescription is to treat small-business acquisition as a scalable market rather than a series of one-off transactions. That means modernised underwriting standards, bundled advisory services, and coordinated market infrastructure — the kind of systemic plumbing that already exists for property and public equities but has never been built for SMB ownership transfer.

    The Equity Dimension

    The demographic profile of current small-business owners — overwhelmingly older, white, and male — means this transfer also carries significant implications for wealth distribution. Under current patterns, women, Black, and Latino individuals combined would capture only about 28% of the transferring $5 trillion in value.

    McKinsey’s modelling suggests the gap is not inevitable. If ownership participation reached demographic parity, Black individuals could see their wealth capture increase more than fourfold to approximately $369 billion. Parity for women could unlock roughly $700 billion. These are not aspirational targets plucked from a diversity statement — they are the arithmetic consequence of removing structural barriers to acquisition financing and deal flow.
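    A quick back-of-the-envelope check shows how those figures hang together. The dollar amounts come from the report; the implied current baseline for Black owners is my inference from the "fourfold" multiple, not a number stated in the text:

```python
# Back-of-the-envelope check on the report's wealth-capture figures.
# $5 trillion transfers; women, Black, and Latino owners combined capture
# ~28% under current patterns. A "more than fourfold" rise to ~$369bn
# implies Black owners capture under ~$93bn today (inferred, not stated).

TOTAL_BN = 5_000                        # $5 trillion, expressed in billions
combined_current = 0.28 * TOTAL_BN      # combined capture under current patterns
implied_black_baseline = 369 / 4        # upper bound on today's capture

assert round(combined_current) == 1_400  # ≈ $1.4tn of the $5tn
assert implied_black_baseline < 93       # under ~$93bn captured today
```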

    The growth of employee stock ownership plans (ESOPs) and cooperative conversions offers one partial pathway. ESOP participation has grown by more than one million participants over the past decade, creating models for equitable ownership transfer that do not depend solely on traditional private capital. New platforms — BizBuySell, Acquire.com, Baton — are beginning to chip away at the opacity of the SMB transaction market, though none yet operates at the scale the problem demands.

    What This Means for Consulting

    For the consulting industry, the report represents both a diagnosis and an opportunity. Advisory firms have spent the past two years racing to build AI practices and secure platform alliances. The $5 trillion ownership transfer is a reminder that some of the largest structural challenges in the economy are not technology problems at all — they are market design problems, financing problems, and coordination failures.

    Succession advisory, M&A support for sub-$2 million transactions, ESOP conversion guidance, and community economic development are not glamorous practice areas. They do not generate the headlines that AI partnerships attract. But if McKinsey’s numbers are even roughly correct, the demand for this kind of advisory work is about to grow substantially — and the firms that build credible practices now will find themselves serving a market that barely existed a decade ago.

    The question, as the report frames it, is whether the coming decade becomes “a story of tragic business loss” or “the inflection point when business ownership became a broader pathway to mobility.” The answer will depend less on any single technology and more on whether the market infrastructure catches up to the demographic reality already underway.

  • PwC Built an AI That Can Actually Read Enterprise Spreadsheets. Here Is Why That Matters.

    Most enterprise AI demonstrations involve chatbots, code generation, or image synthesis — capabilities that are impressive but often disconnected from the workflows where organisations actually make decisions. PwC has taken a different approach. On 19 February, the firm announced a frontier AI agent that can reliably reason across complex, multi-sheet enterprise spreadsheets — the kind of messy, formula-dense workbooks that underpin deals, risk assessments, and financial modelling across virtually every large organisation.

    The announcement would be easy to dismiss as incremental. It is, in fact, one of the more practically significant AI developments of the year so far.

    The Spreadsheet Problem No One Talks About

    AI has made rapid progress with text, images, and code. But enterprise spreadsheets have remained stubbornly resistant. The reason is structural: a typical enterprise workbook is not a neatly formatted data table. It is a sprawling, multi-sheet artefact containing hundreds of thousands of rows, cross-sheet formulas, hidden dependencies, embedded charts, and formatting inconsistencies accumulated over years of manual editing by multiple authors.

    Conventional AI systems — including the most advanced large language models — struggle with this complexity. They can process a clean CSV file or answer questions about a simple table. But ask them to trace a formula chain across five sheets in a workbook with 200,000 rows and inconsistent column headers, and accuracy collapses. For regulated industries where precision is non-negotiable — auditing, tax, financial due diligence — this limitation has kept spreadsheet analysis firmly in the domain of human practitioners.

    PwC’s agent addresses this directly. Combining multimodal pattern recognition with a retrieval-augmented architecture, the system can process up to 30 workbooks containing nearly four million cells. In internal benchmarks, it achieved roughly three times the accuracy of previously published methods while using 50% fewer computational tokens — a meaningful efficiency gain that reduces both cost and energy consumption.

    How It Works, Without the Hype

    The technical approach mirrors how experienced analysts actually work. Rather than attempting to ingest an entire workbook at once — a strategy that overwhelms even million-token context windows — the agent scans, indexes, and selectively retrieves relevant sections. It can jump across tabs, trace logic through formula chains, integrate visual elements like charts, and explain its reasoning with what PwC describes as “defensible precision.”

    Two internal use cases illustrate the practical impact. In engagement documentation, PwC teams work with large, nominally standardised workbooks that document business processes and controls. In practice, these files vary significantly — column names shift, fields appear in different orders, structures change between engagements. The agent handles this in two stages: first mapping the workbook’s structure, then extracting specific details using targeted retrieval rather than brute-force ingestion.

    In risk assessment, the agent replaces what was previously weeks of custom development work. Each new set of files could break existing programmatic approaches due to formatting variations. The agent indexes and extracts directly, regardless of these inconsistencies. PwC reports that what previously required weeks of configuration can now be completed in hours.
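    The two-stage pattern described above – map the structure first, then retrieve selectively – can be sketched in miniature. PwC has not published its implementation, so the following is a purely hypothetical Python illustration, with workbooks modelled as plain dictionaries and all names invented:

```python
# Stage 1: index a workbook's structure (sheets and headers) without
# ingesting cell data. Stage 2: retrieve only the columns relevant to a
# query, tolerating the header drift seen across real engagements.
# Workbooks are modelled as {sheet_name: [row_dict, ...]}.

def build_index(workbook):
    """Map each sheet to its column headers, normalised for fuzzy matching."""
    return {sheet: {h.strip().lower(): h for h in rows[0].keys()}
            for sheet, rows in workbook.items() if rows}

def retrieve(workbook, index, column_query):
    """Pull values for any column whose header matches, across all sheets."""
    q = column_query.strip().lower()
    hits = {}
    for sheet, headers in index.items():
        for norm, original in headers.items():
            if q in norm:   # tolerant of shifted or padded header names
                hits[sheet] = [row[original] for row in workbook[sheet]]
    return hits

wb = {
    "Controls": [{"Control ID": "C-1", "Owner ": "Finance"},
                 {"Control ID": "C-2", "Owner ": "IT"}],
    "Risks":    [{"Risk Owner": "Ops", "Severity": "High"}],
}
idx = build_index(wb)
assert retrieve(wb, idx, "owner") == {"Controls": ["Finance", "IT"],
                                      "Risks": ["Ops"]}
```

    The sketch captures the essential economy of the approach: the index is small enough to keep in context at all times, while cell data is touched only when a query demands it, which is what lets the real system scale to millions of cells.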

    The ROI Connection

    The timing of this announcement is worth noting. Earlier this month, maddaisy examined PwC’s own 2026 Global CEO Survey, which found that 56% of chief executives could not point to measurable revenue gains from their AI investments. Only 12% reported achieving both revenue growth and cost reduction from AI programmes.

    The spreadsheet agent is, in a sense, PwC’s answer to its own data. Rather than pursuing the kind of ambitious, organisation-wide AI transformation that the survey suggests most companies are failing at, this tool targets a specific, bounded problem: making AI useful where decisions actually get made. Spreadsheets are unglamorous, but they remain the substrate of enterprise decision-making across every industry. If AI cannot work reliably with them, the ROI gap that PwC’s own research documented will persist.

    Matt Wood, PwC’s Commercial Technology and Innovation Officer, was notably direct about the origin: “This didn’t start as a research project. It started because our teams were spending weeks manually tracing logic through workbooks that no existing tool could handle.”

    A Broader Pattern: Consulting Firms as Technology Builders

    This development fits a pattern that maddaisy has been tracking across the consulting industry. Firms are not merely advising clients on AI — they are building proprietary capabilities that change the economics of their own delivery. McKinsey’s 25,000 AI agents. Accenture’s ongoing automation of delivery operations. Now PwC, with a tool that converts weeks of manual work into hours.

    The competitive implications are significant. A firm that can process complex financial workbooks in hours rather than weeks can bid more aggressively on engagements, take on more work with the same headcount, and offer the outcome-based pricing models that clients increasingly prefer. The spreadsheet agent is not just a productivity tool — it is a structural advantage in the shifting economics of professional services.

    What Practitioners Should Watch

    For consultants and enterprise leaders, the PwC announcement carries a practical message: the AI value gap may start closing not through headline-grabbing deployments, but through targeted tools that tackle specific bottlenecks in existing workflows.

    The broader FP&A landscape is moving in the same direction. IBM’s 2026 analysis of financial planning trends highlights that 69% of CFOs now consider AI integral to their finance transformation strategy, with the primary applications centring on data ingestion, budget analysis, and narrative generation — precisely the kind of spreadsheet-adjacent work that PwC’s agent addresses.

    The question is no longer whether AI can handle enterprise data complexity. It is whether organisations will deploy these capabilities against the right problems — the mundane, time-intensive, precision-critical workflows where the return on investment is most measurable and most immediate.

    PwC appears to have started there. Given the firm’s own data on the AI ROI crisis, that is arguably the most credible place to begin.