Category: Consulting

  • CTOs and CHROs Are Climbing the Pay Table. That Tells You Where Boards Think Risk Lives Now.

    For decades, the path to being among a public company’s five highest-paid executives ran through operations, sales, or running a business unit. Technology leaders and HR chiefs were important, certainly — but they were support functions, not the positions that commanded top-tier compensation.

    That hierarchy is shifting. New research from The Conference Board, published this week, tracks the named executive officers (NEOs) — the five highest-paid executives — across the Russell 3000 from 2021 to 2025. The findings are striking: Chief Technology Officers appearing as NEOs increased by 61%, while Chief Human Resources Officers rose by 55%. In the same period, business unit leaders — historically the dominant non-CEO, non-CFO category — declined by 15%.

    The numbers are not subtle. They represent a structural revaluation of which functions boards consider most critical to company performance and risk.

    From support functions to enterprise risk owners

    The explanation is not difficult to locate. Two forces are converging: the AI boom is making technology capability an existential strategic question, and a persistent talent war is making the ability to attract, retain, and reskill workers a board-level concern rather than an HR department problem.

    “Growth in CHRO and CTO roles signals that talent, culture, and digital capability are now viewed as enterprise risks, not support functions,” said Andrew Jones, Principal Researcher at The Conference Board. “Boards are prioritising leaders who shape resilience and transformation across the organisation.”

    This tracks with what maddaisy has been reporting for months. The consulting pyramid piece in early March documented how firms are reshuffling roles rather than eliminating them — cutting some positions while creating others in AI engineering and data science. Accenture’s decision to track AI tool usage for promotions was another signal: when a firm ties career progression to technology adoption, the executive overseeing that technology becomes strategically indispensable.

    The specific numbers tell the story

    CTO NEO disclosures rose from 155 to 249 in the Russell 3000 between 2021 and 2025. CHRO disclosures went from 148 to 230. These are not marginal shifts — they represent boards deciding, through the bluntest mechanism available (compensation), that these roles belong at the top table.
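
    The headline growth rates are easy to recompute from the raw disclosure counts cited above. A minimal sketch (the rounding to whole percentages is an assumption about how the figures were reported):

    ```python
    # Recompute the reported NEO growth rates from the raw disclosure
    # counts cited in the article (Russell 3000, 2021 vs 2025).
    def pct_change(before: int, after: int) -> int:
        """Percentage change, rounded to the nearest whole per cent."""
        return round((after - before) / before * 100)

    print(pct_change(155, 249))    # CTO disclosures, 2021 -> 2025
    print(pct_change(148, 230))    # CHRO disclosures
    print(pct_change(1734, 1475))  # business unit leaders
    ```

    The counts reproduce the article's 61%, 55%, and -15% figures exactly, so the two sets of numbers are internally consistent.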

    Meanwhile, business unit leaders fell from 1,734 to 1,475 disclosures. The decline suggests that boards are placing less emphasis on divisional performance and more on enterprise-wide capabilities: technology infrastructure, talent strategy, and the legal and regulatory architecture that governs both.

    That last point matters. Legal roles (chief legal officers, corporate secretaries, general counsels) also grew substantially, rising 21% over the same period. As AI governance, data regulation, and compliance pressures mount, the lawyers are moving closer to the centre of power too.

    What this means for the talent market

    The compensation data carries implications well beyond boardroom politics. When CTOs and CHROs move into the highest-paid tier, it reshapes the talent pipeline for those roles. More ambitious executives will target those paths. Boards will demand different skill sets — not just technical competence for CTOs, but strategic vision for how AI and digital infrastructure create competitive advantage. Not just process management for CHROs, but the ability to navigate workforce transformation at scale.

    The Conference Board’s data also reveals an interesting gender dimension. Among S&P 500 NEOs, women CEOs earned 11% more than men in 2025. But male NEOs overall still earned 8% more in the S&P 500 and 12% more in the Russell 3000. The gap, as researcher Paul Hodgson noted, “largely reflects who holds which roles” — men remain more prevalent in higher-paid operational and commercial positions, with longer average tenure and concentration at larger firms.

    The COO plateau and what it signals

    One quieter finding deserves attention: COO representation rose just 6% over the period and has actually declined from a 2023 peak. The chief operating officer — once the natural second-in-command — appears to be losing ground to more specialised enterprise-wide roles. In an era where the critical operational questions are “how do we deploy AI safely” and “how do we retain the people who know how to do it,” a generalist operations mandate may no longer be enough to justify top-tier compensation.

    The board’s revealed preferences

    Compensation data has always been the most reliable indicator of what organisations actually value, as opposed to what they claim to value. Press releases can announce “people-first cultures” and “digital-first strategies” without consequence. Paying the CHRO and CTO as much as the head of your largest business unit is a commitment that shows up in proxy statements.

    The Conference Board’s findings confirm a pattern that has been building for several years: boards are redefining which risks are existential and which executives own them. The AI boom has made technology leadership a strategic imperative. The talent war — intensified by the very AI transformation companies are pursuing — has elevated workforce strategy from an administrative function to an enterprise risk.

    For consultants and practitioners, the practical implication is clear. The organisations they advise are restructuring their leadership hierarchies around technology and talent. Advisory work that once centred on operational efficiency and market strategy increasingly requires fluency in AI deployment, workforce transformation, and the regulatory landscape that governs both. The C-suite is telling you where the priorities are. The pay data just makes it impossible to ignore.

  • CX Metrics Are Broken and Enterprises Know It. The Question Is What Comes Next.

    One in three customer experience professionals cannot connect their organisation’s Net Promoter Score to financial performance. That is not the conclusion of a contrarian think piece – it is a finding from Medallia’s 2026 State of Customer Experience Report, which surveyed more than 1,500 consumers and 550 CX practitioners globally. The metric that has anchored boardroom CX conversations for the better part of two decades has, for a significant share of the profession, no demonstrable connection to the outcomes it is supposed to predict.

    And the profession knows it. According to the same report, 78 per cent of CX professionals plan to adopt at least one new metric in 2026.

    The perception gap that should worry everyone

    The most striking number in the Medallia research is not about NPS at all. It is the gap between how enterprises think they are performing and how their customers actually feel. Two-thirds of CX professionals believe their brand’s experience has improved over the past year. Only 16 per cent of consumers agree.

    That is not a minor calibration error. It is a 50-percentage-point disconnect between corporate self-assessment and market reality. And it has consequences: 40 per cent of consumers in the study reported switching brands within three months, often without the kind of dramatic service failure that would register on traditional satisfaction surveys. They simply drifted, quietly, toward something better – or at least less friction-filled.

    The implication is uncomfortable. Many enterprises are not just using the wrong metrics. They are using metrics that actively mislead them into believing things are going well when customers are already leaving.

    Too many metrics, too little insight

    The problem is not a shortage of measurement. If anything, it is the opposite. Research published in MIT Sloan Management Review found that large organisations routinely track between 50 and 200 CX metrics across multiple channels and touchpoints. Some track even more. The result is not better decision-making but what Usan’s 2026 CX report calls “data paralysis” – mountains of signals with no clear path from insight to action.

    MIT Sloan’s researchers, working with 14 subscription-services companies, found that most organisations collect metrics because they are industry-standard, not because they have been validated as useful for their specific customer journey. The recommendation is counterintuitive for an era obsessed with data: collect fewer metrics, but align the ones you keep to specific stages of the customer lifecycle where they can actually inform decisions.

    Medallia’s own data supports this. Among CX professionals using 10 or more data sources, 92 per cent said they could demonstrate return on investment. Among those using five or fewer, the figure dropped to 73 per cent. More data sources help – but only when they are deliberately chosen and connected, not accumulated by default.

    The structural problem underneath

    Even when CX teams produce meaningful insights, the organisational infrastructure often fails to act on them. Medallia’s survey found that 41 per cent of CX professionals worry their teams are too siloed from the rest of the business. One-third said entire departments take no action on CX findings. Half said their organisation does not respond often enough or with enough impact.

    Budget control compounds the issue. Fifty-eight per cent of CX practitioners said their initiatives are funded from budgets they do not directly oversee – a structural dependency that limits their ability to move quickly or invest in new approaches.

    This echoes a pattern maddaisy.com has tracked across several domains. When this site examined how enterprises are designing digital experiences for both humans and AI agents, the core finding was similar: organisations recognise that the landscape has shifted, but their internal structures have not caught up. The CX measurement problem is another instance of the same gap – not a lack of awareness, but a failure of organisational plumbing.

    Where the replacement metrics are coming from

    The 78 per cent of CX professionals looking for new metrics are not starting from scratch. A CX Network analysis of customer listening trends suggests three directions gaining traction.

    First, Customer Effort Score – a measure of how easy or difficult a specific interaction was – is increasingly favoured over NPS for transactional touchpoints, because it maps directly to a friction point rather than measuring a generalised sentiment. Second, frontline employee input is being formalised as a data source, recognising that the people handling customer interactions daily often have a more accurate picture of experience quality than any survey. Third, AI-powered analysis of unstructured data – support transcripts, social mentions, behavioural signals – is enabling what practitioners call “continuous listening”, replacing periodic surveys with ongoing, passive measurement.

    None of these are new ideas. What is new is the urgency. Transcom’s 2026 CX paradoxes report makes a pointed observation: many organisations are layering AI-driven CX tools on top of broken measurement foundations. Automating a process that was never validated does not fix it. It scales the dysfunction.

    What practitioners should be watching

    The shift away from legacy CX metrics is not a technology problem waiting for a technology solution. It is a governance problem. Organisations that want their CX measurement to mean something will need to do three things that are simple to describe and difficult to execute.

    They will need to audit their existing metrics ruthlessly – not asking “is this metric popular?” but “does this metric predict a financial outcome we care about?” They will need to connect whatever survives that audit to specific journey stages, so that insights map to decisions rather than floating in quarterly reports. And they will need to give CX teams the budget authority and cross-functional access to act on what they find, rather than producing dashboards that nobody is structurally empowered to respond to.
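
    The first of those steps – testing whether a metric predicts a financial outcome – can be sketched in a few lines. Everything here (the toy per-account data, the 0.3 threshold) is an illustrative assumption, not anything drawn from the Medallia or MIT Sloan research:

    ```python
    # Illustrative audit step: does a CX metric co-move with a financial
    # outcome? Toy data pairs a per-account score with retained revenue.
    from statistics import correlation  # Python 3.10+

    accounts = [  # (metric score, retained revenue the following year)
        (9, 120_000), (8, 95_000), (3, 20_000),
        (7, 80_000), (2, 10_000), (10, 150_000),
    ]
    scores = [s for s, _ in accounts]
    revenue = [r for _, r in accounts]

    r = correlation(scores, revenue)  # Pearson's r
    print(f"metric-to-outcome correlation: {r:.2f}")
    # A metric that cannot clear even a modest threshold (say |r| > 0.3)
    # on real account data is a candidate for the cull the audit describes.
    ```

    On real data the analysis would need more care (lagged outcomes, confounders, enough accounts to be meaningful), but the governance point stands: a metric that survives the audit should be one with a measurable link to money.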

    The 50-point perception gap in Medallia’s data is not a measurement artefact. It is a mirror. Enterprises that cannot close it will keep telling themselves that customer experience is improving, right up until the churn numbers say otherwise.

  • Anthropic’s Pentagon Walkaway and the Price of AI Principles

    When maddaisy examined the Pentagon-Anthropic standoff earlier this month, the focus was on a new category of vendor risk – the possibility that political pressure could turn a trusted AI supplier into a regulatory liability overnight. That analysis ended with a warning: organisations in the defence supply chain needed to start modelling political risk alongside technical capability.

    Nine days later, the story has taken a turn that few risk models would have predicted. The company the US government tried to destroy is winning the consumer market.

    The market’s unexpected verdict

    Claude, Anthropic’s AI assistant, surged to the number one spot on Apple’s App Store in the days following the ban. The #QuitGPT movement saw 2.5 million users cancel ChatGPT subscriptions or publicly pledge to boycott OpenAI. Protesters gathered outside OpenAI’s San Francisco headquarters. Reddit and X filled with migration guides.

    The paradox is striking. Anthropic refused to remove ethical guardrails from its Pentagon contract – specifically, restrictions on mass surveillance and fully autonomous weapons. The government designated it a supply-chain risk, a label previously reserved for foreign adversaries. And the market responded by rewarding Anthropic with the most effective brand-building campaign in AI history – one the company never planned and could not have bought.

    What OpenAI’s ‘win’ actually cost

    OpenAI moved quickly to fill the gap. Sam Altman announced a deal to deploy models on the Pentagon’s classified networks, claiming the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human oversight requirements. The critical difference, as maddaisy noted at the time, is that OpenAI permits “all lawful uses” and relies on existing Pentagon policies rather than contractual restrictions.

    Altman defended the decision in Fortune, arguing that “the absence of responsible AI companies in the military space” would be worse than engagement on imperfect terms. It is a reasonable argument in isolation.

    But the commercial consequences arrived faster than the contracts. OpenAI’s head of robotics resigned over ethical concerns related to the Pentagon deal. Eleven OpenAI employees signed an open letter protesting the government’s treatment of Anthropic – even as their employer stood to benefit from it. OpenAI rushed out GPT-5.4 within 48 hours of GPT-5.3 Instant, in what looked less like a planned release and more like a company trying to change the subject.

    The talent cost may prove more significant than the user exodus. In an industry where the top researchers can choose where they work, being seen as the company that took the contract Anthropic refused is not a recruitment advantage.

    The strategic calculus of principles

    For decades, ethical positioning in technology was treated as a cost centre – a brand exercise that might prevent negative press but would never drive revenue. The Anthropic-Pentagon episode is the first major test of whether that assumption holds in the AI era.

    The early data suggests it does not. Anthropic is being rewarded by consumers and punished by government. OpenAI is being rewarded by government and punished by consumers. Both companies are learning, in real time, that the AI market has bifurcated into constituencies with fundamentally different values – and that serving one may come at the expense of the other.

    This is new territory for the technology industry. Previous ethical standoffs – Apple versus the FBI over iPhone encryption in 2016, Google’s Project Maven withdrawal in 2018 – were significant but did not produce the same kind of mass consumer response. The difference is that AI tools are personal. People interact with ChatGPT and Claude daily, often for sensitive tasks. When those tools become entangled with military use and surveillance, the reaction is visceral in a way that an encryption debate never was.

    The $60 billion question

    None of this means Anthropic’s position is comfortable. The company has received more than $60 billion in total investment from over 200 venture capital funds. If the supply-chain risk designation holds, enterprise clients in the defence ecosystem will be contractually barred from using Anthropic’s products. Anthropic has announced it will challenge the designation in court, but legal processes move slowly and government procurement decisions move fast.

    The Pro-Human Declaration, a framework released by a bipartisan coalition of researchers and former officials, was finalised before the standoff but published in its aftermath. Its signatories include Steve Bannon and Susan Rice – a pairing that underlines just how broadly the anxiety about unchecked AI development has spread. Among its provisions: mandatory pre-deployment testing, prohibitions on fully autonomous weapons, and an outright moratorium on superintelligence development until scientific consensus on safety is established.

    Whether such frameworks gain legislative traction remains an open question. As maddaisy has previously noted, more than 1,000 AI-related bills were introduced across all 50 US states in 2025 alone, while federal action has been conspicuously absent. The Pentagon-Anthropic episode may be the catalyst that changes that – or it may simply add another chapter to the regulatory vacuum.

    What this means for practitioners

    For organisations evaluating AI vendors, the lesson is not that principles are good and pragmatism is bad. It is that ethical positioning has become a material factor in AI vendor strategy – one that affects talent retention, consumer trust, regulatory exposure, and government access in ways that are increasingly difficult to hedge.

    The vendor assessment frameworks that most enterprises use were not designed for this. They evaluate uptime, security certifications, data residency, and pricing. They do not evaluate whether a vendor’s ethical commitments might make it a target of government action, or whether a vendor’s willingness to accept government contracts without restrictions might trigger a consumer backlash that affects product development velocity.

    Both of those scenarios played out in the space of a single week. Any vendor strategy that does not account for them is incomplete.

    The AI industry is discovering what the defence, pharmaceutical, and energy industries learned long ago: when your products touch questions of public safety and national security, your commercial strategy and your ethical positioning become the same thing. The companies that figure this out first – not just as a brand exercise, but as a genuine constraint on what they will and will not do – will be the ones that retain the trust of all their constituencies, not just the most powerful one.

    Anthropic’s stand cost it a $200 million contract. It may also have bought something more durable: a market position built on trust at a time when trust in AI companies is in short supply.

  • The Consulting Pyramid Is Not Collapsing. It Is Being Quietly Redesigned.

    The consulting industry’s foundational staffing model — a small number of partners supported by large cohorts of junior analysts — is facing its most serious structural challenge in decades. But the narrative that AI is “dismantling” the pyramid overstates the pace and understates the complexity of what is actually happening.

    Nick Pye, managing partner at Mangrove Consulting, argued in Consultancy.uk this month that the traditional pyramid is “becoming increasingly difficult to justify.” His core thesis is straightforward: AI can now perform the analytical heavy lifting — research, financial modelling, scenario analysis — that once required rooms of graduates billing at premium rates. Clients are noticing. They want senior judgement, not junior analysis. And they increasingly have their own data capabilities, reducing dependence on external advisers for the work that used to fill the pyramid’s base.

    The argument is sound in principle. Where it risks overreach is in assuming the transition is further along than the evidence suggests.

    The data tells a more complicated story

    If AI were genuinely dismantling the consulting pyramid, one would expect to see mass reductions in headcount at the bottom. The picture is more mixed than that.

    As maddaisy reported in February, Capgemini ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. The company simultaneously announced €700 million in restructuring charges. It is not shrinking. It is reshuffling: eliminating some roles while creating others, primarily in AI engineering, data science, and agentic AI delivery.
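
    A back-of-envelope check on those rounded figures makes the reshuffling point concrete: the implied net headcount growth is almost entirely accounted for by the offshore additions.

    ```python
    # Back-of-envelope check on the Capgemini figures cited above.
    # All inputs are the article's rounded numbers, so results are approximate.
    year_end = 423_400       # reported 2025 year-end headcount
    yoy_growth = 0.24        # reported 24% year-on-year growth
    offshore_added = 82_300  # reported offshore additions in 2025

    implied_start = year_end / (1 + yoy_growth)  # ~341,000 a year earlier
    net_added = year_end - implied_start         # ~82,000 net new roles
    other_change = net_added - offshore_added    # close to zero

    print(round(implied_start), round(net_added), round(other_change))
    ```

    On these inputs, the net change outside the offshore intake comes out near zero: growth in one place offsetting cuts in another, which is exactly the reshuffling pattern the paragraph describes.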

    McKinsey, meanwhile, has begun testing new recruits on AI capabilities, and more than half of graduate roles now reportedly require AI skills. The Big Four have reduced student intakes, but the UK consulting market’s difficulties in 2025 — its worst year since lockdown — owe as much to broader demand softness as to structural AI disruption.

    And the technology itself is not yet delivering at the scale the hype implies. Eden McCallum research published this year found that while excitement for generative AI remains high, revenue impact remains minimal. Ninety-five per cent of AI pilots have failed to deliver returns, according to industry data cited by Consultancy.uk. That is not a technology that has already displaced the analyst class.

    What is genuinely changing

    None of this means the pyramid is safe. The direction of travel is clear, even if the pace is slower than the most breathless accounts suggest.

    Three shifts are converging. First, clients increasingly own and understand their own data. The analytical monopoly that consulting firms once held — gathering, processing, and synthesising information that clients could not access themselves — has eroded as organisations have built internal data teams and deployed their own AI tools.

    Second, the economics of the base are deteriorating. When an AI system can produce a comparable market analysis in minutes rather than weeks, it becomes progressively harder to justify billing rates for the same work performed by junior staff. This does not eliminate the need for human analysis, but it compresses the time and headcount required.

    Third, client expectations have shifted from deliverables to outcomes. As Pye puts it: clients want “decisions and performance,” not “decks and processes.” That shift favours experienced practitioners who can navigate organisational politics and drive implementation — not the analysts who assemble the slides.

    The diamond, the inverted pyramid, and the graduate question

    The replacement models being discussed are instructive. The most conservative is the “diamond” — wider in the middle, thinner at the base, with fewer entry-level analysts but more mid-level orchestration roles. It preserves hierarchy while acknowledging that the bottom of the pyramid has less to do.

    The more radical option is what Pye calls “flipping the pyramid”: small teams of senior and mid-level consultants tackling specific challenges, supported by AI systems rather than junior staff. Boutique consultancies have operated variations of this model for years. What AI changes is the scale at which it becomes viable.

    But neither model addresses the question that should concern the industry most: if junior roles contract, where do future senior consultants come from?

    The traditional pyramid functioned as a training pipeline. Graduates entered, learned the craft through years of analytical work, and developed into the experienced practitioners clients now prize. Close that entry point, and the industry faces a slow-motion skills crisis — a generation of senior consultants with no successors trained in the discipline.

    Pye’s answer is that consulting firms will increasingly recruit mid-career professionals who have already developed sector expertise elsewhere. The career path inverts: specialise first in an industry, then move into consulting.

    This is plausible but raises its own problems. The consulting skill set — structured problem solving, client management, the ability to diagnose organisational dysfunction — is not the same as industry expertise. A decade in financial services does not automatically produce someone who can run a transformation programme. The two capabilities overlap, but they are not identical.

    The real risk is not speed — it is the talent pipeline

    The firms that are moving fastest on AI are not necessarily the ones best positioned for the long term. Accenture is tracking AI logins for promotion decisions. OpenAI has formed alliances with McKinsey, BCG, Accenture, and Capgemini to deploy its enterprise AI platform. Capgemini’s CEO is counselling patience, arguing that deploying AI ahead of organisational readiness wastes both money and credibility.

    Each response reflects a different bet on how quickly the pyramid will change — and how to navigate the transition without breaking the firm’s ability to develop talent.

    The consulting industry is not being dismantled by AI. It is being redesigned, unevenly, firm by firm, with no consensus on the target operating model. The firms that get the balance right — reducing the base without severing the pipeline that produces tomorrow’s senior partners — will define what the industry looks like in a decade. The ones that treat this as a simple cost-cutting exercise will find, in five years, that they have cut too deep in exactly the wrong place.

  • Capgemini’s CEO Makes the Unfashionable Case for Pacing Your AI Investment

    There is a particular kind of courage in telling a room full of executives to slow down. Aiman Ezzat, CEO of Capgemini, has been doing exactly that – and his reasoning deserves more attention than the typical “move fast or die” narrative that dominates AI strategy discussions.

    “You don’t want to be too ahead of the learning curve,” Ezzat told Fortune in February. “If you are, you’re investing and building capabilities that nobody wants.”

    Coming from the head of a €22.5 billion consultancy that has trained 310,000 employees on generative AI and is actively building labs for quantum computing, 6G, and robotics, this is not a counsel of inaction. It is a strategic position on pacing – one that puts Ezzat at odds with much of the technology industry’s current mood.

    The FOMO problem

    The fear of missing out on AI has become a boardroom affliction. Boston Consulting Group reports that half of CEOs now believe their job is at risk if AI investments fail to deliver returns. That pressure creates a predictable dynamic: spend big, move fast, worry about outcomes later.

    The data suggests the worry-later approach is not working. EY research shows that while 88% of employees report using AI at work, organisations are failing to capture up to 40% of the potential benefits. In the UK, only 21% of workers felt confident using AI as of January 2026. The tools are arriving faster than the capacity to use them well.

    Ezzat’s argument is that this gap is not a technology problem. It is a pacing problem. Companies are deploying AI capabilities ahead of their organisation’s ability to absorb them – and ahead of genuine customer demand for the outcomes those capabilities promise.

    AI is a business, not a technology

    The more substantive part of Ezzat’s case is about framing. Too many leadership teams, he argues, treat AI as “a black box that’s being managed separately” – a technology initiative bolted onto the existing business rather than a force reshaping how the business operates.

    “The question you have to focus on is: ‘How can your business be significantly disrupted by AI?’” Ezzat says. “Not ‘How is your finance team going to become more efficient?’ I’m sure your CFO will deal with that at the end of the day.”

    The distinction matters. Departmental efficiency projects – automating invoice processing, summarising meeting notes, generating marketing copy – are the low-hanging fruit that most enterprises are picking right now. They deliver incremental gains but rarely transform a business model. The harder question, the one Ezzat wants CEOs to sit with, is whether AI fundamentally changes what a company sells, how it competes, or what its customers expect.

    That question takes time to answer well. Rushing it produces expensive experiments that solve the wrong problems.

    The trust deficit

    Perhaps the most underexplored part of Ezzat’s argument is about human trust. “How do you get humans to trust the agent?” he asks. “The agent can trust the human, but the human doesn’t really trust the agent.”

    This cuts to a practical reality that technology roadmaps tend to gloss over. Agentic AI – systems designed to take autonomous actions rather than simply generate content – is the next wave of enterprise deployment. As maddaisy.com noted when covering Capgemini’s role in the OpenAI Frontier Alliance, the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. Trust is a significant part of why.

    Employees who do not trust AI agents will find ways to work around them. Managers who cannot explain AI-driven decisions to clients will revert to manual processes. Organisations that deploy autonomous systems faster than their culture can absorb them will create friction, not efficiency.

    Ezzat draws an analogy to ergonomics – the mid-twentieth century discipline of designing tools for humans rather than forcing humans to adapt to tools. “Bad chairs lead to bad backs,” he observes. “Bad AI is likely to be far more consequential.”

    Consistent with Capgemini’s own playbook

    What makes Ezzat’s position credible is that Capgemini’s recent actions align with it. The company’s approach has been to invest broadly but scale selectively.

    As maddaisy.com’s analysis of the company’s 2025 results highlighted, generative AI bookings rose above 10% of total bookings in Q4 – meaningful but not yet dominant. The company maintains labs for emerging technologies including quantum and 6G, keeping a foot in multiple possible futures without betting the firm on any single one.

    Meanwhile, the company added 82,300 offshore workers in 2025 – largely through the WNS acquisition – while simultaneously earmarking €700 million for workforce restructuring. The message is clear: AI changes the shape of the workforce, but it does not eliminate the need for one. Building the human infrastructure to deliver AI at scale takes as much investment as the technology itself.

    The metaverse lesson

    Ezzat’s most pointed comparison is to the metaverse – a technology that commanded billions in corporate investment before the market concluded that customer demand had been dramatically overstated. Capgemini itself experimented with a metaverse lab. Mark Zuckerberg renamed his company around it. Now, as Ezzat puts it, “like air fryers, its time may now have passed.”

    The parallel is not that AI will follow the metaverse into irrelevance – the use cases are far more concrete, and the enterprise adoption data is already stronger. The point is about the cost of overcommitment. Companies that invested heavily in metaverse capabilities before the market was ready wrote off those investments. The same risk exists with AI, particularly in areas like agentic systems where the technology’s capability is advancing faster than organisational readiness to use it.

    Ezzat’s prescription is agility over ambition: small pilots, constant monitoring, and the willingness to scale rapidly when adoption genuinely accelerates. “We have to be investing – but not too much – to be able to be aware of the technology, following at the speed to make sure that we are ready to scale when the adoption starts to accelerate.”

    What this means for practitioners

    For consultants advising clients on AI strategy, Ezzat’s framework offers a useful counterweight to the prevailing urgency. The question is not whether to invest in AI – that debate is settled. The question is how to pace that investment so that capability, demand, and organisational readiness move roughly in step.

    Companies that get the pacing right will avoid the twin traps of overinvestment (building capabilities nobody wants) and underinvestment (being caught flat-footed when adoption accelerates). In a market where half of CEOs fear for their jobs over AI outcomes, the discipline to move at the right speed – rather than the fastest speed – may prove to be the more valuable skill.

  • The Agentic AI Governance Playbook Is Taking Shape. Most Enterprises Are Not Ready for It.

    The governance playbook for agentic AI is starting to take shape — and it looks nothing like the frameworks most enterprises currently rely on.

    Over the past month, a cluster of regulatory bodies, law firms, and industry coalitions have published guidance specifically addressing agentic AI systems. The UK Information Commissioner’s Office released its first Tech Futures report on agentic AI in January. Mayer Brown published a comprehensive governance framework in February. The Partnership on AI identified agentic governance as the top priority among six for 2026. And FINRA’s 2026 oversight report now includes a dedicated section on AI agents as an emerging threat to financial markets.

    Taken individually, none of these publications is remarkable. Taken together, they represent something maddaisy has been tracking since mid-February: the governance conversation is finally catching up to the deployment reality.

    The problem these frameworks are trying to solve

    When maddaisy examined the agentic AI governance gap in February, the numbers were stark: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. Subsequent analysis of agentic drift — where systems degrade gradually without triggering alarms — and shadow AI adoption revealed that the risks are not hypothetical. They are already materialising in production environments.

    The core issue, as these new frameworks make explicit, is that agentic AI does not fit within existing oversight models. Traditional AI governance assumes a human reviews the output before acting on it. Agentic systems are the actor. They plan, execute, and adapt autonomously — booking appointments, approving procurement, triaging complaints, managing collections. Governance cannot happen after the fact. It must be embedded in real time.

    What the emerging frameworks have in common

    Despite coming from different jurisdictions and institutions, the recent guidance converges on several principles that practitioners should note.

    Least privilege by default. Mayer Brown’s framework emphasises restricting what an agent can access — not just what it can do. Agents should not have standing access to sensitive databases, trade secrets, or systems beyond their immediate task scope. This mirrors the zero-trust approach that cybersecurity teams have adopted over the past decade, now applied to autonomous software.
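    The principle translates naturally into code. Below is a minimal sketch of a deny-by-default allowlist for an agent's tool access; the `AgentScope` class and the tool names are illustrative, not drawn from any particular framework.

```python
# Minimal least-privilege gate for an agent's tool calls.
# Access is denied unless the tool appears in the agent's explicit allowlist.

class AgentScope:
    def __init__(self, task, allowed_tools):
        self.task = task
        self.allowed_tools = frozenset(allowed_tools)  # deny-by-default

    def authorise(self, tool):
        if tool not in self.allowed_tools:
            raise PermissionError(
                f"agent for task '{self.task}' may not call '{tool}'"
            )
        return True

# A procurement-triage agent gets only the tools its immediate task needs --
# no standing access to HR records or finance systems.
scope = AgentScope("procurement-triage", ["read_purchase_orders", "create_ticket"])

scope.authorise("read_purchase_orders")   # permitted
try:
    scope.authorise("read_salary_data")   # outside scope: blocked
except PermissionError as err:
    print(err)
```

    The key design choice is the default: any tool not explicitly granted is refused, so widening an agent's capabilities is a deliberate act rather than an omission.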

    Human checkpoints at decision boundaries, not everywhere. The emerging consensus rejects both extremes: fully autonomous operation with no oversight, and human approval for every action (which would negate the point of agentic AI). Instead, the frameworks advocate for defined boundaries — moments where human approval is required before the agent proceeds. These include irreversible actions, decisions in regulated domains such as healthcare or financial services, and any step that falls outside the agent’s defined scope.
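    In code, such boundaries can be expressed as a routing step that classifies each proposed action before execution. The three categories below (irreversible actions, regulated domains, out-of-scope steps) follow the frameworks' consensus; the specific action and domain names are hypothetical.

```python
# Route an agent's proposed action: execute autonomously, or hold for
# human approval when it crosses a defined decision boundary.

IRREVERSIBLE = {"wire_transfer", "delete_record", "sign_contract"}
REGULATED = {"healthcare", "financial_services"}

def route_action(action, domain, in_scope):
    """Return 'execute' or 'needs_approval' for a proposed agent action."""
    if action in IRREVERSIBLE:
        return "needs_approval"   # cannot be undone
    if domain in REGULATED:
        return "needs_approval"   # regulatory tripwire
    if not in_scope:
        return "needs_approval"   # outside the agent's defined remit
    return "execute"              # routine, reversible, in scope

print(route_action("send_status_email", "retail", in_scope=True))  # execute
print(route_action("wire_transfer", "retail", in_scope=True))      # needs_approval
print(route_action("draft_reply", "healthcare", in_scope=True))    # needs_approval
```

    Everything that falls outside the tripwires runs autonomously, which is what preserves the point of deploying an agent in the first place.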

    Real-time monitoring, not periodic audits. The ICO report and Mayer Brown both stress continuous behavioural monitoring after deployment. This addresses the drift problem directly: an agent that passed all review gates at launch may behave differently three months later after prompt adjustments, model updates, and tool changes. Logging the full chain of reasoning and actions — not just inputs and outputs — is becoming a baseline expectation.
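    A baseline version of full-chain logging is simply an append-only trace with one structured record per step (plan, tool call, result), serialisable for an audit store. The field names and the agent identifier below are illustrative.

```python
import json
import time

# Append-only execution trace: one structured record per agent step,
# so the full chain of reasoning and actions can be audited later --
# not just the input and the final output.

trace = []

def log_step(agent_id, step_type, detail):
    record = {
        "ts": time.time(),
        "agent": agent_id,
        "type": step_type,   # e.g. "plan", "tool_call", "result"
        "detail": detail,
    }
    trace.append(record)
    return record

log_step("collections-01", "plan", "identify invoices overdue by more than 60 days")
log_step("collections-01", "tool_call", {"tool": "query_invoices", "args": {"overdue_days": 60}})
log_step("collections-01", "result", "14 invoices found; drafting reminders")

# Serialise as JSON lines for an audit store.
print("\n".join(json.dumps(r) for r in trace))
```

    Because the trace captures intermediate steps, an auditor can reconstruct why the agent acted, not merely what it produced.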

    Transparency to users. Multiple frameworks now explicitly require that organisations disclose when a customer is interacting with an AI agent rather than a human. The ICO notes that agentic AI “magnifies existing risks from generative AI” because these systems can rapidly automate complex tasks and generate new personal information at scale. Users need to know what they are dealing with — and they need a route to a human at any point.

    Value-chain accountability. The Partnership on AI flags a gap that most enterprise governance programmes have not addressed: who is responsible when something goes wrong in a multi-agent system? If an agent calls another agent, which calls a third-party tool, which accesses a database — and the outcome is harmful — the liability chain is unclear. Their recommendation: establish an agreed taxonomy for the AI value chain before deployment, not after an incident.

    Where the frameworks fall short

    For all the convergence, there are notable gaps. None of the published guidance adequately addresses the measurement problem. Adobe’s 2026 AI and Digital Trends Report found that only 31% of organisations have implemented a measurement framework for agentic AI. Without clear metrics for what “good governance” looks like in practice, the frameworks risk becoming compliance theatre — policies that exist on paper but do not change how agents actually operate.

    There is also limited guidance on cross-border deployment. The Partnership on AI calls for international coordination, but the practical reality — as maddaisy’s analysis of America’s state-by-state regulatory fragmentation highlighted — is that even within a single country, compliance requirements vary dramatically. An agent deployed in New York now faces transparency requirements under the RAISE Act that do not apply in Texas, where the Responsible AI Governance Act takes a different approach. For multinational enterprises, the compliance surface is formidable.

    What this means for practitioners

    The practical takeaway is straightforward, if demanding. Organisations deploying or planning to deploy agentic AI should be doing four things now.

    First, audit existing governance frameworks against the agentic-specific requirements these publications outline. Most enterprises have AI policies designed for advisory systems. Those policies almost certainly do not cover autonomous execution, multi-agent coordination, or real-time behavioural monitoring.

    Second, define decision boundaries before deployment. Which actions require human approval? What constitutes an irreversible decision in your context? Where are the regulatory tripwires? These questions are easier to answer before an agent is in production than after it has been running for six months.

    Third, invest in observability infrastructure. As maddaisy noted in the agentic drift analysis, the systems that fail most dangerously are the ones that appear to be working. Full execution logging, behavioural baselines, and anomaly detection are not optional extras — they are the minimum viable governance stack for agentic systems.
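    As a sketch of what a behavioural baseline can look like in practice, the fragment below compares an agent's current metric (here, tool calls per task) against its launch-period distribution and flags a large deviation. Real drift detection would track many metrics at once; the threshold and the numbers are illustrative.

```python
import statistics

# Flag drift: compare an agent's current metric (e.g. tool calls per task)
# against a baseline window using a simple z-score threshold.

def is_anomalous(baseline, current, threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Baseline from the launch period: the agent typically makes ~5 tool calls.
baseline = [4, 5, 5, 6, 5, 4, 6, 5]

print(is_anomalous(baseline, 5))   # in line with the baseline -> False
print(is_anomalous(baseline, 19))  # sudden jump in activity -> True
```

    The virtue of even this crude check is that it catches the failure mode the drift analysis described: an agent that still "works" but no longer behaves the way it did when it passed review.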

    Fourth, assign clear ownership. Mayer Brown’s framework identifies four distinct governance roles: decision-makers who set policy, product teams who implement it, cybersecurity teams who integrate agents into security procedures, and frontline employees who can identify and escalate issues. Most organisations have not mapped these responsibilities for their agentic deployments.

    The governance race is just starting

    The gap between agentic AI deployment and agentic AI governance remains wide. But the direction of travel is now clear: regulators, industry bodies, and legal advisors are converging on a set of principles that will become the baseline expectation. Organisations that build these capabilities now — real-time monitoring, defined decision boundaries, value-chain accountability, and clear ownership — will be better positioned than those scrambling to retrofit governance after their first incident.

    The frameworks are not perfect. They will evolve. But the era of governing agentic AI with policies designed for chatbots is ending. For consultants and technology leaders advising enterprises on AI deployment, that shift should be shaping every engagement.

  • The $5 Trillion Business Transfer That Has Nothing to Do with AI

    Maddaisy has spent the past fortnight examining McKinsey through the lens of AI — its 25,000 AI agents, its shift to outcome-based pricing, its OpenAI Frontier Alliance. But the firm’s most consequential publication this month may have nothing to do with artificial intelligence at all.

    A new report from McKinsey’s Institute for Economic Mobility identifies what it calls the “Great Ownership Transfer” — roughly six million small and medium-sized American businesses facing ownership transitions by 2035, representing up to $5 trillion in enterprise value. The cause is straightforward demography: baby boomer business owners are retiring, and the systems for handing their companies to the next generation of owners barely exist.

    A 92% Closure Rate

    The headline statistic is stark. Just 5% of small-business market exits in the United States are completed as sales, and another 3% are transferred to new owners through other mechanisms. The remaining 92% simply shut their doors.

    This is not a failure of entrepreneurial ambition. Small businesses account for 99% of all US companies and employ nearly half the national workforce. The problem, as the report’s authors put it, is structural: “Buying and selling a small business is often harder than starting one because the systems that support entrepreneurship in the United States are currently built for founding companies, not transferring them.”

    Viable firms are closing not because they lack customers or revenue, but because the pathways to succession are limited, opaque, and prohibitively expensive for the buyers who would keep them running.

    The Missing Middle

    The risk concentrates in what McKinsey terms the “missing middle.” Nearly 80% of projected exits involve micro and emerging middle-market businesses valued at less than $2 million. These firms sit in an awkward no-man’s-land: too small to attract private equity or institutional acquirers, but too large and complex for a typical first-time buyer to finance without significant support.

    The consequences are not evenly distributed. Rural communities, where these smaller firms often serve as anchor employers and the primary local tax base, face disproportionate exposure. Labour-intensive industries essential to daily life — retail, construction, food services — account for roughly one-third of the businesses caught in this gap. When a town’s only plumbing contractor or hardware shop closes because there is no viable buyer, the impact extends well beyond the balance sheet.

    A Financing System Built for the Wrong Problem

    The report is pointed about where the infrastructure fails. The SBA 7(a) loan — the primary federal instrument for small-business acquisition financing — requires high equity contributions and full personal guarantees. For first-time buyers, particularly those without existing assets or family wealth to pledge, the barrier is effectively insurmountable.

    The broader ecosystem compounds the problem. Unlike residential property, where standardised appraisals, mortgage products, and regulatory frameworks have created a liquid market, small-business transfers remain bespoke transactions. Each deal requires its own valuation, its own legal structure, its own financing arrangement. There is no MLS for Main Street businesses, no standardised underwriting for a family-owned electrical contractor with $1.5 million in revenue.

    McKinsey’s prescription is to treat small-business acquisition as a scalable market rather than a series of one-off transactions. That means modernised underwriting standards, bundled advisory services, and coordinated market infrastructure — the kind of systemic plumbing that already exists for property and public equities but has never been built for SMB ownership transfer.

    The Equity Dimension

    The demographic profile of current small-business owners — overwhelmingly older, white, and male — means this transfer also carries significant implications for wealth distribution. Under current patterns, women, Black, and Latino individuals combined would capture only about 28% of the transferring $5 trillion in value.

    McKinsey’s modelling suggests the gap is not inevitable. If ownership participation reached demographic parity, Black individuals could see their wealth capture increase more than fourfold to approximately $369 billion. Parity for women could unlock roughly $700 billion. These are not aspirational targets plucked from a diversity statement — they are the arithmetic consequence of removing structural barriers to acquisition financing and deal flow.

    The growth of employee stock ownership plans (ESOPs) and cooperative conversions offers one partial pathway. ESOP participation has grown by more than one million participants over the past decade, creating models for equitable ownership transfer that do not depend solely on traditional private capital. New platforms — BizBuySell, Acquire.com, Baton — are beginning to chip away at the opacity of the SMB transaction market, though none yet operates at the scale the problem demands.

    What This Means for Consulting

    For the consulting industry, the report represents both a diagnosis and an opportunity. Advisory firms have spent the past two years racing to build AI practices and secure platform alliances. The $5 trillion ownership transfer is a reminder that some of the largest structural challenges in the economy are not technology problems at all — they are market design problems, financing problems, and coordination failures.

    Succession advisory, M&A support for sub-$2 million transactions, ESOP conversion guidance, and community economic development are not glamorous practice areas. They do not generate the headlines that AI partnerships attract. But if McKinsey’s numbers are even roughly correct, the demand for this kind of advisory work is about to grow substantially — and the firms that build credible practices now will find themselves serving a market that barely existed a decade ago.

    The question, as the report frames it, is whether the coming decade becomes “a story of tragic business loss” or “the inflection point when business ownership became a broader pathway to mobility.” The answer will depend less on any single technology and more on whether the market infrastructure catches up to the demographic reality already underway.

  • OpenAI’s Frontier Alliance Confirms What Consultants Already Knew: AI Vendors Cannot Scale Alone

    OpenAI announced on 23 February that it has formed multi-year “Frontier Alliances” with McKinsey, Boston Consulting Group, Accenture, and Capgemini. The four firms will help sell, implement, and scale OpenAI’s Frontier platform — an enterprise system for building, deploying, and governing AI agents across an organisation’s technology stack.

    For readers who have been following maddaisy’s coverage of the consulting industry’s AI pivot, this is not a surprise. It is the logical next step in a pattern that has been building for months — and it tells us more about the limits of AI vendors than about the ambitions of consulting firms.

    The vendor cannot scale alone

    The most revealing line in the announcement came from Capgemini’s chief strategy officer, Fernando Alvarez: “If it was a walk in the park, OpenAI would have done it by themselves, so it’s recognition that it takes a village.”

    That candour is worth pausing on. OpenAI’s enterprise business accounts for roughly 40% of revenue, with expectations of reaching 50% by the end of the year. The company has already signed enterprise deals with Snowflake and ServiceNow this year and appointed Barret Zoph to lead enterprise sales. Yet it still needs consulting firms — with their existing client relationships, implementation expertise, and organisational change capabilities — to get its technology into production at scale.

    This is not a story about OpenAI’s generosity in sharing the enterprise market. It is an admission that the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. As maddaisy reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. The technology is not the bottleneck. Integration, governance, and organisational readiness are.

    A clear division of labour

    The alliance structure reveals how OpenAI sees the enterprise AI value chain. McKinsey and BCG are positioned as strategy and operating model partners — helping leadership teams determine where agents should be deployed and how workflows need to be redesigned. BCG CEO Christoph Schweizer noted that AI must be “linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives.”

    Accenture and Capgemini take the systems integration role: data architecture, cloud infrastructure, security, and the unglamorous work of connecting Frontier to the CRM platforms, HR systems, and internal tools that enterprises actually run on. Each firm is building dedicated practice groups and certifying teams on OpenAI technology. OpenAI’s own forward-deployed engineers will sit alongside them in client engagements.

    This two-tier model — strategy at the top, integration at the bottom — maps neatly onto the consulting industry’s existing hierarchy. It also creates a clear dependency: OpenAI provides the platform, the consultancies provide the last mile.

    The maddaisy continuity thread

    This announcement intersects with several stories maddaisy has been tracking. When we examined McKinsey’s 25,000 AI agent deployment, the question was whether the firm’s aggressive internal build-out was a first-mover advantage or an expensive experiment. The Frontier Alliance suggests McKinsey is now positioning that internal capability as a credential — evidence that it can deploy agentic AI at scale, which it can now offer to clients through the OpenAI partnership.

    Similarly, when maddaisy covered the shift from billable hours to outcome-based consulting, the question was how firms would make the economics work. Vendor alliances like this provide part of the answer: the consulting firm brings the implementation expertise, the AI vendor provides the platform, and the client pays for outcomes rather than hours. The risk is shared across the chain.

    And Capgemini’s dual bet — adding 82,300 offshore workers while simultaneously investing in AI — now makes more strategic sense. The offshore delivery capacity is precisely what is needed to operationalise Frontier at enterprise scale. The bodies and the bots are not competing; they are complementary.

    The SaaS vendors should be nervous

    As Fortune noted, the Frontier Alliance creates a specific tension for established software-as-a-service vendors. Salesforce, Microsoft, Workday, and ServiceNow all depend on these same consulting firms to market and deploy their products. Now those consultants will also be actively promoting an alternative platform — one that positions itself as a “semantic layer” sitting above the traditional SaaS stack.

    The consulting firms are not choosing sides. They are hedging. Accenture, for instance, signed a multi-year partnership with Anthropic in December 2025 and is now a Frontier Alliance member. The firms will sell whichever platform best fits a given client’s needs, which gives them leverage over the AI vendors rather than the other way around.

    For the SaaS incumbents, however, having McKinsey and BCG actively evangelise an AI-native alternative to C-suite buyers is a development they will not welcome. Investor anxiety in this space is already elevated — shares of several enterprise software companies have been punished over concerns that customers will choose AI-native platforms over traditional offerings.

    What to watch

    The Frontier Alliance is a partnership announcement, not a set of outcomes. The real test is whether this model — AI vendor plus consulting firm — can close the deployment gap that has kept enterprise AI adoption stubbornly below expectations.

    Three things matter from here. First, whether the certified practice groups produce measurably better outcomes than the piecemeal implementations enterprises have been attempting on their own. Second, whether Frontier’s “semantic layer” architecture genuinely simplifies agent deployment or simply adds another platform layer to an already complex stack. And third, whether the consulting firms’ simultaneous alliances with competing AI vendors — OpenAI, Anthropic, Google — create genuine client value or just a more complicated sales cycle.

    For practitioners, the immediate signal is clear: the enterprise AI market is consolidating around a vendor-plus-integrator model. If your organisation is planning an agentic AI deployment, the question is no longer which model to use. It is which combination of platform, integrator, and operating model redesign will actually get agents into production — and keep them there.

  • The Strategy-Bandwidth Gap: Why CSOs Are Being Sidelined on Their Own AI Agendas

    Ninety-five per cent of chief strategy officers expect AI and technology-led disruption to materially reshape their strategic priorities this year. Yet only 28 per cent co-lead their organisation’s AI-related decisions. That gap — between expecting AI to change everything and actually having the authority to steer it — is the central finding of Deloitte’s 2026 Chief Strategy Officer Survey, and it has implications well beyond the strategy function.

    For maddaisy.com’s consulting audience, the data confirms a pattern that has been building for months. The executives whose job description is literally to chart the company’s future are being outpaced by the very transformation they are supposed to lead.

    The bandwidth problem is structural, not personal

    Deloitte’s survey paints a picture of a role under strain. More than half of CSOs report managing too many priorities with too little time. Approximately half have five or fewer direct reports. Many rely on rotating external support — often consultants — to advance critical initiatives, which ironically consumes more coordination time and leaves less room for the strategic thinking the role demands.

    Despite this, expectations continue to expand. Nearly two-thirds of CSOs now lead cross-functional transformation efforts, and more than half drive enterprise-wide agendas that stretch well beyond the role’s traditional remit. The mandate is growing; the resources are not.

    Only 35 per cent of CSOs say they co-lead or fully own decision-making for their organisation’s top priorities. As Gagan Chawla, Deloitte’s US Business Strategy Practice leader, puts it: strategy leaders should “reimagine their mandate and champion governance that aligns decision-making power with enterprise priorities to help ensure strategy drives value, not just insight.”

    That is a polite way of saying many CSOs are producing analysis that nobody with execution authority is acting on.

    The AI authority gap

    The most striking disconnect is on AI specifically. Nearly every CSO surveyed expects AI to reshape competitive dynamics. Yet the survey finds that only 28 per cent co-lead enterprise AI decisions, and just 16 per cent say their organisation uses AI to fundamentally reimagine lines of business or create new competitive advantages.

    This sits uncomfortably alongside data maddaisy.com has been tracking. PwC’s 2026 CEO Survey found that 56 per cent of chief executives cannot point to measurable revenue gains from AI. If the executives responsible for enterprise strategy are not materially involved in AI decisions, the ROI gap starts to look less like a technology problem and more like an organisational design failure. AI investments are being made by technology teams, approved by finance, and executed by operations — while the people tasked with ensuring these investments align with long-term competitive positioning watch from the sidelines.

    The Deloitte data suggests momentum is building — 51 per cent of CSOs now view AI as a strategic partner that enhances insight and accelerates execution, and 61 per cent report investing in AI literacy. But viewing AI as useful and having authority over its deployment are very different things.

    Where this connects to consulting

    The CSO bandwidth gap has direct consequences for the consulting industry. Strategy firms have traditionally sold to the strategy function — providing the research, analysis, and frameworks that CSOs use to set direction. If those CSOs lack the authority to act on strategic recommendations, the value proposition of strategy consulting shifts.

    This helps explain the trend maddaisy.com examined last week in the shift from billable hours to outcome-based consulting. When McKinsey ties a third of its revenue to measurable business outcomes rather than advisory engagements, it is partly responding to a reality where clients’ strategy functions cannot translate advice into action without hands-on support. The consulting engagement is moving from “here is what you should do” to “let us do it with you” — and that shift is driven not just by AI capabilities but by the operational reality that internal strategy teams are stretched too thin to execute.

    Accenture’s recent disclosure reinforces the point from the technology side. The firm announced in December that it would stop separately reporting advanced AI bookings because AI is now “embedded in some way across nearly everything we do.” With $11.5 billion in AI bookings across 11,000 projects and $4.8 billion in AI revenue, Accenture has reached a point where AI is infrastructure, not a standalone initiative. For CSOs trying to “own” the AI strategy, this creates an additional challenge: when AI permeates every function, there is no single strategy to own.

    Confidence without control

    Perhaps the most telling number in the Deloitte survey is the confidence gap. Seventy-two per cent of CSOs are optimistic about their organisation’s prospects, compared with just 24 per cent who feel the same about the global economy. Strategy leaders believe their companies can win despite external uncertainty — but this confidence sits alongside an admission that they lack the bandwidth and authority to ensure it happens.

    Adam Giblin, Deloitte’s US Chief Strategy Officer Programme lead, frames it directly: “CSOs are navigating a world where uncertainty isn’t episodic, it’s the status quo.” The implication is that strategy can no longer be a periodic exercise — an annual planning cycle punctuated by quarterly reviews. It needs to be continuous, embedded, and authoritative.

    For organisations that have not yet reconciled the CSO’s expanding mandate with actual decision-making power, the path forward is not complicated but it is uncomfortable. It means giving strategy leaders a seat at the AI governance table — not as observers, but as co-owners. It means aligning resource allocation with strategic priorities rather than expecting a team of five to drive enterprise-wide transformation. And it means recognising that the AI ROI gap is, in significant part, a strategy execution gap.

    The companies that close it will not be the ones with the most advanced AI models. They will be the ones where the person responsible for competitive positioning actually has the authority to shape how AI is deployed.

  • From Billable Hours to Shared Risk: Consulting’s AI-Driven Business Model Shift

    McKinsey’s 25,000 AI agents grabbed the headlines, but the more consequential number is the roughly one-third of the firm’s revenue now tied to outcome-based engagements. Across the industry, AI is not just changing how consultants work – it is rewriting how they get paid.

    When maddaisy.com examined McKinsey’s 25,000 AI agent deployment last week, the focus was on scale: was deploying one digital agent for every 1.6 human employees a first-mover advantage or an expensive experiment? The answer may depend less on the agent count and more on a quieter transformation happening in parallel – the shift from selling time to selling outcomes.

    The Model That Built an Industry Is Under Pressure

    For decades, consulting has run on a straightforward exchange: expertise for time, billed by the hour or the project. It is a model that has produced extraordinary margins, but it carries an inherent misalignment. Consultants profit from the complexity of a problem, not necessarily from solving it quickly.

    AI agents threaten that dynamic directly. When an algorithm can synthesise in minutes research that previously took analysts weeks, the hours-based model starts to look exposed. McKinsey’s own data – 1.5 million hours saved on search and synthesis work – quantifies exactly how much billable time AI has already removed from the equation.

    Rather than watching margins erode, McKinsey is pivoting. Speaking on the HBR IdeaCast in February, CEO Bob Sternfels confirmed that outcome-based engagements – where McKinsey co-invests alongside clients and ties fees to measurable business results – now account for roughly a third of the firm’s revenue. Two years ago, that figure was negligible.

    The Economics Only Work with AI

    This is where the 25,000 agents become strategically coherent. Outcome-based consulting is inherently riskier than fee-for-service; the firm only earns if the client succeeds. To make the economics work, you need two things: lower delivery costs and higher confidence in results.

    AI agents address both. QuantumBlack, McKinsey’s 1,700-person AI division, now drives 40% of the firm’s total work. Non-client-facing headcount has fallen 25%, while output from those teams has risen 10% – the “25 squared” model. The savings create the margin headroom needed to absorb the risk of outcome-based pricing.
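    The margin logic can be made concrete with a toy calculation. All figures below are hypothetical, chosen only to illustrate why lower delivery costs and higher confidence in results are preconditions for outcome-based pricing; they are not taken from McKinsey's accounts.

```python
# Illustrative economics of outcome-based vs fee-for-service pricing.
# All numbers are hypothetical, chosen only to show the trade-off.

def margin(fee, cost, p_success=1.0):
    """Expected margin: the fee is earned with probability p_success,
    while the delivery cost is incurred regardless."""
    return fee * p_success - cost

# Fee-for-service: a guaranteed $1.0m fee against a $0.6m delivery cost.
ffs = margin(1.0, 0.6)

# Outcome-based without AI: a larger fee ($1.8m) but only 50% odds of
# earning it, on the same cost base -- a worse expected margin.
ob_pre_ai = margin(1.8, 0.6, p_success=0.5)

# Outcome-based with AI: lower delivery costs and higher confidence in
# results tip the expected margin above the fee-for-service baseline.
ob_with_ai = margin(1.8, 0.45, p_success=0.6)

print(ffs, ob_pre_ai, ob_with_ai)
```

    The firm is not betting that outcomes pay more per engagement; it is betting that AI shifts both variables, cost and success probability, far enough to make the riskier contract the more profitable one on average.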

    It is not just McKinsey making this calculation. BCG has deployed “forward-deployed consultants” who build AI tools directly within client organisations, effectively embedding methodology as software rather than slides. Capgemini has trained 310,000 employees on generative AI, though its agentic AI bookings only reached 10% of quarterly pipeline by Q4 2025. Accenture has stopped reporting AI bookings separately because, as the firm noted in its Q1 fiscal 2026 results, AI is now embedded in virtually every engagement.

    The Client Side of the Equation

    The timing is not coincidental. As maddaisy.com reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot demonstrate revenue gains from AI. When the client cannot prove value, the consultancy offering to underwrite outcomes holds a powerful negotiating position – essentially saying, “we believe in this enough to stake our fees on it.”

    The irony is that consultancies are asking clients to trust their AI capabilities while the industry’s own track record on AI delivery remains uneven. Deloitte’s own 2026 State of AI report found that 42% of organisations rate themselves “highly prepared” on AI strategy but markedly less ready on infrastructure, data governance, and talent.

    McKinsey faces this credibility gap from a different direction. The firm’s State of AI research found that only 5% of companies globally see AI hitting their bottom line. Positioning itself as the firm that can deliver measurable outcomes means McKinsey is, in effect, claiming to solve a problem that its own research says almost no one has solved.

    What Changes for Practitioners

    For consultants and the organisations that hire them, three practical implications stand out.

    Procurement shifts. If outcome-based pricing becomes the norm, procurement teams will need to evaluate consulting engagements more like joint ventures than service contracts. That means assessing the firm’s AI capabilities, data infrastructure, and delivery methodology – not just the partner’s credentials and the day rate.

    The talent model is splitting. Sternfels has been explicit that McKinsey wants “great consultants and/or great technologists, groomed to be both.” The traditional path – from analyst to associate to engagement manager – now runs alongside a technical track where consultants build and deploy AI systems. BCG’s vibe-coding consultants are an early version of this hybrid role.

    Governance becomes shared. When a consultancy co-invests in outcomes and deploys AI agents within a client’s operations, the governance question becomes bilateral. As maddaisy.com has covered extensively, the gap between AI deployment speed and governance maturity is already the defining risk of 2026. Outcome-based models widen this gap further, because neither party has clear precedent for who owns the risk when an AI agent produces flawed analysis that drives a business decision.

    The Bigger Picture

    The consulting industry has weathered previous disruptions – offshoring, automation, the rise of in-house strategy teams – by evolving its value proposition. This time, the change is structural. AI agents do not just reduce the cost of delivering advice; they make it possible to charge for results instead.

    McKinsey’s 25,000 agents are best understood not as a technology deployment but as a financial instrument – the infrastructure that underwrites a new revenue model. Whether the model works depends on something no agent can automate: whether clients actually achieve the outcomes both parties are betting on.