Author: Connie Botelli

  • OpenAI’s Frontier Alliance Is Not Just About Consulting. It Is a Bet Against the Enterprise Software Stack.

When maddaisy examined OpenAI’s Frontier Alliance in February, the focus was on what it meant for the consulting firms — McKinsey, BCG, Accenture, and Capgemini — and the admission that AI vendors cannot scale enterprise deployments alone. That story was about the consulting industry. This one is about the companies the alliance is quietly aimed at: the enterprise software vendors that have built a trillion-dollar industry on per-seat licensing.

    The per-seat model under pressure

    OpenAI’s Frontier platform, launched in early February, is designed as an enterprise operating layer — a unified system where AI agents can log into applications, execute workflows, and make decisions across an organisation’s entire technology stack. CRM systems, HR platforms, ticketing tools, internal databases. The ambition is not to replace any single application but to sit above all of them.

    The threat to SaaS vendors is structural, not incremental. If AI agents execute the tasks that human employees currently perform inside Salesforce, ServiceNow, or Workday, the justification for per-seat licensing weakens. Fewer human users logging in means fewer seats to sell. And if agents can orchestrate workflows across multiple systems from a single platform, the case for buying specialised point solutions — each with its own subscription — becomes harder to make.

    The market has not waited for proof. Investors wiped roughly $2 trillion in market value from technology stocks in a single week over AI displacement concerns. ServiceNow shares fell more than 20% year-to-date by mid-February. IBM suffered its largest single-day decline in 25 years after Anthropic’s Claude demonstrated competency with legacy COBOL systems — the very maintenance work that underpins a significant portion of IBM’s consulting revenue.

    The consulting conduit

    What makes the Frontier Alliance specifically dangerous for SaaS incumbents is not the technology. It is the distribution channel.

    McKinsey, BCG, Accenture, and Capgemini are not just consulting firms. They are the primary implementation partners for the very software companies that Frontier could displace. When a Fortune 500 company deploys Salesforce, it typically hires one of these firms to manage the rollout. When it migrates to ServiceNow’s IT service management platform, the same consulting firms handle the integration. The relationships are deep, multi-year, and built on trust.

    OpenAI has effectively enlisted those relationships as a distribution network. Each of the four firms has established dedicated OpenAI practice groups, certified their teams on Frontier, and committed to multi-year alliances. OpenAI’s own forward-deployed engineers will sit alongside consulting teams in client engagements — a model borrowed from Palantir’s playbook for embedding in enterprise accounts.

    The result is a direct-to-enterprise pipeline that does not need SaaS vendors as intermediaries. A consulting firm advising a client on AI strategy can now recommend Frontier agents that orchestrate existing systems, rather than recommending new SaaS products that require their own implementation projects. The consulting firm earns either way. The SaaS vendor may not.

    SaaS is not dying. But the economics are shifting.

    The counterarguments deserve a hearing. Fortune 500 companies will not abandon decades of enterprise software investment overnight. Compliance requirements, audit trails, data sovereignty obligations, and the sheer operational complexity of large organisations create friction that no AI platform can simply wave away. As one analyst put it, “we are simply not going to see a complete unwinding of the past 50 years of enterprise software development.”

    The incumbents are also adapting. Salesforce has pioneered what it calls the “Agentic Enterprise Licence Agreement” — a fixed-price, consumption-based model designed to decouple revenue from headcount. ServiceNow and Microsoft are shifting toward outcome-based pricing. These moves acknowledge the threat and attempt to neutralise it by changing the unit of value from the human user to the business outcome.

    But adaptation comes at a cost. Per-seat licensing has been the engine of SaaS margins for two decades. Moving to consumption or outcome-based models compresses revenue predictability and margins in the short term, even if it preserves relevance in the long term. The transition is not painless, and investors know it.

    Where this connects

    This development sits at the intersection of several threads maddaisy has been tracking. Capgemini’s CEO, Aiman Ezzat, argued last week that organisations are deploying AI capabilities ahead of their ability to absorb them. He is right — but that does not mean the structural pressure on SaaS pricing will wait for organisations to catch up. The market reprices on expectations, not on deployment maturity.

    And the consulting pyramid piece from yesterday noted that the industry’s base is being reshaped as AI compresses the need for junior analytical work. A similar compression is now visible in enterprise software: the middle layer of the stack — the specialised tools that automate individual workflows — faces pressure from platforms that automate across workflows.

    The question for enterprise software vendors is not whether AI agents will change their businesses. It is whether they can shift their pricing, their value propositions, and their competitive moats fast enough to remain the platform of choice — rather than becoming the legacy infrastructure that sits beneath someone else’s agent layer.

    For practitioners evaluating their own technology stacks, the practical implication is this: the next software audit should not just ask what each tool costs per seat. It should ask what happens to that cost when half the seats belong to agents that do not need a licence.
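    To make that audit concrete, here is a minimal sketch of the arithmetic in Python. Every licence and consumption figure in it is a hypothetical assumption, not any vendor’s actual pricing; the point is the shape of the comparison, not the numbers.

```python
# Illustrative sketch only: all licence and consumption figures below are
# hypothetical assumptions, not any vendor's actual pricing.

def per_seat_cost(seats: int, price_per_seat_per_year: float) -> float:
    """Annual cost under a classic per-seat licence."""
    return seats * price_per_seat_per_year


def mixed_cost(human_seats: int, price_per_seat_per_year: float,
               agent_tasks_per_year: int, price_per_task: float) -> float:
    """Annual cost when part of the work shifts to agents billed on consumption."""
    return (human_seats * price_per_seat_per_year
            + agent_tasks_per_year * price_per_task)


# Today: 1,000 human seats at a hypothetical 1,800 per seat per year.
today = per_seat_cost(1_000, 1_800.0)

# Scenario: half the seats retired; agents absorb a hypothetical 400,000
# tasks per year at 0.75 per task under a consumption model.
scenario = mixed_cost(500, 1_800.0, 400_000, 0.75)

print(f"Per-seat today:    {today:,.0f}")
print(f"Mixed human/agent: {scenario:,.0f}")
print(f"Difference:        {today - scenario:,.0f}")
```

    Run with different assumptions, the same few lines of arithmetic show where the per-seat model stops making sense — and where, under some consumption prices, it does not.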

  • The Consulting Pyramid Is Not Collapsing. It Is Being Quietly Redesigned.

    The consulting industry’s foundational staffing model — a small number of partners supported by large cohorts of junior analysts — is facing its most serious structural challenge in decades. But the narrative that AI is “dismantling” the pyramid overstates the pace and understates the complexity of what is actually happening.

    Nick Pye, managing partner at Mangrove Consulting, argued in Consultancy.uk this month that the traditional pyramid is “becoming increasingly difficult to justify.” His core thesis is straightforward: AI can now perform the analytical heavy lifting — research, financial modelling, scenario analysis — that once required rooms of graduates billing at premium rates. Clients are noticing. They want senior judgement, not junior analysis. And they increasingly have their own data capabilities, reducing dependence on external advisers for the work that used to fill the pyramid’s base.

    The argument is sound in principle. Where it risks overreach is in assuming the transition is further along than the evidence suggests.

    The data tells a more complicated story

    If AI were genuinely dismantling the consulting pyramid, one would expect to see mass reductions in headcount at the bottom. The picture is more mixed than that.

    As maddaisy reported in February, Capgemini ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. The company simultaneously announced €700 million in restructuring charges. It is not shrinking. It is reshuffling: eliminating some roles while creating others, primarily in AI engineering, data science, and agentic AI delivery.

    McKinsey, meanwhile, has begun testing new recruits on AI capabilities, and more than half of graduate roles now reportedly require AI skills. The Big Four have reduced student intakes, but the UK consulting market’s difficulties in 2025 — its worst year since lockdown — owe as much to broader demand softness as to structural AI disruption.

    And the technology itself is not yet delivering at the scale the hype implies. Eden McCallum research published this year found that while excitement for generative AI remains high, revenue impact remains minimal. Ninety-five per cent of AI pilots have failed to deliver returns, according to industry data cited by Consultancy.uk. That is not a technology that has already displaced the analyst class.

    What is genuinely changing

    None of this means the pyramid is safe. The direction of travel is clear, even if the pace is slower than the most breathless accounts suggest.

    Three shifts are converging. First, clients increasingly own and understand their own data. The analytical monopoly that consulting firms once held — gathering, processing, and synthesising information that clients could not access themselves — has eroded as organisations have built internal data teams and deployed their own AI tools.

    Second, the economics of the base are deteriorating. When an AI system can produce a comparable market analysis in minutes rather than weeks, it becomes progressively harder to justify billing rates for the same work performed by junior staff. This does not eliminate the need for human analysis, but it compresses the time and headcount required.

    Third, client expectations have shifted from deliverables to outcomes. As Pye puts it: clients want “decisions and performance,” not “decks and processes.” That shift favours experienced practitioners who can navigate organisational politics and drive implementation — not the analysts who assemble the slides.

    The diamond, the inverted pyramid, and the graduate question

    The replacement models being discussed are instructive. The most conservative is the “diamond” — wider in the middle, thinner at the base, with fewer entry-level analysts but more mid-level orchestration roles. It preserves hierarchy while acknowledging that the bottom of the pyramid has less to do.

    The more radical option is what Pye calls “flipping the pyramid”: small teams of senior and mid-level consultants tackling specific challenges, supported by AI systems rather than junior staff. Boutique consultancies have operated variations of this model for years. What AI changes is the scale at which it becomes viable.

    But neither model addresses the question that should concern the industry most: if junior roles contract, where do future senior consultants come from?

    The traditional pyramid functioned as a training pipeline. Graduates entered, learned the craft through years of analytical work, and developed into the experienced practitioners clients now prize. Close that entry point, and the industry faces a slow-motion skills crisis — a generation of senior consultants with no successors trained in the discipline.

    Pye’s answer is that consulting firms will increasingly recruit mid-career professionals who have already developed sector expertise elsewhere. The career path inverts: specialise first in an industry, then move into consulting.

    This is plausible but raises its own problems. The consulting skill set — structured problem solving, client management, the ability to diagnose organisational dysfunction — is not the same as industry expertise. A decade in financial services does not automatically produce someone who can run a transformation programme. The two capabilities overlap, but they are not identical.

    The real risk is not speed — it is the talent pipeline

    The firms that are moving fastest on AI are not necessarily the ones best positioned for the long term. Accenture is tracking AI logins for promotion decisions. OpenAI has formed alliances with McKinsey, BCG, Accenture, and Capgemini to deploy its enterprise AI platform. Capgemini’s CEO is counselling patience, arguing that deploying AI ahead of organisational readiness wastes both money and credibility.

    Each response reflects a different bet on how quickly the pyramid will change — and how to navigate the transition without breaking the firm’s ability to develop talent.

    The consulting industry is not being dismantled by AI. It is being redesigned, unevenly, firm by firm, with no consensus on the target operating model. The firms that get the balance right — reducing the base without severing the pipeline that produces tomorrow’s senior partners — will define what the industry looks like in a decade. The ones that treat this as a simple cost-cutting exercise will find, in five years, that they have cut too deep in exactly the wrong place.

  • Capgemini’s CEO Makes the Unfashionable Case for Pacing Your AI Investment

    There is a particular kind of courage in telling a room full of executives to slow down. Aiman Ezzat, CEO of Capgemini, has been doing exactly that – and his reasoning deserves more attention than the typical “move fast or die” narrative that dominates AI strategy discussions.

    “You don’t want to be too ahead of the learning curve,” Ezzat told Fortune in February. “If you are, you’re investing and building capabilities that nobody wants.”

    Coming from the head of a €22.5 billion consultancy that has trained 310,000 employees on generative AI and is actively building labs for quantum computing, 6G, and robotics, this is not a counsel of inaction. It is a strategic position on pacing – one that puts Ezzat at odds with much of the technology industry’s current mood.

    The FOMO problem

    The fear of missing out on AI has become a boardroom affliction. Boston Consulting Group reports that half of CEOs now believe their job is at risk if AI investments fail to deliver returns. That pressure creates a predictable dynamic: spend big, move fast, worry about outcomes later.

    The data suggests the worry-later approach is not working. EY research shows that while 88% of employees report using AI at work, organisations are failing to capture up to 40% of the potential benefits. In the UK, only 21% of workers felt confident using AI as of January 2026. The tools are arriving faster than the capacity to use them well.

    Ezzat’s argument is that this gap is not a technology problem. It is a pacing problem. Companies are deploying AI capabilities ahead of their organisation’s ability to absorb them – and ahead of genuine customer demand for the outcomes those capabilities promise.

    AI is a business, not a technology

    The more substantive part of Ezzat’s case is about framing. Too many leadership teams, he argues, treat AI as “a black box that’s being managed separately” – a technology initiative bolted onto the existing business rather than a force reshaping how the business operates.

    “The question you have to focus on is: ‘How can your business be significantly disrupted by AI?’” Ezzat says. “Not ‘How is your finance team going to become more efficient?’ I’m sure your CFO will deal with that at the end of the day.”

    The distinction matters. Departmental efficiency projects – automating invoice processing, summarising meeting notes, generating marketing copy – are the low-hanging fruit that most enterprises are picking right now. They deliver incremental gains but rarely transform a business model. The harder question, the one Ezzat wants CEOs to sit with, is whether AI fundamentally changes what a company sells, how it competes, or what its customers expect.

    That question takes time to answer well. Rushing it produces expensive experiments that solve the wrong problems.

    The trust deficit

    Perhaps the most underexplored part of Ezzat’s argument is about human trust. “How do you get humans to trust the agent?” he asks. “The agent can trust the human, but the human doesn’t really trust the agent.”

    This cuts to a practical reality that technology roadmaps tend to gloss over. Agentic AI – systems designed to take autonomous actions rather than simply generate content – is the next wave of enterprise deployment. As maddaisy.com noted when covering Capgemini’s role in the OpenAI Frontier Alliance, the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. Trust is a significant part of why.

    Employees who do not trust AI agents will find ways to work around them. Managers who cannot explain AI-driven decisions to clients will revert to manual processes. Organisations that deploy autonomous systems faster than their culture can absorb them will create friction, not efficiency.

    Ezzat draws an analogy to ergonomics – the mid-twentieth century discipline of designing tools for humans rather than forcing humans to adapt to tools. “Bad chairs lead to bad backs,” he observes. “Bad AI is likely to be far more consequential.”

    Consistent with Capgemini’s own playbook

    What makes Ezzat’s position credible is that Capgemini’s recent actions align with it. The company’s approach has been to invest broadly but scale selectively.

    As maddaisy.com’s analysis of the company’s 2025 results highlighted, generative AI bookings rose above 10% of total bookings in Q4 – meaningful but not yet dominant. The company maintains labs for emerging technologies including quantum and 6G, keeping a foot in multiple possible futures without betting the firm on any single one.

    Meanwhile, the company added 82,300 offshore workers in 2025 – largely through the WNS acquisition – while simultaneously earmarking €700 million for workforce restructuring. The message is clear: AI changes the shape of the workforce, but it does not eliminate the need for one. Building the human infrastructure to deliver AI at scale takes as much investment as the technology itself.

    The metaverse lesson

    Ezzat’s most pointed comparison is to the metaverse – a technology that commanded billions in corporate investment before the market concluded that customer demand had been dramatically overstated. Capgemini itself experimented with a metaverse lab. Mark Zuckerberg renamed his company around it. Now, as Ezzat puts it, “like air fryers, its time may now have passed.”

    The parallel is not that AI will follow the metaverse into irrelevance – the use cases are far more concrete, and the enterprise adoption data is already stronger. The point is about the cost of overcommitment. Companies that invested heavily in metaverse capabilities before the market was ready wrote off those investments. The same risk exists with AI, particularly in areas like agentic systems where the technology’s capability is advancing faster than organisational readiness to use it.

    Ezzat’s prescription is agility over ambition: small pilots, constant monitoring, and the willingness to scale rapidly when adoption genuinely accelerates. “We have to be investing – but not too much – to be able to be aware of the technology, following at the speed to make sure that we are ready to scale when the adoption starts to accelerate.”

    What this means for practitioners

    For consultants advising clients on AI strategy, Ezzat’s framework offers a useful counterweight to the prevailing urgency. The question is not whether to invest in AI – that debate is settled. The question is how to pace that investment so that capability, demand, and organisational readiness move roughly in step.

    Companies that get the pacing right will avoid the twin traps of overinvestment (building capabilities nobody wants) and underinvestment (being caught flat-footed when adoption accelerates). In a market where half of CEOs fear for their jobs over AI outcomes, the discipline to move at the right speed – rather than the fastest speed – may prove to be the more valuable skill.

  • Vibe Coding Enters the Enterprise. The Governance Question Follows It In.

    When Andrej Karpathy coined the term “vibe coding” in early 2025, he was describing something informal — a developer giving in to the flow of conversation with an AI assistant, accepting whatever code it generated, and iterating by feel rather than by specification. It was a shorthand for a new way of working that felt more like directing than engineering.

    Fourteen months later, the term has migrated from developer Twitter into enterprise press releases. Pegasystems announced this week that its Blueprint platform now offers an “end-to-end vibe coding experience” for designing mission-critical workflow applications. Salesforce has embedded similar capabilities into Agentforce. Gartner, in a May 2025 report titled Why Vibe Coding Needs to Be Taken Seriously, predicted that 40 per cent of new enterprise production software will be created using vibe coding techniques by 2028. What started as a solo developer’s guilty pleasure is being repackaged as an enterprise strategy.

    The question is whether the repackaging addresses the risks, or merely relabels them.

    From Slang to Sales Pitch

    The appeal of vibe coding in an enterprise context is straightforward. Natural language replaces formal specification. Business users can describe what they want in conversational terms — a workflow, an approval chain, a customer-facing process — and an AI assistant translates that intent into a working application. Development cycles that previously took months collapse into days or hours. Stakeholder alignment happens at the prototype stage rather than after months of requirements gathering.

    Pega’s implementation illustrates the model. Users converse with an AI assistant using text or speech to design applications, refine workflows, define data models, and build interfaces. They can switch between conversational input and traditional drag-and-drop modelling at any point. Completed designs deploy directly into Pega’s platform as live, governed workflows. The company’s chief product officer, Kerim Akgonul, framed it as “the excitement and speed of vibe coding” combined with “enterprise-grade governance, security, and predictability.”

    That framing is telling. Enterprise vendors are not adopting vibe coding wholesale — they are domesticating it. The original concept involved a developer accepting AI-generated code on trust, with minimal review. The enterprise version keeps the conversational interface but routes the output through structured frameworks, predefined best practices, and platform-level guardrails. Whether that still qualifies as vibe coding or is simply a new marketing label for low-code development with an AI front end is an open question.

    The Numbers Behind the Hype

    Gartner’s 40 per cent prediction is eye-catching, but it deserves scrutiny. The firm also projects that 90 per cent of enterprise software engineers will use AI coding assistants by 2028, up from under 14 per cent in early 2024. These are not niche forecasts — they describe a wholesale transformation of how software gets built.

    The market signals support the direction. Y Combinator reported that a quarter of its Winter 2025 startup cohort had codebases that were 95 per cent AI-generated. AI-native SaaS companies are achieving 100 per cent year-on-year growth rates compared with 23 per cent for traditional SaaS. Pega’s own Q4 2025 results showed 17 per cent annual contract value growth and a 33 per cent surge in cloud revenue, with management attributing much of the acceleration to Blueprint adoption.

    But there is a less comfortable set of numbers. A Veracode report from 2025 found that nearly 45 per cent of AI-generated code introduced at least one security vulnerability. Linus Torvalds, creator of Linux, publicly cautioned that vibe coding “may be a horrible idea from a maintenance standpoint” for production systems requiring long-term support. And Gartner’s own research acknowledges that only six per cent of organisations implementing AI become “high performers” achieving significant financial returns.

    The Shadow Already Has a Name

    For regular readers of maddaisy, these risks will sound familiar. When we examined shadow AI in February, the data showed 37 per cent of employees had already used AI tools without organisational permission — including coding assistants plugged into development environments without security review. Vibe coding, in its original ungoverned form, is essentially shadow AI with a better name.

    The enterprise vendors’ pitch — governed vibe coding, with guardrails — is a direct response to this problem. Rather than fighting the tide of developers and business users reaching for AI-assisted tools, platforms like Pega and Salesforce are channelling that energy through controlled environments. It is the same pattern that played out with cloud computing a decade ago: shadow IT became sanctioned cloud adoption once the governance frameworks caught up.

    The difference this time is speed. Cloud adoption played out over years. Vibe coding is moving in months. And as maddaisy’s coverage of agentic AI drift highlighted, AI-generated systems do not fail suddenly — they degrade gradually, in ways that are harder to detect than traditional software failures. An application built through conversational prompts, where the development team may not fully understand the underlying logic, amplifies that risk considerably.

    The Governance Gap Is the Real Story

    The enterprise vibe coding pitch rests on a critical assumption: that platform-level guardrails can substitute for developer-level understanding. In regulated industries — financial services, healthcare, government — this assumption will be tested quickly and publicly.

    The immediate challenge is not whether vibe coding works in a demo. It clearly does. The challenge is what happens six months into production, when the original conversational prompts have been refined dozens of times, the underlying models have been updated, and the people who designed the workflows have moved on. That is the maintenance problem Torvalds flagged, and it maps directly onto the agentic drift pattern: small, individually reasonable changes accumulating into a system whose behaviour no longer matches its original intent.

    Consultants and technology leaders evaluating vibe coding platforms should be asking three questions. First, can you audit the reasoning chain — not just the output, but why the system built what it built? Second, what happens when the AI model underneath is updated — does the application need to be revalidated? Third, who owns the maintenance burden when the person who “vibe coded” the application is no longer available?

    What to Watch

    Enterprise vibe coding is not a fad. The productivity gains are real, the vendor investment is substantial, and the Gartner forecasts — even if directionally approximate — point to a genuine shift in how software gets built. PegaWorld 2026, scheduled for June in Las Vegas, will likely showcase dozens of enterprise vibe coding implementations.

    But the narrative developing around it echoes the early days of every enterprise technology wave: speed first, governance second. The organisations that get this right will be those that treat vibe coding as a development interface, not a development shortcut — using the conversational speed to accelerate design while maintaining the engineering discipline to ensure what gets built can be understood, audited, and maintained over time.

    The vibes are entering the enterprise. The question is whether the rigour follows them in.

  • AI Inference Is the Enterprise Security Risk Most Organisations Are Not Addressing

    Most enterprise AI security conversations still focus on training — how models are built, what data goes in, how to prevent poisoning. But the greater operational exposure sits elsewhere: in inference, the moment a trained model processes a live query and produces an output. That is where proprietary logic, sensitive prompts, and business strategy become visible to anyone watching the traffic.

    A recent panel hosted by The Quantum Insider, featuring leaders from BMO, CGI, and 01Quantum, put the point bluntly: inference is AI working, and AI working is where risk accumulates. Nearly half of the audience polled during the session admitted they lack confidence that their AI systems meet anticipated 2026 security standards. That number is consistent with broader industry data: a Cloud Security Alliance survey found that only 27 per cent of organisations feel confident they can secure AI used in core business operations.

    This is not an abstract concern. It is the practical, operational end of the governance conversation maddaisy has been tracking for weeks.

    Why inference, not training, is the exposure point

    Training happens once (or periodically). Inference happens continuously — every API call, every chatbot interaction, every agentic workflow execution. As Tyson Macaulay of 01Quantum explained during the panel, inference models often contain the distilled intellectual property of an organisation. In expert systems, the model itself reflects proprietary training data, domain knowledge, and internal logic. Reverse engineering an inference endpoint can reveal insights about what the organisation knows and how it thinks.

    But the exposure runs in both directions. Prompts themselves reveal information — about individuals, strategy, and operational priorities. A medical query reveals personal health data. A corporate query may signal product development direction. The question, in other words, can be as sensitive as the model.

    When maddaisy examined CIOs’ non-AI priorities in February, cybersecurity topped the list — precisely because AI adoption was expanding the attack surface. Dmitry Nazarevich, CTO at Innowise, described security spending increases as “directly related to the increase in exposure and risk to data associated with the increased attack surface resulting from the introduction of generative AI.” Inference security is where that expanding surface is most exposed — and most neglected.

    The shadow AI dimension

    The problem is compounded by what organisations cannot see. Research suggests that roughly 70 per cent of organisations have shadow AI in use — employees running unauthorised tools outside IT oversight. Every unsanctioned ChatGPT or Claude query involving company data is an unmonitored inference event, pushing proprietary information through systems the organisation does not control.

    JetStream Security, a startup founded by veterans of CrowdStrike and SentinelOne, raised $34 million in seed funding last week to address precisely this gap. The company’s product, AI Blueprints, maps AI activity in real time — which agents are running, which models they use, what data they access. The premise is straightforward: you cannot secure what you cannot see.

    When maddaisy covered shadow AI in February, the focus was on governance and policy. Inference security adds a harder technical dimension. It is not enough to write policies about acceptable AI use if the organisation has no visibility into what models are being queried, by whom, and with what data.

    Real-world vulnerabilities are already surfacing

    The risks are not hypothetical. In February, LayerX Security published a report describing a critical vulnerability in Anthropic’s Claude Desktop Extensions — a malicious calendar invite could silently execute arbitrary code with full system privileges. The issue stemmed from an architectural choice: extensions ran unsandboxed with direct file system access, enabling tools to chain actions autonomously without user consent.

    The debate that followed was instructive. Anthropic argued the onus was on users to configure permissions properly. Security researchers countered that competitors like OpenAI and Microsoft restricted similar capabilities through sandboxing and permission gates. The real lesson for enterprises is that inference-layer vulnerabilities are architectural, not incidental — and they require controls before deployment, not after.

    As Rock Lambros of RockCyber put it: “Every enterprise deploying agents right now needs to answer — did we restrict tool chaining privileges before activation, or did we hand the intern the master key and go to lunch?”

    The governance gap has a security-shaped hole

    Maddaisy has covered the emerging agentic AI governance playbook extensively — the frameworks from regulators, the principles converging around least-privilege access and real-time monitoring. But frameworks are policy instruments. Inference security is the engineering layer that makes those policies enforceable.

    The numbers illustrate the disconnect. According to the latest governance statistics compiled from major 2025-26 surveys, 75 per cent of organisations report having a dedicated AI governance process — but only 26 per cent have comprehensive AI security policies. Fewer than one in 10 UK enterprises integrate AI risk reviews directly into development pipelines. Governance without security controls is aspiration without implementation.

    The financial services sector offers a partial model. Kristin Milchanowski, Chief AI and Data Officer at BMO, described her bank’s approach during the Quantum Insider panel: bringing large language models in-house where possible, ensuring that additional training on proprietary data remains contained, and treating responsible AI as a board-level cultural priority rather than a compliance exercise. But BMO operates under some of the strictest regulatory regimes globally. Most enterprises do not face equivalent pressure — yet.

    What practitioners should be doing now

    The practical agenda emerging from this convergence of research is specific and actionable:

    Audit inference endpoints. Map every production AI system, including shadow deployments. The JetStream model — real-time visibility into which models are running, what data they touch, and who is responsible — is becoming table stakes.

Apply least-privilege to AI agents. The agentic governance frameworks maddaisy covered last week prescribe this. At the inference layer, it means restricting tool chaining, sandboxing execution environments, and requiring explicit permission gates for cross-system actions (a minimal sketch of such a gate follows this list).

    Build cryptographic agility into procurement. The Quantum Insider panel raised a forward-looking point: “harvest now, decrypt later” attacks — where encrypted inference traffic is collected today for decryption once quantum computing matures — are overtaking model drift as the top digital trust concern among infrastructure leaders. Embedding post-quantum cryptography expectations into vendor contracts now is practical and low-cost.

    Treat inference security as infrastructure. Not as a feature, not as an add-on. As the panel concluded: critical infrastructure must be secured before it is tested by failure.
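    For teams that want to see what an explicit permission gate looks like in practice, below is a minimal Python sketch. Every name in it (Tool, PermissionGate, the approver callback) is hypothetical and not tied to any particular agent framework; the point is the pattern: an explicit allow-list for least privilege, plus a human checkpoint before any cross-system action executes.

```python
# Minimal sketch of a permission gate for agent tool calls.
# All names (Tool, PermissionGate, the approver callback) are hypothetical
# and not part of any specific agent framework.

from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Tool:
    name: str
    scope: str                      # e.g. "read-only" or "cross-system"
    run: Callable[..., Any]


class PermissionGate:
    def __init__(self, allowed_tools: set[str], approver: Callable[[str], bool]):
        self.allowed_tools = allowed_tools   # least privilege: explicit allow-list
        self.approver = approver             # human checkpoint for risky actions

    def call(self, tool: Tool, *args, **kwargs):
        if tool.name not in self.allowed_tools:
            raise PermissionError(f"Tool '{tool.name}' is outside this agent's scope")
        if tool.scope == "cross-system" and not self.approver(tool.name):
            raise PermissionError(f"Cross-system action '{tool.name}' was not approved")
        return tool.run(*args, **kwargs)


# Usage: the agent can read tickets freely, but exporting customer data
# requires an explicit human approval before it executes.
read_tickets = Tool("read_tickets", "read-only", lambda: ["T-1042", "T-1043"])
export_data = Tool("export_customer_data", "cross-system", lambda: "export started")

gate = PermissionGate(
    allowed_tools={"read_tickets", "export_customer_data"},
    approver=lambda name: input(f"Approve {name}? [y/N] ").strip().lower() == "y",
)

print(gate.call(read_tickets))   # allowed without approval
# gate.call(export_data)         # would block on an explicit human decision
```

    None of this replaces sandboxing or network controls, but it is the kind of enforcement layer that turns a least-privilege policy from a document into behaviour.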

    The operational layer matters most

    The governance conversation has matured rapidly. Frameworks exist. Principles are converging. Regulation is arriving. But between the policy layer and the production environment sits inference — the operational layer where AI actually works, where data flows through models, where prompts reveal strategy, and where the absence of controls creates the exposure that governance documents are supposed to prevent.

    Gartner projects spending on AI governance platforms will reach $492 million this year and surpass $1 billion by 2030. That money will be wasted if it funds policies without the engineering to enforce them. The organisations pulling ahead will be those that treat inference security not as a technical detail for the security team, but as the operational foundation on which their entire AI strategy depends.

  • The Agentic AI Governance Playbook Is Taking Shape. Most Enterprises Are Not Ready for It.

    The governance playbook for agentic AI is starting to take shape — and it looks nothing like the frameworks most enterprises currently rely on.

    Over the past month, a cluster of regulatory bodies, law firms, and industry coalitions have published guidance specifically addressing agentic AI systems. The UK Information Commissioner’s Office released its first Tech Futures report on agentic AI in January. Mayer Brown published a comprehensive governance framework in February. The Partnership on AI identified agentic governance as the top priority among six for 2026. And FINRA’s 2026 oversight report now includes a dedicated section on AI agents as an emerging threat to financial markets.

    Taken individually, none of these publications is remarkable. Taken together, they represent something maddaisy has been tracking since mid-February: the governance conversation is finally catching up to the deployment reality.

    The problem these frameworks are trying to solve

    When maddaisy examined the agentic AI governance gap in February, the numbers were stark: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. Subsequent analysis of agentic drift — where systems degrade gradually without triggering alarms — and shadow AI adoption revealed that the risks are not hypothetical. They are already materialising in production environments.

    The core issue, as these new frameworks make explicit, is that agentic AI does not fit within existing oversight models. Traditional AI governance assumes a human reviews the output before acting on it. Agentic systems are the actor. They plan, execute, and adapt autonomously — booking appointments, approving procurement, triaging complaints, managing collections. Governance cannot happen after the fact. It must be embedded in real time.

    What the emerging frameworks have in common

    Despite coming from different jurisdictions and institutions, the recent guidance converges on several principles that practitioners should note.

    Least privilege by default. Mayer Brown’s framework emphasises restricting what an agent can access — not just what it can do. Agents should not have standing access to sensitive databases, trade secrets, or systems beyond their immediate task scope. This mirrors the zero-trust approach that cybersecurity teams have adopted over the past decade, now applied to autonomous software.

    Human checkpoints at decision boundaries, not everywhere. The emerging consensus rejects both extremes: fully autonomous operation with no oversight, and human approval for every action (which would negate the point of agentic AI). Instead, the frameworks advocate for defined boundaries — moments where human approval is required before the agent proceeds. These include irreversible actions, decisions in regulated domains such as healthcare or financial services, and any step that falls outside the agent’s defined scope.

    Real-time monitoring, not periodic audits. The ICO report and Mayer Brown both stress continuous behavioural monitoring after deployment. This addresses the drift problem directly: an agent that passed all review gates at launch may behave differently three months later after prompt adjustments, model updates, and tool changes. Logging the full chain of reasoning and actions — not just inputs and outputs — is becoming a baseline expectation.

    Transparency to users. Multiple frameworks now explicitly require that organisations disclose when a customer is interacting with an AI agent rather than a human. The ICO notes that agentic AI “magnifies existing risks from generative AI” because these systems can rapidly automate complex tasks and generate new personal information at scale. Users need to know what they are dealing with — and they need a route to a human at any point.

    Value-chain accountability. The Partnership on AI flags a gap that most enterprise governance programmes have not addressed: who is responsible when something goes wrong in a multi-agent system? If an agent calls another agent, which calls a third-party tool, which accesses a database — and the outcome is harmful — the liability chain is unclear. Their recommendation: establish an agreed taxonomy for the AI value chain before deployment, not after an incident.

    Where the frameworks fall short

    For all the convergence, there are notable gaps. None of the published guidance adequately addresses the measurement problem. Adobe’s 2026 AI and Digital Trends Report found that only 31% of organisations have implemented a measurement framework for agentic AI. Without clear metrics for what “good governance” looks like in practice, the frameworks risk becoming compliance theatre — policies that exist on paper but do not change how agents actually operate.

    There is also limited guidance on cross-border deployment. The Partnership on AI calls for international coordination, but the practical reality — as maddaisy’s analysis of America’s state-by-state regulatory fragmentation highlighted — is that even within a single country, compliance requirements vary dramatically. An agent deployed in New York now faces transparency requirements under the RAISE Act that do not apply in Texas, where the Responsible AI Governance Act takes a different approach. For multinational enterprises, the compliance surface is formidable.

    What this means for practitioners

    The practical takeaway is straightforward, if demanding. Organisations deploying or planning to deploy agentic AI should be doing four things now.

    First, audit existing governance frameworks against the agentic-specific requirements these publications outline. Most enterprises have AI policies designed for advisory systems. Those policies almost certainly do not cover autonomous execution, multi-agent coordination, or real-time behavioural monitoring.

    Second, define decision boundaries before deployment. Which actions require human approval? What constitutes an irreversible decision in your context? Where are the regulatory tripwires? These questions are easier to answer before an agent is in production than after it has been running for six months.

Third, invest in observability infrastructure. As maddaisy noted in the agentic drift analysis, the systems that fail most dangerously are the ones that appear to be working. Full execution logging, behavioural baselines, and anomaly detection are not optional extras — they are the minimum viable governance stack for agentic systems (a minimal sketch of this layer follows this list).

    Fourth, assign clear ownership. Mayer Brown’s framework identifies four distinct governance roles: decision-makers who set policy, product teams who implement it, cybersecurity teams who integrate agents into security procedures, and frontline employees who can identify and escalate issues. Most organisations have not mapped these responsibilities for their agentic deployments.
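    As a rough illustration of the observability point above, here is a minimal Python sketch of full execution logging plus a crude behavioural baseline. The structures, field names, and thresholds are assumptions for illustration, not a production monitoring stack.

```python
# Minimal observability sketch: full execution logging plus a crude behavioural
# baseline. Structures and thresholds are illustrative assumptions only.

import json
import statistics
import time
from typing import Any


class ExecutionLog:
    """Records the full chain of an agent run: inputs, reasoning, and actions."""

    def __init__(self):
        self.steps: list[dict[str, Any]] = []

    def record(self, step_type: str, detail: str) -> None:
        self.steps.append({"ts": time.time(), "type": step_type, "detail": detail})

    def dump(self) -> str:
        return json.dumps(self.steps, indent=2)


class BehaviourBaseline:
    """Flags runs whose action count drifts far from the historical mean."""

    def __init__(self, history: list[int], tolerance: float = 3.0):
        self.mean = statistics.mean(history)
        self.stdev = statistics.pstdev(history) or 1.0
        self.tolerance = tolerance

    def is_anomalous(self, action_count: int) -> bool:
        return abs(action_count - self.mean) > self.tolerance * self.stdev


# Usage: an agent run that suddenly takes 40 actions against a baseline of ~6
# gets flagged for human review rather than drifting silently.
log = ExecutionLog()
log.record("input", "customer refund request #4821")
log.record("reasoning", "policy allows refunds under 30 days; order is 12 days old")
log.record("action", "issued refund via payments API")

baseline = BehaviourBaseline(history=[5, 6, 7, 6, 5, 6])
print(baseline.is_anomalous(action_count=40))  # True: escalate for review
print(log.dump())
```

    Real tooling will be far more sophisticated, but the principle in the frameworks is exactly this: log the whole chain, establish what normal looks like, and flag deviation before it compounds.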

    The governance race is just starting

    The gap between agentic AI deployment and agentic AI governance remains wide. But the direction of travel is now clear: regulators, industry bodies, and legal advisors are converging on a set of principles that will become the baseline expectation. Organisations that build these capabilities now — real-time monitoring, defined decision boundaries, value-chain accountability, and clear ownership — will be better positioned than those scrambling to retrofit governance after their first incident.

    The frameworks are not perfect. They will evolve. But the era of governing agentic AI with policies designed for chatbots is ending. For consultants and technology leaders advising enterprises on AI deployment, that shift should be shaping every engagement.

  • The Pentagon-Anthropic Standoff Exposes a New Category of AI Vendor Risk

    When maddaisy examined America’s fragmented AI regulation landscape last week, the focus was on states pulling in different directions while the federal government tried to impose order from above. That piece ended with the observation that organisations face a compliance labyrinth with no clear exit. Five days later, the labyrinth got a new wing — and this one has armed guards.

    On 27 February, President Trump ordered all federal agencies to immediately cease using Anthropic’s AI systems. Hours later, Defence Secretary Pete Hegseth moved to designate the company a “supply-chain risk to national security” — a label previously reserved for foreign adversaries. The same evening, OpenAI CEO Sam Altman announced that his company had struck a deal to deploy its models on the Pentagon’s classified networks.

    The sequence of events was not subtle. An American AI company refused to remove ethical guardrails from its military contract. The government threatened to destroy it. A rival stepped in to take the work. For anyone who manages AI vendor relationships — and that now includes most enterprise technology leaders — the implications are significant and immediate.

    What actually happened

The conflict had been building for months. Anthropic holds a contract worth up to $200 million with the Pentagon and, through its partnership with Palantir, Claude was one of only two frontier AI models cleared for use on classified defence networks. The arrangement worked — until the Pentagon insisted on access to Claude for “all lawful purposes,” without the ethical restrictions Anthropic had built into its acceptable use policy.

    Anthropic’s red lines were specific: no mass domestic surveillance and no fully autonomous weapons without human oversight. CEO Dario Amodei argued that current AI systems “are simply not reliable enough to power fully autonomous weapons” and that “using these systems for mass domestic surveillance is incompatible with democratic values.”

    The Pentagon disagreed, and the situation escalated rapidly. Defence Secretary Hegseth gave Anthropic an ultimatum: agree to the government’s terms by 5:01 p.m. on 27 February or face designation as a supply-chain risk and potential invocation of the Cold War-era Defence Production Act. Anthropic refused. Trump’s ban and Hegseth’s designation followed within hours.

    What makes this more than a contract dispute is the weapon the government chose. As former Trump AI policy adviser Dean Ball wrote, the supply-chain risk designation would cut Anthropic off from hardware and hosting partners — “effectively destroying the company.” Ball, hardly an opponent of the administration, called it “attempted corporate murder.”

    OpenAI steps in — with familiar language and different terms

    OpenAI’s Pentagon deal arrived with careful framing. Altman claimed the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human responsibility for the use of force. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote.

    The critical difference is what “agreement” means in practice. Anthropic sought contractual guarantees — binding restrictions written into the terms of service. OpenAI’s approach permits “all lawful uses” and relies on the Pentagon’s existing policies and legal frameworks rather than company-imposed limitations. As TechCrunch noted, it remains unclear how — or whether — the safety measures in OpenAI’s deal differ substantively from the terms Anthropic rejected.

    When maddaisy covered OpenAI’s Frontier Alliance with McKinsey, BCG, Accenture, and Capgemini last week, the story was about a vendor that needed consulting partners to scale its enterprise business. The Pentagon deal adds a wholly different dimension to OpenAI’s ambitions — and a wholly different category of risk.

    The institutional gap

    The deeper issue is structural. For decades, Pentagon technology contracts were dominated by slow-moving, heavily regulated defence contractors — Raytheon, Lockheed Martin, Northrop Grumman. These companies built institutional muscle for navigating political transitions, managing classified programmes, and absorbing the long-term volatility of government work. They were not exciting. They were durable.

    AI startups are neither slow-moving nor heavily regulated. They operate on venture capital timelines, consumer brand logic, and talent markets where a single ethical controversy can trigger an employee exodus. OpenAI has already seen 11 of its own employees sign an open letter protesting the government’s treatment of Anthropic — even as their employer benefits from it.

    This is the mismatch that matters for practitioners. The AI companies building the tools that enterprises depend on are now also becoming national security infrastructure — but they have none of the institutional frameworks that role demands. They lack the political risk management, the bipartisan relationship-building, and the organisational resilience to weather what comes next.

    The vendor risk no one modelled

    For organisations that have built their AI strategies around Anthropic or OpenAI, the past week introduced a category of risk that does not appear in most vendor assessment frameworks: political risk.

Anthropic’s designation as a supply-chain risk, if upheld, would prevent any military contractor or supplier from doing business with the company. Given how deeply Anthropic’s Claude is integrated with Palantir’s systems — which are themselves critical Pentagon infrastructure — the practical implications cascade well beyond the original dispute. The CIO analysis of the situation compared it to the FBI-Apple standoff over iPhone encryption in 2016, but noted that the current administration “seems less willing to be patient.”

    For enterprises in the defence supply chain, the risk is direct: continued use of Anthropic’s technology may become a contractual liability. For everyone else, the risk is precedential. If the federal government can threaten to destroy an American company for negotiating contract terms, the calculus changes for every AI vendor evaluating government work — and for every enterprise evaluating those vendors.

    What consultants and technology leaders should watch

    Three things matter in the weeks ahead.

    First, the legal challenge. Anthropic has stated it will contest the supply-chain designation in court. Legal analysts suggest the designation is unlikely to survive judicial scrutiny — it was designed for foreign adversaries, not domestic companies in active contract negotiations. But legal proceedings take time, and the commercial damage from even a temporary designation could be severe.

    Second, the talent signal. AI companies compete fiercely for a small pool of researchers and engineers. The Pentagon standoff has made the ethical positioning of AI labs a hiring issue, not just a branding one. OpenAI’s internal tension — benefiting commercially from the deal while employees publicly protest the government’s tactics — is a dynamic that affects product roadmaps, not just press coverage.

    Third, the precedent for vendor governance. As the Council on Foreign Relations observed, the Anthropic standoff raises a fundamental question about AI sovereignty: can a private firm constrain the government’s use of a decisive military technology, and should it? Whichever way that question resolves, it will reshape the terms on which AI vendors provide their products — to governments and to enterprises alike.

    The irony is hard to miss. Just days after maddaisy reported on America’s fragmented state-level AI regulation, the federal government demonstrated that its own approach to AI governance is no less chaotic — merely higher-stakes. For organisations navigating vendor relationships in this environment, the lesson is uncomfortable: the AI companies they depend on are now players in a political contest they did not design and cannot control.

  • Insurtech’s AI-Fuelled Five Billion Dollar Comeback — And the Question the Industry Has Not Answered

    Global insurtech funding reached $5.08 billion in 2025, up 19.5% from $4.25 billion the year before. It is the first annual increase since 2021 — and, according to Gallagher Re’s latest quarterly report, it marks a fundamentally different kind of recovery from the one the sector last enjoyed.

    The 2021 boom was driven by venture capital chasing consumer-facing disruptors. The 2025 comeback is driven by insurers and reinsurers themselves investing in operational AI. That distinction matters far more than the headline number.

    The money is coming from inside the house

    In 2025, insurers and reinsurers made 162 private technology investments into insurtechs — more than in any prior year on record. This is not outside capital speculating on disruption. It is the industry itself funding its own modernisation, a shift Gallagher Re describes as a “changing of the guard” in the insurtech investor community.

    The fourth quarter was particularly striking. Funding hit $1.68 billion — a 66.8% increase over Q3 and the strongest quarterly figure since mid-2022. More than 100 insurtechs raised capital for the first time since early 2024, and mega-rounds (deals exceeding $100 million) returned in force, with 11 such rounds totalling $1.43 billion for the full year, up from six in 2024.

Property and casualty insurtech funding rebounded 34.9% to $3.49 billion, driven by companies like CyberCube, ICEYE, Creditas, Federato, and Nirvana, which collectively secured $663 million in Q4 alone. Life and health insurtech, by contrast, declined slightly — a 4.6% dip that underlines where the industry sees its most pressing operational gaps: in property and casualty, not in life and health.

    Two-thirds of the money follows AI

    The most telling statistic in the report is this: two-thirds of all insurtech funding in 2025 — $3.35 billion across 227 deals — went to AI-focused firms. By Q4, that share had climbed to 78%.

    Andrew Johnston, Gallagher Re’s global head of insurtech, frames this as convergence rather than a trend: “Over time, we see AI becoming so integrated into insurtech that the two may well become synonymous — in much the same way as we could already argue that ‘insurtech’ is itself a meaningless label, because all insurers are technology businesses now.”

    That trajectory is visible in the deals themselves. mea, an AI-native insurtech, raised $50 million from growth equity firm SEP in February — its first external capital after years of profitable organic growth. The company’s platform, already processing more than $400 billion in gross written premium across 21 countries, automates end-to-end operations for carriers, brokers, and managing general agents. mea claims its AI can cut operating costs by up to 60%, targeting the roughly $2 trillion in annual industry operating expenses where manual workflows persist.

    At the seed stage, General Magic raised $7.2 million for AI agents that automate administrative tasks for insurance teams — reducing quote generation time from approximately 30 minutes to under three in early deployments with major insurers.

    Profitability, not just growth

    What separates the 2025 wave from the 2021 boom is that several insurtechs are now proving they can make money, not just raise it.

    Kin Insurance, which focuses on high-catastrophe-risk regions, reported $201.6 million in revenue for 2025 — a 29% increase — with a 49% operating margin and a 20.7% adjusted loss ratio. Hippo, another property-focused insurtech, reversed its 2024 net loss with $58 million in net income, driven by improved underwriting and a deliberate shift away from homeowners insurance toward more profitable lines.

    These are not unicorn-valuation stories. They are companies demonstrating operational discipline — the kind of results that explain why insurers and reinsurers, rather than venture capitalists, are now leading the investment.

    The B2B shift

    Gallagher Re’s data reveals another structural change worth watching. Nearly 60% of property and casualty deals in 2025 went to business-to-business insurtechs — a 12 percentage point increase from 2021’s funding boom. Meanwhile, the deal share for lead generators, brokers, and managing general agents fell to 35%, the lowest on record.

    The implication is clear: capital is flowing toward technology that improves how existing insurers operate, not toward new entrants trying to replace them. The disruptor narrative of the early 2020s has given way to something more pragmatic — and, arguably, more durable.

    This parallels a pattern visible across financial services. As maddaisy noted when examining Lloyds Banking Group’s AI programme, established institutions are increasingly treating AI not as an innovation experiment but as core operational infrastructure — and measuring it accordingly.

    The question the industry has not answered

    For all the funding momentum, Johnston raises a challenge that the sector has yet to confront seriously: the “so what” problem.

    “As the implementation of AI starts to deliver efficiency gains, it is imperative that the industry works out how to best use all of this newly freed up time and resource,” he writes.

    This is not a hypothetical. If mea can genuinely reduce operating costs by 60% for a carrier, that frees up a substantial portion of the 14 percentage points of combined ratio currently consumed by operations. The question is whether that freed capacity translates into better underwriting, deeper risk analysis, and improved customer outcomes — or whether it simply gets absorbed into margin without changing how insurance fundamentally works.
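
    A rough back-of-envelope sketch of that arithmetic, assuming a carrier whose operations consume the 14 combined-ratio points cited above and taking mea's 60% claim at face value (both figures illustrative, not drawn from any specific carrier's accounts):

    ```python
    # Back-of-envelope illustration with assumed figures, not company data:
    # how many combined-ratio points could a 60% cut in operating costs free up?

    expense_points = 14.0          # points of combined ratio consumed by operations (figure cited above)
    claimed_cost_reduction = 0.60  # mea's headline claim, taken at face value

    freed_points = expense_points * claimed_cost_reduction
    print(f"Combined-ratio points freed: {freed_points:.1f}")  # ~8.4

    # For a hypothetical carrier running at a 100% combined ratio, that saving
    # is the difference between breaking even and a meaningful underwriting
    # profit -- provided it is reinvested rather than simply absorbed into margin.
    starting_combined_ratio = 100.0
    print(f"Combined ratio after savings: {starting_combined_ratio - freed_points:.1f}%")
    ```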

    The broker market is already feeling the tension. In February, insurance broker stocks dropped roughly 9% after OpenAI approved the first AI-powered insurance apps on ChatGPT, enabling consumers to receive quotes and purchase policies within the conversation. Most analysts called the selloff overdone — commercial broking remains complex enough to resist near-term disintermediation — but the episode illustrated how quickly market sentiment can shift when AI moves from back-office tooling to customer-facing distribution.

    What to watch

    The $5 billion figure is a milestone, but the real signal is in its composition. Insurtech funding is no longer a venture capital bet on disruption. It is the insurance industry’s own investment in operational AI — led by incumbents, focused on B2B infrastructure, and increasingly backed by profitability rather than just promise.

    Whether that investment translates into genuinely better insurance — not just cheaper operations — depends on how the industry answers Johnston’s question. The money is flowing. The efficiency gains are materialising. What the sector does with them will determine whether this comeback is a lasting structural shift or just the next chapter of doing the same things with fewer people.

  • Accenture Will Track AI Logins for Promotions. The Risk Is Measuring Compliance, Not Competence.

    Accenture has begun tracking how often senior employees log into its AI tools — and will factor that usage into promotion decisions. An internal email, reported by the Financial Times, put it plainly: “Use of our key tools will be a visible input to talent discussions.” For a firm that sells AI transformation to the world’s largest organisations, the message to its own workforce is unmistakable: adopt or stall.

    The policy is not without logic. Accenture has invested heavily in AI readiness — 550,000 employees trained in generative AI, up from just 30 in 2022, backed by $1 billion in annual learning and development spending. But training people and getting them to change how they work are two very different problems. The promotion-linked tracking is an attempt to close that gap by force.

    The credibility problem

    This move arrives at a moment when Accenture’s external positioning makes internal adoption a matter of commercial credibility. As maddaisy.com reported last week, Accenture is one of four firms named in OpenAI’s Frontier Alliance — tasked with building the data architecture, cloud infrastructure, and systems integration work needed to deploy AI agents at enterprise scale. It is difficult to sell that capability convincingly if your own senior managers are not using the tools.

    The policy applies specifically to senior managers and associate directors, with leadership roles now requiring what Accenture calls “regular adoption” of AI. The firm is tracking weekly logins to its AI platforms for certain senior staff, though employees in 12 European countries and those on US federal contracts are excluded — a pragmatic nod to varying data protection regimes and security requirements.

    The resistance is the interesting part

    What makes this story more than a policy announcement is the reaction. Some senior employees have questioned the value of the tools outright, with one describing them as “broken slop generators.” Another told the Financial Times they would “quit immediately” if the tracking applied to them.

    That resistance is worth taking seriously, not dismissing. It maps directly onto a pattern maddaisy.com has been tracking. Research published earlier this month found that only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly” — a figure that drops to 12% among non-senior staff. When people do not understand why they are being asked to use a tool, mandating its use tends to produce compliance rather than competence.

    Harvard Business Review research, cited in that same analysis, identified three psychological needs that determine whether employees embrace or resist AI: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues). A policy that monitors logins and ties them to career progression addresses none of these. It measures activity. It says nothing about whether that activity is useful.

    Logins are not outcomes

    This is the core tension. Accenture’s leadership knows that senior adoption is a bottleneck — industry observers note that older managers are often “less comfortable with technology and more wedded to established working methods.” CEO Julie Sweet has framed AI adoption as existential, telling analysts that the company is “exiting employees” in areas where reskilling is not possible. The 11,000 layoffs announced in September reinforced the point.

    But tracking logins conflates presence with productivity. A senior manager who logs in weekly to check a dashboard is counted the same as one who has genuinely integrated AI into client delivery. The metric captures the floor, not the ceiling.

    This echoes a broader concern maddaisy.com has documented. UC Berkeley researchers found that employees using AI tools worked faster but not necessarily better — absorbing more tasks, blurring work-life boundaries, and entering a cycle of acceleration that resembled productivity but often was not. If Accenture’s policy drives more tool usage without more thoughtful tool usage, it risks producing exactly this outcome at scale.

    What this tells the rest of the industry

    Accenture is not alone in struggling with senior AI adoption. The challenge is structural across professional services. Deloitte’s 2026 CSO Survey found that while 95% of chief strategy officers expect AI to reshape their priorities, only 28% co-lead their organisation’s AI decisions. The people with the authority to mandate change are often the furthest from understanding it.

    Accenture’s approach is at least direct. Rather than hoping adoption trickles up from junior staff — who typically adopt new tools faster — it is applying pressure from the top. And the numbers suggest some urgency: with 750,000 employees and $70 billion in revenue, Accenture has grown enormously from its 275,000-person, $29 billion base in 2013. Maintaining that trajectory while its competitors embed AI into delivery models requires its own workforce to be fluent, not just trained.

    The risk, though, is that the policy optimises for the wrong signal. Organisations that have navigated AI adoption most effectively — and maddaisy.com has covered several — tend to share a common trait: they measure what AI enables people to do differently, not how often people open the application. Accenture’s policy would be considerably more compelling if it tracked client outcomes improved through AI-assisted work, or time freed for higher-value tasks, rather than weekly platform logins.

    The precedent matters more than the policy

    Whatever one makes of the specifics, Accenture has done something that most large organisations have avoided: it has made AI adoption an explicit, measurable condition of career advancement for senior leaders. That is a significant signal. It tells clients that Accenture is serious about practising what it sells. It tells employees that AI fluency is no longer optional at the leadership level.

    Whether it works depends on what happens next. If the tracking evolves toward measuring genuine integration — how AI changes the quality of work, not just the frequency of logins — Accenture could set a useful template for the industry. If it remains a blunt instrument that rewards compliance over competence, it will likely produce exactly the kind of performative adoption that gives AI transformation programmes a bad name.

    For consultants and enterprise leaders watching from outside, the lesson is practical: mandating AI adoption is easy; mandating it well is the hard part. The metric you choose to track will shape the behaviour you get.

  • The $5 Trillion Business Transfer That Has Nothing to Do with AI

    Maddaisy has spent the past fortnight examining McKinsey through the lens of AI — its 25,000 AI agents, its shift to outcome-based pricing, its OpenAI Frontier Alliance. But the firm’s most consequential publication this month may have nothing to do with artificial intelligence at all.

    A new report from McKinsey’s Institute for Economic Mobility identifies what it calls the “Great Ownership Transfer” — roughly six million small and medium-sized American businesses facing ownership transitions by 2035, representing up to $5 trillion in enterprise value. The cause is straightforward demography: baby boomer business owners are retiring, and the systems for handing their companies to the next generation of owners barely exist.

    A 92% closure rate

    The headline statistic is stark. Today, 92% of small-business market exits in the United States end with the business simply shutting its doors. Just 5% are completed as sales, and another 3% are transferred to new owners through other mechanisms.

    This is not a failure of entrepreneurial ambition. Small businesses account for 99% of all US companies and employ nearly half the national workforce. The problem, as the report’s authors put it, is structural: “Buying and selling a small business is often harder than starting one because the systems that support entrepreneurship in the United States are currently built for founding companies, not transferring them.”

    Viable firms are closing not because they lack customers or revenue, but because the pathways to succession are limited, opaque, and prohibitively expensive for the buyers who would keep them running.

    The missing middle

    The risk concentrates in what McKinsey terms the “missing middle.” Nearly 80% of projected exits involve micro and emerging middle-market businesses valued at less than $2 million. These firms sit in an awkward no-man’s-land: too small to attract private equity or institutional acquirers, but too large and complex for a typical first-time buyer to finance without significant support.

    The consequences are not evenly distributed. Rural communities, where these smaller firms often serve as anchor employers and the primary local tax base, face disproportionate exposure. Labour-intensive industries essential to daily life — retail, construction, food services — account for roughly one-third of the businesses caught in this gap. When a town’s only plumbing contractor or hardware shop closes because there is no viable buyer, the impact extends well beyond the balance sheet.

    A financing system built for the wrong problem

    The report is pointed about where the infrastructure fails. The SBA 7(a) loan — the primary federal instrument for small-business acquisition financing — requires high equity contributions and full personal guarantees. For first-time buyers, particularly those without existing assets or family wealth to pledge, the barrier is effectively insurmountable.

    The broader ecosystem compounds the problem. Unlike residential property, where standardised appraisals, mortgage products, and regulatory frameworks have created a liquid market, small-business transfers remain bespoke transactions. Each deal requires its own valuation, its own legal structure, its own financing arrangement. There is no MLS for Main Street businesses, no standardised underwriting for a family-owned electrical contractor with $1.5 million in revenue.

    McKinsey’s prescription is to treat small-business acquisition as a scalable market rather than a series of one-off transactions. That means modernised underwriting standards, bundled advisory services, and coordinated market infrastructure — the kind of systemic plumbing that already exists for property and public equities but has never been built for SMB ownership transfer.

    The equity dimension

    The demographic profile of current small-business owners — overwhelmingly older, white, and male — means this transfer also carries significant implications for wealth distribution. Under current patterns, women, Black, and Latino individuals combined would capture only about 28% of the transferring $5 trillion in value.

    McKinsey’s modelling suggests the gap is not inevitable. If ownership participation reached demographic parity, Black individuals could see their wealth capture increase more than fourfold to approximately $369 billion. Parity for women could unlock roughly $700 billion. These are not aspirational targets plucked from a diversity statement — they are the arithmetic consequence of removing structural barriers to acquisition financing and deal flow.

    The growth of employee stock ownership plans (ESOPs) and cooperative conversions offers one partial pathway. ESOP participation has grown by more than one million participants over the past decade, creating models for equitable ownership transfer that do not depend solely on traditional private capital. New platforms — BizBuySell, Acquire.com, Baton — are beginning to chip away at the opacity of the SMB transaction market, though none yet operates at the scale the problem demands.

    What this means for consulting

    For the consulting industry, the report represents both a diagnosis and an opportunity. Advisory firms have spent the past two years racing to build AI practices and secure platform alliances. The $5 trillion ownership transfer is a reminder that some of the largest structural challenges in the economy are not technology problems at all — they are market design problems, financing problems, and coordination failures.

    Succession advisory, M&A support for sub-$2 million transactions, ESOP conversion guidance, and community economic development are not glamorous practice areas. They do not generate the headlines that AI partnerships attract. But if McKinsey’s numbers are even roughly correct, the demand for this kind of advisory work is about to grow substantially — and the firms that build credible practices now will find themselves serving a market that barely existed a decade ago.

    The question, as the report frames it, is whether the coming decade becomes “a story of tragic business loss” or “the inflection point when business ownership became a broader pathway to mobility.” The answer will depend less on any single technology and more on whether the market infrastructure catches up to the demographic reality already underway.