Tag: ai-strategy

  • CTOs and CHROs Are Climbing the Pay Table. That Tells You Where Boards Think Risk Lives Now.

    For decades, the path to being among a public company’s five highest-paid executives ran through operations, sales, or running a business unit. Technology leaders and HR chiefs were important, certainly — but theirs were support functions, not the positions that commanded top-tier compensation.

    That hierarchy is shifting. New research from The Conference Board, published this week, tracks the named executive officers (NEOs) — the five highest-paid executives — across the Russell 3000 from 2021 to 2025. The findings are striking: Chief Technology Officers appearing as NEOs increased by 61%, while Chief Human Resources Officers rose by 55%. In the same period, business unit leaders — historically the dominant non-CEO, non-CFO category — declined by 15%.

    The numbers are not subtle. They represent a structural revaluation of which functions boards consider most critical to company performance and risk.

    From support functions to enterprise risk owners

    The explanation is not difficult to locate. Two forces are converging: the AI boom is making technology capability an existential strategic question, and a persistent talent war is making the ability to attract, retain, and reskill workers a board-level concern rather than an HR department problem.

    “Growth in CHRO and CTO roles signals that talent, culture, and digital capability are now viewed as enterprise risks, not support functions,” said Andrew Jones, Principal Researcher at The Conference Board. “Boards are prioritising leaders who shape resilience and transformation across the organisation.”

    This tracks with what maddaisy has been reporting for months. The consulting pyramid piece in early March documented how firms are reshuffling roles rather than eliminating them — cutting some positions while creating others in AI engineering and data science. Accenture’s decision to track AI tool usage for promotions was another signal: when a firm ties career progression to technology adoption, the executive overseeing that technology becomes strategically indispensable.

    The specific numbers tell the story

    CTO NEO disclosures rose from 155 to 249 in the Russell 3000 between 2021 and 2025. CHRO disclosures went from 148 to 230. These are not marginal shifts — they represent boards deciding, through the bluntest mechanism available (compensation), that these roles belong at the top table.

    Meanwhile, business unit leaders fell from 1,734 to 1,475 disclosures. The decline suggests that boards are placing less emphasis on divisional performance and more on enterprise-wide capabilities: technology infrastructure, talent strategy, and the legal and regulatory architecture that governs both.

    That last point matters. Legal roles — including chief legal officers, corporate secretaries, and general counsels — saw the largest increase of any non-mandatory NEO category, rising 21% over the same period. As AI governance, data regulation, and compliance pressures mount, the lawyers are moving closer to the centre of power too.

    What this means for the talent market

    The compensation data carries implications well beyond boardroom politics. When CTOs and CHROs move into the highest-paid tier, it reshapes the talent pipeline for those roles. More ambitious executives will target those paths. Boards will demand different skill sets — not just technical competence for CTOs, but strategic vision for how AI and digital infrastructure create competitive advantage. Not just process management for CHROs, but the ability to navigate workforce transformation at scale.

    The Conference Board’s data also reveals an interesting gender dimension. Among S&P 500 NEOs, women CEOs earned 11% more than men in 2025. But male NEOs overall still earned 8% more in the S&P 500 and 12% more in the Russell 3000. The gap, as researcher Paul Hodgson noted, “largely reflects who holds which roles” — men remain more prevalent in higher-paid operational and commercial positions, with longer average tenure and concentration at larger firms.

    The COO plateau and what it signals

    One quieter finding deserves attention: COO representation rose just 6% over the period and has actually declined from a 2023 peak. The chief operating officer — once the natural second-in-command — appears to be losing ground to more specialised enterprise-wide roles. In an era where the critical operational questions are “how do we deploy AI safely” and “how do we retain the people who know how to do it,” a generalist operations mandate may no longer be enough to justify top-tier compensation.

    The board’s revealed preferences

    Compensation data has always been the most reliable indicator of what organisations actually value, as opposed to what they claim to value. Press releases can announce “people-first cultures” and “digital-first strategies” without consequence. Paying the CHRO and CTO as much as the head of your largest business unit is a commitment that shows up in proxy statements.

    The Conference Board’s findings confirm a pattern that has been building for several years: boards are redefining which risks are existential and which executives own them. The AI boom has made technology leadership a strategic imperative. The talent war — intensified by the very AI transformation companies are pursuing — has elevated workforce strategy from an administrative function to an enterprise risk.

    For consultants and practitioners, the practical implication is clear. The organisations they advise are restructuring their leadership hierarchies around technology and talent. Advisory work that once centred on operational efficiency and market strategy increasingly requires fluency in AI deployment, workforce transformation, and the regulatory landscape that governs both. The C-suite is telling you where the priorities are. The pay data just makes it impossible to ignore.

  • The Three-Tool Threshold: BCG Research Reveals Where AI Productivity Gains Turn Into Cognitive Overload

    For months, the evidence that AI tools are intensifying work rather than simplifying it has been accumulating. maddaisy has tracked this story from the UC Berkeley research showing employees absorbing more tasks under AI, through the organisational failures that leave workers unsupported, to the implementation problems that bake burnout into the system from day one. What was missing was a specific threshold — a number that tells enterprises where the gains end and the damage begins.

    Boston Consulting Group has now supplied one. In a study published in Harvard Business Review this month, researchers surveyed 1,488 full-time US workers and found a clean break point: employees using three or fewer AI tools reported genuine productivity gains. Those using four or more reported the opposite — declining productivity, increased mental fatigue, and higher error rates. BCG calls the phenomenon “AI brain fry.”

    The finding is not just academic. Among workers reporting brain fry, 34% expressed active intention to leave their employer, compared with 25% of those who did not. For a workforce already under pressure from rapid technology deployment, that nine-percentage-point gap represents a tangible retention risk.

    The cognitive cost no one budgeted for

    The BCG research puts numbers to something the UC Berkeley study identified in qualitative terms earlier this year. When AI tools require high levels of oversight — reading, interpreting, and verifying LLM-generated content rather than simply delegating administrative tasks — workers expend 14% more mental effort. They experience 12% greater mental fatigue and 19% more information overload.

    Many respondents described a “fog” or “buzzing” sensation that forced them to step away from their screens. Others reported an increase in small mistakes — exactly the kind of errors that compound in professional services, financial analysis, and other high-stakes environments.

    “People were using the tool and getting a lot more done, but also feeling like they were reaching the limits of their brain power,” Julie Bedard, the study’s lead author and a managing director at BCG, told Fortune. “Things were moving too fast, and they didn’t have the cognitive ability to process all the information and make all the decisions.”

    This aligns with what maddaisy has previously described as the task expansion pattern: when AI makes certain tasks faster, employees do not use the freed-up time for strategic thinking. They absorb more work. The BCG data now suggests the breaking point arrives sooner than most organisations assume — at the fourth tool, not the tenth.

    The macro picture is equally sobering

    The three-tool threshold sits against a broader backdrop of underwhelming AI productivity data at scale. A Goldman Sachs analysis published this month found “no meaningful relationship between productivity and AI adoption at the economy-wide level,” with measurable gains confined to just two domains: customer service and software development.

    Separately, a survey of 6,000 C-suite executives found that 90% saw no evidence of AI impacting productivity or employment in their workplaces over the past three years. Their median forecast: a 1.4% productivity increase over the next three years. That is hardly the transformation narrative that justified billions in enterprise AI spending.

    These findings do not mean AI is useless. The Federal Reserve Bank of St. Louis estimated a 33% hourly productivity boost for workers during the specific hours they use generative AI. The problem is that this micro-level gain does not scale linearly. Adding more tools, more prompts, and more AI-generated outputs does not multiply the benefit — it multiplies the cognitive overhead.
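    A rough back-of-the-envelope calculation shows why the micro gain shrinks at the macro level. In the sketch below, the 33% boost is the St. Louis Fed figure cited above, but the share of working hours spent using AI is an assumption made purely for the arithmetic, not a figure from any of the studies mentioned.

      # Illustrative dilution of a per-hour AI productivity boost.
      # The 33% boost is the St. Louis Fed estimate cited above; the usage
      # share is an assumed figure for the sake of the arithmetic.
      hourly_boost = 0.33   # productivity gain during AI-assisted hours
      usage_share = 0.20    # assumed fraction of working hours using AI

      aggregate_gain = hourly_boost * usage_share
      print(f"Aggregate gain: {aggregate_gain:.1%}")  # about 6.6%

    Under those assumptions, the economy-wide effect is closer to 7% than 33%, before subtracting any of the cognitive overhead described above. That is the shape of the gap between the Fed's hourly estimate and the C-suite's modest forecasts.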

    What the threshold means for enterprises

    The practical implications are straightforward, even if they run against the instincts of most technology procurement processes.

    First, fewer tools, better deployed. The BCG data suggests that organisations would get better results from consolidating around two or three well-integrated AI tools than from giving every team access to every available platform. This runs counter to the current market dynamic, where vendors push specialised AI tools for every function — writing, coding, data analysis, scheduling, customer interaction — and enterprises buy them all to avoid falling behind.

    Second, oversight design matters as much as tool selection. The highest cognitive costs were associated with tasks requiring workers to interpret and verify AI output, not with AI performing autonomous background work. Enterprises that can shift more AI usage toward the latter — automated workflows, pre-verified data processing, agent-completed administrative tasks — will impose less cognitive strain on their people.

    Third, training needs to include when not to use AI. As maddaisy has previously noted, most organisations treat AI capability-building as a deployment event rather than a sustained practice. The BCG researchers found that when managers provided ongoing training and support, brain-fry symptoms decreased. The Berkeley team suggested batching AI-intensive work into specific time blocks rather than leaving it on all day — a scheduling discipline that few organisations currently enforce.

    The next chapter in a familiar story

    The AI-productivity narrative is following a pattern that technology historians will recognise. Early adopters see real gains. Organisations rush to scale. The gains plateau or reverse as implementation complexity outpaces human capacity to manage it. Eventually, a more measured approach emerges — not abandoning the technology, but deploying it with greater discipline.

    The BCG three-tool threshold may turn out to be an early data point rather than a universal law. But it offers something that has been missing from the AI-adoption conversation: a concrete starting point for right-sizing the technology stack to what human cognition can actually sustain.

    For consultants advising on AI transformation, that is a message worth delivering — even when it runs counter to the vendor pitch deck.

  • The Venture Subsidy Era for AI Is Ending. Enterprise Budgets Are Not Ready.

    For the past three years, enterprises have been building their AI strategies on pricing that does not reflect reality. The era of venture-subsidised AI — where a ChatGPT query costs pennies despite burning roughly ten times the energy of a Google search — is approaching its expiry date. The question is not whether prices will rise. It is whether organisations have budgeted for what comes next.

    The subsidy model, laid bare

    The numbers tell the story clearly enough. OpenAI’s own internal projections show $14 billion in losses for 2026, against roughly $13 billion in revenue. Total spending is expected to reach approximately $22 billion this year. Across the 2023–2028 period, the company expects to lose $44 billion before turning cash-flow positive sometime around 2029 or 2030.

    Anthropic’s trajectory looks different but carries the same structural tension. The company hit $19 billion in annualised revenue by March 2026, more than ten times its level a year earlier. But its gross margins sit at around 40% — a long way from the 77% it needs to justify its $380 billion valuation. That gap has to close, and it will not close through efficiency gains alone.

    Both companies have raised staggering sums to sustain the current pricing. OpenAI’s $110 billion round in February valued it at $730 billion. Anthropic’s $30 billion Series G came from a coalition including GIC, Microsoft, and Nvidia. This is venture capital on a scale that makes the ride-hailing subsidy wars look modest — and, like those wars, it is designed to capture market share before the real pricing arrives.

    The millennial lifestyle subsidy, enterprise edition

    The pattern is familiar. Uber and DoorDash used investor capital to underwrite artificially cheap services, building habits and dependencies before gradually raising prices toward sustainable levels. AI providers are running the same playbook, but the stakes are larger. When Uber raised fares, consumers grumbled and occasionally took the bus. When AI API costs increase threefold — which industry analysts suggest may be the minimum adjustment needed for sustainable economics — enterprises will face a different kind of reckoning.

    The reckoning is already starting at the platform level. Microsoft will raise commercial pricing across its entire 365 suite from July 2026, with increases of 8–17% depending on the tier. The company attributed the rises to AI capabilities such as Copilot Chat being embedded into standard subscriptions. Its new $99-per-user E7 tier bundles Copilot, identity management, and agent orchestration tools — positioning AI not as an optional add-on but as a cost baked into the platform itself.

    The broader enterprise software market is following the same trajectory. Gartner forecasts enterprise software spend rising at least 40% by 2027, with generative AI as the primary accelerant. Average annual SaaS price increases now range from 8–12%, with aggressive movers implementing hikes of 15–25% at renewal.
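    The compounding is the part that catches budget owners out. A quick sanity check, assuming the quoted renewal uplifts simply recur each year, shows how the Gartner forecast follows from the renewal figures:

      # Illustrative compounding of annual SaaS price increases.
      # Assumes the quoted renewal uplifts repeat over three renewal cycles.
      for annual_rise in (0.08, 0.12):
          cumulative = (1 + annual_rise) ** 3 - 1
          print(f"{annual_rise:.0%} per year -> {cumulative:.0%} cumulative")
      # 8% per year  -> 26% cumulative
      # 12% per year -> 40% cumulative

    Three renewals at the top of the quoted range land almost exactly on Gartner's 40%-by-2027 figure, with no separate AI surcharge required beyond the annual uplift itself.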

    The budget gap nobody is discussing

    The disconnect between AI ambition and AI economics is widening. Organisations now spend an average of $7,900 per employee annually on SaaS tools — a 27% increase over two years. AI-native application spend has surged 108% year-on-year, reaching an average of $1.2 million per organisation. And these figures reflect the subsidised era.

    As Axios reported this week, the unusually low cost of many AI services will not survive the transition from venture-funded growth to public-market accountability. As OpenAI and Anthropic pursue potential IPOs, investors will demand the margins that current pricing cannot deliver. Subscription prices and usage-based costs are expected to rise across the industry.

    For enterprises that have been scaling AI adoption on the assumption that current costs are permanent, this represents a planning failure in the making. A consumer application with $2 in AI costs per user per month looks viable. The same application at $10 per user does not. High-volume automation workflows — precisely the use cases enterprises are most excited about — are the most vulnerable to cost increases.

    The pacing argument gains new weight

    This pricing trajectory adds a new dimension to arguments maddaisy has previously explored around pacing AI investment. When Capgemini’s CEO Aiman Ezzat cautioned against getting “too ahead of the learning curve,” his concern was primarily about deploying capabilities ahead of organisational readiness. The pricing question strengthens that case. Organisations that rush to embed AI across every workflow at subsidised rates may find themselves locked into architectures whose economics no longer work when the real costs arrive.

    Similarly, the enterprise scaling gap reported last week — where two-thirds of organisations cannot move AI past pilot stage — takes on a different character when viewed through an economic lens. The skills shortage and governance deficits that constrain scaling today may prove less urgent than the budget constraints that arrive tomorrow. Organisations struggling to scale AI at subsidised prices will find it considerably harder at market rates.

    What prudent organisations should do now

    The adjustment does not need to be dramatic, but it does need to start. Three measures stand out.

    First, stress-test AI budgets against realistic pricing. If API costs tripled tomorrow, which workflows would still deliver positive returns? The answer reveals which AI investments are genuinely valuable and which are artefacts of artificially cheap compute. A minimal sketch of this exercise follows the list below.

    Second, build multi-provider flexibility into the architecture. Vendor lock-in has always been a risk in enterprise technology. In AI, where pricing models are still evolving and open-source alternatives like Llama and Mistral are improving rapidly, flexibility is not just prudent — it is a hedge against the cost increases that are coming.

    Third, watch the open-source floor. The existence of capable open models creates a price ceiling that limits how aggressively commercial providers can raise rates. Organisations that invest in the capability to run open models on their own infrastructure — or through commodity inference services — will have negotiating leverage that others will not.
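    To make the first of those measures concrete, here is the promised sketch of a budget stress test. Every workflow name and figure below is a hypothetical placeholder; the point is the shape of the exercise, not the numbers.

      # Minimal sketch of an AI budget stress test (hypothetical figures).
      # For each workflow: estimated monthly value vs. monthly AI API cost.
      workflows = {
          "support_triage":  (40_000, 6_000),   # value_usd, api_cost_usd
          "contract_review": (25_000, 12_000),
          "bulk_drafting":   (8_000, 5_000),
      }

      for multiplier in (1, 3, 5):  # today's pricing, then 3x and 5x
          survivors = [name for name, (value, cost) in workflows.items()
                       if value > cost * multiplier]
          print(f"{multiplier}x pricing -> positive ROI: {survivors}")

    Even in this toy version, the high-volume, thin-margin workflows drop out first at higher multipliers, which is exactly the vulnerability the pricing data predicts for automation-heavy use cases.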

    The correction, not the crisis

    None of this means AI is overvalued or that enterprise adoption will stall. The technology works. The productivity gains are real. But the current pricing does not reflect the true cost of delivering those gains, and the correction will arrive gradually over the next two to four years as the industry’s largest players transition from growth-at-all-costs to sustainable economics.

    The organisations best positioned for that transition will be those that treated the subsidy era as a window for experimentation — learning which AI applications genuinely transform their operations — rather than a permanent baseline for their technology budgets. The window is closing. The question is whether the planning has already begun.

  • Anthropic’s Pentagon Walkaway and the Price of AI Principles

    When maddaisy examined the Pentagon-Anthropic standoff earlier this month, the focus was on a new category of vendor risk – the possibility that political pressure could turn a trusted AI supplier into a regulatory liability overnight. That analysis ended with a warning: organisations in the defence supply chain needed to start modelling political risk alongside technical capability.

    Nine days later, the story has taken a turn that few risk models would have predicted. The company the US government tried to destroy is winning the consumer market.

    The market’s unexpected verdict

    Claude, Anthropic’s AI assistant, surged to the number one spot on Apple’s App Store in the days following the ban. The #QuitGPT movement saw 2.5 million users cancel ChatGPT subscriptions or publicly pledge to boycott OpenAI. Protesters gathered outside OpenAI’s San Francisco headquarters. Reddit and X filled with migration guides.

    The paradox is striking. Anthropic refused to remove ethical guardrails from its Pentagon contract – specifically, restrictions on mass surveillance and fully autonomous weapons. The government designated it a supply-chain risk, a label previously reserved for foreign adversaries. And the market responded by rewarding Anthropic with the most effective brand-building campaign in AI history – one the company never planned and could not have bought.

    What OpenAI’s ‘win’ actually cost

    OpenAI moved quickly to fill the gap. Sam Altman announced a deal to deploy models on the Pentagon’s classified networks, claiming the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human oversight requirements. The critical difference, as maddaisy noted at the time, is that OpenAI permits “all lawful uses” and relies on existing Pentagon policies rather than contractual restrictions.

    Altman defended the decision in Fortune, arguing that “the absence of responsible AI companies in the military space” would be worse than engagement on imperfect terms. It is a reasonable argument in isolation.

    But the commercial consequences arrived faster than the contracts. OpenAI’s head of robotics resigned over ethical concerns related to the Pentagon deal. Eleven OpenAI employees signed an open letter protesting the government’s treatment of Anthropic – even as their employer stood to benefit from it. OpenAI rushed out GPT-5.4 within 48 hours of GPT-5.3 Instant, in what looked less like a planned release and more like a company trying to change the subject.

    The talent cost may prove more significant than the user exodus. In an industry where the top researchers can choose where they work, being seen as the company that took the contract Anthropic refused is not a recruitment advantage.

    The strategic calculus of principles

    For decades, ethical positioning in technology was treated as a cost centre – a brand exercise that might prevent negative press but would never drive revenue. The Anthropic-Pentagon episode is the first major test of whether that assumption holds in the AI era.

    The early data suggests it does not. Anthropic is being rewarded by consumers and punished by government. OpenAI is being rewarded by government and punished by consumers. Both companies are learning, in real time, that the AI market has bifurcated into constituencies with fundamentally different values – and that serving one may come at the expense of the other.

    This is new territory for the technology industry. Previous ethical standoffs – Apple versus the FBI over iPhone encryption in 2016, Google’s Project Maven withdrawal in 2018 – were significant but did not produce the same kind of mass consumer response. The difference is that AI tools are personal. People interact with ChatGPT and Claude daily, often for sensitive tasks. When those tools become entangled with military use and surveillance, the reaction is visceral in a way that an encryption debate never was.

    The $60 billion question

    None of this means Anthropic’s position is comfortable. The company has received more than $60 billion in total investment from over 200 venture capital funds. If the supply-chain risk designation holds, enterprise clients in the defence ecosystem will be contractually barred from using Anthropic’s products. Anthropic has announced it will challenge the designation in court, but legal processes move slowly and government procurement decisions move fast.

    The Pro-Human Declaration, a framework released by a bipartisan coalition of researchers and former officials, was finalised before the standoff but published in its aftermath. Its signatories include Steve Bannon and Susan Rice – a pairing that underlines just how broadly the anxiety about unchecked AI development has spread. Among its provisions: mandatory pre-deployment testing, prohibitions on fully autonomous weapons, and an outright moratorium on superintelligence development until scientific consensus on safety is established.

    Whether such frameworks gain legislative traction remains an open question. As maddaisy has previously noted, more than 1,000 AI-related bills were introduced across all 50 US states in 2025 alone, while federal action has been conspicuously absent. The Pentagon-Anthropic episode may be the catalyst that changes that – or it may simply add another chapter to the regulatory vacuum.

    What this means for practitioners

    For organisations evaluating AI vendors, the lesson is not that principles are good and pragmatism is bad. It is that ethical positioning has become a material factor in AI vendor strategy – one that affects talent retention, consumer trust, regulatory exposure, and government access in ways that are increasingly difficult to hedge.

    The vendor assessment frameworks that most enterprises use were not designed for this. They evaluate uptime, security certifications, data residency, and pricing. They do not evaluate whether a vendor’s ethical commitments might make it a target of government action, or whether a vendor’s willingness to accept government contracts without restrictions might trigger a consumer backlash that affects product development velocity.

    Both of those scenarios played out in the space of a single week. Any vendor strategy that does not account for them is incomplete.

    The AI industry is discovering what the defence, pharmaceutical, and energy industries learned long ago: when your products touch questions of public safety and national security, your commercial strategy and your ethical positioning become the same thing. The companies that figure this out first – not just as a brand exercise, but as a genuine constraint on what they will and will not do – will be the ones that retain the trust of all their constituencies, not just the most powerful one.

    Anthropic’s stand cost it a $200 million contract. It may also have bought something more durable: a market position built on trust at a time when trust in AI companies is in short supply.

  • Nearly Every Enterprise Wants AI. Two-Thirds Cannot Scale It Past a Pilot.

    The Appetite Is Not the Problem

    Nearly every large organisation wants AI. The technology works. The budgets are approved. The pilots are running. And yet, when it comes to deploying AI at scale – moving from a successful proof of concept to a production capability that changes how the business operates – two-thirds of enterprises are stuck.

    That is the central finding of Logicalis’s 2026 CIO Report, which surveyed more than 1,000 chief information officers globally. The numbers paint a stark picture of an industry that has solved the belief problem but not the execution one: 94% of CIOs report growing organisational appetite for AI. Over a third have accelerated AI initiatives based on early proof-of-concept results. But two-thirds say they cannot scale AI beyond those initial deployments.

    The gap between wanting AI and running AI at enterprise scale has become the defining challenge of 2026. And the constraints holding organisations back are not the ones most boardrooms are discussing.

    Skills, Not Budgets, Are the Bottleneck

    The most striking finding in the Logicalis data is what CIOs identify as their primary constraint. It is not funding. It is not technology. It is skills.

    A lack of internal technical capability is holding back AI ambitions in nearly nine out of ten organisations. This is not a shortage of data scientists or machine learning engineers in the abstract – it is a shortage of people who understand how to integrate AI into existing business processes, manage its outputs, and govern its behaviour in production environments.

    The skills gap becomes more consequential as AI moves from experimentation to operation. A pilot needs a small team of enthusiasts and a sandbox. A production deployment needs data engineers, integration specialists, change managers, and – critically – people who understand both the technology and the business domain well enough to know when the AI is wrong.

    This connects directly to what Harvard Business Review reported in February: 88% of companies now report regular AI use, yet adoption is stalling because employees experiment with tools without integrating them into how work actually gets done. The tools are present. The capability to use them well is not.

    Governance Is Being Compromised, Not Solved

    If the skills gap explains why organisations cannot scale, the governance gap explains why scaling carries risk. The Logicalis report found that 62% of CIOs have compromised on AI governance due to limited knowledge, and only 44% say they fully grasp the risks of AI adoption. Meanwhile, 76% describe unchecked AI as a serious concern.

    This is not a theoretical problem. As maddaisy.com has reported extensively, the governance question follows AI into every new domain – from agentic systems that drift in production to vibe coding tools that generate enterprise software without conventional oversight. Organisations are deploying AI faster than they can build the frameworks to manage it, and most acknowledge this openly. An overwhelming 89% of CIOs in the Logicalis survey describe their current approach as “learning as we go.”

    That phrase deserves attention. It means that the majority of enterprise AI programmes are operating without mature risk management, without clear accountability structures, and without the monitoring infrastructure needed to catch problems before they compound. For regulated industries – financial services, healthcare, defence – this is not a growing pain. It is an exposure.

    The Pattern maddaisy.com Has Been Tracking

    The Logicalis data does not exist in isolation. It quantifies a pattern that has been building across multiple data points this year.

    In February, maddaisy.com reported on PwC’s Global CEO Survey, which found that 56% of chief executives could not point to measurable revenue gains from their AI investments. The diagnosis then was a measurement problem as much as a technology one – organisations deploying AI without redesigning workflows or building the instrumentation to track outcomes.

    Earlier this month, Capgemini’s CEO Aiman Ezzat made the case for pacing AI investment, arguing that companies are deploying capabilities ahead of their organisation’s ability to absorb them. EY research cited in that analysis showed organisations failing to capture up to 40% of potential AI benefits – not because the technology underperformed, but because the surrounding processes, skills, and culture were not ready.

    And in maddaisy.com’s analysis of the consulting pyramid, Eden McCallum research revealed that 95% of AI pilots have failed to deliver returns – a figure that, while measuring a different stage of failure, points in the same direction as the Logicalis finding that two-thirds of organisations cannot move past the pilot stage.

    These are not isolated reports reaching coincidentally similar conclusions. They are different measurements of the same underlying problem: the bottleneck in enterprise AI has moved from technology capability to organisational readiness.

    The Managed Services Pivot

    One of the more telling details in the Logicalis report is that 94% of CIOs plan to lean on managed service providers over the next two to three years to help navigate AI governance, scaling, and sustainability. This is a quiet but significant shift in how enterprises relate to their technology infrastructure.

    It suggests that many CIOs have concluded they cannot build the required skills and governance frameworks internally – at least not at the pace the technology demands. Rather than owning and operating AI capabilities directly, they are moving toward orchestrating a network of external providers. The CIO role, in this model, becomes less about technology ownership and more about vendor management, risk oversight, and strategic coordination.

    For consulting and technology services firms, this creates a substantial market opportunity. But it also raises a question that the industry has not yet answered convincingly: if the clients cannot scale AI internally because they lack the skills and governance frameworks, and they outsource to service providers who are themselves still working out how to embed AI into their own operations, where does the actual expertise reside?

    What Practitioners Should Watch

    The adoption gap is unlikely to close quickly. Skills take time to develop. Governance frameworks take time to mature. Organisational change – the kind that turns a pilot into a production capability – is measured in quarters and years, not weeks.

    Three things are worth tracking. First, whether the 89% “learning as we go” figure starts to decline in subsequent surveys – that will be the clearest signal that enterprises are moving from experimentation to operational maturity. Second, whether the managed services pivot produces measurable outcomes or simply moves the scaling problem from one organisation to another. And third, whether the 67% of CIOs who expressed concern about an “AI bubble” translate that concern into more disciplined investment, or whether competitive pressure continues to override caution.

    The technology has arrived. The appetite is not in question. What remains unresolved is whether organisations can build the human and structural foundations fast enough to use what they have already bought.

  • OpenAI’s Frontier Alliance Is Not Just About Consulting. It Is a Bet Against the Enterprise Software Stack.

    When maddaisy examined OpenAI’s Frontier Alliance in February, the focus was on what it meant for the consulting firms — McKinsey, BCG, Accenture, and Capgemini — and the admission that AI vendors cannot scale enterprise deployments alone. That story was about the consulting industry. This one is about the companies the alliance is quietly aimed at: the enterprise software vendors that have built trillion-dollar businesses on per-seat licensing.

    The per-seat model under pressure

    OpenAI’s Frontier platform, launched in early February, is designed as an enterprise operating layer — a unified system where AI agents can log into applications, execute workflows, and make decisions across an organisation’s entire technology stack. CRM systems, HR platforms, ticketing tools, internal databases. The ambition is not to replace any single application but to sit above all of them.

    The threat to SaaS vendors is structural, not incremental. If AI agents execute the tasks that human employees currently perform inside Salesforce, ServiceNow, or Workday, the justification for per-seat licensing weakens. Fewer human users logging in means fewer seats to sell. And if agents can orchestrate workflows across multiple systems from a single platform, the case for buying specialised point solutions — each with its own subscription — becomes harder to make.

    The market has not waited for proof. Investors wiped roughly $2 trillion in market value from technology stocks in a single week over AI displacement concerns. ServiceNow shares fell more than 20% year-to-date by mid-February. IBM suffered its largest single-day decline in 25 years after Anthropic’s Claude demonstrated competency with legacy COBOL systems — the very maintenance work that underpins a significant portion of IBM’s consulting revenue.

    The consulting conduit

    What makes the Frontier Alliance specifically dangerous for SaaS incumbents is not the technology. It is the distribution channel.

    McKinsey, BCG, Accenture, and Capgemini are not just consulting firms. They are the primary implementation partners for the very software companies that Frontier could displace. When a Fortune 500 company deploys Salesforce, it typically hires one of these firms to manage the rollout. When it migrates to ServiceNow’s IT service management platform, the same consulting firms handle the integration. The relationships are deep, multi-year, and built on trust.

    OpenAI has effectively enlisted those relationships as a distribution network. Each of the four firms has established dedicated OpenAI practice groups, certified their teams on Frontier, and committed to multi-year alliances. OpenAI’s own forward-deployed engineers will sit alongside consulting teams in client engagements — a model borrowed from Palantir’s playbook for embedding in enterprise accounts.

    The result is a direct-to-enterprise pipeline that does not need SaaS vendors as intermediaries. A consulting firm advising a client on AI strategy can now recommend Frontier agents that orchestrate existing systems, rather than recommending new SaaS products that require their own implementation projects. The consulting firm earns either way. The SaaS vendor may not.

    SaaS is not dying. But the economics are shifting.

    The counterarguments deserve a hearing. Fortune 500 companies will not abandon decades of enterprise software investment overnight. Compliance requirements, audit trails, data sovereignty obligations, and the sheer operational complexity of large organisations create friction that no AI platform can simply wave away. As one analyst put it, “we are simply not going to see a complete unwinding of the past 50 years of enterprise software development.”

    The incumbents are also adapting. Salesforce has pioneered what it calls the “Agentic Enterprise Licence Agreement” — a fixed-price, consumption-based model designed to decouple revenue from headcount. ServiceNow and Microsoft are shifting toward outcome-based pricing. These moves acknowledge the threat and attempt to neutralise it by changing the unit of value from the human user to the business outcome.

    But adaptation comes at a cost. Per-seat licensing has been the engine of SaaS margins for two decades. Moving to consumption or outcome-based models compresses revenue predictability and margins in the short term, even if it preserves relevance in the long term. The transition is not painless, and investors know it.

    Where this connects

    This development sits at the intersection of several threads maddaisy has been tracking. Capgemini’s CEO, Aiman Ezzat, argued last week that organisations are deploying AI capabilities ahead of their ability to absorb them. He is right — but that does not mean the structural pressure on SaaS pricing will wait for organisations to catch up. The market reprices on expectations, not on deployment maturity.

    And the consulting pyramid piece from yesterday noted that the industry’s base is being reshaped as AI compresses the need for junior analytical work. A similar compression is now visible in enterprise software: the middle layer of the stack — the specialised tools that automate individual workflows — faces pressure from platforms that automate across workflows.

    The question for enterprise software vendors is not whether AI agents will change their businesses. It is whether they can shift their pricing, their value propositions, and their competitive moats fast enough to remain the platform of choice — rather than becoming the legacy infrastructure that sits beneath someone else’s agent layer.

    For practitioners evaluating their own technology stacks, the practical implication is this: the next software audit should not just ask what each tool costs per seat. It should ask what happens to that cost when half the seats belong to agents that do not need a licence.
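    A back-of-the-envelope version of that audit question makes the stakes visible. All figures below are hypothetical assumptions, used only to illustrate the comparison:

      # Hypothetical comparison: per-seat licensing vs. agent consumption.
      seats_today = 500
      cost_per_seat = 1_200        # assumed annual licence per human user

      # Scenario: half the seats' work shifts to consumption-billed agents.
      human_seats = seats_today // 2
      agent_tasks_per_year = 600_000   # assumed agent-executed task volume
      cost_per_task = 0.40             # assumed consumption price per task

      per_seat_bill = seats_today * cost_per_seat
      mixed_bill = (human_seats * cost_per_seat
                    + agent_tasks_per_year * cost_per_task)
      print(f"All per-seat: ${per_seat_bill:,}")    # $600,000
      print(f"Half agents:  ${mixed_bill:,.0f}")    # $540,000

    Whether the mixed bill lands above or below the per-seat one depends entirely on the consumption price, which is why the question belongs in the audit before renewal negotiations, not after.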

  • Capgemini’s CEO Makes the Unfashionable Case for Pacing Your AI Investment

    There is a particular kind of courage in telling a room full of executives to slow down. Aiman Ezzat, CEO of Capgemini, has been doing exactly that – and his reasoning deserves more attention than the typical “move fast or die” narrative that dominates AI strategy discussions.

    “You don’t want to be too ahead of the learning curve,” Ezzat told Fortune in February. “If you are, you’re investing and building capabilities that nobody wants.”

    Coming from the head of a €22.5 billion consultancy that has trained 310,000 employees on generative AI and is actively building labs for quantum computing, 6G, and robotics, this is not a counsel of inaction. It is a strategic position on pacing – one that puts Ezzat at odds with much of the technology industry’s current mood.

    The FOMO problem

    The fear of missing out on AI has become a boardroom affliction. Boston Consulting Group reports that half of CEOs now believe their job is at risk if AI investments fail to deliver returns. That pressure creates a predictable dynamic: spend big, move fast, worry about outcomes later.

    The data suggests the worry-later approach is not working. EY research shows that while 88% of employees report using AI at work, organisations are failing to capture up to 40% of the potential benefits. In the UK, only 21% of workers felt confident using AI as of January 2026. The tools are arriving faster than the capacity to use them well.

    Ezzat’s argument is that this gap is not a technology problem. It is a pacing problem. Companies are deploying AI capabilities ahead of their organisation’s ability to absorb them – and ahead of genuine customer demand for the outcomes those capabilities promise.

    AI is a business, not a technology

    The more substantive part of Ezzat’s case is about framing. Too many leadership teams, he argues, treat AI as “a black box that’s being managed separately” – a technology initiative bolted onto the existing business rather than a force reshaping how the business operates.

    “The question you have to focus on is: ‘How can your business be significantly disrupted by AI?’” Ezzat says. “Not ‘How is your finance team going to become more efficient?’ I’m sure your CFO will deal with that at the end of the day.”

    The distinction matters. Departmental efficiency projects – automating invoice processing, summarising meeting notes, generating marketing copy – are the low-hanging fruit that most enterprises are picking right now. They deliver incremental gains but rarely transform a business model. The harder question, the one Ezzat wants CEOs to sit with, is whether AI fundamentally changes what a company sells, how it competes, or what its customers expect.

    That question takes time to answer well. Rushing it produces expensive experiments that solve the wrong problems.

    The trust deficit

    Perhaps the most underexplored part of Ezzat’s argument is about human trust. “How do you get humans to trust the agent?” he asks. “The agent can trust the human, but the human doesn’t really trust the agent.”

    This cuts to a practical reality that technology roadmaps tend to gloss over. Agentic AI – systems designed to take autonomous actions rather than simply generate content – is the next wave of enterprise deployment. As maddaisy.com noted when covering Capgemini’s role in the OpenAI Frontier Alliance, the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. Trust is a significant part of why.

    Employees who do not trust AI agents will find ways to work around them. Managers who cannot explain AI-driven decisions to clients will revert to manual processes. Organisations that deploy autonomous systems faster than their culture can absorb them will create friction, not efficiency.

    Ezzat draws an analogy to ergonomics – the mid-twentieth century discipline of designing tools for humans rather than forcing humans to adapt to tools. “Bad chairs lead to bad backs,” he observes. “Bad AI is likely to be far more consequential.”

    Consistent with Capgemini’s own playbook

    What makes Ezzat’s position credible is that Capgemini’s recent actions align with it. The company’s approach has been to invest broadly but scale selectively.

    As maddaisy.com’s analysis of the company’s 2025 results highlighted, generative AI bookings rose above 10% of total bookings in Q4 – meaningful but not yet dominant. The company maintains labs for emerging technologies including quantum and 6G, keeping a foot in multiple possible futures without betting the firm on any single one.

    Meanwhile, the company added 82,300 offshore workers in 2025 – largely through the WNS acquisition – while simultaneously earmarking €700 million for workforce restructuring. The message is clear: AI changes the shape of the workforce, but it does not eliminate the need for one. Building the human infrastructure to deliver AI at scale takes as much investment as the technology itself.

    The metaverse lesson

    Ezzat’s most pointed comparison is to the metaverse – a technology that commanded billions in corporate investment before the market concluded that customer demand had been dramatically overstated. Capgemini itself experimented with a metaverse lab. Mark Zuckerberg renamed his company around it. Now, as Ezzat puts it, “like air fryers, its time may now have passed.”

    The parallel is not that AI will follow the metaverse into irrelevance – the use cases are far more concrete, and the enterprise adoption data is already stronger. The point is about the cost of overcommitment. Companies that invested heavily in metaverse capabilities before the market was ready wrote off those investments. The same risk exists with AI, particularly in areas like agentic systems where the technology’s capability is advancing faster than organisational readiness to use it.

    Ezzat’s prescription is agility over ambition: small pilots, constant monitoring, and the willingness to scale rapidly when adoption genuinely accelerates. “We have to be investing – but not too much – to be able to be aware of the technology, following at the speed to make sure that we are ready to scale when the adoption starts to accelerate.”

    What this means for practitioners

    For consultants advising clients on AI strategy, Ezzat’s framework offers a useful counterweight to the prevailing urgency. The question is not whether to invest in AI – that debate is settled. The question is how to pace that investment so that capability, demand, and organisational readiness move roughly in step.

    Companies that get the pacing right will avoid the twin traps of overinvestment (building capabilities nobody wants) and underinvestment (being caught flat-footed when adoption accelerates). In a market where half of CEOs fear for their jobs over AI outcomes, the discipline to move at the right speed – rather than the fastest speed – may prove to be the more valuable skill.

  • AI Inference Is the Enterprise Security Risk Most Organisations Are Not Addressing

    Most enterprise AI security conversations still focus on training — how models are built, what data goes in, how to prevent poisoning. But the greater operational exposure sits elsewhere: in inference, the moment a trained model processes a live query and produces an output. That is where proprietary logic, sensitive prompts, and business strategy become visible to anyone watching the traffic.

    A recent panel hosted by The Quantum Insider, featuring leaders from BMO, CGI, and 01Quantum, put the point bluntly: inference is AI working, and AI working is where risk accumulates. Nearly half of the audience polled during the session admitted they lack confidence that their AI systems meet anticipated 2026 security standards. That number is consistent with broader industry data: a Cloud Security Alliance survey found that only 27 per cent of organisations feel confident they can secure AI used in core business operations.

    This is not an abstract concern. It is the practical, operational end of the governance conversation maddaisy has been tracking for weeks.

    Why inference, not training, is the exposure point

    Training happens once (or periodically). Inference happens continuously — every API call, every chatbot interaction, every agentic workflow execution. As Tyson Macaulay of 01Quantum explained during the panel, inference models often contain the distilled intellectual property of an organisation. In expert systems, the model itself reflects proprietary training data, domain knowledge, and internal logic. Reverse engineering an inference endpoint can reveal insights about what the organisation knows and how it thinks.

    But the exposure runs in both directions. Prompts themselves reveal information — about individuals, strategy, and operational priorities. A medical query reveals personal health data. A corporate query may signal product development direction. The question, in other words, can be as sensitive as the model.

    When maddaisy examined CIOs’ non-AI priorities in February, cybersecurity topped the list — precisely because AI adoption was expanding the attack surface. Dmitry Nazarevich, CTO at Innowise, described security spending increases as “directly related to the increase in exposure and risk to data associated with the increased attack surface resulting from the introduction of generative AI.” Inference security is where that expanding surface is most exposed — and most neglected.

    The shadow AI dimension

    The problem is compounded by what organisations cannot see. Research suggests that roughly 70 per cent of organisations have shadow AI in use — employees running unauthorised tools outside IT oversight. Every unsanctioned ChatGPT or Claude query involving company data is an unmonitored inference event, pushing proprietary information through systems the organisation does not control.

    JetStream Security, a startup founded by veterans of CrowdStrike and SentinelOne, raised $34 million in seed funding last week to address precisely this gap. The company’s product, AI Blueprints, maps AI activity in real time — which agents are running, which models they use, what data they access. The premise is straightforward: you cannot secure what you cannot see.

    When maddaisy covered shadow AI in February, the focus was on governance and policy. Inference security adds a harder technical dimension. It is not enough to write policies about acceptable AI use if the organisation has no visibility into what models are being queried, by whom, and with what data.

    Real-world vulnerabilities are already surfacing

    The risks are not hypothetical. In February, LayerX Security published a report describing a critical vulnerability in Anthropic’s Claude Desktop Extensions — a malicious calendar invite could silently execute arbitrary code with full system privileges. The issue stemmed from an architectural choice: extensions ran unsandboxed with direct file system access, enabling tools to chain actions autonomously without user consent.

    The debate that followed was instructive. Anthropic argued the onus was on users to configure permissions properly. Security researchers countered that competitors like OpenAI and Microsoft restricted similar capabilities through sandboxing and permission gates. The real lesson for enterprises is that inference-layer vulnerabilities are architectural, not incidental — and they require controls before deployment, not after.

    As Rock Lambros of RockCyber put it: “Every enterprise deploying agents right now needs to answer — did we restrict tool chaining privileges before activation, or did we hand the intern the master key and go to lunch?”

    The governance gap has a security-shaped hole

    maddaisy has covered the emerging agentic AI governance playbook extensively — the frameworks from regulators, the principles converging around least-privilege access and real-time monitoring. But frameworks are policy instruments. Inference security is the engineering layer that makes those policies enforceable.

    The numbers illustrate the disconnect. According to the latest governance statistics compiled from major 2025-26 surveys, 75 per cent of organisations report having a dedicated AI governance process — but only 26 per cent have comprehensive AI security policies. Fewer than one in 10 UK enterprises integrate AI risk reviews directly into development pipelines. Governance without security controls is aspiration without implementation.

    The financial services sector offers a partial model. Kristin Milchanowski, Chief AI and Data Officer at BMO, described her bank’s approach during the Quantum Insider panel: bringing large language models in-house where possible, ensuring that additional training on proprietary data remains contained, and treating responsible AI as a board-level cultural priority rather than a compliance exercise. But BMO operates under some of the strictest regulatory regimes globally. Most enterprises do not face equivalent pressure — yet.

    What practitioners should be doing now

    The practical agenda emerging from this convergence of research is specific and actionable:

    Audit inference endpoints. Map every production AI system, including shadow deployments. The JetStream model — real-time visibility into which models are running, what data they touch, and who is responsible — is becoming table stakes.

    Apply least-privilege to AI agents. The agentic governance frameworks maddaisy covered last week prescribe this. At the inference layer, it means restricting tool chaining, sandboxing execution environments, and requiring explicit permission gates for cross-system actions. A sketch of that gating pattern follows this list.

    Build cryptographic agility into procurement. The Quantum Insider panel raised a forward-looking point: “harvest now, decrypt later” attacks — where encrypted inference traffic is collected today for decryption once quantum computing matures — are overtaking model drift as the top digital trust concern among infrastructure leaders. Embedding post-quantum cryptography expectations into vendor contracts now is practical and low-cost.

    Treat inference security as infrastructure. Not as a feature, not as an add-on. As the panel concluded: critical infrastructure must be secured before it is tested by failure.
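    What that least-privilege prescription looks like at the inference layer can be sketched in a few lines. This is an illustrative pattern rather than any vendor's API; the agent names, tools, and policy are hypothetical.

      # Illustrative permission gate for agent tool calls (hypothetical names).
      # Each agent gets an explicit allowlist; everything else fails closed.
      ALLOWED_TOOLS = {
          "support_agent": {"read_ticket", "draft_reply"},
          "finance_agent": {"read_invoice"},
      }

      def gated_call(agent_id: str, tool: str, payload: dict) -> dict:
          allowed = ALLOWED_TOOLS.get(agent_id, set())
          if tool not in allowed:
              # Deny by default and leave an audit trail for review.
              print(f"DENIED: {agent_id} attempted {tool}")
              raise PermissionError(f"{agent_id} may not call {tool}")
          print(f"ALLOWED: {agent_id} -> {tool}")
          return {"tool": tool, "payload": payload}  # dispatch happens here

      # A cross-system action the agent was never granted fails closed:
      # gated_call("support_agent", "export_customer_db", {})  # PermissionError

    The same fail-closed shape applies to tool chaining: an agent that cannot name a tool in its allowlist cannot chain into it, however the prompt is manipulated.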

    The operational layer matters most

    The governance conversation has matured rapidly. Frameworks exist. Principles are converging. Regulation is arriving. But between the policy layer and the production environment sits inference — the operational layer where AI actually works, where data flows through models, where prompts reveal strategy, and where the absence of controls creates the exposure that governance documents are supposed to prevent.

    Gartner projects spending on AI governance platforms will reach $492 million this year and surpass $1 billion by 2030. That money will be wasted if it funds policies without the engineering to enforce them. The organisations pulling ahead will be those that treat inference security not as a technical detail for the security team, but as the operational foundation on which their entire AI strategy depends.

  • Insurtech’s AI-Fuelled Five-Billion-Dollar Comeback — And the Question the Industry Has Not Answered

    Global insurtech funding reached $5.08 billion in 2025, up 19.5% from $4.25 billion the year before. It is the first annual increase since 2021 — and, according to Gallagher Re’s latest quarterly report, it marks a fundamentally different kind of recovery from the one the sector last enjoyed.

    The 2021 boom was driven by venture capital chasing consumer-facing disruptors. The 2025 comeback is driven by insurers and reinsurers themselves investing in operational AI. That distinction matters far more than the headline number.

    The money is coming from inside the house

    In 2025, insurers and reinsurers made 162 private technology investments into insurtechs — more than in any prior year on record. This is not outside capital speculating on disruption. It is the industry itself funding its own modernisation, a shift Gallagher Re describes as a “changing of the guard” in the insurtech investor community.

    The fourth quarter was particularly striking. Funding hit $1.68 billion — a 66.8% increase over Q3 and the strongest quarterly figure since mid-2022. More than 100 insurtechs raised capital for the first time since early 2024, and mega-rounds (deals exceeding $100 million) returned in force, with 11 such rounds totalling $1.43 billion for the full year, up from six in 2024.

    Property and casualty insurtech funding rebounded 34.9% to $3.49 billion, driven by companies like CyberCube, ICEYE, Creditas, Federato, and Nirvana, which collectively secured $663 million in Q4 alone. Life and health insurtech, by contrast, dipped 4.6%, a decline that underlines where the industry sees its most pressing operational gaps: on the property and casualty side.

    Two-thirds of the money follows AI

    The most telling statistic in the report is this: two-thirds of all insurtech funding in 2025 — $3.35 billion across 227 deals — went to AI-focused firms. By Q4, that share had climbed to 78%.

    Andrew Johnston, Gallagher Re’s global head of insurtech, frames this as convergence rather than a trend: “Over time, we see AI becoming so integrated into insurtech that the two may well become synonymous — in much the same way as we could already argue that ‘insurtech’ is itself a meaningless label, because all insurers are technology businesses now.”

    That trajectory is visible in the deals themselves. mea, an AI-native insurtech, raised $50 million from growth equity firm SEP in February — its first external capital after years of profitable organic growth. The company’s platform, already processing more than $400 billion in gross written premium across 21 countries, automates end-to-end operations for carriers, brokers, and managing general agents. mea claims its AI can cut operating costs by up to 60%, targeting the roughly $2 trillion in annual industry operating expenses where manual workflows persist.

    At the seed stage, General Magic raised $7.2 million for AI agents that automate administrative tasks for insurance teams — reducing quote generation time from approximately 30 minutes to under three minutes in early deployments with major insurers.

    Profitability, not just growth

    What separates the 2025 wave from the 2021 boom is that several insurtechs are now proving they can make money, not just raise it.

    Kin Insurance, which focuses on high-catastrophe-risk regions, reported $201.6 million in revenue for 2025 — a 29% increase — with a 49% operating margin and a 20.7% adjusted loss ratio. Hippo, another property-focused insurtech, reversed its 2024 net loss with $58 million in net income, driven by improved underwriting and a deliberate shift away from homeowners insurance toward more profitable lines.

    These are not unicorn-valuation stories. They are companies demonstrating operational discipline — the kind of results that explain why insurers and reinsurers, rather than venture capitalists, are now leading the investment.

    The B2B shift

    Gallagher Re’s data reveals another structural change worth watching. Nearly 60% of property and casualty deals in 2025 went to business-to-business insurtechs — a 12 percentage point increase from 2021’s funding boom. Meanwhile, the deal share for lead generators, brokers, and managing general agents fell to 35%, the lowest on record.

    The implication is clear: capital is flowing toward technology that improves how existing insurers operate, not toward new entrants trying to replace them. The disruptor narrative of the early 2020s has given way to something more pragmatic — and, arguably, more durable.

    This parallels a pattern visible across financial services. As maddaisy noted when examining Lloyds Banking Group’s AI programme, established institutions are increasingly treating AI not as an innovation experiment but as core operational infrastructure — and measuring it accordingly.

    The question the industry has not answered

    For all the funding momentum, Johnston raises a challenge that the sector has yet to confront seriously: the “so what” problem.

    “As the implementation of AI starts to deliver efficiency gains, it is imperative that the industry works out how to best use all of this newly freed up time and resource,” he writes.

    This is not a hypothetical. If mea can genuinely reduce operating costs by 60% for a carrier, that frees up a substantial portion of the 14 percentage points of combined ratio currently consumed by operations. The question is whether that freed capacity translates into better underwriting, deeper risk analysis, and improved customer outcomes — or whether it simply gets absorbed into margin without changing how insurance fundamentally works.
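
    The arithmetic is worth making explicit. A back-of-envelope sketch, assuming mea's 60% figure applies across the full 14-point expense load and using an illustrative loss ratio:

    ```python
    # Back-of-envelope: what a 60% cut to operating expenses does to combined ratio.
    expense_ratio = 14.0   # points of combined ratio consumed by operations (per the article)
    loss_ratio = 86.0      # illustrative: a carrier running at a 100 combined ratio
    cost_reduction = 0.60  # mea's claimed ceiling

    saved_points = expense_ratio * cost_reduction        # 8.4 points
    new_combined = loss_ratio + expense_ratio - saved_points

    print(f"Expense savings: {saved_points:.1f} points of combined ratio")
    print(f"Combined ratio: {loss_ratio + expense_ratio:.1f} -> {new_combined:.1f}")
    # 8.4 points is enormous in an industry where underwriting margins are often
    # low single digits -- which is exactly why the 'so what' question matters.
    ```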

    The broker market is already feeling the tension. In February, insurance broker stocks dropped roughly 9% after OpenAI approved the first AI-powered insurance apps on ChatGPT, enabling consumers to receive quotes and purchase policies within the conversation. Most analysts called the selloff overdone — commercial broking remains complex enough to resist near-term disintermediation — but the episode illustrated how quickly market sentiment can shift when AI moves from back-office tooling to customer-facing distribution.

    What to watch

    The $5 billion figure is a milestone, but the real signal is in its composition. Insurtech funding is no longer a venture capital bet on disruption. It is the insurance industry’s own investment in operational AI — led by incumbents, focused on B2B infrastructure, and increasingly backed by profitability rather than just promise.

    Whether that investment translates into genuinely better insurance — not just cheaper operations — depends on how the industry answers Johnston’s question. The money is flowing. The efficiency gains are materialising. What the sector does with them will determine whether this comeback is a lasting structural shift or just the next chapter of doing the same things with fewer people.

  • OpenAI’s Frontier Alliance Confirms What Consultants Already Knew: AI Vendors Cannot Scale Alone

    OpenAI announced on 23 February that it has formed multi-year “Frontier Alliances” with McKinsey, Boston Consulting Group, Accenture, and Capgemini. The four firms will help sell, implement, and scale OpenAI’s Frontier platform — an enterprise system for building, deploying, and governing AI agents across an organisation’s technology stack.

    For readers who have been following maddaisy’s coverage of the consulting industry’s AI pivot, this is not a surprise. It is the logical next step in a pattern that has been building for months — and it tells us more about the limits of AI vendors than about the ambitions of consulting firms.

    The vendor cannot scale alone

    The most revealing line in the announcement came from Capgemini’s chief strategy officer, Fernando Alvarez: “If it was a walk in the park, OpenAI would have done it by themselves, so it’s recognition that it takes a village.”

    That candour is worth pausing on. OpenAI’s enterprise business accounts for roughly 40% of revenue, with expectations of reaching 50% by the end of the year. The company has already signed enterprise deals with Snowflake and ServiceNow this year and appointed Barret Zoph to lead enterprise sales. Yet it still needs consulting firms — with their existing client relationships, implementation expertise, and organisational change capabilities — to get its technology into production at scale.

    This is not a story about OpenAI’s generosity in sharing the enterprise market. It is an admission that the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. As maddaisy reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. The technology is not the bottleneck. Integration, governance, and organisational readiness are.

    A clear division of labour

    The alliance structure reveals how OpenAI sees the enterprise AI value chain. McKinsey and BCG are positioned as strategy and operating model partners — helping leadership teams determine where agents should be deployed and how workflows need to be redesigned. BCG CEO Christoph Schweizer noted that AI must be “linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives.”

    Accenture and Capgemini take the systems integration role: data architecture, cloud infrastructure, security, and the unglamorous work of connecting Frontier to the CRM platforms, HR systems, and internal tools that enterprises actually run on. Each firm is building dedicated practice groups and certifying teams on OpenAI technology. OpenAI’s own forward-deployed engineers will sit alongside them in client engagements.

    This two-tier model — strategy at the top, integration at the bottom — maps neatly onto the consulting industry’s existing hierarchy. It also creates a clear dependency: OpenAI provides the platform, the consultancies provide the last mile.

    The maddaisy continuity thread

    This announcement intersects with several stories maddaisy has been tracking. When we examined McKinsey’s 25,000 AI agent deployment, the question was whether the firm’s aggressive internal build-out was a first-mover advantage or an expensive experiment. The Frontier Alliance suggests McKinsey is now positioning that internal capability as a credential — evidence that it can deploy agentic AI at scale, which it can now offer to clients through the OpenAI partnership.

    Similarly, when maddaisy covered the shift from billable hours to outcome-based consulting, the question was how firms would make the economics work. Vendor alliances like this provide part of the answer: the consulting firm brings the implementation expertise, the AI vendor provides the platform, and the client pays for outcomes rather than hours. The risk is shared across the chain.

    And Capgemini’s dual bet — adding 82,300 offshore workers while simultaneously investing in AI — now makes more strategic sense. The offshore delivery capacity is precisely what is needed to operationalise Frontier at enterprise scale. The bodies and the bots are not competing; they are complementary.

    The SaaS vendors should be nervous

    As Fortune noted, the Frontier Alliance creates a specific tension for established software-as-a-service vendors. Salesforce, Microsoft, Workday, and ServiceNow all depend on these same consulting firms to market and deploy their products. Now those consultants will also be actively promoting an alternative platform — one that positions itself as a “semantic layer” sitting above the traditional SaaS stack.

    The consulting firms are not choosing sides. They are hedging. Accenture, for instance, signed a multi-year partnership with Anthropic in December 2025 and is now a Frontier Alliance member. The firms will sell whichever platform best fits a given client’s needs, which gives them leverage over the AI vendors rather than the other way around.

    For the SaaS incumbents, however, having McKinsey and BCG actively evangelise an AI-native alternative to C-suite buyers is a development they will not welcome. Investor anxiety in this space is already elevated — shares of several enterprise software companies have been punished over concerns that customers will choose AI-native platforms over traditional offerings.

    What to watch

    The Frontier Alliance is a partnership announcement, not a set of outcomes. The real test is whether this model — AI vendor plus consulting firm — can close the deployment gap that has kept enterprise AI adoption stubbornly below expectations.

    Three things matter from here. First, whether the certified practice groups produce measurably better outcomes than the piecemeal implementations enterprises have been attempting on their own. Second, whether Frontier’s “semantic layer” architecture genuinely simplifies agent deployment or simply adds another platform layer to an already complex stack. And third, whether the consulting firms’ simultaneous alliances with competing AI vendors — OpenAI, Anthropic, Google — create genuine client value or just a more complicated sales cycle.

    For practitioners, the immediate signal is clear: the enterprise AI market is consolidating around a vendor-plus-integrator model. If your organisation is planning an agentic AI deployment, the question is no longer which model to use. It is which combination of platform, integrator, and operating model redesign will actually get agents into production — and keep them there.