Tag: openai

  • The Venture Subsidy Era for AI Is Ending. Enterprise Budgets Are Not Ready.

    For the past three years, enterprises have been building their AI strategies on pricing that does not reflect reality. The era of venture-subsidised AI — where a ChatGPT query costs pennies despite burning roughly ten times the energy of a Google search — is approaching its expiry date. The question is not whether prices will rise. It is whether organisations have budgeted for what comes next.

    The subsidy model, laid bare

    The numbers tell the story clearly enough. OpenAI’s internal projections show $14 billion in losses for 2026, against roughly $13 billion in revenue. Total spending is expected to reach approximately $22 billion this year. Across the 2023–2028 period, the company expects to lose $44 billion before turning cash-flow positive sometime around 2029 or 2030.

    Anthropic’s trajectory looks different but carries the same structural tension. The company hit $19 billion in annualised revenue by March 2026, growing more than tenfold annually. But its gross margins sit at around 40% — a long way from the 77% it needs to justify its $380 billion valuation. That gap has to close, and it will not close through efficiency gains alone.

    Both companies have raised staggering sums to sustain the current pricing. OpenAI’s $110 billion round in February valued it at $730 billion. Anthropic’s $30 billion Series G came from a coalition including GIC, Microsoft, and Nvidia. This is venture capital on a scale that makes the ride-hailing subsidy wars look modest — and, like those wars, it is designed to capture market share before the real pricing arrives.

    The millennial lifestyle subsidy, enterprise edition

    The pattern is familiar. Uber and DoorDash used investor capital to underwrite artificially cheap services, building habits and dependencies before gradually raising prices toward sustainable levels. AI providers are running the same playbook, but the stakes are larger. When Uber raised fares, consumers grumbled and occasionally took the bus. When AI API costs increase threefold — which industry analysts suggest may be the minimum adjustment needed for sustainable economics — enterprises will face a different kind of reckoning.

    The reckoning is already starting at the platform level. Microsoft will raise commercial pricing across its entire 365 suite from July 2026, with increases of 8–17% depending on the tier. The company attributed the rises to AI capabilities such as Copilot Chat being embedded into standard subscriptions. Its new $99-per-user E7 tier bundles Copilot, identity management, and agent orchestration tools — positioning AI not as an optional add-on but as a cost baked into the platform itself.

    The broader enterprise software market is following the same trajectory. Gartner forecasts enterprise software spend rising at least 40% by 2027, with generative AI as the primary accelerant. Average annual SaaS price increases now run at 8–12%, with aggressive movers implementing hikes of 15–25% at renewal.

    The budget gap nobody is discussing

    The disconnect between AI ambition and AI economics is widening. Organisations now spend an average of $7,900 per employee annually on SaaS tools — a 27% increase over two years. AI-native application spend has surged 108% year-on-year, reaching an average of $1.2 million per organisation. And these figures reflect the subsidised era.

    As Axios reported this week, the unusually low cost of many AI services will not survive the transition from venture-funded growth to public-market accountability. As OpenAI and Anthropic pursue potential IPOs, investors will demand the margins that current pricing cannot deliver. Subscription prices and usage-based costs are expected to rise across the industry.

    For enterprises that have been scaling AI adoption on the assumption that current costs are permanent, this represents a planning failure in the making. A consumer application with $2 in AI costs per user per month looks viable. The same application at $10 per user does not. High-volume automation workflows — precisely the use cases enterprises are most excited about — are the most vulnerable to cost increases.

    The pacing argument gains new weight

    This pricing trajectory adds a new dimension to arguments maddaisy has previously explored around pacing AI investment. When Capgemini’s CEO Aiman Ezzat cautioned against getting “too ahead of the learning curve,” his concern was primarily about deploying capabilities ahead of organisational readiness. The pricing question strengthens that case. Organisations that rush to embed AI across every workflow at subsidised rates may find themselves locked into architectures whose economics no longer work when the real costs arrive.

    Similarly, the enterprise scaling gap reported last week — where two-thirds of organisations cannot move AI past pilot stage — takes on a different character when viewed through an economic lens. The skills shortage and governance deficits that constrain scaling today may prove less urgent than the budget constraints that arrive tomorrow. Organisations struggling to scale AI at subsidised prices will find it considerably harder at market rates.

    What prudent organisations should do now

    The adjustment does not need to be dramatic, but it does need to start. Three measures stand out.

    First, stress-test AI budgets against realistic pricing. If API costs tripled tomorrow, which workflows would still deliver positive returns? The answer reveals which AI investments are genuinely valuable and which are artefacts of artificially cheap compute.
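
    As a rough illustration, the stress test can start as a simple spreadsheet exercise. The sketch below does the same in Python; every workflow name and dollar figure is an invented placeholder, not data from any of the companies above.

    ```python
    # Stress-test illustrative AI workflows against a cost multiplier.
    # All figures are invented placeholders; substitute your own estimates.

    WORKFLOWS = {
        # name: (monthly_value_usd, monthly_ai_cost_usd)
        "support-ticket triage":   (40_000, 6_000),
        "marketing copy drafts":   (12_000, 9_000),
        "bulk document summaries": (15_000, 14_000),
    }

    def stress_test(workflows: dict, multiplier: float = 3.0) -> None:
        """Report which workflows stay net-positive if AI costs rise."""
        for name, (value, cost) in workflows.items():
            margin = value - cost * multiplier
            verdict = "still positive" if margin > 0 else "goes underwater"
            print(f"{name}: ${margin:+,.0f}/month at {multiplier:.0f}x cost ({verdict})")

    stress_test(WORKFLOWS)
    ```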

    Second, build multi-provider flexibility into the architecture. Vendor lock-in has always been a risk in enterprise technology. In AI, where pricing models are still evolving and open-source alternatives like Llama and Mistral are improving rapidly, flexibility is not just prudent — it is a hedge against the cost increases that are coming.
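
    In practice, that flexibility means a thin abstraction between application code and any single vendor’s API. The sketch below shows one minimal pattern, with stub adapters standing in for real vendor SDK or open-model calls; the class and function names are illustrative, not taken from any actual library.

    ```python
    from typing import Protocol

    class CompletionProvider(Protocol):
        """Anything that can turn a prompt into a completion."""
        def complete(self, prompt: str) -> str: ...

    # Stub adapters: in a real system each would wrap a vendor SDK or a
    # self-hosted open model behind this same one-method interface.
    class PrimaryProvider:
        def complete(self, prompt: str) -> str:
            raise RuntimeError("primary provider unavailable")  # simulate an outage or price shock

    class FallbackProvider:
        def complete(self, prompt: str) -> str:
            return f"[fallback answer to: {prompt!r}]"

    def complete_with_fallback(prompt: str, providers: list[CompletionProvider]) -> str:
        """Try providers in order of preference (or price) until one succeeds."""
        last_error: Exception | None = None
        for provider in providers:
            try:
                return provider.complete(prompt)
            except Exception as err:
                last_error = err
        raise RuntimeError("all providers failed") from last_error

    print(complete_with_fallback("Summarise Q3 renewals.", [PrimaryProvider(), FallbackProvider()]))
    ```

    The point is not the dozen lines of code. It is that application teams never import a vendor SDK directly, so switching or blending providers becomes a routing decision rather than a rewrite.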

    Third, watch the open-source floor. The existence of capable open models creates a price ceiling that limits how aggressively commercial providers can raise rates. Organisations that invest in the capability to run open models on their own infrastructure — or through commodity inference services — will have negotiating leverage that others will not.
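
    That leverage can be quantified, at least roughly. A back-of-envelope break-even, using invented prices rather than any provider’s actual rates:

    ```python
    # Back-of-envelope break-even for self-hosting an open model.
    # All prices are invented placeholders, not quotes from any provider.

    api_price_per_m_tokens = 8.00          # hosted API, $ per million tokens
    gpu_cost_per_month = 25_000.0          # reserved inference capacity, $/month
    capacity_m_tokens_per_month = 5_000    # million tokens that capacity can serve

    break_even = gpu_cost_per_month / api_price_per_m_tokens
    print(f"Self-hosting pays off above {break_even:,.0f}M tokens/month")

    effective_rate = gpu_cost_per_month / capacity_m_tokens_per_month
    print(f"Effective rate at full utilisation: ${effective_rate:.2f} per million tokens")
    ```

    Even if the numbers never justify switching, knowing your own break-even turns a renewal conversation into a negotiation.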

    The correction, not the crisis

    None of this means AI is overvalued or that enterprise adoption will stall. The technology works. The productivity gains are real. But the current pricing does not reflect the true cost of delivering those gains, and the correction will arrive gradually over the next two to four years as the industry’s largest players transition from growth-at-all-costs to sustainable economics.

    The organisations best positioned for that transition will be those that treated the subsidy era as a window for experimentation — learning which AI applications genuinely transform their operations — rather than a permanent baseline for their technology budgets. The window is closing. The question is whether the planning has already begun.

  • Anthropic’s Pentagon Walkaway and the Price of AI Principles

    When maddaisy examined the Pentagon-Anthropic standoff earlier this month, the focus was on a new category of vendor risk – the possibility that political pressure could turn a trusted AI supplier into a regulatory liability overnight. That analysis ended with a warning: organisations in the defence supply chain needed to start modelling political risk alongside technical capability.

    Nine days later, the story has taken a turn that few risk models would have predicted. The company the US government tried to destroy is winning the consumer market.

    The market’s unexpected verdict

    Claude, Anthropic’s AI assistant, surged to the number one spot on Apple’s App Store in the days following the ban. The #QuitGPT movement saw 2.5 million users cancel ChatGPT subscriptions or publicly pledge to boycott OpenAI. Protesters gathered outside OpenAI’s San Francisco headquarters. Reddit and X filled with migration guides.

    The paradox is striking. Anthropic refused to remove ethical guardrails from its Pentagon contract – specifically, restrictions on mass surveillance and fully autonomous weapons. The government designated it a supply-chain risk, a label previously reserved for foreign adversaries. And the market responded by rewarding Anthropic with the most effective brand-building campaign in AI history – one the company never planned and could not have bought.

    What OpenAI’s ‘win’ actually cost

    OpenAI moved quickly to fill the gap. Sam Altman announced a deal to deploy models on the Pentagon’s classified networks, claiming the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human oversight requirements. The critical difference, as maddaisy noted at the time, is that OpenAI permits “all lawful uses” and relies on existing Pentagon policies rather than contractual restrictions.

    Altman defended the decision in Fortune, arguing that “the absence of responsible AI companies in the military space” would be worse than engagement on imperfect terms. It is a reasonable argument in isolation.

    But the commercial consequences arrived faster than the contracts. OpenAI’s head of robotics resigned over ethical concerns related to the Pentagon deal. Eleven OpenAI employees signed an open letter protesting the government’s treatment of Anthropic – even as their employer stood to benefit from it. OpenAI rushed out GPT-5.4 within 48 hours of GPT-5.3 Instant, in what looked less like a planned release and more like a company trying to change the subject.

    The talent cost may prove more significant than the user exodus. In an industry where the top researchers can choose where they work, being seen as the company that took the contract Anthropic refused is not a recruitment advantage.

    The strategic calculus of principles

    For decades, ethical positioning in technology was treated as a cost centre – a brand exercise that might prevent negative press but would never drive revenue. The Anthropic-Pentagon episode is the first major test of whether that assumption holds in the AI era.

    The early data suggests it does not. Anthropic is being rewarded by consumers and punished by government. OpenAI is being rewarded by government and punished by consumers. Both companies are learning, in real time, that the AI market has bifurcated into constituencies with fundamentally different values – and that serving one may come at the expense of the other.

    This is new territory for the technology industry. Previous ethical standoffs – Apple versus the FBI over iPhone encryption in 2016, Google’s Project Maven withdrawal in 2018 – were significant but did not produce the same kind of mass consumer response. The difference is that AI tools are personal. People interact with ChatGPT and Claude daily, often for sensitive tasks. When those tools become entangled with military use and surveillance, the reaction is visceral in a way that an encryption debate never was.

    The $60 billion question

    None of this means Anthropic’s position is comfortable. The company has received more than $60 billion in total investment from over 200 venture capital funds. If the supply-chain risk designation holds, enterprise clients in the defence ecosystem will be contractually barred from using Anthropic’s products. Anthropic has announced it will challenge the designation in court, but legal processes move slowly and government procurement decisions move fast.

    The Pro-Human Declaration, a framework released by a bipartisan coalition of researchers and former officials, was finalised before the standoff but published in its aftermath. Its signatories include Steve Bannon and Susan Rice – a pairing that underlines just how broadly the anxiety about unchecked AI development has spread. Among its provisions: mandatory pre-deployment testing, prohibitions on fully autonomous weapons, and an outright moratorium on superintelligence development until scientific consensus on safety is established.

    Whether such frameworks gain legislative traction remains an open question. As maddaisy has previously noted, more than 1,000 AI-related bills were introduced across all 50 US states in 2025 alone, while federal action has been conspicuously absent. The Pentagon-Anthropic episode may be the catalyst that changes that – or it may simply add another chapter to the regulatory vacuum.

    What this means for practitioners

    For organisations evaluating AI vendors, the lesson is not that principles are good and pragmatism is bad. It is that ethical positioning has become a material factor in AI vendor strategy – one that affects talent retention, consumer trust, regulatory exposure, and government access in ways that are increasingly difficult to hedge.

    The vendor assessment frameworks that most enterprises use were not designed for this. They evaluate uptime, security certifications, data residency, and pricing. They do not evaluate whether a vendor’s ethical commitments might make it a target of government action, or whether a vendor’s willingness to accept government contracts without restrictions might trigger a consumer backlash that affects product development velocity.

    Both of those scenarios played out in the space of a single week. Any vendor strategy that does not account for them is incomplete.
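
    A sketch of what an extended assessment could look like, with political exposure treated as a first-class dimension alongside the traditional ones. The dimensions, weights, and scores below are invented for illustration, not an established framework.

    ```python
    # Illustrative vendor-risk rubric extended with political exposure.
    # Dimensions, weights, and scores are invented, not an industry standard.

    WEIGHTS = {
        "uptime_sla": 0.20,
        "security_certifications": 0.20,
        "data_residency": 0.15,
        "pricing_stability": 0.15,
        "government_entanglement": 0.15,     # contracts that invite political pressure
        "consumer_backlash_exposure": 0.15,  # brand risk that can slow the roadmap
    }

    def weighted_risk(scores: dict[str, float]) -> float:
        """Combine 0 (safe) to 1 (risky) scores into one weighted figure."""
        return sum(WEIGHTS[dim] * scores.get(dim, 0.5) for dim in WEIGHTS)

    vendor_a = {
        "uptime_sla": 0.1, "security_certifications": 0.2, "data_residency": 0.2,
        "pricing_stability": 0.4, "government_entanglement": 0.9,
        "consumer_backlash_exposure": 0.3,
    }
    print(f"Vendor A weighted risk: {weighted_risk(vendor_a):.2f}")
    ```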

    The AI industry is discovering what the defence, pharmaceutical, and energy industries learned long ago: when your products touch questions of public safety and national security, your commercial strategy and your ethical positioning become the same thing. The companies that figure this out first – not just as a brand exercise, but as a genuine constraint on what they will and will not do – will be the ones that retain the trust of all their constituencies, not just the most powerful one.

    Anthropic’s stand cost it a $200 million contract. It may also have bought something more durable: a market position built on trust at a time when trust in AI companies is in short supply.

  • OpenAI’s Frontier Alliance Is Not Just About Consulting. It Is a Bet Against the Enterprise Software Stack.

    When maddaisy examined OpenAI’s Frontier Alliance in February, the focus was on what it meant for the consulting firms — McKinsey, BCG, Accenture, and Capgemini — and the admission that AI vendors cannot scale enterprise deployments alone. That story was about the consulting industry. This one is about the companies the alliance is quietly aimed at: the enterprise software vendors that have built trillion-dollar businesses on per-seat licensing.

    The per-seat model under pressure

    OpenAI’s Frontier platform, launched in early February, is designed as an enterprise operating layer — a unified system where AI agents can log into applications, execute workflows, and make decisions across an organisation’s entire technology stack. CRM systems, HR platforms, ticketing tools, internal databases. The ambition is not to replace any single application but to sit above all of them.

    The threat to SaaS vendors is structural, not incremental. If AI agents execute the tasks that human employees currently perform inside Salesforce, ServiceNow, or Workday, the justification for per-seat licensing weakens. Fewer human users logging in means fewer seats to sell. And if agents can orchestrate workflows across multiple systems from a single platform, the case for buying specialised point solutions — each with its own subscription — becomes harder to make.

    The market has not waited for proof. Roughly $2 trillion in market value was wiped from technology stocks in a single week over AI displacement concerns. ServiceNow shares fell more than 20% year-to-date by mid-February. IBM suffered its largest single-day decline in 25 years after Anthropic’s Claude demonstrated competence with legacy COBOL systems — the very maintenance work that underpins a significant portion of IBM’s consulting revenue.

    The consulting conduit

    What makes the Frontier Alliance specifically dangerous for SaaS incumbents is not the technology. It is the distribution channel.

    McKinsey, BCG, Accenture, and Capgemini are not just consulting firms. They are the primary implementation partners for the very software companies that Frontier could displace. When a Fortune 500 company deploys Salesforce, it typically hires one of these firms to manage the rollout. When it migrates to ServiceNow’s IT service management platform, the same consulting firms handle the integration. The relationships are deep, multi-year, and built on trust.

    OpenAI has effectively enlisted those relationships as a distribution network. Each of the four firms has established dedicated OpenAI practice groups, certified their teams on Frontier, and committed to multi-year alliances. OpenAI’s own forward-deployed engineers will sit alongside consulting teams in client engagements — a model borrowed from Palantir’s playbook for embedding in enterprise accounts.

    The result is a direct-to-enterprise pipeline that does not need SaaS vendors as intermediaries. A consulting firm advising a client on AI strategy can now recommend Frontier agents that orchestrate existing systems, rather than recommending new SaaS products that require their own implementation projects. The consulting firm earns either way. The SaaS vendor may not.

    SaaS is not dying. But the economics are shifting.

    The counterarguments deserve a hearing. Fortune 500 companies will not abandon decades of enterprise software investment overnight. Compliance requirements, audit trails, data sovereignty obligations, and the sheer operational complexity of large organisations create friction that no AI platform can simply wave away. As one analyst put it, “we are simply not going to see a complete unwinding of the past 50 years of enterprise software development.”

    The incumbents are also adapting. Salesforce has pioneered what it calls the “Agentic Enterprise Licence Agreement” — a fixed-price, consumption-based model designed to decouple revenue from headcount. ServiceNow and Microsoft are shifting toward outcome-based pricing. These moves acknowledge the threat and attempt to neutralise it by changing the unit of value from the human user to the business outcome.

    But adaptation comes at a cost. Per-seat licensing has been the engine of SaaS margins for two decades. Moving to consumption or outcome-based models compresses revenue predictability and margins in the short term, even if it preserves relevance in the long term. The transition is not painless, and investors know it.

    Where this connects

    This development sits at the intersection of several threads maddaisy has been tracking. Capgemini’s CEO, Aiman Ezzat, argued last week that organisations are deploying AI capabilities ahead of their ability to absorb them. He is right — but that does not mean the structural pressure on SaaS pricing will wait for organisations to catch up. The market reprices on expectations, not on deployment maturity.

    And the consulting pyramid piece from yesterday noted that the industry’s base is being reshaped as AI compresses the need for junior analytical work. A similar compression is now visible in enterprise software: the middle layer of the stack — the specialised tools that automate individual workflows — faces pressure from platforms that automate across workflows.

    The question for enterprise software vendors is not whether AI agents will change their businesses. It is whether they can shift their pricing, their value propositions, and their competitive moats fast enough to remain the platform of choice — rather than becoming the legacy infrastructure that sits beneath someone else’s agent layer.

    For practitioners evaluating their own technology stacks, the practical implication is this: the next software audit should not just ask what each tool costs per seat. It should ask what happens to that cost when half the seats belong to agents that do not need a licence.
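
    One way to frame that audit question concretely, with invented figures standing in for a real contract:

    ```python
    # What happens to per-seat spend when agents absorb half the human seats?
    # Figures are invented; the point is the shape of the calculation.

    human_seats = 1_000
    price_per_seat = 150.0           # $/seat/month today

    # Scenario: agents take over half the work. Vendors rarely give up the
    # revenue quietly, so model a defensive per-seat price rise as well.
    remaining_seats = human_seats // 2
    defensive_price = price_per_seat * 1.4
    agent_platform_cost = 30_000.0   # $/month for the orchestration layer

    today = human_seats * price_per_seat
    scenario = remaining_seats * defensive_price + agent_platform_cost
    print(f"Today:    ${today:,.0f}/month")
    print(f"Scenario: ${scenario:,.0f}/month ({(scenario - today) / today:+.0%})")
    ```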

  • The Pentagon-Anthropic Standoff Exposes a New Category of AI Vendor Risk

    When maddaisy examined America’s fragmented AI regulation landscape last week, the focus was on states pulling in different directions while the federal government tried to impose order from above. That piece ended with the observation that organisations face a compliance labyrinth with no clear exit. Five days later, the labyrinth got a new wing — and this one has armed guards.

    On 27 February, President Trump ordered all federal agencies to immediately cease using Anthropic’s AI systems. Hours later, Defence Secretary Pete Hegseth moved to designate the company a “supply-chain risk to national security” — a label previously reserved for foreign adversaries. The same evening, OpenAI CEO Sam Altman announced that his company had struck a deal to deploy its models on the Pentagon’s classified networks.

    The sequence of events was not subtle. An American AI company refused to remove ethical guardrails from its military contract. The government threatened to destroy it. A rival stepped in to take the work. For anyone who manages AI vendor relationships — and that now includes most enterprise technology leaders — the implications are significant and immediate.

    What actually happened

    The conflict had been building for months. Anthropic holds a contract worth up to $200 million with the Pentagon and, through its partnership with Palantir, supplied one of only two frontier AI models cleared for use on classified defence networks. The arrangement worked — until the Pentagon insisted on access to Claude for “all lawful purposes,” without the ethical restrictions Anthropic had built into its acceptable use policy.

    Anthropic’s red lines were specific: no mass domestic surveillance and no fully autonomous weapons without human oversight. CEO Dario Amodei argued that current AI systems “are simply not reliable enough to power fully autonomous weapons” and that “using these systems for mass domestic surveillance is incompatible with democratic values.”

    The Pentagon disagreed, and the situation escalated rapidly. Defence Secretary Hegseth gave Anthropic an ultimatum: agree to the government’s terms by 5:01 p.m. on 27 February or face designation as a supply-chain risk and potential invocation of the Cold War-era Defence Production Act. Anthropic refused. Trump’s ban and Hegseth’s designation followed within hours.

    What makes this more than a contract dispute is the weapon the government chose. As former Trump AI policy adviser Dean Ball wrote, the supply-chain risk designation would cut Anthropic off from hardware and hosting partners — “effectively destroying the company.” Ball, hardly an opponent of the administration, called it “attempted corporate murder.”

    OpenAI steps in — with familiar language and different terms

    OpenAI’s Pentagon deal arrived with careful framing. Altman claimed the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human responsibility for the use of force. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote.

    The critical difference is what “agreement” means in practice. Anthropic sought contractual guarantees — binding restrictions written into the terms of service. OpenAI’s approach permits “all lawful uses” and relies on the Pentagon’s existing policies and legal frameworks rather than company-imposed limitations. As TechCrunch noted, it remains unclear how — or whether — the safety measures in OpenAI’s deal differ substantively from the terms Anthropic rejected.

    When maddaisy covered OpenAI’s Frontier Alliance with McKinsey, BCG, Accenture, and Capgemini last week, the story was about a vendor that needed consulting partners to scale its enterprise business. The Pentagon deal adds a wholly different dimension to OpenAI’s ambitions — and a wholly different category of risk.

    The institutional gap

    The deeper issue is structural. For decades, Pentagon technology contracts were dominated by slow-moving, heavily regulated defence contractors — Raytheon, Lockheed Martin, Northrop Grumman. These companies built institutional muscle for navigating political transitions, managing classified programmes, and absorbing the long-term volatility of government work. They were not exciting. They were durable.

    AI startups are neither slow-moving nor heavily regulated. They operate on venture capital timelines, consumer brand logic, and talent markets where a single ethical controversy can trigger an employee exodus. OpenAI has already seen 11 of its own employees sign an open letter protesting the government’s treatment of Anthropic — even as their employer benefits from it.

    This is the mismatch that matters for practitioners. The AI companies building the tools that enterprises depend on are now also becoming national security infrastructure — but they have none of the institutional frameworks that role demands. They lack the political risk management, the bipartisan relationship-building, and the organisational resilience to weather what comes next.

    The vendor risk no one modelled

    For organisations that have built their AI strategies around Anthropic or OpenAI, the past week introduced a category of risk that does not appear in most vendor assessment frameworks: political risk.

    Anthropic’s designation as a supply-chain risk, if upheld, would prevent any military contractor or supplier from doing business with the company. Given how deeply Anthropic’s Claude is integrated with Palantir’s systems — which are themselves critical Pentagon infrastructure — the practical implications cascade well beyond the original dispute. The CIO analysis of the situation compared it to the FBI-Apple standoff over iPhone encryption in 2016, but noted that the current administration “seems less willing to be patient.”

    For enterprises in the defence supply chain, the risk is direct: continued use of Anthropic’s technology may become a contractual liability. For everyone else, the risk is precedential. If the federal government can threaten to destroy an American company for negotiating contract terms, the calculus changes for every AI vendor evaluating government work — and for every enterprise evaluating those vendors.

    What consultants and technology leaders should watch

    Three things matter in the weeks ahead.

    First, the legal challenge. Anthropic has stated it will contest the supply-chain designation in court. Legal analysts suggest the designation is unlikely to survive judicial scrutiny — it was designed for foreign adversaries, not domestic companies in active contract negotiations. But legal proceedings take time, and the commercial damage from even a temporary designation could be severe.

    Second, the talent signal. AI companies compete fiercely for a small pool of researchers and engineers. The Pentagon standoff has made the ethical positioning of AI labs a hiring issue, not just a branding one. OpenAI’s internal tension — benefiting commercially from the deal while employees publicly protest the government’s tactics — is a dynamic that affects product roadmaps, not just press coverage.

    Third, the precedent for vendor governance. As the Council on Foreign Relations observed, the Anthropic standoff raises a fundamental question about AI sovereignty: can a private firm constrain the government’s use of a decisive military technology, and should it? Whichever way that question resolves, it will reshape the terms on which AI vendors provide their products — to governments and to enterprises alike.

    The irony is hard to miss. Just days after maddaisy reported on America’s fragmented state-level AI regulation, the federal government demonstrated that its own approach to AI governance is no less chaotic — merely higher-stakes. For organisations navigating vendor relationships in this environment, the lesson is uncomfortable: the AI companies they depend on are now players in a political contest they did not design and cannot control.

  • OpenAI’s Frontier Alliance Confirms What Consultants Already Knew: AI Vendors Cannot Scale Alone

    OpenAI announced on 23 February that it had formed multi-year “Frontier Alliances” with McKinsey, Boston Consulting Group, Accenture, and Capgemini. The four firms will help sell, implement, and scale OpenAI’s Frontier platform — an enterprise system for building, deploying, and governing AI agents across an organisation’s technology stack.

    For readers who have been following maddaisy’s coverage of the consulting industry’s AI pivot, this is not a surprise. It is the logical next step in a pattern that has been building for months — and it tells us more about the limits of AI vendors than about the ambitions of consulting firms.

    The vendor cannot scale alone

    The most revealing line in the announcement came from Capgemini’s chief strategy officer, Fernando Alvarez: “If it was a walk in the park, OpenAI would have done it by themselves, so it’s recognition that it takes a village.”

    That candour is worth pausing on. OpenAI’s enterprise business accounts for roughly 40% of revenue, with expectations of reaching 50% by the end of the year. The company has already signed enterprise deals with Snowflake and ServiceNow this year and appointed Barret Zoph to lead enterprise sales. Yet it still needs consulting firms — with their existing client relationships, implementation expertise, and organisational change capabilities — to get its technology into production at scale.

    This is not a story about OpenAI’s generosity in sharing the enterprise market. It is an admission that the gap between a capable AI platform and a working enterprise deployment remains stubbornly wide. As maddaisy reported last week, PwC’s 2026 CEO Survey found that 56% of chief executives still cannot point to measurable revenue gains from their AI investments. The technology is not the bottleneck. Integration, governance, and organisational readiness are.

    A clear division of labour

    The alliance structure reveals how OpenAI sees the enterprise AI value chain. McKinsey and BCG are positioned as strategy and operating model partners — helping leadership teams determine where agents should be deployed and how workflows need to be redesigned. BCG CEO Christoph Schweizer noted that AI must be “linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives.”

    Accenture and Capgemini take the systems integration role: data architecture, cloud infrastructure, security, and the unglamorous work of connecting Frontier to the CRM platforms, HR systems, and internal tools that enterprises actually run on. Each firm is building dedicated practice groups and certifying teams on OpenAI technology. OpenAI’s own forward-deployed engineers will sit alongside them in client engagements.

    This two-tier model — strategy at the top, integration at the bottom — maps neatly onto the consulting industry’s existing hierarchy. It also creates a clear dependency: OpenAI provides the platform, the consultancies provide the last mile.

    The maddaisy continuity thread

    This announcement intersects with several stories maddaisy has been tracking. When we examined McKinsey’s deployment of 25,000 AI agents, the question was whether the firm’s aggressive internal build-out was a first-mover advantage or an expensive experiment. The Frontier Alliance suggests McKinsey is now positioning that internal capability as a credential — evidence that it can deploy agentic AI at scale, which it can now offer to clients through the OpenAI partnership.

    Similarly, when maddaisy covered the shift from billable hours to outcome-based consulting, the question was how firms would make the economics work. Vendor alliances like this provide part of the answer: the consulting firm brings the implementation expertise, the AI vendor provides the platform, and the client pays for outcomes rather than hours. The risk is shared across the chain.

    And Capgemini’s dual bet — adding 82,300 offshore workers while simultaneously investing in AI — now makes more strategic sense. The offshore delivery capacity is precisely what is needed to operationalise Frontier at enterprise scale. The bodies and the bots are not competing; they are complementary.

    The SaaS vendors should be nervous

    As Fortune noted, the Frontier Alliance creates a specific tension for established software-as-a-service vendors. Salesforce, Microsoft, Workday, and ServiceNow all depend on these same consulting firms to market and deploy their products. Now those consultants will also be actively promoting an alternative platform — one that positions itself as a “semantic layer” sitting above the traditional SaaS stack.

    The consulting firms are not choosing sides. They are hedging. Accenture, for instance, signed a multi-year partnership with Anthropic in December 2025 and is now a Frontier Alliance member. The firms will sell whichever platform best fits a given client’s needs, which gives them leverage over the AI vendors rather than the other way around.

    For the SaaS incumbents, however, having McKinsey and BCG actively evangelise an AI-native alternative to C-suite buyers is a development they will not welcome. Investor anxiety in this space is already elevated — shares of several enterprise software companies have been punished over concerns that customers will choose AI-native platforms over traditional offerings.

    What to watch

    The Frontier Alliance is a partnership announcement, not a set of outcomes. The real test is whether this model — AI vendor plus consulting firm — can close the deployment gap that has kept enterprise AI adoption stubbornly below expectations.

    Three things matter from here. First, whether the certified practice groups produce measurably better outcomes than the piecemeal implementations enterprises have been attempting on their own. Second, whether Frontier’s “semantic layer” architecture genuinely simplifies agent deployment or simply adds another platform layer to an already complex stack. And third, whether the consulting firms’ simultaneous alliances with competing AI vendors — OpenAI, Anthropic, Google — create genuine client value or just a more complicated sales cycle.

    For practitioners, the immediate signal is clear: the enterprise AI market is consolidating around a vendor-plus-integrator model. If your organisation is planning an agentic AI deployment, the question is no longer which model to use. It is which combination of platform, integrator, and operating model redesign will actually get agents into production — and keep them there.