Category: AI

  • Brussels Blinked: The EU AI Act’s High-Risk Deadline Just Moved, but the Compliance Clock Has Not Stopped

    When maddaisy examined the shift from AI principles to penalties in February, the EU AI Act’s August 2026 deadline for high-risk AI systems sat at the centre of the analysis. That date — 2 August 2026 — was the moment when compliance stopped being theoretical and started carrying fines of up to seven per cent of global turnover.

    Four weeks later, Brussels blinked.

    On 13 March, the EU Council agreed its position on the Digital Omnibus package, pushing back the application of high-risk AI rules to December 2027 for standalone systems and August 2028 for those embedded in products. The proposal still requires negotiation with the European Parliament, but the direction is clear: the EU’s regulatory infrastructure was not ready for its own deadline.

    What Actually Changed

    The delay is narrower than the headlines suggest. The EU AI Act’s prohibited practices — social scoring, manipulative AI targeting vulnerable groups, unauthorised real-time biometric surveillance — have been in force since February 2025 and remain untouched. Obligations for general-purpose AI model providers, including transparency and copyright requirements, still apply from August 2025. The Code of Practice requiring machine-readable detection techniques for AI-generated content is already published.

    What shifted is specifically the high-risk classification regime: the rules governing AI systems used in employment decisions, credit scoring, healthcare, education, law enforcement, and critical infrastructure. These are the provisions that demand conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. They are also the provisions that most enterprises have been scrambling to prepare for.

    The Council’s rationale is pragmatic rather than political. The European Commission missed its own February 2026 deadline for publishing the guidance and harmonised standards that enterprises need to demonstrate compliance. Without those standards, companies were being asked to hit a target that the regulator had not yet fully defined. As the Cypriot presidency put it, the goal is “greater legal certainty” and “more proportionate” implementation — diplomatic language for acknowledging that the implementation machinery was not keeping pace with the legislative ambition.

    The Compliance Paradox

    For enterprises that have spent the past 18 months building AI governance programmes, risk inventories, and compliance frameworks, the delay creates an awkward question: should they slow down?

    The short answer is no — and the reasoning matters more than the conclusion.

    First, the delay is conditional. The Council’s position sets fixed dates — December 2027 and August 2028 — but the Commission retains the ability to confirm earlier application if standards become available sooner. Organisations that pause their compliance programmes risk finding themselves back under pressure with less runway than they had before.

    Second, the regulatory landscape extends well beyond Brussels. As maddaisy has previously examined, the United States is building its own patchwork of state-level AI laws. Colorado’s AI Act takes effect in June 2026. California’s transparency requirements are already live. The EU delay does not change these timelines. An enterprise operating across both markets still faces near-term obligations.

    Third, and perhaps most importantly, the governance work itself has value beyond regulatory compliance. Organisations that have inventoried their AI systems, established accountability structures, and implemented monitoring processes are better positioned to manage operational risk, regardless of when a specific regulation takes effect. As the Ethyca governance framework notes, the shift from policy documentation to continuous operational evidence is happening independently of any single regulatory deadline.

    What the Delay Reveals

    The Digital Omnibus is not just a timeline adjustment. It is a signal about the structural challenges of regulating AI at the pace the technology is evolving.

    The EU built the world’s most comprehensive AI regulation. It classified systems by risk tier, defined obligations for providers and deployers, established penalties that exceed GDPR maximums, and applied the rules extraterritorially. What it did not build quickly enough was the operational layer: the harmonised standards, the conformity assessment procedures, the guidance documents that translate legal text into practical compliance steps.

    This mirrors a pattern maddaisy has observed across multiple regulatory domains. Europe’s cloud sovereignty push encountered similar friction — ambitious policy goals meeting incomplete implementation frameworks. The gap between legislative intent and operational readiness is becoming a recurring theme in European technology regulation.

    The Council’s position does include substantive additions alongside the delay. A new prohibition on AI-generated non-consensual intimate content and child sexual abuse material was introduced. Regulatory exemptions previously limited to SMEs were extended to small mid-cap companies. The AI Office’s enforcement powers were reinforced. These are not trivial changes — they show that the regulation is still being actively shaped even as its core provisions await full application.

    The ISO 42001 Factor

    One development running parallel to the regulatory delay is the accelerating adoption of ISO/IEC 42001, the international standard for AI management systems. Enterprise buyers are increasingly adding it to vendor procurement requirements, and AI liability insurers are beginning to factor governance certifications into risk assessments.

    For organisations uncertain about how to structure their compliance programmes during the delay, ISO 42001 offers a practical framework. It maps to the EU AI Act’s requirements without being dependent on them, meaning that compliance work done under the standard retains its value regardless of how regulatory timelines shift. Pega’s recent certification is one example of vendors using the standard to demonstrate governance readiness to enterprise clients.

    What Practitioners Should Do Now

    The EU AI Act delay changes timelines, not trajectories. The practical recommendations remain consistent with what maddaisy outlined in February, with one important addition:

    • Continue AI system inventories. Understanding what AI is deployed, where, and at what risk level is foundational work that no regulatory timeline change invalidates.
    • Monitor the Parliament negotiations. The Council position must be reconciled with the European Parliament before becoming final. The dates could shift again — in either direction.
    • Use the extra time for standards alignment. With harmonised standards still being developed, organisations now have an opportunity to align with ISO 42001 or the NIST AI Risk Management Framework before mandatory compliance begins.
    • Do not treat the delay as permission to deprioritise. Colorado, California, and other US state deadlines remain unchanged. Enterprise clients and procurement teams are not waiting for regulators — they are setting their own governance expectations now.

    The EU built the most ambitious AI regulation in the world, then discovered that ambition requires infrastructure. The delay is a concession to reality, not a retreat from intent. For enterprises, the message is straightforward: the destination has not changed, only the speed limit on the road getting there.

  • The Venture Subsidy Era for AI Is Ending. Enterprise Budgets Are Not Ready.

    For the past three years, enterprises have been building their AI strategies on pricing that does not reflect reality. The era of venture-subsidised AI — where a ChatGPT query costs pennies despite burning roughly ten times the energy of a Google search — is approaching its expiry date. The question is not whether prices will rise. It is whether organisations have budgeted for what comes next.

    The subsidy model, laid bare

    The numbers tell the story clearly enough. OpenAI’s own internal projections show $14 billion in losses for 2026, against roughly $13 billion in revenue. Total spending is expected to reach approximately $22 billion this year. Across the 2023–2028 period, the company expects to lose $44 billion before turning cash-flow positive sometime around 2029 or 2030.

    Anthropic’s trajectory looks different but carries the same structural tension. The company hit $19 billion in annualised revenue by March 2026, growing more than tenfold annually. But its gross margins sit at around 40% — a long way from the 77% it needs to justify its $380 billion valuation. That gap has to close, and it will not close through efficiency gains alone.

    Both companies have raised staggering sums to sustain the current pricing. OpenAI’s $110 billion round in February valued it at $730 billion. Anthropic’s $30 billion Series G came from a coalition including GIC, Microsoft, and Nvidia. This is venture capital on a scale that makes the ride-hailing subsidy wars look modest — and, like those wars, it is designed to capture market share before the real pricing arrives.

    The millennial lifestyle subsidy, enterprise edition

    The pattern is familiar. Uber and DoorDash used investor capital to underwrite artificially cheap services, building habits and dependencies before gradually raising prices toward sustainable levels. AI providers are running the same playbook, but the stakes are larger. When Uber raised fares, consumers grumbled and occasionally took the bus. When AI API costs increase threefold — which industry analysts suggest may be the minimum adjustment needed for sustainable economics — enterprises will face a different kind of reckoning.

    The reckoning is already starting at the platform level. Microsoft will raise commercial pricing across its entire 365 suite from July 2026, with increases of 8–17% depending on the tier. The company attributed the rises to AI capabilities such as Copilot Chat being embedded into standard subscriptions. Its new $99-per-user E7 tier bundles Copilot, identity management, and agent orchestration tools — positioning AI not as an optional add-on but as a cost baked into the platform itself.

    The broader enterprise software market is following the same trajectory. Gartner forecasts enterprise software spend rising at least 40% by 2027, with generative AI as the primary accelerant. Average annual SaaS price increases now range from 8–12%, with aggressive movers implementing hikes of 15–25% at renewal.

    The budget gap nobody is discussing

    The disconnect between AI ambition and AI economics is widening. Organisations now spend an average of $7,900 per employee annually on SaaS tools — a 27% increase over two years. AI-native application spend has surged 108% year-on-year, reaching an average of $1.2 million per organisation. And these figures reflect the subsidised era.

    As Axios reported this week, the unusually low cost of many AI services will not survive the transition from venture-funded growth to public-market accountability. As OpenAI and Anthropic pursue potential IPOs, investors will demand the margins that current pricing cannot deliver. Subscription prices and usage-based costs are expected to rise across the industry.

    For enterprises that have been scaling AI adoption on the assumption that current costs are permanent, this represents a planning failure in the making. A consumer application with $2 in AI costs per user per month looks viable. The same application at $10 per user does not. High-volume automation workflows — precisely the use cases enterprises are most excited about — are the most vulnerable to cost increases.

    The pacing argument gains new weight

    This pricing trajectory adds a new dimension to arguments maddaisy has previously explored around pacing AI investment. When Capgemini’s CEO Aiman Ezzat cautioned against getting “too ahead of the learning curve,” his concern was primarily about deploying capabilities ahead of organisational readiness. The pricing question strengthens that case. Organisations that rush to embed AI across every workflow at subsidised rates may find themselves locked into architectures whose economics no longer work when the real costs arrive.

    Similarly, the enterprise scaling gap reported last week — where two-thirds of organisations cannot move AI past pilot stage — takes on a different character when viewed through an economic lens. The skills shortage and governance deficits that constrain scaling today may prove less urgent than the budget constraints that arrive tomorrow. Organisations struggling to scale AI at subsidised prices will find it considerably harder at market rates.

    What prudent organisations should do now

    The adjustment does not need to be dramatic, but it does need to start. Three measures stand out.

    First, stress-test AI budgets against realistic pricing. If API costs tripled tomorrow, which workflows would still deliver positive returns? The answer reveals which AI investments are genuinely valuable and which are artefacts of artificially cheap compute.
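    A back-of-the-envelope model is usually enough to start. The sketch below uses purely illustrative per-task figures and workflow names to show the kind of sensitivity check worth running across a portfolio of AI use cases; none of the numbers are benchmarks.

```python
# Minimal sketch: stress-test workflow unit economics against hypothetical API price rises.
# All figures and workflow names are illustrative placeholders, not benchmarks.

def workflow_margin(value_per_task: float, api_cost_per_task: float,
                    other_cost_per_task: float, price_multiplier: float) -> float:
    """Net value per task after applying a hypothetical API price multiplier."""
    return value_per_task - (api_cost_per_task * price_multiplier) - other_cost_per_task

workflows = {
    "support_triage":  {"value": 1.20, "api": 0.15, "other": 0.40},
    "report_drafting": {"value": 3.00, "api": 0.90, "other": 0.80},
    "bulk_enrichment": {"value": 0.25, "api": 0.10, "other": 0.05},
}

for name, w in workflows.items():
    for multiplier in (1.0, 2.0, 3.0):  # today's price, doubled, tripled
        margin = workflow_margin(w["value"], w["api"], w["other"], multiplier)
        status = "viable" if margin > 0 else "under water"
        print(f"{name:>16} x{multiplier:.0f}: {margin:+.2f} per task ({status})")
```

    Workflows whose margin survives a tripling are the ones worth scaling now; the rest are artefacts of subsidised pricing.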

    Second, build multi-provider flexibility into the architecture. Vendor lock-in has always been a risk in enterprise technology. In AI, where pricing models are still evolving and open-source alternatives like Llama and Mistral are improving rapidly, flexibility is not just prudent — it is a hedge against the cost increases that are coming.
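    In practice, flexibility often starts with routing every model call through a narrow internal interface so that swapping providers is a configuration change rather than a rewrite. The sketch below is a hypothetical abstraction layer; the class names are placeholders and the vendor calls are deliberately left as stubs rather than real SDK invocations.

```python
# Illustrative provider abstraction; adapter classes are hypothetical placeholders,
# not real vendor SDK calls.
from abc import ABC, abstractmethod


class CompletionProvider(ABC):
    """Narrow interface the rest of the codebase depends on."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class HostedAPIProvider(CompletionProvider):
    """Would wrap a commercial API client behind the shared interface."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Placeholder: call the vendor SDK here and return its text output.
        raise NotImplementedError("wire up the vendor client")


class SelfHostedProvider(CompletionProvider):
    """Would wrap an open model served on internal or commodity infrastructure."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Placeholder: call the internal inference endpoint here.
        raise NotImplementedError("wire up the internal endpoint")


def summarise(provider: CompletionProvider, text: str) -> str:
    # Application code only sees the interface, so changing providers is a config decision.
    return provider.complete(f"Summarise in two sentences:\n{text}")
```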

    Third, watch the open-source floor. Capable open models establish a baseline of capability available at commodity cost, and that baseline limits how aggressively commercial providers can raise rates. Organisations that invest in the capability to run open models on their own infrastructure — or through commodity inference services — will have negotiating leverage that others will not.

    The correction, not the crisis

    None of this means AI is overvalued or that enterprise adoption will stall. The technology works. The productivity gains are real. But the current pricing does not reflect the true cost of delivering those gains, and the correction will arrive gradually over the next two to four years as the industry’s largest players transition from growth-at-all-costs to sustainable economics.

    The organisations best positioned for that transition will be those that treated the subsidy era as a window for experimentation — learning which AI applications genuinely transform their operations — rather than a permanent baseline for their technology budgets. The window is closing. The question is whether the planning has already begun.

  • The Rules for Public Data Are Quietly Reshaping Who Gets to Build AI

    The question of who gets to train AI on publicly available data is quietly becoming one of the most consequential regulatory battles in technology. A new report from the Information Technology and Innovation Foundation (ITIF), published this week, lays out the stakes clearly: the jurisdictions that permit responsible access to public web data will lead in AI development, while those that restrict it risk falling permanently behind.

    This is not abstract. In 2025, US-based organisations produced 40 notable foundation models. China produced 15. The European Union managed three. The gap is driven by many factors — investment, talent, compute infrastructure — but the rules governing access to training data are an increasingly significant one.

    The Transatlantic Divide

    The US and EU have taken fundamentally different approaches to how publicly available data can be used for AI training.

    The US operates what the ITIF describes as a “gates up” framework. Publicly accessible web data is generally available for automated collection unless a site owner implements technical barriers — robots.txt files, authentication walls, or rate-limiting mechanisms. This permissive posture has given American AI labs broad access to the digital commons as training material.

    The EU, by contrast, applies GDPR protections to personal data regardless of whether it appears on a public website. Even a name and job title scraped from a company’s “About Us” page may require a lawful processing basis under European law. The EU AI Act adds a further layer: Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, and rights holders can opt out of their content being used. The European Commission’s November 2025 Digital Omnibus proposal aims to simplify some of this regulatory burden, but the fundamental constraints on data use remain.

    The result is that AI development gravitates toward more permissive jurisdictions. This is not a theoretical concern — it is visible in where companies locate their model training infrastructure and where they hire.

    The US Is Not Unified Either

    As maddaisy examined in February, the United States has its own regulatory fragmentation problem. California’s AB 2013, which took effect on 1 January 2026, requires developers of publicly available generative AI systems to disclose detailed information about their training data — including the sources, whether the data contains copyrighted material, whether it includes personal information, and when it was collected. That transparency obligation applies retrospectively, meaning developers must document historical training practices.

    Colorado’s AI Act addresses the deployment side, with impact assessments and discrimination safeguards for high-risk systems due to take effect in June 2026. Illinois, New York City, and Texas each have their own targeted requirements.

    The federal government wants to consolidate this into a single framework, but as maddaisy noted when AI governance entered its enforcement era, the White House’s December 2025 executive order is a statement of intent, not a statute. State laws remain in force, and the compliance burden is cumulative.

    Technical Governance Is Filling the Gap

    Where regulation is fragmented or slow, technical standards are emerging to manage access to public data for AI training. The ITIF report identifies several mechanisms that are gaining traction:

    Machine-readable opt-out signals extend beyond the familiar robots.txt protocol. New standards like LLMs.txt allow website operators to provide curated, machine-readable summaries of their content specifically for AI systems — a more nuanced approach than a binary allow/block decision.
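    These signals only matter if crawlers check them. As a minimal illustration of the baseline robots.txt layer (LLMs.txt tooling is newer and less standardised), the sketch below uses Python's standard urllib.robotparser; the crawler name and URLs are placeholders.

```python
# Minimal sketch: honour robots.txt before fetching a page for AI training or retrieval.
# The crawler identity and URLs are illustrative placeholders.
import urllib.robotparser

AI_CRAWLER_UA = "ExampleAIBot"  # hypothetical crawler user agent

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's opt-out directives

url = "https://example.com/articles/some-post"
if rp.can_fetch(AI_CRAWLER_UA, url):
    print("Fetching permitted for", AI_CRAWLER_UA)
else:
    print("Site owner has opted this path out for", AI_CRAWLER_UA)
```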

    Cryptographic bot authentication using HTTP message signatures allows site operators to verify the identity of AI crawlers and grant or restrict access based on who is asking, not just what they are requesting.

    Automated licensing frameworks are experimenting with HTTP 402 (“Payment Required”) signals, creating the technical infrastructure for content owners to set terms for AI training use — including compensation.

    PII filtering tools such as Microsoft’s open-source Presidio project allow developers to detect and remove sensitive personal information during data preparation, addressing privacy concerns at the technical rather than legal level.
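    As a rough illustration of that workflow, the sketch below runs text through Presidio's analyzer and anonymizer before it would enter a training set. It assumes the presidio-analyzer and presidio-anonymizer packages, plus a supported English NLP model, are installed; the input record is invented.

```python
# Minimal sketch using Microsoft's open-source Presidio to scrub PII during data
# preparation; assumes presidio-analyzer, presidio-anonymizer, and an English NLP
# model are installed. The input text is illustrative.
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

record = "Contact Jane Doe at jane.doe@example.com or +44 20 7946 0958."

# Detect likely PII spans (names, email addresses, phone numbers, ...).
findings = analyzer.analyze(text=record, language="en")

# Replace detected spans with entity-type placeholders before the text enters a dataset.
cleaned = anonymizer.anonymize(text=record, analyzer_results=findings)
print(cleaned.text)
```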

    These mechanisms are not yet standardised or universally adopted. But they point toward a model where access to public data is governed by a combination of technical protocols and market-based agreements, rather than solely by regulation.

    The Agentic Wrinkle

    The data access question becomes more complex as AI systems shift from static model training to live, agentic operations. When maddaisy examined the governance challenges for AI agents earlier this week, the focus was on operational controls — monitoring, auditing, and accountability chains. The ITIF report adds a further dimension: data that is technically accessible (visible through a browser or available via API) is not necessarily intended for AI consumption.

    Consider an AI agent authorised to access a company’s customer relationship management system. The data it encounters is not public, but it is available to the agent through delegated credentials. Current regulatory frameworks are largely silent on this category of “private-but-available” data, and the risks compound when agents combine information from multiple sources to surface connections that no individual source intended to reveal.

    What Practitioners Should Watch

    The ITIF report recommends that policymakers focus on three priorities: regulating AI outputs rather than training inputs, encouraging transparency norms for AI agents, and creating safe harbour protections for developers who respect machine-readable opt-out signals and filter sensitive data.

    For consultants and practitioners advising organisations on AI strategy, the practical implications are more immediate. Enterprises deploying AI — whether training proprietary models, fine-tuning foundation models, or deploying agentic systems — need to map their data supply chain with the same rigour they apply to physical procurement. That means understanding where training data originates, what rights framework governs its use, whether it contains personal information subject to GDPR or state privacy laws, and whether the technical mechanisms exist to honour opt-out requests.
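    One lightweight starting point is a provenance record per dataset. The sketch below is illustrative only: the field names, example values, and review rule are hypothetical, not a standard schema.

```python
# Illustrative training-data provenance record; fields and values are hypothetical.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DatasetProvenance:
    source: str                       # where the data originates
    collected_on: date                # when it was collected
    licence: str                      # rights framework governing its use
    contains_personal_data: bool      # triggers GDPR / state privacy law review
    opt_out_mechanism_honoured: bool  # robots.txt, LLMs.txt, or contractual opt-outs
    notes: list[str] = field(default_factory=list)


catalogue = [
    DatasetProvenance(
        source="https://example.com/public-articles",
        collected_on=date(2026, 1, 15),
        licence="site terms permit crawling; no AI-specific opt-out present",
        contains_personal_data=True,
        opt_out_mechanism_honoured=True,
        notes=["author names retained; review lawful basis before training use"],
    ),
]

# Flag entries that need legal review before they reach a training pipeline.
needs_review = [
    d for d in catalogue
    if d.contains_personal_data or not d.opt_out_mechanism_honoured
]
print(len(needs_review), "dataset(s) flagged for review")
```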

    The organisations that treat training data governance as a compliance afterthought will find themselves exposed — not just to regulatory penalties, but to reputational risk and potential litigation. Those that build responsible data practices into their AI development lifecycle will have a genuine competitive advantage, particularly as transparency requirements tighten across jurisdictions.

    The rules for public data are not a peripheral regulatory detail. They are becoming one of the defining factors in who builds the next generation of AI systems, and where.

  • The Governance Frameworks for AI Agents Exist. The Hard Part Is Making Them Work.

    The governance playbook for autonomous AI agents is no longer a blank page. Regulatory bodies have published frameworks. Law firms have issued guidance. Industry coalitions have identified priorities. The principles – least privilege, human checkpoints, real-time monitoring, value-chain accountability – are converging across jurisdictions. And yet, Gartner predicts that 40 per cent of agentic AI projects will be cancelled by the end of 2027, citing escalating costs, unclear business value, and inadequate risk controls.

    The problem is not that enterprises lack governance policies. It is that they lack governance infrastructure – the operational machinery to translate principles into practice across live, autonomous systems operating at scale.

    When maddaisy examined the emerging governance playbook last week, the direction of travel was clear: regulators and advisors were converging on what good governance should look like. The question that follows is more difficult. What does it take to actually run that governance, day after day, across agents that plan, execute, and adapt autonomously?

    The Gap Between Policy and Operations

    The most revealing data point in recent weeks comes not from a governance report but from Logicalis’s 2026 CIO Report. Among 1,000 chief information officers surveyed globally, 89 per cent described their AI governance approach as “learning as we go.” That is not experimentation. That is the absence of operational governance.

    The skills gap compounds the problem. Nearly nine in 10 organisations cite a lack of internal technical capability as their primary constraint on AI deployment. For governance specifically, the deficit is acute. Monitoring agent behaviour in production, auditing multi-step reasoning chains, and interpreting regulatory requirements across jurisdictions all demand expertise that most enterprises have not yet hired for – and in many cases, cannot find.

    A PwC survey found that 79 per cent of companies have adopted agents in some capacity. But when enterprise search firm Lucidworks assessed over 1,100 organisations, only 6 per cent had deployed more than one agentic solution. The implication is significant: most enterprises are governing a single, contained pilot. The governance challenge changes materially when agents multiply, interact, and share data across business functions.

    Regulations Are Arriving – Unevenly

    The regulatory landscape is not waiting for enterprises to catch up. The EU AI Act’s obligations on general-purpose AI models have applied since August 2025, with its high-risk requirements following from August 2026, both applying globally to any organisation whose systems affect EU residents. In the United States, the picture is more fragmented. President Trump’s December 2025 Executive Order signalled federal intent to consolidate AI oversight, but as legal analysis from Gunderson Dettmer makes clear, it does not preempt existing state laws.

    California, Colorado, and Texas have each enacted comprehensive AI governance statutes with distinct requirements for high-risk systems. New York’s RAISE Act imposes transparency obligations that do not apply elsewhere. For multinational enterprises deploying autonomous agents, the compliance surface is not one framework – it is dozens, with different definitions of high-risk, different disclosure requirements, and different enforcement timelines.

    This is where governance-as-policy meets governance-as-operations. A well-crafted internal policy cannot resolve the question of whether an agent deployed in London, which processes data from a New York customer and executes a transaction through a Singapore-based system, complies with three different regulatory regimes simultaneously. That requires technical infrastructure: jurisdictional routing, dynamic compliance rules, and audit trails that satisfy multiple authorities.
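    A simplified sketch of what jurisdictional routing can look like in code follows. The jurisdiction codes and rule set are invented placeholders intended to show the shape of the problem, not legal guidance.

```python
# Illustrative jurisdiction-aware rule routing for an agent action; the rule set
# and jurisdiction codes are simplified placeholders, not legal advice.
from dataclasses import dataclass


@dataclass
class AgentAction:
    description: str
    data_subject_jurisdictions: set[str]  # where the affected customers' data is protected
    execution_jurisdiction: str           # where the transaction is executed


# Hypothetical per-jurisdiction controls the platform enforces before execution.
RULES = {
    "EU": {"requires_human_review", "log_conformity_evidence"},
    "US-NY": {"log_disclosure_notice"},
    "SG": {"log_cross_border_transfer"},
}


def required_controls(action: AgentAction) -> set[str]:
    controls: set[str] = set()
    for jurisdiction in action.data_subject_jurisdictions | {action.execution_jurisdiction}:
        controls |= RULES.get(jurisdiction, set())
    return controls


action = AgentAction(
    description="execute refund for customer",
    data_subject_jurisdictions={"EU", "US-NY"},
    execution_jurisdiction="SG",
)
print(sorted(required_controls(action)))
```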

    What Operational Governance Actually Requires

    Several CIOs interviewed by CIO.com this month offered a consistent message: governance cannot be separated from workflow design.

    Don Schuerman, CTO at Pega, put it directly: the expectation that thousands of agents can be deployed randomly across a business and left to operate is a myth. Successful deployments anchor agents in well-defined business processes with prescribed steps, high predictability, and clear audit requirements. The governance is not a layer added afterwards – it is embedded in how the agent’s workflow is designed.

    IBM CIO Matt Lyteson echoed the point, stressing that organisations need to understand the outcomes they are targeting, the data agents will require, and the controls needed to manage them before deployment – not after. Salesforce CIO Dan Shmitt added that without high-quality data and a unified governance model, agents produce unreliable results regardless of the policy framework around them.

    The emerging consensus among practitioners, distinct from the framework-level guidance, centres on three operational requirements.

    First, governance must be embedded in agent design, not bolted on. Decision boundaries, escalation rules, and compliance checks need to be part of the agent’s workflow architecture. Retrofitting governance onto an agent already in production is significantly harder and more expensive.
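    As a rough illustration, the sketch below builds a decision boundary, an escalation rule, and audit logging directly into the execution path; the threshold and step names are hypothetical.

```python
# Minimal sketch of a compliance checkpoint embedded in an agent workflow rather than
# bolted on afterwards; thresholds and step names are illustrative.
from dataclasses import dataclass


@dataclass
class ProposedStep:
    name: str
    monetary_value: float
    reversible: bool


ESCALATION_VALUE = 10_000.0  # hypothetical decision boundary


def execute_with_governance(step: ProposedStep, audit_log: list[dict]) -> str:
    # Every decision is recorded before it is acted on, so the audit trail is complete
    # even when the step is blocked or escalated.
    entry = {"step": step.name, "value": step.monetary_value, "reversible": step.reversible}

    if step.monetary_value > ESCALATION_VALUE or not step.reversible:
        entry["outcome"] = "escalated_to_human"
        audit_log.append(entry)
        return "pending human approval"

    entry["outcome"] = "executed_autonomously"
    audit_log.append(entry)
    return "executed"


log: list[dict] = []
print(execute_with_governance(ProposedStep("issue credit note", 250.0, reversible=True), log))
print(execute_with_governance(ProposedStep("wire payment", 25_000.0, reversible=False), log))
```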

    Second, observability infrastructure is non-negotiable. As maddaisy has previously reported on agentic drift, agents that pass review at launch can behave differently months later. Continuous monitoring of reasoning chains, action sequences, and decision outcomes is the minimum viable governance stack – not periodic audits.

    Third, governance requires dedicated roles, not committees. The Mayer Brown framework identified four governance functions: policy-setters, product teams, cybersecurity integration, and frontline escalation. Most enterprises have distributed these responsibilities informally. As agents scale beyond pilot stage, informal arrangements become liabilities.

    The Trajectory Ahead

    The governance conversation has moved faster than most observers expected. Twelve months ago, agentic AI governance was a theoretical concern. Today, it has dedicated regulatory guidance, published legal frameworks, and named positions on practitioners’ organisational charts. That is genuine progress.

    But the distance between knowing what governance should look like and operating it reliably is where the next phase of difficulty lies. The 40 per cent cancellation rate Gartner projects is not primarily a technology failure – it is a governance and operational maturity failure. The organisations that succeed with autonomous agents will not be those with the most sophisticated AI models. They will be the ones that built the operational infrastructure to govern them before they scaled.

    For consultants advising enterprise clients on agentic AI, the message has shifted. The question is no longer whether governance frameworks exist. It is whether the organisation has the skills, tooling, and organisational design to make those frameworks operational. That is a harder conversation, but it is now the one that matters.

  • OpenAI’s Frontier Alliance Is Not Just About Consulting. It Is a Bet Against the Enterprise Software Stack.

    When maddaisy examined OpenAI’s Frontier Alliance in February, the focus was on what it meant for the consulting firms — McKinsey, BCG, Accenture, and Capgemini — and the admission that AI vendors cannot scale enterprise deployments alone. That story was about the consulting industry. This one is about the companies the alliance is quietly aimed at: the enterprise software vendors that have built trillion-dollar businesses on per-seat licensing.

    The per-seat model under pressure

    OpenAI’s Frontier platform, launched in early February, is designed as an enterprise operating layer — a unified system where AI agents can log into applications, execute workflows, and make decisions across an organisation’s entire technology stack. CRM systems, HR platforms, ticketing tools, internal databases. The ambition is not to replace any single application but to sit above all of them.

    The threat to SaaS vendors is structural, not incremental. If AI agents execute the tasks that human employees currently perform inside Salesforce, ServiceNow, or Workday, the justification for per-seat licensing weakens. Fewer human users logging in means fewer seats to sell. And if agents can orchestrate workflows across multiple systems from a single platform, the case for buying specialised point solutions — each with its own subscription — becomes harder to make.

    The market has not waited for proof. Investors wiped roughly $2 trillion in market value from technology stocks in a single week over AI displacement concerns. ServiceNow shares fell more than 20% year-to-date by mid-February. IBM suffered its largest single-day decline in 25 years after Anthropic’s Claude demonstrated competency with legacy COBOL systems — the very maintenance work that underpins a significant portion of IBM’s consulting revenue.

    The consulting conduit

    What makes the Frontier Alliance specifically dangerous for SaaS incumbents is not the technology. It is the distribution channel.

    McKinsey, BCG, Accenture, and Capgemini are not just consulting firms. They are the primary implementation partners for the very software companies that Frontier could displace. When a Fortune 500 company deploys Salesforce, it typically hires one of these firms to manage the rollout. When it migrates to ServiceNow’s IT service management platform, the same consulting firms handle the integration. The relationships are deep, multi-year, and built on trust.

    OpenAI has effectively enlisted those relationships as a distribution network. Each of the four firms has established dedicated OpenAI practice groups, certified their teams on Frontier, and committed to multi-year alliances. OpenAI’s own forward-deployed engineers will sit alongside consulting teams in client engagements — a model borrowed from Palantir’s playbook for embedding in enterprise accounts.

    The result is a direct-to-enterprise pipeline that does not need SaaS vendors as intermediaries. A consulting firm advising a client on AI strategy can now recommend Frontier agents that orchestrate existing systems, rather than recommending new SaaS products that require their own implementation projects. The consulting firm earns either way. The SaaS vendor may not.

    SaaS is not dying. But the economics are shifting.

    The counterarguments deserve a hearing. Fortune 500 companies will not abandon decades of enterprise software investment overnight. Compliance requirements, audit trails, data sovereignty obligations, and the sheer operational complexity of large organisations create friction that no AI platform can simply wave away. As one analyst put it, “we are simply not going to see a complete unwinding of the past 50 years of enterprise software development.”

    The incumbents are also adapting. Salesforce has pioneered what it calls the “Agentic Enterprise Licence Agreement” — a fixed-price, consumption-based model designed to decouple revenue from headcount. ServiceNow and Microsoft are shifting toward outcome-based pricing. These moves acknowledge the threat and attempt to neutralise it by changing the unit of value from the human user to the business outcome.

    But adaptation comes at a cost. Per-seat licensing has been the engine of SaaS margins for two decades. Moving to consumption or outcome-based models compresses revenue predictability and margins in the short term, even if it preserves relevance in the long term. The transition is not painless, and investors know it.

    Where this connects

    This development sits at the intersection of several threads maddaisy has been tracking. Capgemini’s CEO, Aiman Ezzat, argued last week that organisations are deploying AI capabilities ahead of their ability to absorb them. He is right — but that does not mean the structural pressure on SaaS pricing will wait for organisations to catch up. The market reprices on expectations, not on deployment maturity.

    And the consulting pyramid piece from yesterday noted that the industry’s base is being reshaped as AI compresses the need for junior analytical work. A similar compression is now visible in enterprise software: the middle layer of the stack — the specialised tools that automate individual workflows — faces pressure from platforms that automate across workflows.

    The question for enterprise software vendors is not whether AI agents will change their businesses. It is whether they can shift their pricing, their value propositions, and their competitive moats fast enough to remain the platform of choice — rather than becoming the legacy infrastructure that sits beneath someone else’s agent layer.

    For practitioners evaluating their own technology stacks, the practical implication is this: the next software audit should not just ask what each tool costs per seat. It should ask what happens to that cost when half the seats belong to agents that do not need a licence.

  • The Pentagon-Anthropic Standoff Exposes a New Category of AI Vendor Risk

    When maddaisy examined America’s fragmented AI regulation landscape last week, the focus was on states pulling in different directions while the federal government tried to impose order from above. That piece ended with the observation that organisations face a compliance labyrinth with no clear exit. Five days later, the labyrinth got a new wing — and this one has armed guards.

    On 27 February, President Trump ordered all federal agencies to immediately cease using Anthropic’s AI systems. Hours later, Defence Secretary Pete Hegseth moved to designate the company a “supply-chain risk to national security” — a label previously reserved for foreign adversaries. The same evening, OpenAI CEO Sam Altman announced that his company had struck a deal to deploy its models on the Pentagon’s classified networks.

    The sequence of events was not subtle. An American AI company refused to remove ethical guardrails from its military contract. The government threatened to destroy it. A rival stepped in to take the work. For anyone who manages AI vendor relationships — and that now includes most enterprise technology leaders — the implications are significant and immediate.

    What actually happened

    The conflict had been building for months. Anthropic holds a contract worth up to $200 million with the Pentagon and, through its partnership with Palantir, was one of only two frontier AI models cleared for use on classified defence networks. The arrangement worked — until the Pentagon insisted on access to Claude for “all lawful purposes,” without the ethical restrictions Anthropic had built into its acceptable use policy.

    Anthropic’s red lines were specific: no mass domestic surveillance and no fully autonomous weapons without human oversight. CEO Dario Amodei argued that current AI systems “are simply not reliable enough to power fully autonomous weapons” and that “using these systems for mass domestic surveillance is incompatible with democratic values.”

    The Pentagon disagreed, and the situation escalated rapidly. Defence Secretary Hegseth gave Anthropic an ultimatum: agree to the government’s terms by 5:01 p.m. on 27 February or face designation as a supply-chain risk and potential invocation of the Cold War-era Defence Production Act. Anthropic refused. Trump’s ban and Hegseth’s designation followed within hours.

    What makes this more than a contract dispute is the weapon the government chose. As former Trump AI policy adviser Dean Ball wrote, the supply-chain risk designation would cut Anthropic off from hardware and hosting partners — “effectively destroying the company.” Ball, hardly an opponent of the administration, called it “attempted corporate murder.”

    OpenAI steps in — with familiar language and different terms

    OpenAI’s Pentagon deal arrived with careful framing. Altman claimed the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human responsibility for the use of force. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote.

    The critical difference is what “agreement” means in practice. Anthropic sought contractual guarantees — binding restrictions written into the terms of service. OpenAI’s approach permits “all lawful uses” and relies on the Pentagon’s existing policies and legal frameworks rather than company-imposed limitations. As TechCrunch noted, it remains unclear how — or whether — the safety measures in OpenAI’s deal differ substantively from the terms Anthropic rejected.

    When maddaisy covered OpenAI’s Frontier Alliance with McKinsey, BCG, Accenture, and Capgemini last week, the story was about a vendor that needed consulting partners to scale its enterprise business. The Pentagon deal adds a wholly different dimension to OpenAI’s ambitions — and a wholly different category of risk.

    The institutional gap

    The deeper issue is structural. For decades, Pentagon technology contracts were dominated by slow-moving, heavily regulated defence contractors — Raytheon, Lockheed Martin, Northrop Grumman. These companies built institutional muscle for navigating political transitions, managing classified programmes, and absorbing the long-term volatility of government work. They were not exciting. They were durable.

    AI startups are neither slow-moving nor heavily regulated. They operate on venture capital timelines, consumer brand logic, and talent markets where a single ethical controversy can trigger an employee exodus. OpenAI has already seen 11 of its own employees sign an open letter protesting the government’s treatment of Anthropic — even as their employer benefits from it.

    This is the mismatch that matters for practitioners. The AI companies building the tools that enterprises depend on are now also becoming national security infrastructure — but they have none of the institutional frameworks that role demands. They lack the political risk management, the bipartisan relationship-building, and the organisational resilience to weather what comes next.

    The vendor risk no one modelled

    For organisations that have built their AI strategies around Anthropic or OpenAI, the past week introduced a category of risk that does not appear in most vendor assessment frameworks: political risk.

    Anthropic’s designation as a supply-chain risk, if upheld, would prevent any military contractor or supplier from doing business with the company. Given how deeply Anthropic’s Claude is integrated with Palantir’s systems — which are themselves critical Pentagon infrastructure — the practical implications cascade well beyond the original dispute. The CIO analysis of the situation compared it to the FBI-Apple standoff over iPhone encryption in 2015, but noted that the current administration “seems less willing to be patient.”

    For enterprises in the defence supply chain, the risk is direct: continued use of Anthropic’s technology may become a contractual liability. For everyone else, the risk is precedential. If the federal government can threaten to destroy an American company for negotiating contract terms, the calculus changes for every AI vendor evaluating government work — and for every enterprise evaluating those vendors.

    What consultants and technology leaders should watch

    Three things matter in the weeks ahead.

    First, the legal challenge. Anthropic has stated it will contest the supply-chain designation in court. Legal analysts suggest the designation is unlikely to survive judicial scrutiny — it was designed for foreign adversaries, not domestic companies in active contract negotiations. But legal proceedings take time, and the commercial damage from even a temporary designation could be severe.

    Second, the talent signal. AI companies compete fiercely for a small pool of researchers and engineers. The Pentagon standoff has made the ethical positioning of AI labs a hiring issue, not just a branding one. OpenAI’s internal tension — benefiting commercially from the deal while employees publicly protest the government’s tactics — is a dynamic that affects product roadmaps, not just press coverage.

    Third, the precedent for vendor governance. As the Council on Foreign Relations observed, the Anthropic standoff raises a fundamental question about AI sovereignty: can a private firm constrain the government’s use of a decisive military technology, and should it? Whichever way that question resolves, it will reshape the terms on which AI vendors provide their products — to governments and to enterprises alike.

    The irony is hard to miss. Just days after maddaisy reported on America’s fragmented state-level AI regulation, the federal government demonstrated that its own approach to AI governance is no less chaotic — merely higher-stakes. For organisations navigating vendor relationships in this environment, the lesson is uncomfortable: the AI companies they depend on are now players in a political contest they did not design and cannot control.

  • Accenture Will Track AI Logins for Promotions. The Risk Is Measuring Compliance, Not Competence.

    Accenture has begun tracking how often senior employees log into its AI tools — and will factor that usage into promotion decisions. An internal email, reported by the Financial Times, put it plainly: “Use of our key tools will be a visible input to talent discussions.” For a firm that sells AI transformation to the world’s largest organisations, the message to its own workforce is unmistakable: adopt or stall.

    The policy is not without logic. Accenture has invested heavily in AI readiness — 550,000 employees trained in generative AI, up from just 30 in 2022, backed by $1 billion in annual learning and development spending. But training people and getting them to change how they work are two very different problems. The promotion-linked tracking is an attempt to close that gap by force.

    The credibility problem

    This move arrives at a moment when Accenture’s external positioning makes internal adoption a matter of commercial credibility. As maddaisy.com reported last week, Accenture is one of four firms named in OpenAI’s Frontier Alliance — tasked with building the data architecture, cloud infrastructure, and systems integration work needed to deploy AI agents at enterprise scale. It is difficult to sell that capability convincingly if your own senior managers are not using the tools.

    The policy applies specifically to senior managers and associate directors, with leadership roles now requiring what Accenture calls “regular adoption” of AI. The firm is tracking weekly logins to its AI platforms for certain senior staff, though employees in 12 European countries and those on US federal contracts are excluded — a pragmatic nod to varying data protection regimes and security requirements.

    The resistance is the interesting part

    What makes this story more than a policy announcement is the reaction. Some senior employees have questioned the value of the tools outright, with one describing them as “broken slop generators.” Another told the Financial Times they would “quit immediately” if the tracking applied to them.

    That resistance is worth taking seriously, not dismissing. It maps directly onto a pattern maddaisy.com has been tracking. Research published earlier this month found that only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly” — a figure that drops to 12% among non-senior staff. When people do not understand why they are being asked to use a tool, mandating its use tends to produce compliance rather than competence.

    Harvard Business Review research, cited in that same analysis, identified three psychological needs that determine whether employees embrace or resist AI: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues). A policy that monitors logins and ties them to career progression addresses none of these. It measures activity. It says nothing about whether that activity is useful.

    Logins are not outcomes

    This is the core tension. Accenture’s leadership knows that senior adoption is a bottleneck — industry observers note that older managers are often “less comfortable with technology and more wedded to established working methods.” CEO Julie Sweet has framed AI adoption as existential, telling analysts that the company is “exiting employees” in areas where reskilling is not possible. The 11,000 layoffs announced in September reinforced the point.

    But tracking logins conflates presence with productivity. A senior manager who logs in weekly to check a dashboard is counted the same as one who has genuinely integrated AI into client delivery. The metric captures the floor, not the ceiling.

    This echoes a broader concern maddaisy.com has documented. UC Berkeley researchers found that employees using AI tools worked faster but not necessarily better — absorbing more tasks, blurring work-life boundaries, and entering a cycle of acceleration that resembled productivity but often was not. If Accenture’s policy drives more tool usage without more thoughtful tool usage, it risks producing exactly this outcome at scale.

    What this tells the rest of the industry

    Accenture is not alone in struggling with senior AI adoption. The challenge is structural across professional services. Deloitte’s 2026 CSO Survey found that while 95% of chief strategy officers expect AI to reshape their priorities, only 28% co-lead their organisation’s AI decisions. The people with the authority to mandate change are often the furthest from understanding it.

    Accenture’s approach is at least direct. Rather than hoping adoption trickles up from junior staff — who typically adopt new tools faster — it is applying pressure from the top. And the numbers suggest some urgency: with 750,000 employees and $70 billion in revenue, Accenture has grown enormously from its 275,000-person, $29 billion base in 2013. Maintaining that trajectory while its competitors embed AI into delivery models requires its own workforce to be fluent, not just trained.

    The risk, though, is that the policy optimises for the wrong signal. Organisations that have navigated AI adoption most effectively — and maddaisy.com has covered several — tend to share a common trait: they measure what AI enables people to do differently, not how often people open the application. Accenture’s policy would be considerably more compelling if it tracked client outcomes improved through AI-assisted work, or time freed for higher-value tasks, rather than weekly platform logins.

    The precedent matters more than the policy

    Whatever one makes of the specifics, Accenture has done something that most large organisations have avoided: it has made AI adoption an explicit, measurable condition of career advancement for senior leaders. That is a significant signal. It tells clients that Accenture is serious about practising what it sells. It tells employees that AI fluency is no longer optional at the leadership level.

    Whether it works depends on what happens next. If the tracking evolves toward measuring genuine integration — how AI changes the quality of work, not just the frequency of logins — Accenture could set a useful template for the industry. If it remains a blunt instrument that rewards compliance over competence, it will likely produce exactly the kind of performative adoption that gives AI transformation programmes a bad name.

    For consultants and enterprise leaders watching from outside, the lesson is practical: mandating AI adoption is easy; mandating it well is the hard part. The metric you choose to track will shape the behaviour you get.

  • America’s Fragmented State-by-State AI Regulation Is Creating a Compliance Labyrinth

    When maddaisy examined the shift from AI principles to penalties earlier this month, the United States’ growing patchwork of state-level AI laws featured as one of three converging forces. That brief mention understated the scale of what is now unfolding. With more than 1,000 AI-related bills introduced across all 50 states in 2025 alone, the US is rapidly building the most fragmented AI regulatory landscape of any major economy — with a federal government that wants to stop it but may lack the tools to do so.

    The federal-state standoff

    The tension is now explicit. In December 2025, the White House released a framework titled “Ensuring a National Policy Framework for Artificial Intelligence”, arguing that diverging state AI rules risk creating a harmful compliance patchwork that undermines American competitiveness. An accompanying fact sheet went further, suggesting the administration may consider litigation and funding mechanisms to counter state AI regimes it considers obstructive.

    This follows Executive Order 14179, issued in January 2025, which rescinded the Biden administration’s 2023 AI order and established a competitiveness-first federal posture. The July 2025 AI Action Plan outlined more than 90 measures to support AI infrastructure and innovation. The message from Washington is consistent: the federal government wants to set the rules, and it wants states to fall in line.

    The states, however, are not listening.

    Four approaches, one country

    What makes the US situation distinctive is not simply that states are regulating AI — it is that they are doing so in fundamentally different ways. Four models are emerging, each with different implications for organisations operating across state lines.

    Colorado: the comprehensive framework. Colorado’s AI Act (SB 24-205) remains the most ambitious state-level attempt at broad AI governance. It requires deployers of high-risk AI systems to conduct impact assessments, provide consumer transparency, and exercise reasonable care against algorithmic discrimination. Although enforcement was delayed to June 2026 through SB25B-004, the statutory framework is intact. Legal analysts note that the delay is strategic, not a retreat — Colorado is buying time to refine its approach while preserving regulatory leverage.

    California: regulation through existing powers. Rather than passing a single comprehensive AI act, California is weaving AI governance into its existing regulatory apparatus. The California Privacy Protection Agency finalised new regulations in September 2025 covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), with phased effective dates running through January 2027. The ADMT obligations are particularly significant: they convert what many organisations treat as aspirational governance practices — impact assessments, decision transparency, audit trails — into enforceable system design requirements.

    Washington: the multi-bill expansion. Washington has taken a fragmented-by-design approach, advancing separate bills for high-risk AI (HB 2157), transparency (HB 1168), training data regulation (HB 2503), and consumer protection in automated systems (SB 6284). The logic is pragmatic: AI risk does not fit into a single statutory box, so Washington is addressing it through multiple, targeted legislative vehicles. For compliance teams, this means tracking not one law but several, each with its own scope and requirements.

    Texas: government-first regulation. The Responsible AI Governance Act (HB 149), effective since January 2026, governs AI use within state government, prohibits certain high-risk applications, and establishes an advisory council. It is narrower than Colorado’s or California’s approach but signals that even politically conservative states see a role for AI regulation — just a different one.

    The preemption problem

    The White House clearly wants federal preemption — a single national framework that would override state laws. But there are reasons to doubt this will materialise quickly, if at all.

    First, there is no comprehensive federal AI legislation on the table. The administration’s tools are executive orders and agency guidance, which carry less legal weight than statute and are vulnerable to court challenges. Second, the pattern from privacy law is instructive: the US still lacks a federal privacy law, and state laws like the California Consumer Privacy Act have effectively set national standards by default. AI regulation appears to be following the same trajectory.

    Third, states have institutional momentum. According to the National Conference of State Legislatures, all 50 states, Washington DC, and US territories introduced AI legislation in 2025. That level of legislative activity does not simply stop because the federal government asks it to. As one legal analysis from The Beckage Firm observes, states are not ignoring the federal signal — they are interpreting it differently, rolling out requirements more slowly or in smaller pieces, but maintaining control rather than ceding it.

    What this means for organisations

    For any company deploying AI across multiple US states — which is to say, most enterprises of any scale — the practical implications are significant and immediate.

    Multi-jurisdictional compliance is now unavoidable. Organisations cannot build a single AI governance programme around one state’s requirements and assume it covers the rest. Colorado’s impact assessment obligations differ from California’s ADMT rules, which differ from New York City’s bias audit requirements for automated employment tools. Each demands different documentation, different processes, and in some cases, different technical controls.

    Effective dates are stacking up. Colorado’s enforcement begins June 2026. California’s ADMT obligations take full effect January 2027. New York City’s AEDT rules are already being actively enforced. Texas’s government-focused requirements are live now. Organisations treating these as isolated events rather than overlapping compliance waves risk being caught unprepared.

    The privacy playbook applies. Companies that built adaptable privacy programmes — mapping data flows, documenting processing purposes, conducting impact assessments — when GDPR and the CCPA emerged are better positioned than those that treated each regulation as a separate exercise. The same principle holds for AI governance. The organisations that will manage this landscape most effectively are those building flexible frameworks capable of mapping risk assessments, accountability structures, and documentation across multiple jurisdictions simultaneously.

    The consulting opportunity — and obligation

    For consultants and practitioners, the US regulatory fragmentation creates both demand and responsibility. Demand, because multi-state AI compliance is genuinely complex and most organisations lack in-house expertise to navigate it. Responsibility, because the temptation to oversimplify — to sell a single “AI compliance solution” that claims to cover all jurisdictions — is real and should be resisted. The details matter, and they vary state by state.

    As maddaisy has noted in covering shadow AI governance challenges and the agentic AI governance gap, the operational machinery for AI oversight is still immature in most enterprises. Adding a fragmented regulatory landscape on top of that immaturity does not just increase cost — it increases the likelihood that organisations will get something materially wrong.

    The US may eventually get a federal AI framework. But the states are not waiting, and neither should the organisations that operate in them. The compliance labyrinth is already being built, one statehouse at a time.

  • Agentic AI Drift: The Silent Production Risk No One Is Measuring

    When maddaisy examined the agentic AI governance gap last week, the focus was on a structural mismatch: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. That gap remains wide. But a more specific — and arguably more dangerous — operational risk is now coming into focus: agentic AI systems do not fail suddenly. They drift.

    A recent analysis published by CIO makes the case plainly. Unlike earlier generations of AI, which tend to produce identifiable errors — a wrong classification, a hallucinated fact — agentic systems degrade gradually. Their behaviour evolves incrementally as models are updated, prompts are refined, tools are added, and execution paths adapt to real-world conditions. For long stretches, everything appears fine. KPIs hold. No alarms fire. But underneath, the system’s risk posture has already shifted.

    The Problem with Demo-Driven Confidence

    Most organisations still evaluate agentic AI the way they evaluate any software feature: through demonstrations, curated test scenarios, and human judgment of output quality. In controlled settings, this looks adequate. Prompts are fresh, tools are stable, edge cases are avoided, and execution paths are short and predictable.

    Production is different. Prompts evolve. Dependencies fail intermittently. Execution depth varies. New behaviours emerge over time. Research from Stanford and Harvard has examined why many agentic systems perform convincingly in demonstrations but struggle under sustained real-world use — a gap that grows wider the longer a system runs.

    The result is a pattern that will be familiar to anyone who has managed complex software in production: a system passes all its review gates, earns early trust, and then becomes brittle or inconsistent months later, without any single change that clearly broke it. The difference with agentic AI is that the degradation is harder to detect, because the system’s outputs can still look reasonable even as the reasoning behind them has shifted.

    What Drift Actually Looks Like

    The CIO analysis includes a telling case study from a credit adjudication pilot. An agent designed to support high-risk lending decisions initially ran an income verification step consistently before producing recommendations. Over time, a series of small, individually reasonable changes — prompt adjustments for efficiency, a new tool for an edge case, a model upgrade, tweaked retry logic — caused the verification step to be skipped in 20 to 30 per cent of cases.

    No single run produced an obviously wrong result. Reviewers often agreed with the recommendations. But the way the agent arrived at those recommendations had fundamentally changed. In a credit context, that difference carries real financial and regulatory consequences.

    This is the nature of agentic drift: it is not a bug. It is the predictable outcome of complex, adaptive systems operating in changing environments. Two executions of the same agent with the same inputs can legitimately differ — that stochasticity is inherent to how modern agentic systems work. But it also means that point-in-time evaluation, one-off tests, and spot checks are structurally insufficient for production risk management.
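    The structural point is easier to see with a concrete measurement. Below is a minimal sketch, assuming each agent run leaves a trace listing the named steps it executed (the trace format, step names, and tolerance are hypothetical, not drawn from the CIO case study):

    ```python
    # Hypothetical traces: each run records the ordered names of the steps the agent executed.
    RECENT_RUNS = [
        ["fetch_application", "income_verification", "risk_scoring", "recommendation"],
        ["fetch_application", "risk_scoring", "recommendation"],          # verification skipped
        ["fetch_application", "income_verification", "risk_scoring", "recommendation"],
        ["fetch_application", "risk_scoring", "recommendation"],          # verification skipped
    ]

    def required_step_skip_rate(runs, step_name):
        """Fraction of recent runs in which a required step never executed."""
        if not runs:
            return 0.0
        return sum(1 for trace in runs if step_name not in trace) / len(runs)

    rate = required_step_skip_rate(RECENT_RUNS, "income_verification")
    if rate > 0.05:  # the tolerance is a policy decision, not a technical one
        print(f"ALERT: income_verification skipped in {rate:.0%} of recent runs")
    ```

    Any one of those runs would have looked plausible to a reviewer; only the aggregate across many runs reveals that the control has quietly eroded.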

    From Policy to Diagnostics

    When maddaisy covered the shadow AI governance challenge earlier this month, one theme was clear: governance frameworks are necessary but not sufficient. They define ownership, policies, escalation paths, and controls. What they often lack is an operational mechanism to answer a deceptively simple question: has the agent’s behaviour actually changed?

    Without that evidence, governance operates in the dark. Policy defines what should happen. Diagnostics establish what is actually happening. When measurement is absent, controls develop blind spots in precisely the live systems where agentic risk tends to accumulate.

    The Cloud Security Alliance has begun framing this as “cognitive degradation” — a systemic risk that emerges gradually rather than through sudden failure. Carnegie Mellon’s Software Engineering Institute has similarly emphasised the need for continuous testing and evaluation discipline in complex AI-enabled systems, drawing parallels to how other high-risk software domains manage operational risk.

    What Practitioners Should Watch For

    The emerging consensus points toward several operational principles for managing agentic drift:

    Behavioural baselines over output checks. No single execution is representative. What matters is how behaviour shows up across repeated runs under similar conditions. Organisations need to establish baselines — not for what an agent should do in the abstract, but for how it has actually behaved under known conditions — and then monitor for sustained deviations.

    Separate configuration changes from behavioural evidence. Prompt updates, tool additions, and model upgrades are important signals, but they are not evidence of drift on their own. What matters is persistence: transient deviations are often noise in stochastic systems, while sustained behavioural shifts across time and conditions are where risk begins to emerge. The sketch after these principles shows one way of making that distinction concrete.

    Treat agent behaviour as an operational signal. Internal audit teams are asking new questions about control and traceability. Regulators are paying closer attention to AI system behaviour. Platform teams are under growing pressure to demonstrate stability in live environments. “It looked fine in testing” is no longer a defensible operational posture, particularly in sectors — financial services, healthcare, compliance — where subtle behavioural changes carry real consequences.
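    Taken together, the first two principles suggest a simple mechanical pattern: record behaviour metrics under known conditions, then flag only deviations that persist across successive monitoring windows. The sketch below is illustrative; the metric names, threshold, and window structure are assumptions, not part of any cited framework.

    ```python
    import statistics

    def sustained_deviations(baseline, recent_windows, threshold=0.15):
        """Flag metrics that deviate from baseline in every recent window.

        baseline: dict of metric name -> value observed under known conditions
        recent_windows: list of dicts, one per monitoring window, oldest first
        Transient excursions (deviating in only some windows) are ignored as
        noise; persistent shifts are returned with their average deviation.
        """
        flags = {}
        for metric, base in baseline.items():
            deviations = [abs(window.get(metric, base) - base) / max(abs(base), 1e-9)
                          for window in recent_windows]
            if deviations and all(d > threshold for d in deviations):
                flags[metric] = statistics.mean(deviations)
        return flags

    # Hypothetical behaviour metrics for a credit-support agent.
    baseline = {"verification_rate": 0.98, "avg_tool_calls": 4.1, "escalation_rate": 0.07}
    recent = [
        {"verification_rate": 0.81, "avg_tool_calls": 4.3, "escalation_rate": 0.06},
        {"verification_rate": 0.78, "avg_tool_calls": 4.2, "escalation_rate": 0.09},
        {"verification_rate": 0.74, "avg_tool_calls": 4.4, "escalation_rate": 0.07},
    ]

    print(sustained_deviations(baseline, recent))
    # Only verification_rate is flagged: it has shifted persistently, not transiently.
    ```

    None of this replaces human judgment about whether a given shift matters; it simply ensures someone is asked the question before an incident does the asking.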

    The Observability Gap

    This is, ultimately, the next chapter in the governance story maddaisy has been tracking. The first chapter — covered in the enforcement era analysis — was about moving from principles to rules. The second, examined through Deloitte’s enterprise data, was the gap between strategic confidence and operational readiness. This third chapter is more specific and more technical: the gap between having governance frameworks and having the observability infrastructure to make them work.

    The goal is not to eliminate drift. Drift is inevitable in adaptive systems. The goal is to detect it early — while it is still measurable, explainable, and correctable — rather than discovering it through incidents, audits, or post-mortems. Organisations that build this capability will be better positioned to deploy agentic AI at scale with confidence. Those that do not will continue to be surprised by systems that appeared stable, until they were not.

    For consultants advising on enterprise AI deployments, the implication is practical: governance reviews that stop at policy documentation are incomplete. The question to ask is not just whether a client has an AI governance framework, but whether they can tell you how their agents are behaving today compared to three months ago. If the answer is silence, that is where the work begins.

  • Shadow AI and the Year-Two Governance Challenge

    In more than a third of organisations, employees have already used AI tools without leadership’s knowledge or permission. As enforcement deadlines tighten, the gap between what staff are doing and what leadership has sanctioned is becoming one of enterprise AI’s most urgent — and most overlooked — governance problems.

    The term “shadow AI” describes AI tools adopted by employees outside official channels — free-tier large language models used to draft client communications, image generators repurposed for internal presentations, coding assistants plugged into development environments without security review. It is the AI equivalent of shadow IT, but with a critical difference: the data exposure risks are significantly higher and the speed of adoption is faster than anything IT departments have previously managed.

    According to the State of Information Security Report 2025 from ISMS.online, 37% of organisations surveyed said employees had already used generative AI tools without organisational permission or guidance. A further 34% identified shadow AI as a top emerging threat for the next 12 months. These are not projections about some distant risk — they describe what is happening now, in organisations that thought they had AI under control.

    The year-two reckoning

    The pattern is familiar to anyone who lived through the early cloud adoption cycle, but compressed into a fraction of the time. Year one — roughly 2024 into early 2025 — was characterised by experimentation. Departments trialled AI tools, leaders encouraged innovation, and governance was deferred in favour of speed. The implicit message in many organisations was: try things, move fast, we will sort out the rules later.

    Year two is when “later” arrives. And the numbers suggest most organisations are not ready for it. The same ISMS.online report found that 54% of respondents admitted their business had adopted AI technology too quickly and was now facing challenges in scaling it back or implementing it more responsibly. That figure represents a majority of enterprises acknowledging, in effect, that they have accumulated governance debt they do not yet know how to service.

    This aligns with a pattern maddaisy has been tracking across several dimensions. Deloitte’s 2026 State of AI report revealed that organisations report growing strategic confidence in AI but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute responsibly. Shadow AI is the ground-level expression of that paradox: strategy says “adopt AI,” but operations never built the guardrails to manage what adoption actually looks like across thousands of employees making independent tool choices every day.

    Why traditional IT governance falls short

    The instinct in many organisations is to treat shadow AI like shadow IT — block unsanctioned tools, enforce approved vendor lists, and route everything through procurement. That approach, while understandable, misses what makes AI different.

    When an employee uses an unapproved project management tool, the risk is primarily operational: data silos, integration headaches, wasted licences. When an employee pastes client data, financial projections, or intellectual property into a free-tier language model, the risk is fundamentally different. That data may be used for model training, stored in jurisdictions with different privacy regimes, or exposed through security vulnerabilities the organisation has no visibility into.

    As CIO.com recently noted, boards are increasingly recognising that AI is not waiting for permission — it is already shaping decisions through vendor systems, employee-adopted tools, and embedded algorithms that grow more powerful without explicit organisational consent. The question for governance teams is not whether employees are using unsanctioned AI. It is how much sensitive data has already left the building.

    The regulatory dimension

    Shadow AI also creates a specific compliance exposure that many organisations have not fully mapped. As maddaisy examined last week, the EU AI Act reaches its most consequential enforcement milestone in August 2026, with requirements for high-risk AI systems carrying penalties of up to €35 million or 7% of global annual turnover. Colorado’s AI Act takes effect in June 2026 with its own set of requirements around algorithmic discrimination and impact assessments.

    The compliance challenge with shadow AI is that an organisation cannot demonstrate responsible use of systems it does not know exist. If an employee in a hiring function uses an unsanctioned AI tool to screen CVs, or a financial analyst uses a free language model to generate risk assessments, the organisation may be deploying high-risk AI — as defined by regulators — without any of the documentation, monitoring, or impact assessment that compliance requires.

    This is not a theoretical concern. It is a direct consequence of the gap between adoption speed and governance maturity that maddaisy has documented across the enterprise AI landscape.

    What a pragmatic response looks like

    The organisations handling shadow AI most effectively are not the ones deploying the heaviest restrictions. They are the ones that recognised early that prohibition does not work when the tools are free, browser-based, and genuinely useful.

    A pragmatic governance approach has several components. First, visibility: understanding what AI tools employees are actually using, through network monitoring, surveys, and — critically — creating an environment where people feel safe disclosing their usage rather than hiding it (a rough sketch of the monitoring side follows after these components). Second, clear usage policies that distinguish between acceptable and unacceptable use cases, specifying what data categories must never enter external AI systems regardless of the tool’s provenance.

    Third, an approved toolset that is genuinely competitive with the free alternatives. One of the most common drivers of shadow AI is that official enterprise AI tools are slower, more restricted, or simply worse than what employees can access on their own. If the sanctioned option requires a three-week procurement process while ChatGPT is a browser tab away, governance has already lost.

    Fourth, a recognised framework to anchor the effort. ISO 42001 provides a structured approach to establishing an AI management system, covering policy, roles, impact assessment, and continuous improvement through the familiar Plan-Do-Check-Act cycle. It is not a silver bullet, but it offers a starting point for organisations that currently have no systematic approach to AI governance beyond hoping the problem does not escalate.
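    On the visibility component, even crude telemetry beats none. Below is a rough sketch of scanning a web-proxy export for traffic to known consumer AI tools (the log format, column names, and domain list are all assumptions, and any real list would need continuous maintenance):

    ```python
    import csv

    # Hypothetical (and deliberately short) list of consumer generative-AI hostnames.
    AI_TOOL_DOMAINS = {
        "chatgpt.com", "chat.openai.com", "claude.ai",
        "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
    }

    def ai_tool_usage(proxy_log_path):
        """Tally bytes uploaded per user to known AI-tool domains from a proxy CSV export."""
        usage = {}
        with open(proxy_log_path, newline="") as f:
            for row in csv.DictReader(f):   # assumed columns: user, host, bytes_out
                host = row["host"].lower()
                if any(host == d or host.endswith("." + d) for d in AI_TOOL_DOMAINS):
                    usage[row["user"]] = usage.get(row["user"], 0) + int(row.get("bytes_out") or 0)
        return usage

    for user, sent in sorted(ai_tool_usage("proxy_export.csv").items(), key=lambda kv: -kv[1]):
        print(f"{user}: {sent} bytes uploaded to AI-tool domains")
    ```

    The output is a conversation starter, not a disciplinary list: the goal is to understand how much sensitive data may already be leaving the building and which teams need sanctioned alternatives first.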

    The window is narrowing

    The uncomfortable reality for many enterprises is that shadow AI has already created facts on the ground. Data has been shared with external models. Workflows have been built around unsanctioned tools. Employees have integrated AI into their daily routines in ways that would be disruptive to simply switch off.

    The year-two governance challenge is not about preventing AI adoption — that ship has sailed. It is about catching up with what has already happened, building the visibility and policy infrastructure to manage it going forward, and doing so before enforcement deadlines turn a governance gap into a compliance crisis.

    For consultants and practitioners advising organisations through this transition, the message is straightforward: audit first, policy second, technology third. The organisations that will navigate this best are not the ones with the most sophisticated AI strategies. They are the ones that know, concretely and completely, what AI is actually being used within their walls — and by whom.