Author: Connie Botelli

  • Brussels Blinked: The EU AI Act’s High-Risk Deadline Just Moved, but the Compliance Clock Has Not Stopped

    When maddaisy examined the shift from AI principles to penalties in February, the EU AI Act’s August 2026 deadline for high-risk AI systems sat at the centre of the analysis. That date — 2 August 2026 — was the moment when compliance stopped being theoretical and started carrying fines of up to seven per cent of global turnover.

    Four weeks later, Brussels blinked.

    On 13 March, the EU Council agreed its position on the Digital Omnibus package, pushing back the application of high-risk AI rules to December 2027 for standalone systems and August 2028 for those embedded in products. The proposal still requires negotiation with the European Parliament, but the direction is clear: the EU’s own regulatory infrastructure was not ready for its own deadline.

    What Actually Changed

    The delay is narrower than the headlines suggest. The EU AI Act’s prohibited practices — social scoring, manipulative AI targeting vulnerable groups, unauthorised real-time biometric surveillance — have been in force since February 2025 and remain untouched. Obligations for general-purpose AI model providers, including transparency and copyright requirements, still apply from August 2025. The Code of Practice requiring machine-readable detection techniques for AI-generated content is already published.

    What shifted is specifically the high-risk classification regime: the rules governing AI systems used in employment decisions, credit scoring, healthcare, education, law enforcement, and critical infrastructure. These are the provisions that demand conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. They are also the provisions that most enterprises have been scrambling to prepare for.

    The Council’s rationale is pragmatic rather than political. The European Commission missed its own February 2026 deadline for publishing the guidance and harmonised standards that enterprises need to demonstrate compliance. Without those standards, companies were being asked to hit a target that the regulator had not yet fully defined. As the Cypriot presidency put it, the goal is “greater legal certainty” and “more proportionate” implementation — diplomatic language for acknowledging that the implementation machinery was not keeping pace with the legislative ambition.

    The Compliance Paradox

    For enterprises that have spent the past 18 months building AI governance programmes, risk inventories, and compliance frameworks, the delay creates an awkward question: should they slow down?

    The short answer is no — and the reasoning matters more than the conclusion.

    First, the delay is conditional. The Council’s position sets fixed dates — December 2027 and August 2028 — but the Commission retains the ability to confirm earlier application if standards become available sooner. Organisations that pause their compliance programmes risk finding themselves back under pressure with less runway than they had before.

    Second, the regulatory landscape extends well beyond Brussels. As maddaisy has previously examined, the United States is building its own patchwork of state-level AI laws. Colorado’s AI Act takes effect in June 2026. California’s transparency requirements are already live. The EU delay does not change these timelines. An enterprise operating across both markets still faces near-term obligations.

    Third, and perhaps most importantly, the governance work itself has value beyond regulatory compliance. Organisations that have inventoried their AI systems, established accountability structures, and implemented monitoring processes are better positioned to manage operational risk, regardless of when a specific regulation takes effect. As the Ethyca governance framework notes, the shift from policy documentation to continuous operational evidence is happening independently of any single regulatory deadline.

    What the Delay Reveals

    The Digital Omnibus is not just a timeline adjustment. It is a signal about the structural challenges of regulating AI at the pace the technology is evolving.

    The EU built the world’s most comprehensive AI regulation. It classified systems by risk tier, defined obligations for providers and deployers, established penalties that exceed GDPR maximums, and applied the rules extraterritorially. What it did not build quickly enough was the operational layer: the harmonised standards, the conformity assessment procedures, the guidance documents that translate legal text into practical compliance steps.

    This mirrors a pattern maddaisy has observed across multiple regulatory domains. Europe’s cloud sovereignty push encountered similar friction — ambitious policy goals meeting incomplete implementation frameworks. The gap between legislative intent and operational readiness is becoming a recurring theme in European technology regulation.

    The Council’s position does include substantive additions alongside the delay. A new prohibition on AI-generated non-consensual intimate content and child sexual abuse material was introduced. Regulatory exemptions previously limited to SMEs were extended to small mid-cap companies. The AI Office’s enforcement powers were reinforced. These are not trivial changes — they show that the regulation is still being actively shaped even as its core provisions await full application.

    The ISO 42001 Factor

    One development running parallel to the regulatory delay is the accelerating adoption of ISO/IEC 42001, the international standard for AI management systems. Enterprise buyers are increasingly adding it to vendor procurement requirements, and AI liability insurers are beginning to factor governance certifications into risk assessments.

    For organisations uncertain about how to structure their compliance programmes during the delay, ISO 42001 offers a practical framework. It maps to the EU AI Act’s requirements without being dependent on them, meaning that compliance work done under the standard retains its value regardless of how regulatory timelines shift. Pega’s recent certification is one example of vendors using the standard to demonstrate governance readiness to enterprise clients.

    What Practitioners Should Do Now

    The EU AI Act delay changes timelines, not trajectories. The practical recommendations remain consistent with what maddaisy outlined in February, with one important addition:

    • Continue AI system inventories. Understanding what AI is deployed, where, and at what risk level is foundational work that no regulatory timeline change invalidates.
    • Monitor the Parliament negotiations. The Council position must be reconciled with the European Parliament before becoming final. The dates could shift again — in either direction.
    • Use the extra time for standards alignment. With harmonised standards still being developed, organisations now have an opportunity to align with ISO 42001 or the NIST AI Risk Management Framework before mandatory compliance begins.
    • Do not treat the delay as permission to deprioritise. Colorado, California, and other US state deadlines remain unchanged. Enterprise clients and procurement teams are not waiting for regulators — they are setting their own governance expectations now.

    The EU built the most ambitious AI regulation in the world, then discovered that ambition requires infrastructure. The delay is a concession to reality, not a retreat from intent. For enterprises, the message is straightforward: the destination has not changed, only the speed limit on the road to it.

  • CTOs and CHROs Are Climbing the Pay Table. That Tells You Where Boards Think Risk Lives Now.

    For decades, the path to being among a public company’s five highest-paid executives ran through operations, sales, or running a business unit. Technology leaders and HR chiefs were important, certainly — but they were support functions, not the positions that commanded top-tier compensation.

    That hierarchy is shifting. New research from The Conference Board, published this week, tracks the named executive officers (NEOs) — the five highest-paid executives — across the Russell 3000 from 2021 to 2025. The findings are striking: Chief Technology Officers appearing as NEOs increased by 61%, while Chief Human Resources Officers rose by 55%. In the same period, business unit leaders — historically the dominant non-CEO, non-CFO category — declined by 15%.

    The numbers are not subtle. They represent a structural revaluation of which functions boards consider most critical to company performance and risk.

    From support functions to enterprise risk owners

    The explanation is not difficult to locate. Two forces are converging: the AI boom is making technology capability an existential strategic question, and a persistent talent war is making the ability to attract, retain, and reskill workers a board-level concern rather than an HR department problem.

    “Growth in CHRO and CTO roles signals that talent, culture, and digital capability are now viewed as enterprise risks, not support functions,” said Andrew Jones, Principal Researcher at The Conference Board. “Boards are prioritising leaders who shape resilience and transformation across the organisation.”

    This tracks with what maddaisy has been reporting for months. The consulting pyramid piece in early March documented how firms are reshuffling roles rather than eliminating them — cutting some positions while creating others in AI engineering and data science. Accenture’s decision to track AI tool usage for promotions was another signal: when a firm ties career progression to technology adoption, the executive overseeing that technology becomes strategically indispensable.

    The specific numbers tell the story

    CTO NEO disclosures rose from 155 to 249 in the Russell 3000 between 2021 and 2025. CHRO disclosures went from 148 to 230. These are not marginal shifts — they represent boards deciding, through the bluntest mechanism available (compensation), that these roles belong at the top table.

    Meanwhile, business unit leaders fell from 1,734 to 1,475 disclosures. The decline suggests that boards are placing less emphasis on divisional performance and more on enterprise-wide capabilities: technology infrastructure, talent strategy, and the legal and regulatory architecture that governs both.

    That last point matters. Legal roles — including chief legal officers, corporate secretaries, and general counsels — also gained ground, rising 21% over the same period. As AI governance, data regulation, and compliance pressures mount, the lawyers are moving closer to the centre of power too.

    What this means for the talent market

    The compensation data carries implications well beyond boardroom politics. When CTOs and CHROs move into the highest-paid tier, it reshapes the talent pipeline for those roles. More ambitious executives will target those paths. Boards will demand different skill sets — not just technical competence for CTOs, but strategic vision for how AI and digital infrastructure create competitive advantage. Not just process management for CHROs, but the ability to navigate workforce transformation at scale.

    The Conference Board’s data also reveals an interesting gender dimension. Among S&P 500 NEOs, women CEOs earned 11% more than men in 2025. But male NEOs overall still earned 8% more in the S&P 500 and 12% more in the Russell 3000. The gap, as researcher Paul Hodgson noted, “largely reflects who holds which roles” — men remain more prevalent in higher-paid operational and commercial positions, with longer average tenure and concentration at larger firms.

    The COO plateau and what it signals

    One quieter finding deserves attention: COO representation rose just 6% over the period and has actually declined from a 2023 peak. The chief operating officer — once the natural second-in-command — appears to be losing ground to more specialised enterprise-wide roles. In an era where the critical operational questions are “how do we deploy AI safely” and “how do we retain the people who know how to do it,” a generalist operations mandate may no longer be enough to justify top-tier compensation.

    The board’s revealed preferences

    Compensation data has always been the most reliable indicator of what organisations actually value, as opposed to what they claim to value. Press releases can announce “people-first cultures” and “digital-first strategies” without consequence. Paying the CHRO and CTO as much as the head of your largest business unit is a commitment that shows up in proxy statements.

    The Conference Board’s findings confirm a pattern that has been building for several years: boards are redefining which risks are existential and which executives own them. The AI boom has made technology leadership a strategic imperative. The talent war — intensified by the very AI transformation companies are pursuing — has elevated workforce strategy from an administrative function to an enterprise risk.

    For consultants and practitioners, the practical implication is clear. The organisations they advise are restructuring their leadership hierarchies around technology and talent. Advisory work that once centred on operational efficiency and market strategy increasingly requires fluency in AI deployment, workforce transformation, and the regulatory landscape that governs both. The C-suite is telling you where the priorities are. The pay data just makes it impossible to ignore.

  • The Three-Tool Threshold: BCG Research Reveals Where AI Productivity Gains Turn Into Cognitive Overload

    For months, the evidence that AI tools are intensifying work rather than simplifying it has been accumulating. maddaisy has tracked this story from the UC Berkeley research showing employees absorbing more tasks under AI, through the organisational failures that leave workers unsupported, to the implementation problems that bake burnout into the system from day one. What was missing was a specific threshold — a number that tells enterprises where the gains end and the damage begins.

    Boston Consulting Group has now supplied one. In a study published in Harvard Business Review this month, researchers surveyed 1,488 full-time US workers and found a clean break point: employees using three or fewer AI tools reported genuine productivity gains. Those using four or more reported the opposite — declining productivity, increased mental fatigue, and higher error rates. BCG calls the phenomenon “AI brain fry.”

    The finding is not just academic. Among workers reporting brain fry, 34% expressed active intention to leave their employer, compared with 25% of those who did not. For a workforce already under pressure from rapid technology deployment, that nine-percentage-point gap represents a tangible retention risk.

    The cognitive cost no one budgeted for

    The BCG research puts numbers to something the UC Berkeley study identified in qualitative terms earlier this year. When AI tools require high levels of oversight — reading, interpreting, and verifying LLM-generated content rather than simply delegating administrative tasks — workers expend 14% more mental effort. They experience 12% greater mental fatigue and 19% more information overload.

    Many respondents described a “fog” or “buzzing” sensation that forced them to step away from their screens. Others reported an increase in small mistakes — exactly the kind of errors that compound in professional services, financial analysis, and other high-stakes environments.

    “People were using the tool and getting a lot more done, but also feeling like they were reaching the limits of their brain power,” Julie Bedard, the study’s lead author and a managing director at BCG, told Fortune. “Things were moving too fast, and they didn’t have the cognitive ability to process all the information and make all the decisions.”

    This aligns with what maddaisy has previously described as the task expansion pattern: when AI makes certain tasks faster, employees do not use the freed-up time for strategic thinking. They absorb more work. The BCG data now suggests the breaking point arrives sooner than most organisations assume — at the fourth tool, not the tenth.

    The macro picture is equally sobering

    The three-tool threshold sits against a broader backdrop of underwhelming AI productivity data at scale. A Goldman Sachs analysis published this month found “no meaningful relationship between productivity and AI adoption at the economy-wide level,” with measurable gains confined to just two domains: customer service and software development.

    Separately, a survey of 6,000 C-suite executives found that 90% saw no evidence of AI impacting productivity or employment in their workplaces over the past three years. Their median forecast: a 1.4% productivity increase over the next three years. That is hardly the transformation narrative that justified billions in enterprise AI spending.

    These findings do not mean AI is useless. The Federal Reserve Bank of St. Louis estimated a 33% hourly productivity boost for workers during the specific hours they use generative AI. The problem is that this micro-level gain does not scale linearly. Adding more tools, more prompts, and more AI-generated outputs does not multiply the benefit — it multiplies the cognitive overhead.

    What the threshold means for enterprises

    The practical implications are straightforward, even if they run against the instincts of most technology procurement processes.

    First, fewer tools, better deployed. The BCG data suggests that organisations would get better results from consolidating around two or three well-integrated AI tools than from giving every team access to every available platform. This runs counter to the current market dynamic, where vendors push specialised AI tools for every function — writing, coding, data analysis, scheduling, customer interaction — and enterprises buy them all to avoid falling behind.

    Second, oversight design matters as much as tool selection. The highest cognitive costs were associated with tasks requiring workers to interpret and verify AI output, not with AI performing autonomous background work. Enterprises that can shift more AI usage toward the latter — automated workflows, pre-verified data processing, agent-completed administrative tasks — will impose less cognitive strain on their people.

    Third, training needs to include when not to use AI. As maddaisy has previously noted, most organisations treat AI capability-building as a deployment event rather than a sustained practice. The BCG researchers found that when managers provided ongoing training and support, brain-fry symptoms decreased. The Berkeley team suggested batching AI-intensive work into specific time blocks rather than leaving it on all day — a scheduling discipline that few organisations currently enforce.

    The next chapter in a familiar story

    The AI-productivity narrative is following a pattern that technology historians will recognise. Early adopters see real gains. Organisations rush to scale. The gains plateau or reverse as implementation complexity outpaces human capacity to manage it. Eventually, a more measured approach emerges — not abandoning the technology, but deploying it with greater discipline.

    The BCG three-tool threshold may turn out to be an early data point rather than a universal law. But it offers something that has been missing from the AI-adoption conversation: a concrete starting point for right-sizing the technology stack to what human cognition can actually sustain.

    For consultants advising on AI transformation, that is a message worth delivering — even when it runs counter to the vendor pitch deck.

  • CX Metrics Are Broken and Enterprises Know It. The Question Is What Comes Next.

    One in three customer experience professionals cannot connect their organisation’s Net Promoter Score to financial performance. That is not the conclusion of a contrarian think piece – it is a finding from Medallia’s 2026 State of Customer Experience Report, which surveyed more than 1,500 consumers and 550 CX practitioners globally. The metric that has anchored boardroom CX conversations for the better part of two decades is, for a significant share of the profession, impossible to tie in any measurable way to the outcomes it is supposed to predict.

    And the profession knows it. According to the same report, 78 per cent of CX professionals plan to adopt at least one new metric in 2026.

    The perception gap that should worry everyone

    The most striking number in the Medallia research is not about NPS at all. It is the gap between how enterprises think they are performing and how their customers actually feel. Two-thirds of CX professionals believe their brand’s experience has improved over the past year. Only 16 per cent of consumers agree.

    That is not a minor calibration error. It is a 50-percentage-point disconnect between corporate self-assessment and market reality. And it has consequences: 40 per cent of consumers in the study reported switching brands within three months, often without the kind of dramatic service failure that would register on traditional satisfaction surveys. They simply drifted, quietly, toward something better – or at least less friction-filled.

    The implication is uncomfortable. Many enterprises are not just using the wrong metrics. They are using metrics that actively mislead them into believing things are going well when customers are already leaving.

    Too many metrics, too little insight

    The problem is not a shortage of measurement. If anything, it is the opposite. Research published in MIT Sloan Management Review found that large organisations routinely track between 50 and 200 CX metrics across multiple channels and touchpoints. Some track even more. The result is not better decision-making but what Usan’s 2026 CX research calls “data paralysis” – mountains of signals with no clear path from insight to action.

    MIT Sloan’s researchers, working with 14 subscription-services companies, found that most organisations collect metrics because they are industry-standard, not because they have been validated as useful for their specific customer journey. The recommendation is counterintuitive for an era obsessed with data: collect fewer metrics, but align the ones you keep to specific stages of the customer lifecycle where they can actually inform decisions.

    Medallia’s own data supports this. Among CX professionals using 10 or more data sources, 92 per cent said they could demonstrate return on investment. Among those using five or fewer, the figure dropped to 73 per cent. More data sources help – but only when they are deliberately chosen and connected, not accumulated by default.

    The structural problem underneath

    Even when CX teams produce meaningful insights, the organisational infrastructure often fails to act on them. Medallia’s survey found that 41 per cent of CX professionals worry their teams are too siloed from the rest of the business. One-third said entire departments take no action on CX findings. Half said their organisation does not respond often enough or with enough impact.

    Budget control compounds the issue. Fifty-eight per cent of CX practitioners said their initiatives are funded from budgets they do not directly oversee – a structural dependency that limits their ability to move quickly or invest in new approaches.

    This echoes a pattern maddaisy.com has tracked across several domains. When this site examined how enterprises are designing digital experiences for both humans and AI agents, the core finding was similar: organisations recognise that the landscape has shifted, but their internal structures have not caught up. The CX measurement problem is another instance of the same gap – not a lack of awareness, but a failure of organisational plumbing.

    Where the replacement metrics are coming from

    The 78 per cent of CX professionals looking for new metrics are not starting from scratch. A CX Network analysis of customer listening trends suggests three directions gaining traction.

    First, Customer Effort Score – a measure of how easy or difficult a specific interaction was – is increasingly favoured over NPS for transactional touchpoints, because it maps directly to a friction point rather than measuring a generalised sentiment. Second, frontline employee input is being formalised as a data source, recognising that the people handling customer interactions daily often have a more accurate picture of experience quality than any survey. Third, AI-powered analysis of unstructured data – support transcripts, social mentions, behavioural signals – is enabling what practitioners call “continuous listening”, replacing periodic surveys with ongoing, passive measurement.

    None of these are new ideas. What is new is the urgency. Transcom’s 2026 CX paradoxes report makes a pointed observation: many organisations are layering AI-driven CX tools on top of broken measurement foundations. Automating a process that was never validated does not fix it. It scales the dysfunction.

    What practitioners should be watching

    The shift away from legacy CX metrics is not a technology problem waiting for a technology solution. It is a governance problem. Organisations that want their CX measurement to mean something will need to do three things that are simple to describe and difficult to execute.

    They will need to audit their existing metrics ruthlessly – not asking “is this metric popular?” but “does this metric predict a financial outcome we care about?” They will need to connect whatever survives that audit to specific journey stages, so that insights map to decisions rather than floating in quarterly reports. And they will need to give CX teams the budget authority and cross-functional access to act on what they find, rather than producing dashboards that nobody is structurally empowered to respond to.
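
    For the first of those steps, the question “does this metric predict a financial outcome we care about?” is at least partly empirical. A minimal sketch of what that check could look like is below, assuming an account-level export with a CX metric and a later financial outcome; the file name and column names are hypothetical, not a reference to any Medallia or MIT Sloan dataset.

    ```python
    import pandas as pd

    # Hypothetical account-level export: one row per account, with a CX metric captured
    # in one period and a financial outcome observed in the following period.
    df = pd.read_csv("cx_accounts.csv")  # assumed columns: account_id, nps_score, net_revenue_retention

    # First pass: does the metric move with the outcome at all?
    corr = df["nps_score"].corr(df["net_revenue_retention"], method="spearman")
    print(f"Spearman correlation, NPS vs net revenue retention: {corr:.2f}")

    # A blunter check: do promoters actually retain more revenue than detractors?
    promoters = df.loc[df["nps_score"] >= 9, "net_revenue_retention"].mean()
    detractors = df.loc[df["nps_score"] <= 6, "net_revenue_retention"].mean()
    print(f"Mean retention - promoters: {promoters:.1%}, detractors: {detractors:.1%}")
    ```

    If the correlation is weak and the promoter/detractor gap is negligible, the metric may be popular without being predictive – which is exactly the distinction the audit is meant to surface.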

    The 50-point perception gap in Medallia’s data is not a measurement artefact. It is a mirror. Enterprises that cannot close it will keep telling themselves that customer experience is improving, right up until the churn numbers say otherwise.

  • The Venture Subsidy Era for AI Is Ending. Enterprise Budgets Are Not Ready.

    For the past three years, enterprises have been building their AI strategies on pricing that does not reflect reality. The era of venture-subsidised AI — where a ChatGPT query costs pennies despite burning roughly ten times the energy of a Google search — is approaching its expiry date. The question is not whether prices will rise. It is whether organisations have budgeted for what comes next.

    The subsidy model, laid bare

    The numbers tell the story clearly enough. OpenAI’s own internal projections show $14 billion in losses for 2026, against roughly $13 billion in revenue. Total spending is expected to reach approximately $22 billion this year. Across the 2023–2028 period, the company expects to lose $44 billion before turning cash-flow positive sometime around 2029 or 2030.

    Anthropic’s trajectory looks different but carries the same structural tension. The company hit $19 billion in annualised revenue by March 2026, growing more than tenfold annually. But its gross margins sit at around 40% — a long way from the 77% it needs to justify its $380 billion valuation. That gap has to close, and it will not close through efficiency gains alone.

    Both companies have raised staggering sums to sustain the current pricing. OpenAI’s $110 billion round in February valued it at $730 billion. Anthropic’s $30 billion Series G came from a coalition including GIC, Microsoft, and Nvidia. This is venture capital on a scale that makes the ride-hailing subsidy wars look modest — and, like those wars, it is designed to capture market share before the real pricing arrives.

    The millennial lifestyle subsidy, enterprise edition

    The pattern is familiar. Uber and DoorDash used investor capital to underwrite artificially cheap services, building habits and dependencies before gradually raising prices toward sustainable levels. AI providers are running the same playbook, but the stakes are larger. When Uber raised fares, consumers grumbled and occasionally took the bus. When AI API costs increase threefold — which industry analysts suggest may be the minimum adjustment needed for sustainable economics — enterprises will face a different kind of reckoning.

    The reckoning is already starting at the platform level. Microsoft will raise commercial pricing across its entire 365 suite from July 2026, with increases of 8–17% depending on the tier. The company attributed the rises to AI capabilities such as Copilot Chat being embedded into standard subscriptions. Its new $99-per-user E7 tier bundles Copilot, identity management, and agent orchestration tools — positioning AI not as an optional add-on but as a cost baked into the platform itself.

    The broader enterprise software market is following the same trajectory. Gartner forecasts enterprise software spend rising at least 40% by 2027, with generative AI as the primary accelerant. Average annual SaaS price increases now range from 8–12%, with aggressive movers implementing hikes of 15–25% at renewal.

    The budget gap nobody is discussing

    The disconnect between AI ambition and AI economics is widening. Organisations now spend an average of $7,900 per employee annually on SaaS tools — a 27% increase over two years. AI-native application spend has surged 108% year-on-year, reaching an average of $1.2 million per organisation. And these figures reflect the subsidised era.

    As Axios reported this week, the unusually low cost of many AI services will not survive the transition from venture-funded growth to public-market accountability. As OpenAI and Anthropic pursue potential IPOs, investors will demand the margins that current pricing cannot deliver. Subscription prices and usage-based costs are expected to rise across the industry.

    For enterprises that have been scaling AI adoption on the assumption that current costs are permanent, this represents a planning failure in the making. A consumer application with $2 in AI costs per user per month looks viable. The same application at $10 per user does not. High-volume automation workflows — precisely the use cases enterprises are most excited about — are the most vulnerable to cost increases.

    The pacing argument gains new weight

    This pricing trajectory adds a new dimension to arguments maddaisy has previously explored around pacing AI investment. When Capgemini’s CEO Aiman Ezzat cautioned against getting “too ahead of the learning curve,” his concern was primarily about deploying capabilities ahead of organisational readiness. The pricing question strengthens that case. Organisations that rush to embed AI across every workflow at subsidised rates may find themselves locked into architectures whose economics no longer work when the real costs arrive.

    Similarly, the enterprise scaling gap reported last week — where two-thirds of organisations cannot move AI past pilot stage — takes on a different character when viewed through an economic lens. The skills shortage and governance deficits that constrain scaling today may prove less urgent than the budget constraints that arrive tomorrow. Organisations struggling to scale AI at subsidised prices will find it considerably harder at market rates.

    What prudent organisations should do now

    The adjustment does not need to be dramatic, but it does need to start. Three measures stand out.

    First, stress-test AI budgets against realistic pricing. If API costs tripled tomorrow, which workflows would still deliver positive returns? The answer reveals which AI investments are genuinely valuable and which are artefacts of artificially cheap compute. A rough sketch of this exercise appears after the third measure below.

    Second, build multi-provider flexibility into the architecture. Vendor lock-in has always been a risk in enterprise technology. In AI, where pricing models are still evolving and open-source alternatives like Llama and Mistral are improving rapidly, flexibility is not just prudent — it is a hedge against the cost increases that are coming.

    Third, watch the open-source floor. Capable open models set a baseline of freely available capability, which in turn caps how aggressively commercial providers can raise rates. Organisations that invest in the capability to run open models on their own infrastructure — or through commodity inference services — will have negotiating leverage that others will not.
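
    Returning to the first measure: a minimal, illustrative sketch of the stress test might look like the following. The workflow names, value estimates, and API costs are placeholders rather than real benchmarks, and the threefold multiplier simply echoes the analyst figure cited earlier.

    ```python
    # Hypothetical stress test: which AI workflows survive a threefold API price increase?
    # All figures below are illustrative placeholders, not vendor pricing or real budgets.
    workflows = [
        # (name, monthly value delivered in USD, current monthly API cost in USD)
        ("support ticket triage", 40_000, 9_000),
        ("contract summarisation", 15_000, 2_500),
        ("marketing copy drafts", 6_000, 4_000),
    ]

    PRICE_MULTIPLIER = 3  # the "minimum adjustment" scenario referenced above

    for name, value, cost in workflows:
        stressed_cost = cost * PRICE_MULTIPLIER
        margin = value - stressed_cost
        verdict = "still viable" if margin > 0 else "under water"
        print(f"{name}: value ${value:,}, stressed cost ${stressed_cost:,} -> {verdict}")
    ```

    The point is not the arithmetic, which is trivial, but the discipline of running it per workflow before the pricing changes arrive rather than after.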

    The correction, not the crisis

    None of this means AI is overvalued or that enterprise adoption will stall. The technology works. The productivity gains are real. But the current pricing does not reflect the true cost of delivering those gains, and the correction will arrive gradually over the next two to four years as the industry’s largest players transition from growth-at-all-costs to sustainable economics.

    The organisations best positioned for that transition will be those that treated the subsidy era as a window for experimentation — learning which AI applications genuinely transform their operations — rather than a permanent baseline for their technology budgets. The window is closing. The question is whether the planning has already begun.

  • The Rules for Public Data Are Quietly Reshaping Who Gets to Build AI

    The question of who gets to train AI on publicly available data is quietly becoming one of the most consequential regulatory battles in technology. A new report from the Information Technology and Innovation Foundation (ITIF), published this week, lays out the stakes clearly: the jurisdictions that permit responsible access to public web data will lead in AI development, while those that restrict it risk falling permanently behind.

    This is not abstract. In 2025, US-based organisations produced 40 notable foundation models. China produced 15. The European Union managed three. The gap is driven by many factors — investment, talent, compute infrastructure — but the rules governing access to training data are increasingly significant among them.

    The Transatlantic Divide

    The US and EU have taken fundamentally different approaches to how publicly available data can be used for AI training.

    The US operates what the ITIF describes as a “gates up” framework. Publicly accessible web data is generally available for automated collection unless a site owner implements technical barriers — robots.txt files, authentication walls, or rate-limiting mechanisms. This permissive posture has given American AI labs broad access to the digital commons as training material.

    The EU, by contrast, applies GDPR protections to personal data regardless of whether it appears on a public website. Even a name and job title scraped from a company’s “About Us” page may require a lawful processing basis under European law. The EU AI Act adds a further layer: Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, and rights holders can opt out of their content being used. The European Commission’s November 2025 Digital Omnibus proposal aims to simplify some of this regulatory burden, but the fundamental constraints on data use remain.

    The result is that AI development gravitates toward more permissive jurisdictions. This is not a theoretical concern — it is visible in where companies locate their model training infrastructure and where they hire.

    The US Is Not Unified Either

    As maddaisy examined in February, the United States has its own regulatory fragmentation problem. California’s AB 2013, which took effect on 1 January 2026, requires developers of publicly available generative AI systems to disclose detailed information about their training data — including the sources, whether the data contains copyrighted material, whether it includes personal information, and when it was collected. That transparency obligation applies retrospectively, meaning developers must document historical training practices.

    Colorado’s AI Act addresses the deployment side, with impact assessments and discrimination safeguards for high-risk systems due to take effect in June 2026. Illinois, New York City, and Texas each have their own targeted requirements.

    The federal government wants to consolidate this into a single framework, but as maddaisy noted when AI governance entered its enforcement era, the White House’s December 2025 executive order is a statement of intent, not a statute. State laws remain in force, and the compliance burden is cumulative.

    Technical Governance Is Filling the Gap

    Where regulation is fragmented or slow, technical standards are emerging to manage access to public data for AI training. The ITIF report identifies several mechanisms that are gaining traction:

    Machine-readable opt-out signals extend beyond the familiar robots.txt protocol. New standards like LLMs.txt allow website operators to provide curated, machine-readable summaries of their content specifically for AI systems — a more nuanced approach than a binary allow/block decision.

    Cryptographic bot authentication using HTTP message signatures allows site operators to verify the identity of AI crawlers and grant or restrict access based on who is asking, not just what they are requesting.

    Automated licensing frameworks are experimenting with HTTP 402 (“Payment Required”) signals, creating the technical infrastructure for content owners to set terms for AI training use — including compensation.

    PII filtering tools such as Microsoft’s open-source Presidio project allow developers to detect and remove sensitive personal information during data preparation, addressing privacy concerns at the technical rather than legal level.
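
    As a concrete illustration of that last mechanism, a minimal Presidio pass over a scraped text snippet might look like the sketch below. It assumes the presidio-analyzer and presidio-anonymizer packages (plus a spaCy English model) are installed, and the entities it catches depend entirely on how the analyzer is configured.

    ```python
    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine

    # Sample text of the kind that might be scraped from a public "About Us" page.
    text = "Contact Jane Doe, Head of Research, at jane.doe@example.com or +1 202 555 0188."

    # Detect PII entities (names, email addresses, phone numbers, and so on).
    analyzer = AnalyzerEngine()
    findings = analyzer.analyze(text=text, language="en")

    # Replace each detected entity with a placeholder before the text enters a training corpus.
    anonymizer = AnonymizerEngine()
    result = anonymizer.anonymize(text=text, analyzer_results=findings)

    print(result.text)  # e.g. "Contact <PERSON>, Head of Research, at <EMAIL_ADDRESS> or <PHONE_NUMBER>."
    ```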

    These mechanisms are not yet standardised or universally adopted. But they point toward a model where access to public data is governed by a combination of technical protocols and market-based agreements, rather than solely by regulation.

    The Agentic Wrinkle

    The data access question becomes more complex as AI systems shift from static model training to live, agentic operations. When maddaisy examined the governance challenges for AI agents earlier this week, the focus was on operational controls — monitoring, auditing, and accountability chains. The ITIF report adds a further dimension: data that is technically accessible (visible through a browser or available via API) is not necessarily intended for AI consumption.

    Consider an AI agent authorised to access a company’s customer relationship management system. The data it encounters is not public, but it is available to the agent through delegated credentials. Current regulatory frameworks are largely silent on this category of “private-but-available” data, and the risks compound when agents combine information from multiple sources to surface connections that no individual source intended to reveal.

    What Practitioners Should Watch

    The ITIF report recommends that policymakers focus on three priorities: regulating AI outputs rather than training inputs, encouraging transparency norms for AI agents, and creating safe harbour protections for developers who respect machine-readable opt-out signals and filter sensitive data.

    For consultants and practitioners advising organisations on AI strategy, the practical implications are more immediate. Enterprises deploying AI — whether training proprietary models, fine-tuning foundation models, or deploying agentic systems — need to map their data supply chain with the same rigour they apply to physical procurement. That means understanding where training data originates, what rights framework governs its use, whether it contains personal information subject to GDPR or state privacy laws, and whether the technical mechanisms exist to honour opt-out requests.
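
    On the narrower point of honouring machine-readable opt-out signals, the long-standing robots.txt check remains the simplest starting point. The sketch below uses Python's standard-library parser; the crawler name and site URL are placeholders, and newer signals such as LLMs.txt would need separate handling.

    ```python
    from urllib import robotparser

    # Placeholder values: substitute the real crawler user-agent string and target site.
    USER_AGENT = "example-training-crawler"
    SITE = "https://www.example.com"

    parser = robotparser.RobotFileParser()
    parser.set_url(f"{SITE}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt

    page = f"{SITE}/about-us"
    if parser.can_fetch(USER_AGENT, page):
        print(f"{page}: collection permitted for {USER_AGENT}")
    else:
        print(f"{page}: opt-out signal present - skip this page")
    ```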

    The organisations that treat training data governance as a compliance afterthought will find themselves exposed — not just to regulatory penalties, but to reputational risk and potential litigation. Those that build responsible data practices into their AI development lifecycle will have a genuine competitive advantage, particularly as transparency requirements tighten across jurisdictions.

    The rules for public data are not a peripheral regulatory detail. They are becoming one of the defining factors in who builds the next generation of AI systems, and where.

  • Digital Experiences Now Have Two Audiences. Most Enterprises Are Only Designing for One.

    For as long as digital products have existed, experience design has asked a single question: what does the user want? The user browses, clicks, hesitates, backtracks, and eventually converts — or does not. Every interface decision, from navigation hierarchy to button placement, has been optimised around that human journey.

    In 2026, a second audience has arrived. AI agents now browse websites, interpret content, summarise product pages, compare services, and make purchasing recommendations — often before a human ever sees the interface. Search engines have done this quietly for years. But the new generation of autonomous agents does it actively, making decisions and taking actions on behalf of the people they serve.

    The implication for enterprises is straightforward and largely unaddressed: digital experiences must now be designed for two interpreters simultaneously, and they do not read the same way.

    The dual-interpreter problem

    Humans and machines process digital experiences through fundamentally different lenses. A human visitor might scan a page loosely, drawn by visual hierarchy, tone of voice, and emotional cues. They browse without a clear goal, explore without urgency, and change their minds mid-session. That inconsistency is not a flaw — it is how people navigate complex decisions.

    Machines, by contrast, prefer structure. They infer meaning from hierarchy, repetition, semantic markup, and patterns. They classify, compress, and summarise. When an AI agent visits a product page, it does not feel reassured by a warm brand photograph. It parses structured data, identifies key claims, and decides — in milliseconds — what that page is about, what matters, and what to report back to the user who sent it.

    As Composite Global noted in a recent analysis, experience design has shifted from being about flow to being about interpretation. The question is no longer just “how will a person navigate this?” but “how will an agent read this — and will it get the right answer?”

    Where the gap shows up

    The consequences of ignoring machine intent are already visible. When AI agents summarise a company’s offerings inaccurately, the problem is rarely that the agent is broken. More often, the page was never designed to be machine-readable in any meaningful way. The content was written for humans — rich in nuance, light on structure — and the agent did its best with what it found.

    Research from TBlocks found that 71 per cent of users now expect digital experiences to adapt to their intent, while 76 per cent notice and feel frustrated when that adaptation fails. Those expectations increasingly extend to agent-mediated experiences. If a user asks an AI assistant to compare three consulting firms’ service offerings, and the agent returns a garbled summary because one firm’s website relies on unstructured prose and JavaScript-rendered content, the brand loses — not the agent.

    The practical failures tend to cluster around a few recurring problems: content hierarchies that make sense visually but not semantically; messaging that requires context an agent cannot infer; calls to action that depend on emotional persuasion rather than clear structure; and pages that load dynamically in ways that agents cannot reliably parse.

    This is not SEO by another name

    It would be tempting to treat this as an extension of search engine optimisation. After all, making content machine-readable has been a concern since the early days of Google. But the agent-readability challenge goes further than search ranking.

    Search engines index pages and rank them. AI agents interpret pages and act on them. An agent does not return a list of blue links — it makes a recommendation, completes a task, or rules out an option entirely. The stakes are different. A page that ranks poorly in search results is still findable. A page that an AI agent misinterprets may never surface at all, or worse, may surface with the wrong message attached.

    This distinction matters for how enterprises invest. SEO focuses on keywords, metadata, and backlinks. Agent-readability requires structured data, semantic clarity, explicit labelling, and content architectures that hold meaning when stripped of their visual presentation. The overlap exists, but the disciplines are not the same.

    What maddaisy’s coverage has been pointing toward

    Readers of maddaisy’s recent coverage will recognise the broader pattern here. When this publication examined the governance challenges of AI agents, the focus was on how enterprises monitor and control autonomous systems. When it covered OpenAI’s Frontier Alliance, the story was about agents disrupting enterprise software by sitting above it. And when it explored vibe coding’s enterprise arrival, the thread was about how AI is reshaping how software gets built.

    The digital experience question is downstream of all three. If agents are going to interact with enterprise digital products — browsing service pages, interpreting pricing structures, summarising capabilities for prospective clients — then those products need to be designed with agents in mind. Not instead of humans. Alongside them.

    Designing for clarity across interpreters

    The emerging discipline — sometimes called “dual-intent design” — requires thinking in layers. Composite Global’s framework identifies three dimensions of intent that designers must now map simultaneously: explicit intent (what a user directly communicates), behavioural intent (what systems infer from interaction patterns), and emotional context (the confidence, uncertainty, or curiosity a human brings to the interaction).

    The first two are measurable. The third is where human judgment lives — and where machines consistently fall short. Strong experience design ensures that machine interpretation reinforces human meaning rather than distorting it. In practice, that means clear content hierarchies so agents classify correctly, structured data so machines parse quickly, explicit labelling so summaries remain accurate, and focused messaging so automated recommendations do not flatten a brand’s positioning.

    CoreMedia’s analysis of 2026 customer experience trends puts it bluntly: AI has become “a powerful new intermediary stepping between brand and customer.” The brands that treat that intermediary as an afterthought will find their message distorted in transit.

    The practical question for enterprises

    For most organisations, the immediate question is not whether to redesign everything. It is whether their existing digital properties communicate clearly to both audiences. A simple audit reveals the answer quickly: take a key product or service page, strip away the visual design, and read only the structured content. Does it still make sense? Would an agent, parsing that structure, draw the right conclusions?

    If the answer is no — and for most enterprise websites built in the pre-agent era, it will be — the remediation is less about redesign than about augmentation: adding structured data, clarifying semantic hierarchy, making content modular rather than monolithic, and ensuring that key claims do not depend on visual context for meaning.
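
    For teams that want to run that audit programmatically rather than by eye, one rough approach is to keep only what survives without the visual layer: the structured-data blocks and the heading hierarchy. The sketch below uses requests and BeautifulSoup; the URL is a placeholder, and a real agent's parsing is considerably more involved.

    ```python
    import json
    import requests
    from bs4 import BeautifulSoup

    # Placeholder URL: substitute a key product or service page.
    html = requests.get("https://www.example.com/services/advisory", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # 1. Structured data: the schema.org JSON-LD blocks an agent can parse directly.
    for script in soup.find_all("script", type="application/ld+json"):
        try:
            print(json.dumps(json.loads(script.string), indent=2))
        except (TypeError, json.JSONDecodeError):
            print("Malformed or empty JSON-LD block - an agent would likely skip it")

    # 2. Semantic skeleton: the heading hierarchy, read without any visual styling.
    for heading in soup.find_all(["h1", "h2", "h3"]):
        print(f"{heading.name}: {heading.get_text(strip=True)}")
    ```

    If the output is empty or incoherent, an agent parsing the same page is working with very little — whatever it reports back to its user will be a guess.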

    None of this requires abandoning human-centred design. The point is not to optimise for machines at the expense of people. It is to build clarity that holds up under both interpretations — a standard that, arguably, should have been the goal all along.

    The enterprises that get this right will not just rank well or convert well. They will be accurately represented by the AI systems that increasingly mediate how their customers discover, evaluate, and choose them. In a market where agents are becoming the first point of contact, being misunderstood by a machine may prove more costly than being overlooked by a human.

  • Anthropic’s Pentagon Walkaway and the Price of AI Principles

    When maddaisy examined the Pentagon-Anthropic standoff earlier this month, the focus was on a new category of vendor risk – the possibility that political pressure could turn a trusted AI supplier into a regulatory liability overnight. That analysis ended with a warning: organisations in the defence supply chain needed to start modelling political risk alongside technical capability.

    Nine days later, the story has taken a turn that few risk models would have predicted. The company the US government tried to destroy is winning the consumer market.

    The market’s unexpected verdict

    Claude, Anthropic’s AI assistant, surged to the number one spot on Apple’s App Store in the days following the ban. The #QuitGPT movement saw 2.5 million users cancel ChatGPT subscriptions or publicly pledge to boycott OpenAI. Protesters gathered outside OpenAI’s San Francisco headquarters. Reddit and X filled with migration guides.

    The paradox is striking. Anthropic refused to remove ethical guardrails from its Pentagon contract – specifically, restrictions on mass surveillance and fully autonomous weapons. The government designated it a supply-chain risk, a label previously reserved for foreign adversaries. And the market responded by rewarding Anthropic with the most effective brand-building campaign in AI history – one the company never planned and could not have bought.

    What OpenAI’s ‘win’ actually cost

    OpenAI moved quickly to fill the gap. Sam Altman announced a deal to deploy models on the Pentagon’s classified networks, claiming the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human oversight requirements. The critical difference, as maddaisy noted at the time, is that OpenAI permits “all lawful uses” and relies on existing Pentagon policies rather than contractual restrictions.

    Altman defended the decision in Fortune, arguing that “the absence of responsible AI companies in the military space” would be worse than engagement on imperfect terms. It is a reasonable argument in isolation.

    But the commercial consequences arrived faster than the contracts. OpenAI’s head of robotics resigned over ethical concerns related to the Pentagon deal. Eleven OpenAI employees signed an open letter protesting the government’s treatment of Anthropic – even as their employer stood to benefit from it. OpenAI rushed out GPT-5.4 within 48 hours of GPT-5.3 Instant, in what looked less like a planned release and more like a company trying to change the subject.

    The talent cost may prove more significant than the user exodus. In an industry where the top researchers can choose where they work, being seen as the company that took the contract Anthropic refused is not a recruitment advantage.

    The strategic calculus of principles

    For decades, ethical positioning in technology was treated as a cost centre – a brand exercise that might prevent negative press but would never drive revenue. The Anthropic-Pentagon episode is the first major test of whether that assumption holds in the AI era.

    The early data suggests it does not. Anthropic is being rewarded by consumers and punished by government. OpenAI is being rewarded by government and punished by consumers. Both companies are learning, in real time, that the AI market has bifurcated into constituencies with fundamentally different values – and that serving one may come at the expense of the other.

    This is new territory for the technology industry. Previous ethical standoffs – Apple versus the FBI over iPhone encryption in 2016, Google’s Project Maven withdrawal in 2018 – were significant but did not produce the same kind of mass consumer response. The difference is that AI tools are personal. People interact with ChatGPT and Claude daily, often for sensitive tasks. When those tools become entangled with military use and surveillance, the reaction is visceral in a way that an encryption debate never was.

    The $60 billion question

    None of this means Anthropic’s position is comfortable. The company has received more than $60 billion in total investment from over 200 venture capital funds. If the supply-chain risk designation holds, enterprise clients in the defence ecosystem will be contractually barred from using Anthropic’s products. Anthropic has announced it will challenge the designation in court, but legal processes move slowly and government procurement decisions move fast.

    The Pro-Human Declaration, a framework released by a bipartisan coalition of researchers and former officials, was finalised before the standoff but published in its aftermath. Its signatories include Steve Bannon and Susan Rice – a pairing that underlines just how broadly the anxiety about unchecked AI development has spread. Among its provisions: mandatory pre-deployment testing, prohibitions on fully autonomous weapons, and an outright moratorium on superintelligence development until scientific consensus on safety is established.

    Whether such frameworks gain legislative traction remains an open question. As maddaisy has previously noted, more than 1,000 AI-related bills were introduced across all 50 US states in 2025 alone, while federal action has been conspicuously absent. The Pentagon-Anthropic episode may be the catalyst that changes that – or it may simply add another chapter to the regulatory vacuum.

    What this means for practitioners

    For organisations evaluating AI vendors, the lesson is not that principles are good and pragmatism is bad. It is that ethical positioning has become a material factor in AI vendor strategy – one that affects talent retention, consumer trust, regulatory exposure, and government access in ways that are increasingly difficult to hedge.

    The vendor assessment frameworks that most enterprises use were not designed for this. They evaluate uptime, security certifications, data residency, and pricing. They do not evaluate whether a vendor’s ethical commitments might make it a target of government action, or whether a vendor’s willingness to accept government contracts without restrictions might trigger a consumer backlash that affects product development velocity.

    Both of those scenarios played out in the space of a single week. Any vendor strategy that does not account for them is incomplete.

    The AI industry is discovering what the defence, pharmaceutical, and energy industries learned long ago: when your products touch questions of public safety and national security, your commercial strategy and your ethical positioning become the same thing. The companies that figure this out first – not just as a brand exercise, but as a genuine constraint on what they will and will not do – will be the ones that retain the trust of all their constituencies, not just the most powerful one.

    Anthropic’s stand cost it a $200 million contract. It may also have bought something more durable: a market position built on trust at a time when trust in AI companies is in short supply.

  • The Governance Frameworks for AI Agents Exist. The Hard Part Is Making Them Work.

    The governance playbook for autonomous AI agents is no longer a blank page. Regulatory bodies have published frameworks. Law firms have issued guidance. Industry coalitions have identified priorities. The principles – least privilege, human checkpoints, real-time monitoring, value-chain accountability – are converging across jurisdictions. And yet, Gartner predicts that 40 per cent of agentic AI projects will be cancelled by the end of 2027, citing escalating costs, unclear business value, and inadequate risk controls.

    The problem is not that enterprises lack governance policies. It is that they lack governance infrastructure – the operational machinery to translate principles into practice across live, autonomous systems operating at scale.

    When maddaisy examined the emerging governance playbook last week, the direction of travel was clear: regulators and advisors were converging on what good governance should look like. The question that follows is more difficult. What does it take to actually run that governance, day after day, across agents that plan, execute, and adapt autonomously?

    The Gap Between Policy and Operations

    The most revealing data point in recent weeks comes not from a governance report but from Logicalis’s 2026 CIO Report. Among 1,000 chief information officers surveyed globally, 89 per cent described their AI governance approach as “learning as we go.” That is not experimentation. That is the absence of operational governance.

    The skills gap compounds the problem. Nearly nine in 10 organisations cite a lack of internal technical capability as their primary constraint on AI deployment. For governance specifically, the deficit is acute. Monitoring agent behaviour in production, auditing multi-step reasoning chains, and interpreting regulatory requirements across jurisdictions all demand expertise that most enterprises have not yet hired for – and in many cases, cannot find.

    A PwC survey found that 79 per cent of companies have adopted agents in some capacity. But when enterprise search firm Lucidworks assessed over 1,100 organisations, only 6 per cent had deployed more than one agentic solution. The implication is significant: most enterprises are governing a single, contained pilot. The governance challenge changes materially when agents multiply, interact, and share data across business functions.

    Regulations Are Arriving – Unevenly

    The regulatory landscape is not waiting for enterprises to catch up. The EU AI Act’s obligations on general-purpose AI models have applied since August 2025, with high-risk rules originally scheduled to follow from August 2026 – and both apply globally to any organisation whose systems affect EU residents. In the United States, the picture is more fragmented. President Trump’s December 2025 Executive Order signalled federal intent to consolidate AI oversight, but as legal analysis from Gunderson Dettmer makes clear, it does not preempt existing state laws.

    California, Colorado, and Texas have each enacted comprehensive AI governance statutes with distinct requirements for high-risk systems. New York’s RAISE Act imposes transparency obligations that do not apply elsewhere. For multinational enterprises deploying autonomous agents, the compliance surface is not one framework – it is dozens, with different definitions of high-risk, different disclosure requirements, and different enforcement timelines.

    This is where governance-as-policy meets governance-as-operations. A well-crafted internal policy cannot tell you whether an agent deployed in London – one that processes data from a New York customer and executes a transaction through a Singapore-based system – complies with three different regulatory regimes simultaneously. Answering that requires technical infrastructure: jurisdictional routing, dynamic compliance rules, and audit trails that satisfy multiple authorities.
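
    What that infrastructure might look like, in miniature: the sketch below routes a single agent action through every jurisdiction it touches and emits an audit record either way. Every name in it – AgentAction, RULES, route_action – is an illustrative assumption, not a reference to any real regulatory API or vendor product.

    ```python
    # Minimal sketch of jurisdictional routing for a single agent action.
    # All names (AgentAction, Decision, RULES, route_action) are hypothetical;
    # a real system would pull rules from a policy engine, not a hard-coded dict.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AgentAction:
        agent_id: str
        action_type: str          # e.g. "execute_transaction"
        deployment_region: str    # e.g. "UK"
        data_subject_region: str  # e.g. "US-NY"
        execution_region: str     # e.g. "SG"

    @dataclass
    class Decision:
        allowed: bool
        reasons: list[str]
        audit_record: dict

    # Toy rule set: each jurisdiction contributes its own check.
    RULES = {
        "UK":    lambda a: (a.action_type != "automated_credit_decision",
                            "UK: automated credit decisions require human review"),
        "US-NY": lambda a: (a.action_type != "employment_screening",
                            "US-NY: employment screening requires prior disclosure"),
        "SG":    lambda a: (True, "SG: no additional restriction for this action"),
    }

    def route_action(action: AgentAction) -> Decision:
        """Evaluate an action against every jurisdiction it touches and
        emit an audit record whether or not the action is allowed."""
        touched = sorted({action.deployment_region,
                          action.data_subject_region,
                          action.execution_region})
        allowed, reasons = True, []
        for region in touched:
            check = RULES.get(region, lambda a: (False, f"{region}: no rule defined, default deny"))
            ok, reason = check(action)
            allowed = allowed and ok
            reasons.append(reason)
        audit = {
            "agent_id": action.agent_id,
            "action": action.action_type,
            "jurisdictions": touched,
            "allowed": allowed,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        return Decision(allowed, reasons, audit)

    decision = route_action(AgentAction("agent-07", "execute_transaction", "UK", "US-NY", "SG"))
    print(decision.allowed, decision.reasons)
    ```

    The specific rules matter less than the shape: the compliance decision is computed per action, across every jurisdiction involved, and the audit trail exists whether or not the action proceeds.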

    What Operational Governance Actually Requires

    Several CIOs interviewed by CIO.com this month offered a consistent message: governance cannot be separated from workflow design.

    Don Schuerman, CTO at Pega, put it directly: the expectation that thousands of agents can be deployed randomly across a business and left to operate is a myth. Successful deployments anchor agents in well-defined business processes with prescribed steps, high predictability, and clear audit requirements. The governance is not a layer added afterwards – it is embedded in how the agent’s workflow is designed.

    IBM CIO Matt Lyteson echoed the point, stressing that organisations need to understand the outcomes they are targeting, the data agents will require, and the controls needed to manage them before deployment – not after. Salesforce CIO Dan Shmitt added that without high-quality data and a unified governance model, agents produce unreliable results regardless of the policy framework around them.

    The emerging consensus among practitioners, distinct from the framework-level guidance, centres on three operational requirements.

    First, governance must be embedded in agent design, not bolted on. Decision boundaries, escalation rules, and compliance checks need to be part of the agent’s workflow architecture – a point made concrete in the sketch that follows these three requirements. Retrofitting governance onto an agent already in production is significantly harder and more expensive.

    Second, observability infrastructure is non-negotiable. As maddaisy has previously reported on agentic drift, agents that pass review at launch can behave differently months later. Continuous monitoring of reasoning chains, action sequences, and decision outcomes is the minimum viable governance stack – not periodic audits.

    Third, governance requires dedicated roles, not committees. The Mayer Brown framework identified four governance functions: policy-setters, product teams, cybersecurity integration, and frontline escalation. Most enterprises have distributed these responsibilities informally. As agents scale beyond pilot stage, informal arrangements become liabilities.
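
    As a concrete illustration of the first two requirements, the sketch below wraps each agent step in a declared decision boundary and appends every outcome – completed or escalated – to a trace store. GovernedStep, Escalation, and TRACE are invented names, not any framework’s API; a production system would persist the trace to an observability platform rather than an in-memory list.

    ```python
    # Illustrative sketch only: governance controls declared as part of the agent's
    # workflow, with every step appended to a trace for later audit. GovernedStep,
    # Escalation, and TRACE are hypothetical names, not any vendor's API.

    import json
    import time
    from dataclasses import dataclass
    from typing import Callable

    TRACE: list[dict] = []   # stand-in for an append-only observability store

    class Escalation(Exception):
        """Raised when a step exceeds its decision boundary and needs a human."""

    @dataclass
    class GovernedStep:
        name: str
        run: Callable[[dict], dict]
        max_amount: float = float("inf")  # decision boundary: largest spend this step may commit
        requires_human: bool = False      # hard checkpoint, regardless of amount

        def execute(self, context: dict) -> dict:
            amount = context.get("amount", 0.0)
            if self.requires_human or amount > self.max_amount:
                TRACE.append({"step": self.name, "event": "escalated",
                              "amount": amount, "ts": time.time()})
                raise Escalation(f"{self.name}: human review required "
                                 f"(amount={amount}, boundary={self.max_amount})")
            result = self.run(context)
            TRACE.append({"step": self.name, "event": "completed",
                          "input": context, "output": result, "ts": time.time()})
            return result

    # Example workflow: refund handling with an embedded spend boundary.
    workflow = [
        GovernedStep("validate_claim", run=lambda ctx: {**ctx, "valid": True}),
        GovernedStep("issue_refund", run=lambda ctx: {**ctx, "refunded": True}, max_amount=500.0),
    ]

    context = {"customer": "C-1042", "amount": 120.0}
    try:
        for step in workflow:
            context = step.execute(context)
    except Escalation as exc:
        print("Routed to human review:", exc)

    print(json.dumps(TRACE, indent=2))
    ```

    Change the refund amount to 5,000 and the second step raises Escalation instead of completing. The boundary, the human checkpoint, and the audit trace all live in the workflow definition itself rather than in a separate policy document – which is the difference between governance-as-policy and governance-as-operations.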

    The Trajectory Ahead

    The governance conversation has moved faster than most observers expected. Twelve months ago, agentic AI governance was a theoretical concern. Today, it has dedicated regulatory guidance, published legal frameworks, and named positions on practitioners’ organisational charts. That is genuine progress.

    But the distance between knowing what governance should look like and operating it reliably is where the next phase of difficulty lies. The 40 per cent cancellation rate Gartner projects is not primarily a technology failure – it is a governance and operational maturity failure. The organisations that succeed with autonomous agents will not be those with the most sophisticated AI models. They will be the ones that built the operational infrastructure to govern them before they scaled.

    For consultants advising enterprise clients on agentic AI, the message has shifted. The question is no longer whether governance frameworks exist. It is whether the organisation has the skills, tooling, and organisational design to make those frameworks operational. That is a harder conversation, but it is now the one that matters.

  • Nearly Every Enterprise Wants AI. Two-Thirds Cannot Scale It Past a Pilot.

    The Appetite Is Not the Problem

    Nearly every large organisation wants AI. The technology works. The budgets are approved. The pilots are running. And yet, when it comes to deploying AI at scale – moving from a successful proof of concept to a production capability that changes how the business operates – two-thirds of enterprises are stuck.

    That is the central finding of Logicalis’s 2026 CIO Report, which surveyed more than 1,000 chief information officers globally. The numbers paint a stark picture of an industry that has solved the belief problem but not the execution one: 94% of CIOs report growing organisational appetite for AI. Over a third have accelerated AI initiatives based on early proof-of-concept results. But two-thirds say they cannot scale AI beyond those initial deployments.

    The gap between wanting AI and running AI at enterprise scale has become the defining challenge of 2026. And the constraints holding organisations back are not the ones most boardrooms are discussing.

    Skills, Not Budgets, Are the Bottleneck

    The most striking finding in the Logicalis data is what CIOs identify as their primary constraint. It is not funding. It is not technology. It is skills.

    A lack of internal technical capability is holding back AI ambitions in nearly nine out of ten organisations. This is not a shortage of data scientists or machine learning engineers in the abstract – it is a shortage of people who understand how to integrate AI into existing business processes, manage its outputs, and govern its behaviour in production environments.

    The skills gap becomes more consequential as AI moves from experimentation to operation. A pilot needs a small team of enthusiasts and a sandbox. A production deployment needs data engineers, integration specialists, change managers, and – critically – people who understand both the technology and the business domain well enough to know when the AI is wrong.

    This connects directly to what Harvard Business Review reported in February: 88% of companies now report regular AI use, yet adoption is stalling because employees experiment with tools without integrating them into how work actually gets done. The tools are present. The capability to use them well is not.

    Governance Is Being Compromised, Not Solved

    If the skills gap explains why organisations cannot scale, the governance gap explains why scaling carries risk. The Logicalis report found that 62% of CIOs have compromised on AI governance due to limited knowledge, and only 44% say they fully grasp the risks of AI adoption. Meanwhile, 76% describe unchecked AI as a serious concern.

    This is not a theoretical problem. As maddaisy.com has reported extensively, the governance question follows AI into every new domain – from agentic systems that drift in production to vibe coding tools that generate enterprise software without conventional oversight. Organisations are deploying AI faster than they can build the frameworks to manage it, and most acknowledge this openly. An overwhelming 89% of CIOs in the Logicalis survey describe their current approach as “learning as we go.”

    That phrase deserves attention. It means that the majority of enterprise AI programmes are operating without mature risk management, without clear accountability structures, and without the monitoring infrastructure needed to catch problems before they compound. For regulated industries – financial services, healthcare, defence – this is not a growing pain. It is an exposure.

    The Pattern maddaisy.com Has Been Tracking

    The Logicalis data does not exist in isolation. It quantifies a pattern that has been building across multiple data points this year.

    In February, maddaisy.com reported on PwC’s Global CEO Survey, which found that 56% of chief executives could not point to measurable revenue gains from their AI investments. The diagnosis then was a measurement problem as much as a technology one – organisations deploying AI without redesigning workflows or building the instrumentation to track outcomes.

    Earlier this month, Capgemini’s CEO Aiman Ezzat made the case for pacing AI investment, arguing that companies are deploying capabilities ahead of their organisation’s ability to absorb them. EY research cited in that analysis showed organisations failing to capture up to 40% of potential AI benefits – not because the technology underperformed, but because the surrounding processes, skills, and culture were not ready.

    And in maddaisy.com’s analysis of the consulting pyramid, Eden McCallum research revealed that 95% of AI pilots have failed to deliver returns – a figure that points in the same direction as the Logicalis finding that two-thirds of organisations cannot move past the pilot stage.

    These are not isolated reports reaching coincidentally similar conclusions. They are different measurements of the same underlying problem: the bottleneck in enterprise AI has moved from technology capability to organisational readiness.

    The Managed Services Pivot

    One of the more telling details in the Logicalis report is that 94% of CIOs plan to lean on managed service providers over the next two to three years to help navigate AI governance, scaling, and sustainability. This is a quiet but significant shift in how enterprises relate to their technology infrastructure.

    It suggests that many CIOs have concluded they cannot build the required skills and governance frameworks internally – at least not at the pace the technology demands. Rather than owning and operating AI capabilities directly, they are moving toward orchestrating a network of external providers. The CIO role, in this model, becomes less about technology ownership and more about vendor management, risk oversight, and strategic coordination.

    For consulting and technology services firms, this creates a substantial market opportunity. But it also raises a question that the industry has not yet answered convincingly: if the clients cannot scale AI internally because they lack the skills and governance frameworks, and they outsource to service providers who are themselves still working out how to embed AI into their own operations, where does the actual expertise reside?

    What Practitioners Should Watch

    The adoption gap is unlikely to close quickly. Skills take time to develop. Governance frameworks take time to mature. Organisational change – the kind that turns a pilot into a production capability – is measured in quarters and years, not weeks.

    Three things are worth tracking. First, whether the 89% “learning as we go” figure starts to decline in subsequent surveys – that will be the clearest signal that enterprises are moving from experimentation to operational maturity. Second, whether the managed services pivot produces measurable outcomes or simply moves the scaling problem from one organisation to another. And third, whether the 67% of CIOs who expressed concern about an “AI bubble” translate that concern into more disciplined investment, or whether competitive pressure continues to override caution.

    The technology has arrived. The appetite is not in question. What remains unresolved is whether organisations can build the human and structural foundations fast enough to use what they have already bought.