Tag: ai-roi

  • CX Metrics Are Broken and Enterprises Know It. The Question Is What Comes Next.

    One in three customer experience professionals cannot connect their organisation’s Net Promoter Score to financial performance. That is not the conclusion of a contrarian think piece – it is a finding from Medallia’s 2026 State of Customer Experience Report, which surveyed more than 1,500 consumers and 550 CX practitioners globally. The metric that has anchored boardroom CX conversations for the better part of two decades cannot, for a significant share of the profession, be measurably connected to the outcomes it is supposed to predict.

    And the profession knows it. According to the same report, 78 per cent of CX professionals plan to adopt at least one new metric in 2026.

    The perception gap that should worry everyone

    The most striking number in the Medallia research is not about NPS at all. It is the gap between how enterprises think they are performing and how their customers actually feel. Two-thirds of CX professionals believe their brand’s experience has improved over the past year. Only 16 per cent of consumers agree.

    That is not a minor calibration error. It is a 50-percentage-point disconnect between corporate self-assessment and market reality. And it has consequences: 40 per cent of consumers in the study reported switching brands within three months, often without the kind of dramatic service failure that would register on traditional satisfaction surveys. They simply drifted, quietly, toward something better – or at least less friction-filled.

    The implication is uncomfortable. Many enterprises are not just using the wrong metrics. They are using metrics that actively mislead them into believing things are going well when customers are already leaving.

    Too many metrics, too little insight

    The problem is not a shortage of measurement. If anything, it is the opposite. Research published in MIT Sloan Management Review found that large organisations routinely track between 50 and 200 CX metrics across multiple channels and touchpoints. Some track even more. The result is not better decision-making but what Usan’s 2026 CX report calls “data paralysis” – mountains of signals with no clear path from insight to action.

    MIT Sloan’s researchers, working with 14 subscription-services companies, found that most organisations collect metrics because they are industry-standard, not because they have been validated as useful for their specific customer journey. The recommendation is counterintuitive for an era obsessed with data: collect fewer metrics, but align the ones you keep to specific stages of the customer lifecycle where they can actually inform decisions.

    Medallia’s own data supports this. Among CX professionals using 10 or more data sources, 92 per cent said they could demonstrate return on investment. Among those using five or fewer, the figure dropped to 73 per cent. More data sources help – but only when they are deliberately chosen and connected, not accumulated by default.

    The structural problem underneath

    Even when CX teams produce meaningful insights, the organisational infrastructure often fails to act on them. Medallia’s survey found that 41 per cent of CX professionals worry their teams are too siloed from the rest of the business. One-third said entire departments take no action on CX findings. Half said their organisation does not respond often enough or with enough impact.

    Budget control compounds the issue. Fifty-eight per cent of CX practitioners said their initiatives are funded from budgets they do not directly oversee – a structural dependency that limits their ability to move quickly or invest in new approaches.

    This echoes a pattern maddaisy.com has tracked across several domains. When this site examined how enterprises are designing digital experiences for both humans and AI agents, the core finding was similar: organisations recognise that the landscape has shifted, but their internal structures have not caught up. The CX measurement problem is another instance of the same gap – not a lack of awareness, but a failure of organisational plumbing.

    Where the replacement metrics are coming from

    The 78 per cent of CX professionals looking for new metrics are not starting from scratch. A CX Network analysis of customer listening trends suggests three directions gaining traction.

    First, Customer Effort Score – a measure of how easy or difficult a specific interaction was – is increasingly favoured over NPS for transactional touchpoints, because it maps directly to a friction point rather than measuring a generalised sentiment. Second, frontline employee input is being formalised as a data source, recognising that the people handling customer interactions daily often have a more accurate picture of experience quality than any survey. Third, AI-powered analysis of unstructured data – support transcripts, social mentions, behavioural signals – is enabling what practitioners call “continuous listening”, replacing periodic surveys with ongoing, passive measurement.
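    The “continuous listening” idea can be sketched in miniature. Nothing below comes from the CX Network analysis: the keyword list, the scoring rule, and the rolling window are all invented for illustration, and a production system would use an ML sentiment or intent model over transcripts rather than keyword matching.

```python
from collections import deque

# Hypothetical friction vocabulary, a stand-in for a real model's output.
FRICTION_TERMS = {"cancel", "frustrated", "again", "waiting", "broken"}

def friction_score(transcript: str) -> float:
    """Share of words in a transcript that signal friction."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in FRICTION_TERMS)
    return hits / len(words)

class ContinuousListener:
    """Rolling friction score over the last `window` interactions,
    updated passively as transcripts arrive: no survey required."""

    def __init__(self, window: int = 100):
        self.scores = deque(maxlen=window)

    def ingest(self, transcript: str) -> float:
        self.scores.append(friction_score(transcript))
        return sum(self.scores) / len(self.scores)

listener = ContinuousListener(window=3)
for t in ["thanks, that fixed it",
          "I am frustrated, the app is broken again",
          "still waiting to cancel my plan"]:
    rolling = listener.ingest(t)
print(f"rolling friction score: {rolling:.3f}")
```

    The point of the sketch is the shape of the pipeline, not the scoring: every interaction updates the measure, so a deteriorating experience surfaces in the rolling score long before the next quarterly survey would catch it.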

    None of these are new ideas. What is new is the urgency. Transcom’s 2026 CX paradoxes report makes a pointed observation: many organisations are layering AI-driven CX tools on top of broken measurement foundations. Automating a process that was never validated does not fix it. It scales the dysfunction.

    What practitioners should be watching

    The shift away from legacy CX metrics is not a technology problem waiting for a technology solution. It is a governance problem. Organisations that want their CX measurement to mean something will need to do three things that are simple to describe and difficult to execute.

    They will need to audit their existing metrics ruthlessly – not asking “is this metric popular?” but “does this metric predict a financial outcome we care about?” They will need to connect whatever survives that audit to specific journey stages, so that insights map to decisions rather than floating in quarterly reports. And they will need to give CX teams the budget authority and cross-functional access to act on what they find, rather than producing dashboards that nobody is structurally empowered to respond to.
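    The first of those three steps, the ruthless metric audit, can be made concrete. The sketch below is not drawn from any of the cited reports: the per-account data is invented, the 0.5 correlation threshold is arbitrary, and a real audit would use proper predictive models with holdout data rather than a single Pearson correlation. It only illustrates what “does this metric predict a financial outcome we care about?” looks like as a computation.

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation: a crude first-pass test of predictive signal."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy per-account data: each metric's scores alongside the outcome we
# actually care about (12-month revenue retention, invented numbers).
retention = [0.91, 0.72, 0.95, 0.60, 0.88, 0.79]
metrics = {
    "nps":          [9, 8, 8, 9, 8, 9],   # high but flat: little signal
    "effort_score": [2, 4, 1, 5, 2, 3],   # tracks retention inversely
}

for name, scores in metrics.items():
    r = pearson_r([float(s) for s in scores], retention)
    verdict = "keep" if abs(r) >= 0.5 else "audit out"
    print(f"{name:>12}: r={r:+.2f} -> {verdict}")
```

    On this toy data the uniformly high NPS scores barely track retention, while the effort score tracks it strongly and inversely, which is exactly the kind of evidence a metric audit is looking for.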

    The 50-point perception gap in Medallia’s data is not a measurement artefact. It is a mirror. Enterprises that cannot close it will keep telling themselves that customer experience is improving, right up until the churn numbers say otherwise.

  • Nearly Every Enterprise Wants AI. Two-Thirds Cannot Scale It Past a Pilot.

    The Appetite Is Not the Problem

    Nearly every large organisation wants AI. The technology works. The budgets are approved. The pilots are running. And yet, when it comes to deploying AI at scale – moving from a successful proof of concept to a production capability that changes how the business operates – two-thirds of enterprises are stuck.

    That is the central finding of Logicalis’s 2026 CIO Report, which surveyed more than 1,000 chief information officers globally. The numbers paint a stark picture of an industry that has solved the belief problem but not the execution one: 94% of CIOs report growing organisational appetite for AI. Over a third have accelerated AI initiatives based on early proof-of-concept results. But two-thirds say they cannot scale AI beyond those initial deployments.

    The gap between wanting AI and running AI at enterprise scale has become the defining challenge of 2026. And the constraints holding organisations back are not the ones most boardrooms are discussing.

    Skills, Not Budgets, Are the Bottleneck

    The most striking finding in the Logicalis data is what CIOs identify as their primary constraint. It is not funding. It is not technology. It is skills.

    A lack of internal technical capability is holding back AI ambitions in nearly nine out of ten organisations. This is not a shortage of data scientists or machine learning engineers in the abstract – it is a shortage of people who understand how to integrate AI into existing business processes, manage its outputs, and govern its behaviour in production environments.

    The skills gap becomes more consequential as AI moves from experimentation to operation. A pilot needs a small team of enthusiasts and a sandbox. A production deployment needs data engineers, integration specialists, change managers, and – critically – people who understand both the technology and the business domain well enough to know when the AI is wrong.

    This connects directly to what Harvard Business Review reported in February: 88% of companies now report regular AI use, yet adoption is stalling because employees experiment with tools without integrating them into how work actually gets done. The tools are present. The capability to use them well is not.

    Governance Is Being Compromised, Not Solved

    If the skills gap explains why organisations cannot scale, the governance gap explains why scaling carries risk. The Logicalis report found that 62% of CIOs have compromised on AI governance due to limited knowledge, and only 44% say they fully grasp the risks of AI adoption. Meanwhile, 76% describe unchecked AI as a serious concern.

    This is not a theoretical problem. As maddaisy.com has reported extensively, the governance question follows AI into every new domain – from agentic systems that drift in production to vibe coding tools that generate enterprise software without conventional oversight. Organisations are deploying AI faster than they can build the frameworks to manage it, and most acknowledge this openly. An overwhelming 89% of CIOs in the Logicalis survey describe their current approach as “learning as we go.”

    That phrase deserves attention. It means that the majority of enterprise AI programmes are operating without mature risk management, without clear accountability structures, and without the monitoring infrastructure needed to catch problems before they compound. For regulated industries – financial services, healthcare, defence – this is not a growing pain. It is an exposure.

    The Pattern maddaisy.com Has Been Tracking

    The Logicalis data does not exist in isolation. It quantifies a pattern that has been building across multiple data points this year.

    In February, maddaisy.com reported on PwC’s Global CEO Survey, which found that 56% of chief executives could not point to measurable revenue gains from their AI investments. The diagnosis then was a measurement problem as much as a technology one – organisations deploying AI without redesigning workflows or building the instrumentation to track outcomes.

    Earlier this month, Capgemini’s CEO Aiman Ezzat made the case for pacing AI investment, arguing that companies are deploying capabilities ahead of their organisation’s ability to absorb them. EY research cited in that analysis showed organisations failing to capture up to 40% of potential AI benefits – not because the technology underperformed, but because the surrounding processes, skills, and culture were not ready.

    And in maddaisy.com’s analysis of the consulting pyramid, Eden McCallum research revealed that 95% of AI pilots have failed to deliver returns – a figure that points in the same direction as the Logicalis finding that two-thirds of organisations cannot scale past the pilot stage, even if the two studies measure the problem differently.

    These are not isolated reports reaching coincidentally similar conclusions. They are different measurements of the same underlying problem: the bottleneck in enterprise AI has moved from technology capability to organisational readiness.

    The Managed Services Pivot

    One of the more telling details in the Logicalis report is that 94% of CIOs plan to lean on managed service providers over the next two to three years to help navigate AI governance, scaling, and sustainability. This is a quiet but significant shift in how enterprises relate to their technology infrastructure.

    It suggests that many CIOs have concluded they cannot build the required skills and governance frameworks internally – at least not at the pace the technology demands. Rather than owning and operating AI capabilities directly, they are moving toward orchestrating a network of external providers. The CIO role, in this model, becomes less about technology ownership and more about vendor management, risk oversight, and strategic coordination.

    For consulting and technology services firms, this creates a substantial market opportunity. But it also raises a question that the industry has not yet answered convincingly: if the clients cannot scale AI internally because they lack the skills and governance frameworks, and they outsource to service providers who are themselves still working out how to embed AI into their own operations, where does the actual expertise reside?

    What Practitioners Should Watch

    The adoption gap is unlikely to close quickly. Skills take time to develop. Governance frameworks take time to mature. Organisational change – the kind that turns a pilot into a production capability – is measured in quarters and years, not weeks.

    Three things are worth tracking. First, whether the 89% “learning as we go” figure starts to decline in subsequent surveys – that will be the clearest signal that enterprises are moving from experimentation to operational maturity. Second, whether the managed services pivot produces measurable outcomes or simply moves the scaling problem from one organisation to another. And third, whether the 67% of CIOs who expressed concern about an “AI bubble” translate that concern into more disciplined investment, or whether competitive pressure continues to override caution.

    The technology has arrived. The appetite is not in question. What remains unresolved is whether organisations can build the human and structural foundations fast enough to use what they have already bought.

  • The AI ROI Crisis: Why 56% of CEOs Still Cannot Prove Value From Their AI Investments

    More than half of the world’s chief executives cannot point to revenue gains from their AI investments. That is not a fringe finding from an alarmist report — it is the central conclusion of PwC’s 2026 Global CEO Survey, which polled 4,454 business leaders and was released at Davos in January.

    The figure — 56% reporting no measurable revenue uplift from AI — lands at a moment when enterprise AI spending continues to accelerate. Budgets are growing, headcounts in AI-adjacent roles are expanding, and the consulting industry’s order books are thick with transformation mandates. Yet the returns remain stubbornly elusive for the majority. Only 12% of CEOs surveyed reported achieving both revenue growth and cost reduction from their AI programmes.

    This is not, as some commentary has framed it, evidence that AI does not work. It is evidence that most organisations have not yet figured out how to make it work — a distinction that matters enormously for practitioners and consultants navigating the current landscape.

    The pattern maddaisy has been tracking

    PwC’s data confirms a dynamic that maddaisy has examined from several angles over the past fortnight. Deloitte’s 2026 State of AI report revealed that enterprises feel more strategically confident about AI than ever, yet less operationally ready — a paradox the report’s authors attributed to weak data infrastructure, insufficient talent, and immature governance. Research published in Harvard Business Review, which maddaisy covered last week, found that AI tools were making employees busier rather than more productive, with efficiency gains absorbed by task expansion rather than redirected toward higher-value work.

    The ROI gap is what happens when these operational failures compound. Organisations adopt AI tools without redesigning workflows. They measure deployment (how many teams have access) rather than outcomes (what changed as a result). They invest in the technology layer while underinvesting in the organisational layer — the process changes, role redesigns, and measurement frameworks that turn a pilot into a production capability.

    A measurement problem masquerading as a technology problem

    One of the more revealing aspects of the PwC data is what it implies about how enterprises are tracking AI value. CEO revenue confidence sits at a five-year low of 30%, and this pessimism correlates with the inability to demonstrate AI returns. But the question is whether the returns genuinely are not there, or whether organisations simply lack the instrumentation to detect them.

    The answer, for many enterprises, is likely both. Some AI deployments are genuinely failing to deliver — deployed in the wrong processes, aimed at the wrong problems, or undermined by poor data quality. But others may be generating real value that never surfaces in the metrics that CEOs review. Time saved in middle-office processes, reduced error rates in document handling, faster iteration cycles in product development — these are real gains, but they rarely appear on a revenue line unless someone has built the measurement architecture to capture them.

    This is a familiar pattern in enterprise technology adoption. The early years of cloud computing saw similar complaints: organisations spent heavily on migration but struggled to quantify the business impact beyond infrastructure cost reduction. The value was real — in agility, speed to market, and developer productivity — but it took years for finance teams to develop frameworks that could track it. AI is following the same trajectory, with the added complication that its benefits are often diffuse, spread across many small improvements rather than concentrated in a single, measurable outcome.

    What the 12% are doing differently

    PwC’s survey found that the minority of organisations achieving both revenue and cost benefits from AI share common characteristics. They have invested in data foundations — not just data lakes and pipelines, but governance structures that ensure data quality, accessibility, and appropriate use. They have moved beyond isolated pilots to embed AI into core business processes. And they treat AI adoption as an organisational change programme, not a technology deployment.

    None of this is conceptually new. Consultancies have been advising clients on change management, data governance, and process redesign for decades. What is new is the speed at which the gap between leaders and laggards is widening. The 12% who have cracked the ROI equation are pulling ahead, using AI-generated insights to inform strategy, AI-automated processes to reduce costs, and AI-enhanced products to capture new revenue. The 56% who have not are still running pilots, still debating governance frameworks, and still struggling to answer the board’s most basic question: what are we getting for this money?

    The consulting industry’s uncomfortable position

    For the consulting sector, PwC’s findings create an awkward tension. The firms advising enterprises on AI strategy are, in many cases, the same firms whose clients cannot demonstrate returns. This is not necessarily a reflection of poor advice — the operational barriers to AI value are genuine and deep — but it does raise questions about what consulting engagements are actually delivering.

    As maddaisy noted when examining Capgemini’s recent results, CEO Aiman Ezzat explicitly framed the company’s direction as a shift “from AI hype to AI realism.” Generative AI bookings exceeded 8% of Capgemini’s total for the year, but the company’s 2026 revenue guidance fell slightly below analyst expectations — a reminder that even firms positioning AI at the centre of their strategy face questions about whether the investment is translating into proportional growth.

    The risk for consultancies is that the ROI gap erodes client confidence in AI-related engagements. If more than half of CEOs see no revenue benefit, the appetite for further AI spending — and the advisory services that accompany it — may tighten. The counter-argument, which PwC’s own data supports, is that the solution to poor AI returns is not less AI but better AI implementation. That is a consulting engagement waiting to happen, provided firms can credibly demonstrate that they know how to close the gap.

    What practitioners should watch

    Three developments will shape whether the AI ROI picture improves or deteriorates over the coming quarters.

    First, the regulatory environment is tightening. As maddaisy recently examined, the EU AI Act’s high-risk system requirements become enforceable in August 2026, and state-level legislation in the US is creating a fragmented compliance landscape. Compliance costs will add to the total cost of AI ownership, making the ROI equation harder to balance for organisations that have not already built governance into their deployment model.

    Second, the rise of agentic AI — systems that plan and execute tasks with minimal human oversight — will test whether organisations can capture value from more autonomous AI without losing control. Deloitte’s data showed a nearly fivefold increase in planned agentic AI deployments over the next two years, but only one in five companies has a mature governance model for autonomous agents. The ROI potential is significant; so is the risk of expensive failures.

    Third, and perhaps most importantly, watch for a shift in how organisations measure AI value. The enterprises that move beyond “did revenue go up?” to more granular metrics — cycle time reduction, error rate improvement, employee capacity freed for strategic work — will be better positioned to demonstrate returns and justify continued investment. The measurement framework may matter as much as the technology itself.
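    What such granular measurement might look like is easy to sketch. The metric names and figures below are invented, not drawn from PwC’s survey; the point is only that cycle time, error rates, and freed capacity can be computed from routine operational data most finance teams already hold.

```python
from dataclasses import dataclass

@dataclass
class PeriodStats:
    """Aggregates for one measurement period (e.g. a quarter)."""
    tasks_completed: int
    total_cycle_hours: float
    errors: int

def value_report(before: PeriodStats, after: PeriodStats) -> dict[str, float]:
    """Granular before/after value metrics; names are illustrative."""
    cycle_before = before.total_cycle_hours / before.tasks_completed
    cycle_after = after.total_cycle_hours / after.tasks_completed
    return {
        "cycle_time_reduction_pct": 100 * (1 - cycle_after / cycle_before),
        "error_rate_before_pct": 100 * before.errors / before.tasks_completed,
        "error_rate_after_pct": 100 * after.errors / after.tasks_completed,
        # Hours freed if the after-period workload had run at the old pace.
        "capacity_freed_hours": after.tasks_completed * (cycle_before - cycle_after),
    }

report = value_report(
    before=PeriodStats(tasks_completed=400, total_cycle_hours=1200.0, errors=36),
    after=PeriodStats(tasks_completed=450, total_cycle_hours=1080.0, errors=27),
)
for key, value in report.items():
    print(f"{key}: {value:.1f}")
```

    None of these numbers appear on a revenue line, which is precisely the argument: without this layer of instrumentation, the value either surfaces nowhere or surfaces only as an unexplained residual.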

    PwC’s 56% figure is striking, but it is a snapshot of a transition, not a verdict on AI’s potential. The technology is not the bottleneck. Execution is. And for consultants and practitioners, that distinction is where the real work — and the real opportunity — lies.