Tag: digital-transformation

  • CX Metrics Are Broken and Enterprises Know It. The Question Is What Comes Next.

    One in three customer experience professionals cannot connect their organisation’s Net Promoter Score to financial performance. That is not the conclusion of a contrarian think piece – it is a finding from Medallia’s 2026 State of Customer Experience Report, which surveyed more than 1,500 consumers and 550 CX practitioners globally. The metric that has anchored boardroom CX conversations for the better part of two decades is, for a significant share of the profession, impossible to connect to the outcomes it is supposed to predict.

    And the profession knows it. According to the same report, 78 per cent of CX professionals plan to adopt at least one new metric in 2026.

    The perception gap that should worry everyone

    The most striking number in the Medallia research is not about NPS at all. It is the gap between how enterprises think they are performing and how their customers actually feel. Two-thirds of CX professionals believe their brand’s experience has improved over the past year. Only 16 per cent of consumers agree.

    That is not a minor calibration error. It is a 50-percentage-point disconnect between corporate self-assessment and market reality. And it has consequences: 40 per cent of consumers in the study reported switching brands within three months, often without the kind of dramatic service failure that would register on traditional satisfaction surveys. They simply drifted, quietly, toward something better – or at least less friction-filled.

    The implication is uncomfortable. Many enterprises are not just using the wrong metrics. They are using metrics that actively mislead them into believing things are going well when customers are already leaving.

    Too many metrics, too little insight

    The problem is not a shortage of measurement. If anything, it is the opposite. Research published in MIT Sloan Management Review found that large organisations routinely track between 50 and 200 CX metrics across multiple channels and touchpoints. Some track even more. The result is not better decision-making but what Usan’s 2026 CX report calls “data paralysis” – mountains of signals with no clear path from insight to action.

    MIT Sloan’s researchers, working with 14 subscription-services companies, found that most organisations collect metrics because they are industry-standard, not because they have been validated as useful for their specific customer journey. The recommendation is counterintuitive for an era obsessed with data: collect fewer metrics, but align the ones you keep to specific stages of the customer lifecycle where they can actually inform decisions.

    Medallia’s own data supports this. Among CX professionals using 10 or more data sources, 92 per cent said they could demonstrate return on investment. Among those using five or fewer, the figure dropped to 73 per cent. More data sources help – but only when they are deliberately chosen and connected, not accumulated by default.

    The structural problem underneath

    Even when CX teams produce meaningful insights, the organisational infrastructure often fails to act on them. Medallia’s survey found that 41 per cent of CX professionals worry their teams are too siloed from the rest of the business. One-third said entire departments take no action on CX findings. Half said their organisation does not respond often enough or with enough impact.

    Budget control compounds the issue. Fifty-eight per cent of CX practitioners said their initiatives are funded from budgets they do not directly oversee – a structural dependency that limits their ability to move quickly or invest in new approaches.

    This echoes a pattern maddaisy.com has tracked across several domains. When this site examined how enterprises are designing digital experiences for both humans and AI agents, the core finding was similar: organisations recognise that the landscape has shifted, but their internal structures have not caught up. The CX measurement problem is another instance of the same gap – not a lack of awareness, but a failure of organisational plumbing.

    Where the replacement metrics are coming from

    The 78 per cent of CX professionals looking for new metrics are not starting from scratch. A CX Network analysis of customer listening trends suggests three directions gaining traction.

    First, Customer Effort Score – a measure of how easy or difficult a specific interaction was – is increasingly favoured over NPS for transactional touchpoints, because it maps directly to a friction point rather than measuring a generalised sentiment. Second, frontline employee input is being formalised as a data source, recognising that the people handling customer interactions daily often have a more accurate picture of experience quality than any survey. Third, AI-powered analysis of unstructured data – support transcripts, social mentions, behavioural signals – is enabling what practitioners call “continuous listening”, replacing periodic surveys with ongoing, passive measurement.
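
    Of the three, Customer Effort Score is the most mechanical to adopt. It is commonly collected as a 1–7 agreement rating against a statement such as “The company made it easy to handle my issue”, then averaged per touchpoint so that friction is localised rather than smoothed into a single brand-level number. A minimal sketch of that aggregation (touchpoint names and ratings are invented for illustration):

```python
from collections import defaultdict

# Hypothetical survey rows: (touchpoint, ease_rating), where the rating answers
# "The company made it easy to handle my issue" on a 1-7 agreement scale.
responses = [
    ("checkout", 6), ("checkout", 7), ("checkout", 5),
    ("returns", 3), ("returns", 2), ("returns", 4),
]

def ces_by_touchpoint(rows):
    """Mean ease rating per touchpoint - higher means less friction."""
    buckets = defaultdict(list)
    for touchpoint, rating in rows:
        buckets[touchpoint].append(rating)
    return {tp: round(sum(r) / len(r), 2) for tp, r in buckets.items()}

print(ces_by_touchpoint(responses))
# → {'checkout': 6.0, 'returns': 3.0}: the friction sits in returns, not checkout
```

    The point of scoring per touchpoint is exactly the one the CX Network analysis makes: a brand-level average would hide the fact that one journey stage is doing all the damage.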

    None of these are new ideas. What is new is the urgency. Transcom’s 2026 CX paradoxes report makes a pointed observation: many organisations are layering AI-driven CX tools on top of broken measurement foundations. Automating a process that was never validated does not fix it. It scales the dysfunction.

    What practitioners should be watching

    The shift away from legacy CX metrics is not a technology problem waiting for a technology solution. It is a governance problem. Organisations that want their CX measurement to mean something will need to do three things that are simple to describe and difficult to execute.

    They will need to audit their existing metrics ruthlessly – not asking “is this metric popular?” but “does this metric predict a financial outcome we care about?” They will need to connect whatever survives that audit to specific journey stages, so that insights map to decisions rather than floating in quarterly reports. And they will need to give CX teams the budget authority and cross-functional access to act on what they find, rather than producing dashboards that nobody is structurally empowered to respond to.

    The 50-point perception gap in Medallia’s data is not a measurement artefact. It is a mirror. Enterprises that cannot close it will keep telling themselves that customer experience is improving, right up until the churn numbers say otherwise.

  • Digital Experiences Now Have Two Audiences. Most Enterprises Are Only Designing for One.

    For as long as digital products have existed, experience design has asked a single question: what does the user want? The user browses, clicks, hesitates, backtracks, and eventually converts — or does not. Every interface decision, from navigation hierarchy to button placement, has been optimised around that human journey.

    In 2026, a second audience has arrived. AI agents now browse websites, interpret content, summarise product pages, compare services, and make purchasing recommendations — often before a human ever sees the interface. Search engines have done this quietly for years. But the new generation of autonomous agents does it actively, making decisions and taking actions on behalf of the people they serve.

    The implication for enterprises is straightforward and largely unaddressed: digital experiences must now be designed for two interpreters simultaneously, and they do not read the same way.

    The dual-interpreter problem

    Humans and machines process digital experiences through fundamentally different lenses. A human visitor might scan a page loosely, drawn by visual hierarchy, tone of voice, and emotional cues. They browse without clarity, explore without urgency, and change their minds mid-session. That inconsistency is not a flaw — it is how people navigate complex decisions.

    Machines, by contrast, prefer structure. They infer meaning from hierarchy, repetition, semantic markup, and patterns. They classify, compress, and summarise. When an AI agent visits a product page, it does not feel reassured by a warm brand photograph. It parses structured data, identifies key claims, and decides — in milliseconds — what that page is about, what matters, and what to report back to the user who sent it.

    As Composite Global noted in a recent analysis, experience design has shifted from being about flow to being about interpretation. The question is no longer just “how will a person navigate this?” but “how will an agent read this — and will it get the right answer?”
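
    What “structured data” means in practice is usually schema.org vocabulary embedded as JSON-LD, which an agent can parse without rendering the page at all. A minimal sketch for a hypothetical service page (the organisation, service name, and price are invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "Enterprise CX Audit",
  "description": "Fixed-scope review of customer experience metrics and tooling.",
  "provider": {
    "@type": "Organization",
    "name": "Example Consulting Ltd"
  },
  "offers": {
    "@type": "Offer",
    "price": "5000",
    "priceCurrency": "GBP"
  }
}
```

    Embedded in a `<script type="application/ld+json">` tag, this survives even when the visual layer is ignored: the same claims a human reads in the hero copy are stated in a form a machine can classify in one pass.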

    Where the gap shows up

    The consequences of ignoring machine intent are already visible. When AI agents summarise a company’s offerings inaccurately, the problem is rarely that the agent is broken. More often, the page was never designed to be machine-readable in any meaningful way. The content was written for humans — rich in nuance, light on structure — and the agent did its best with what it found.

    Research from TBlocks found that 71 per cent of users now expect digital experiences to adapt to their intent, while 76 per cent notice and feel frustrated when that adaptation fails. Those expectations increasingly extend to agent-mediated experiences. If a user asks an AI assistant to compare three consulting firms’ service offerings, and the agent returns a garbled summary because one firm’s website relies on unstructured prose and JavaScript-rendered content, the brand loses — not the agent.

    The practical failures tend to cluster around a few recurring problems: content hierarchies that make sense visually but not semantically; messaging that requires context an agent cannot infer; calls to action that depend on emotional persuasion rather than clear structure; and pages that load dynamically in ways that agents cannot reliably parse.

    This is not SEO by another name

    It would be tempting to treat this as an extension of search engine optimisation. After all, making content machine-readable has been a concern since the early days of Google. But the agent-readability challenge goes further than search ranking.

    Search engines index pages and rank them. AI agents interpret pages and act on them. An agent does not return a list of blue links — it makes a recommendation, completes a task, or rules out an option entirely. The stakes are different. A page that ranks poorly in search results is still findable. A page that an AI agent misinterprets may never surface at all, or worse, may surface with the wrong message attached.

    This distinction matters for how enterprises invest. SEO focuses on keywords, metadata, and backlinks. Agent-readability requires structured data, semantic clarity, explicit labelling, and content architectures that hold meaning when stripped of their visual presentation. The overlap exists, but the disciplines are not the same.

    What maddaisy’s coverage has been pointing toward

    Readers of maddaisy’s recent coverage will recognise the broader pattern here. When this publication examined the governance challenges of AI agents, the focus was on how enterprises monitor and control autonomous systems. When it covered OpenAI’s Frontier Alliance, the story was about agents disrupting enterprise software by sitting above it. And when it explored vibe coding’s enterprise arrival, the thread was about how AI is reshaping how software gets built.

    The digital experience question is downstream of all three. If agents are going to interact with enterprise digital products — browsing service pages, interpreting pricing structures, summarising capabilities for prospective clients — then those products need to be designed with agents in mind. Not instead of humans. Alongside them.

    Designing for clarity across interpreters

    The emerging discipline — sometimes called “dual-intent design” — requires thinking in layers. Composite Global’s framework identifies three dimensions of intent that designers must now map simultaneously: explicit intent (what a user directly communicates), behavioural intent (what systems infer from interaction patterns), and emotional context (the confidence, uncertainty, or curiosity a human brings to the interaction).

    The first two are measurable. The third is where human judgment lives — and where machines consistently fall short. Strong experience design ensures that machine interpretation reinforces human meaning rather than distorting it. In practice, that means clear content hierarchies so agents classify correctly, structured data so machines parse quickly, explicit labelling so summaries remain accurate, and focused messaging so automated recommendations do not flatten a brand’s positioning.

    CoreMedia’s analysis of 2026 customer experience trends puts it bluntly: AI has become “a powerful new intermediary stepping between brand and customer.” The brands that treat that intermediary as an afterthought will find their message distorted in transit.

    The practical question for enterprises

    For most organisations, the immediate question is not whether to redesign everything. It is whether their existing digital properties communicate clearly to both audiences. A simple audit reveals the answer quickly: take a key product or service page, strip away the visual design, and read only the structured content. Does it still make sense? Would an agent, parsing that structure, draw the right conclusions?
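
    That audit can be partially automated. The sketch below, using only Python’s standard library, pulls out roughly what an agent is left with once the visual layer is gone: the heading outline and any embedded JSON-LD. The HTML fragment is a hypothetical stand-in for a real page:

```python
import json
from html.parser import HTMLParser

class StructureAudit(HTMLParser):
    """Collects the heading outline and any JSON-LD blocks from a page -
    roughly what remains when the visual design is stripped away."""
    def __init__(self):
        super().__init__()
        self.outline, self.jsonld = [], []
        self._tag = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._tag = tag
        elif tag == "script" and ("type", "application/ld+json") in attrs:
            self._tag = tag

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        if self._tag in ("h1", "h2", "h3"):
            self.outline.append((self._tag, data.strip()))
        elif self._tag == "script":
            self.jsonld.append(json.loads(data))

# Hypothetical page fragment standing in for a real service page.
page = """
<h1>Acme Consulting</h1>
<h2>Advisory Services</h2>
<script type="application/ld+json">{"@type": "Service", "name": "CX Audit"}</script>
"""

audit = StructureAudit()
audit.feed(page)
print(audit.outline)   # [('h1', 'Acme Consulting'), ('h2', 'Advisory Services')]
print(audit.jsonld)    # [{'@type': 'Service', 'name': 'CX Audit'}]
```

    If the outline that comes back is empty, out of order, or meaningless without the page’s imagery, an agent parsing the same page will fare no better.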

    If the answer is no — and for most enterprise websites built in the pre-agent era, it will be — the remediation is less about redesign than about augmentation: adding structured data, clarifying semantic hierarchy, making content modular rather than monolithic, and ensuring that key claims do not depend on visual context for meaning.

    None of this requires abandoning human-centred design. The point is not to optimise for machines at the expense of people. It is to build clarity that holds up under both interpretations — a standard that, arguably, should have been the goal all along.

    The enterprises that get this right will not just rank well or convert well. They will be accurately represented by the AI systems that increasingly mediate how their customers discover, evaluate, and choose them. In a market where agents are becoming the first point of contact, being misunderstood by a machine may prove more costly than being overlooked by a human.

  • Vibe Coding Enters the Enterprise. The Governance Question Follows It In.

    When Andrej Karpathy coined the term “vibe coding” in early 2025, he was describing something informal — a developer giving in to the flow of conversation with an AI assistant, accepting whatever code it generated, and iterating by feel rather than by specification. It was a shorthand for a new way of working that felt more like directing than engineering.

    Fourteen months later, the term has migrated from developer Twitter into enterprise press releases. Pegasystems announced this week that its Blueprint platform now offers an “end-to-end vibe coding experience” for designing mission-critical workflow applications. Salesforce has embedded similar capabilities into Agentforce. Gartner, in a May 2025 report titled Why Vibe Coding Needs to Be Taken Seriously, predicted that 40 per cent of new enterprise production software will be created using vibe coding techniques by 2028. What started as a solo developer’s guilty pleasure is being repackaged as an enterprise strategy.

    The question is whether the repackaging addresses the risks, or merely relabels them.

    From Slang to Sales Pitch

    The appeal of vibe coding in an enterprise context is straightforward. Natural language replaces formal specification. Business users can describe what they want in conversational terms — a workflow, an approval chain, a customer-facing process — and an AI assistant translates that intent into a working application. Development cycles that previously took months collapse into days or hours. Stakeholder alignment happens at the prototype stage rather than after months of requirements gathering.

    Pega’s implementation illustrates the model. Users converse with an AI assistant using text or speech to design applications, refine workflows, define data models, and build interfaces. They can switch between conversational input and traditional drag-and-drop modelling at any point. Completed designs deploy directly into Pega’s platform as live, governed workflows. The company’s chief product officer, Kerim Akgonul, framed it as “the excitement and speed of vibe coding” combined with “enterprise-grade governance, security, and predictability.”

    That framing is telling. Enterprise vendors are not adopting vibe coding wholesale — they are domesticating it. The original concept involved a developer accepting AI-generated code on trust, with minimal review. The enterprise version keeps the conversational interface but routes the output through structured frameworks, predefined best practices, and platform-level guardrails. Whether that still qualifies as vibe coding or is simply a new marketing label for low-code development with an AI front end is an open question.

    The Numbers Behind the Hype

    Gartner’s 40 per cent prediction is eye-catching, but it deserves scrutiny. The firm also projects that 90 per cent of enterprise software engineers will use AI coding assistants by 2028, up from under 14 per cent in early 2024. These are not niche forecasts — they describe a wholesale transformation of how software gets built.

    The market signals support the direction. Y Combinator reported that a quarter of its Winter 2025 startup cohort had codebases that were 95 per cent AI-generated. AI-native SaaS companies are achieving 100 per cent year-on-year growth rates compared with 23 per cent for traditional SaaS. Pega’s own Q4 2025 results showed 17 per cent annual contract value growth and a 33 per cent surge in cloud revenue, with management attributing much of the acceleration to Blueprint adoption.

    But there is a less comfortable set of numbers. A Veracode report from 2025 found that nearly 45 per cent of AI-generated code introduced at least one security vulnerability. Linus Torvalds, creator of Linux, publicly cautioned that vibe coding “may be a horrible idea from a maintenance standpoint” for production systems requiring long-term support. And Gartner’s own research acknowledges that only six per cent of organisations implementing AI become “high performers” achieving significant financial returns.

    The Shadow Already Has a Name

    For regular readers of maddaisy, these risks will sound familiar. When we examined shadow AI in February, the data showed 37 per cent of employees had already used AI tools without organisational permission — including coding assistants plugged into development environments without security review. Vibe coding, in its original ungoverned form, is essentially shadow AI with a better name.

    The enterprise vendors’ pitch — governed vibe coding, with guardrails — is a direct response to this problem. Rather than fighting the tide of developers and business users reaching for AI-assisted tools, platforms like Pega and Salesforce are channelling that energy through controlled environments. It is the same pattern that played out with cloud computing a decade ago: shadow IT became sanctioned cloud adoption once the governance frameworks caught up.

    The difference this time is speed. Cloud adoption played out over years. Vibe coding is moving in months. And as maddaisy’s coverage of agentic AI drift highlighted, AI-generated systems do not fail suddenly — they degrade gradually, in ways that are harder to detect than traditional software failures. An application built through conversational prompts, where the development team may not fully understand the underlying logic, amplifies that risk considerably.

    The Governance Gap Is the Real Story

    The enterprise vibe coding pitch rests on a critical assumption: that platform-level guardrails can substitute for developer-level understanding. In regulated industries — financial services, healthcare, government — this assumption will be tested quickly and publicly.

    The immediate challenge is not whether vibe coding works in a demo. It clearly does. The challenge is what happens six months into production, when the original conversational prompts have been refined dozens of times, the underlying models have been updated, and the people who designed the workflows have moved on. That is the maintenance problem Torvalds flagged, and it maps directly onto the agentic drift pattern: small, individually reasonable changes accumulating into a system whose behaviour no longer matches its original intent.

    Consultants and technology leaders evaluating vibe coding platforms should be asking three questions. First, can you audit the reasoning chain — not just the output, but why the system built what it built? Second, what happens when the AI model underneath is updated — does the application need to be revalidated? Third, who owns the maintenance burden when the person who “vibe coded” the application is no longer available?

    What to Watch

    Enterprise vibe coding is not a fad. The productivity gains are real, the vendor investment is substantial, and the Gartner forecasts — even if directionally approximate — point to a genuine shift in how software gets built. PegaWorld 2026, scheduled for June in Las Vegas, will likely showcase dozens of enterprise vibe coding implementations.

    But the narrative developing around it echoes the early days of every enterprise technology wave: speed first, governance second. The organisations that get this right will be those that treat vibe coding as a development interface, not a development shortcut — using the conversational speed to accelerate design while maintaining the engineering discipline to ensure what gets built can be understood, audited, and maintained over time.

    The vibes are entering the enterprise. The question is whether the rigour follows them in.

  • The Non-AI Agenda: What CIOs Are Actually Prioritising Beyond Artificial Intelligence in 2026

    Global IT spending will hit $6.15 trillion in 2026, a 10.8 per cent rise on the previous year. But dig beneath that headline and the distribution is strikingly lopsided. Gartner’s latest forecast projects AI spending to surge 80.8 per cent and data centre outlays to climb 31.7 per cent, while communications services and device budgets limp along at low single digits. The message is clear: AI is swallowing the budget. The question is what happens to everything else.

    That question matters because, as maddaisy has documented over recent weeks, the AI investment thesis is under strain. PwC’s 2026 CEO Survey found that 56 per cent of executives cannot point to measurable revenue gains from their AI programmes, and governance frameworks are lagging well behind deployment ambitions. If the technology absorbing most of the budget has yet to prove its return, the priorities being squeezed to fund it deserve closer scrutiny.

    The squeeze is real — and deliberate

    John-David Lovelock, a vice president analyst at Gartner, describes a dynamic in which runaway AI spending is forcing CIOs to find savings elsewhere — and IT services providers are bearing the brunt. The logic is straightforward: buyers expect their vendors to be using AI internally, and they want those efficiency gains reflected in lower fees.

    “CIOs need to find somewhere that they have control of their budget, and they can pick on the services companies because they’re using AI,” Lovelock told CIO.com.

    But not every non-AI line item can be trimmed without consequences. Several priorities are growing more urgent precisely because of AI adoption, not despite it.

    Cybersecurity: AI’s expanding attack surface

    The most immediate non-AI priority is, paradoxically, driven by AI itself. Dmitry Nazarevich, CTO at software firm Innowise, notes that his company’s security spending increase “is directly related to the increase in exposure and risk to data associated with the increased attack surface resulting from the introduction of generative AI.”

    This is not a theoretical concern. Every new AI model integrated into enterprise workflows introduces new vectors — data exfiltration through prompt injection, model poisoning through compromised training data, and the simple reality that agentic systems with write access to business processes can do real damage if they malfunction or are manipulated. As enterprises move from experimental AI pilots to production deployments, security spending is not optional — it is the prerequisite.

    Data foundations: the unsexy precondition

    Salesforce CIO Dan Shmitt offered a telling anecdote about an AI agent on the company’s help site that surfaced two conflicting answers to the same question. “Our first reaction was to assume the model was wrong,” he said. “The truth was that our data and content needed more consistency.”

    This pattern — blaming the model when the real problem is the data — is remarkably common. Capgemini’s TechnoVision 2026 framework places “thriving on data” as one of its nine foundational technology domains, emphasising data sharing, AI-driven insights, and sustainable data practices. The message is consistent across vendors and analysts: AI systems are only as reliable as the data infrastructure beneath them.

    For CIOs who have spent two years funding AI pilots, investing in data quality, master data management, and integration architecture may feel like a step backwards. It is not. It is the work that determines whether those pilots ever graduate to production.

    Technical debt and modernisation

    Legacy systems are not merely an annoyance in an AI-first world — they are a bottleneck. Nazarevich highlights that increased modernisation spending at Innowise is “partly a result of the fact that legacy or outdated systems limit the effectiveness of AI technology and delay deployment schedules.”

    This creates a compounding problem. Organisations that deferred modernisation to fund AI experiments now find that their AI initiatives are underperforming because the underlying platforms cannot support them. The CIO.com analysis of strategic imperatives for 2026 makes the point explicitly: if eight out of 10 strategic priorities relate to AI, “you’re likely missing some critical emerging technologies and trends.”

    Edge computing, digital twins, and platform modernisation may not generate the boardroom excitement of a generative AI demo, but they are the infrastructure on which AI capabilities depend.

    FinOps: managing the unpredictable bill

    AI workloads have introduced a new kind of cost unpredictability into IT budgets. Unlike traditional cloud computing, where usage patterns are relatively stable and forecastable, AI inference costs can spike with demand in ways that are difficult to model in advance. Innowise reports increased FinOps spending specifically because of “the unpredictability of computing bills created by AI workloads.”

    FinOps — the practice of bringing financial accountability to cloud spending — is no longer a niche discipline for cloud-native firms. For any organisation running AI at scale, it has become an essential management capability. Without it, the 80 per cent surge in AI spending that Gartner forecasts could easily overshoot, consuming budget earmarked for the very modernisation and security initiatives that AI requires to succeed.
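
    The arithmetic behind that unpredictability is simple to sketch. With token-metered pricing, a demand spike and longer prompts multiply rather than add, so a modest change in usage patterns can produce a several-fold change in the bill. The price and volumes below are invented for illustration, not any vendor’s rates:

```python
# Illustrative only: price and volumes are assumptions, not any vendor's rates.
PRICE_PER_M_TOKENS = 10.0  # hypothetical blended inference price, USD per million tokens

def monthly_cost(requests_per_day, avg_tokens_per_request, days=30):
    """Token-metered inference bill for one month."""
    total_tokens = requests_per_day * avg_tokens_per_request * days
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

baseline = monthly_cost(10_000, 1_500)   # steady-state forecast: $4,500/month
spike = monthly_cost(30_000, 3_000)      # 3x demand with 2x longer prompts: $27,000/month
print(f"baseline ${baseline:,.0f} -> spike ${spike:,.0f} ({spike / baseline:.0f}x)")
```

    A 3x demand increase combined with 2x longer prompts is a 6x bill, which is the kind of multiplicative surprise traditional cloud budgeting rarely has to absorb.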

    Workforce fluency: the persistent gap

    Technology investment alone solves nothing if the people using it are not equipped to do so effectively. Rebecca Gasser, global CIO at FGS Global, frames this as a literacy challenge: building digital and AI fluency across the organisation so that workers “can be more agile and adaptable to the ongoing changes.”

    Pat Lawicki, CIO of TruStage, puts it more directly: “We’re committed to balancing innovation with humanity: leveraging digital tools where they add real value while preserving the human connection that defines trust and empathy.”

    This is consistent with the pattern maddaisy has tracked in recent coverage of AI-driven burnout and the organisational failures behind poor AI rollouts. The technology works best when employees understand it, trust it, and can exercise judgement about when to rely on it and when to intervene. That requires sustained investment in training, change management, and communication — none of which appear in an AI spending forecast but all of which determine whether the forecast delivers value.

    The convergence argument

    The most sophisticated framing of the CIO’s 2026 agenda comes not from treating AI and non-AI priorities as competitors for budget, but from recognising their interdependence. Capgemini’s TechnoVision 2026 describes a shift toward “synchronicity at scale” — the idea that boundaries between digital, physical, and biological innovation are dissolving, and that the CIO’s job is to orchestrate across all of them simultaneously.

    In practice, this means cybersecurity investment protects AI deployments. Data foundation work makes AI outputs reliable. Modernisation enables AI to reach production. FinOps keeps AI costs sustainable. Workforce fluency ensures AI adoption sticks.

    The CIOs who treat 2026 as an AI-only year will likely find themselves explaining, 12 months from now, why their AI investments still are not delivering returns. The ones who invest in the full stack — the security, the data, the infrastructure, the people — are building the conditions under which AI can actually work.

    That is not a story about choosing between AI and everything else. It is a story about understanding that AI does not operate in isolation, and neither should the budget that funds it.

  • Deploy First, Fix Later: How Poor AI Rollouts Are Engineering Burnout Into the System

    Over the past week, maddaisy has examined two dimensions of the AI-burnout nexus: the work intensification mechanisms that make employees busier rather than better, and the organisational frameworks that might address the people side of the equation. But there is a third dimension that sits upstream of both: the technology deployment itself.

    Before burnout becomes a management problem, it is often an implementation problem. And a growing body of evidence suggests that the way organisations roll out AI tools — rushed timelines, inadequate capability-building, and a persistent belief that go-live equals readiness — is baking unsustainable working conditions into the system from day one.

    The capability gap that no one budgets for

    A detailed analysis published by CIO.com this month draws on workforce upskilling data from large-scale enterprise rollouts to make a blunt assessment: systems rarely fail because the technology does not work. They fail because the organisation has not built the infrastructure to support the people using them.

    The numbers are sobering. Organisations routinely experience productivity drops of 30 to 40 per cent within the first 90 days of a major technology go-live when workforce capability has not been adequately addressed. Support tickets triple. Workaround behaviours — offline spreadsheets, manual reconciliations, shadow systems — proliferate as employees revert to what feels safe and controllable. Post-go-live support costs run 40 to 60 per cent over budget. ROI timelines slip by six to 12 months.

    None of this is caused by employee resistance or technological failure. It is the predictable consequence of treating capability-building as a training event rather than a systemic requirement.

    The 10% problem

    Perhaps the most striking data point concerns training transfer. Research on technology implementation training suggests that only 10 to 20% of skills learned in formal programmes translate into sustained on-the-job performance. The issue is not poor course design. It is the assumption that a three-day training session two weeks before go-live can substitute for the repeated practice, pattern recognition, and feedback loops that genuine capability requires.

    This matters directly for the burnout conversation. When employees lack the capability to operate new systems confidently, every minor issue becomes a support ticket. Every exception becomes an escalation. The cognitive load of navigating unfamiliar tools while maintaining output expectations creates exactly the kind of sustained pressure that Harvard Business Review’s recent research identified as driving work intensification. The tools are not the problem — the deployment is.

    Where AI deployments differ from traditional IT rollouts

    Enterprise technology deployments have always carried capability risks. But AI introduces specific complications that amplify them.

    First, AI tools change the scope of work, not just the process. As maddaisy’s earlier analysis noted, employees using AI do not simply do the same tasks faster — they absorb new responsibilities, blur role boundaries, and take on work that previously justified additional headcount. A traditional ERP deployment changes how someone does their job. An AI deployment can change what their job is. Upskilling programmes designed around process training cannot address a shift that is fundamentally about role redesign.

    Second, AI output is probabilistic, not deterministic. An ERP system produces the same result given the same inputs. An AI tool might produce different outputs each time, requiring users to exercise judgement about quality, accuracy, and appropriateness. That judgement cannot be trained in a classroom — it is built through experience, and it demands a kind of cognitive engagement that is qualitatively different from following a process manual.

    Third, AI raises the visibility of individual output. When an AI tool enables someone to produce a first draft in minutes rather than hours, the speed becomes the new baseline expectation. Employees who are still building capability with the tool face pressure to match output norms set by early adopters or by the tool’s theoretical capacity. The result is the self-reinforcing acceleration cycle that researchers have now documented repeatedly.

    Governance as deployment infrastructure

    The CIO.com analysis proposes treating upskilling as a governance concern rather than a training administration task — embedding capability-building into the transformation workstream with the same rigour as data migration or integration testing. This means defining capability in behavioural terms tied to business processes, starting practice in sandbox environments as soon as process designs stabilise, and measuring performance data rather than training completion rates.

    One practical recommendation stands out: establishing a “performance council” distinct from the training team. This group — composed of process owners, frontline managers, and high-performing end users — meets weekly to review whether people are performing reliably in live operations. They examine error rates, support ticket patterns, workaround behaviours, and time-to-competency. Critically, they have the authority to pause rollouts when the data shows capability is not sticking.
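The pause-or-proceed judgement such a council makes can be sketched as a simple gating check over the metrics it reviews. The field names and thresholds below are illustrative assumptions for the sake of the sketch, not figures from the CIO.com analysis; a real council would set thresholds per process:

```python
from dataclasses import dataclass

@dataclass
class CapabilitySnapshot:
    """Weekly live-operations metrics a performance council might review.

    All field names and thresholds here are hypothetical illustrations.
    """
    error_rate: float                 # share of transactions with errors (0..1)
    tickets_per_user_per_week: float  # support ticket volume per active user
    workaround_share: float           # share of work done in shadow systems (0..1)
    median_days_to_competency: float  # time for a new user to hit target output

def rollout_decision(s: CapabilitySnapshot) -> str:
    """Return 'pause', 'hold', or 'proceed' for the next rollout wave."""
    breaches = 0
    if s.error_rate > 0.05:
        breaches += 1
    if s.tickets_per_user_per_week > 2.0:
        breaches += 1
    if s.workaround_share > 0.15:
        breaches += 1
    if s.median_days_to_competency > 45:
        breaches += 1

    if breaches >= 2:
        return "pause"    # capability is not sticking; stop the next wave
    if breaches == 1:
        return "hold"     # remediate before expanding the rollout
    return "proceed"

# Example: high ticket volume plus widespread workarounds trigger a pause,
# even though raw error rates look acceptable.
week_12 = CapabilitySnapshot(0.03, 3.1, 0.22, 30)
print(rollout_decision(week_12))  # pause
```

The point of the sketch is that the decision is mechanical once the metrics exist: the hard organisational work is collecting honest data on workarounds and time-to-competency, and granting the council real authority to act on it.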

    This is not a radical proposal. It is the kind of operational discipline that well-run technology programmes have always applied to infrastructure and integration. The fact that it needs to be articulated separately for workforce capability suggests how often organisations skip the people side of deployment planning.

    Connecting the implementation layer to the burnout evidence

    The burnout data that has emerged over recent weeks tells a consistent story. DHR Global reports that 83% of workers experience some degree of burnout, with engagement dropping 24 percentage points in a single year. Only 34% of employees say their organisation has communicated AI’s workplace impact clearly. Among entry-level staff, that figure falls to 12%.

    These are not just management failures. They are deployment failures. When organisations ship AI tools without building the capability infrastructure to support them — without adequate training transfer, without performance monitoring, without role redesign — they are not just creating a change management problem. They are accruing a human-capability form of technical debt that compounds over time, manifesting as burnout, disengagement, and quiet resistance.

    As Deloitte’s 2026 AI report made clear, the gap between strategic confidence and operational readiness remains the defining feature of enterprise AI. The implementation evidence suggests that closing this gap requires treating workforce capability not as a soft skill or an HR deliverable, but as core deployment infrastructure — as essential as the data pipeline and as measurable as system uptime.

    What this means for the next phase

    The organisations that avoid engineering burnout into their AI deployments will share a common trait: they will treat go-live as the beginning of capability-building, not the end. They will budget for the 90-day productivity dip and plan for it rather than being surprised by it. They will measure whether people can perform reliably at scale under real business pressure, not whether they completed a training module.

    Most importantly, they will recognise that the fastest way to undermine an AI investment is not a technical failure — it is deploying capable technology to an unprepared workforce and then measuring success by adoption rates alone. The tools work. The question, as it has always been with technology transformations, is whether the organisation around them does too.