Tag: workforce-management

  • CTOs and CHROs Are Climbing the Pay Table. That Tells You Where Boards Think Risk Lives Now.

    For decades, the path to being among a public company’s five highest-paid executives ran through operations, sales, or business unit leadership. Technology leaders and HR chiefs were important, certainly — but they were support functions, not the positions that commanded top-tier compensation.

    That hierarchy is shifting. New research from The Conference Board, published this week, tracks the named executive officers (NEOs) — the five highest-paid executives — across the Russell 3000 from 2021 to 2025. The findings are striking: the number of Chief Technology Officers appearing as NEOs increased by 61%, while the number of Chief Human Resources Officers rose by 55%. In the same period, business unit leaders — historically the dominant non-CEO, non-CFO category — declined by 15%.

    The numbers are not subtle. They represent a structural revaluation of which functions boards consider most critical to company performance and risk.

    From support functions to enterprise risk owners

    The explanation is not difficult to locate. Two forces are converging: the AI boom is making technology capability an existential strategic question, and a persistent talent war is making the ability to attract, retain, and reskill workers a board-level concern rather than an HR department problem.

    “Growth in CHRO and CTO roles signals that talent, culture, and digital capability are now viewed as enterprise risks, not support functions,” said Andrew Jones, Principal Researcher at The Conference Board. “Boards are prioritising leaders who shape resilience and transformation across the organisation.”

    This tracks with what maddaisy has been reporting for months. The consulting pyramid piece in early March documented how firms are reshuffling roles rather than eliminating them — cutting some positions while creating others in AI engineering and data science. Accenture’s decision to track AI tool usage for promotions was another signal: when a firm ties career progression to technology adoption, the executive overseeing that technology becomes strategically indispensable.

    The specific numbers tell the story

    CTO NEO disclosures rose from 155 to 249 in the Russell 3000 between 2021 and 2025. CHRO disclosures went from 148 to 230. These are not marginal shifts — they represent boards deciding, through the bluntest mechanism available (compensation), that these roles belong at the top table.

    Meanwhile, business unit leaders fell from 1,734 to 1,475 disclosures. The decline suggests that boards are placing less emphasis on divisional performance and more on enterprise-wide capabilities: technology infrastructure, talent strategy, and the legal and regulatory architecture that governs both.

    That last point matters. Legal roles — including chief legal officers, corporate secretaries, and general counsels — saw the largest increase of any non-mandatory NEO category, rising 21% over the same period. As AI governance, data regulation, and compliance pressures mount, the lawyers are moving closer to the centre of power too.
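
    For readers who want to check the arithmetic, the headline percentages follow directly from the disclosure counts quoted above. A quick sketch, using only the figures The Conference Board reported:

      # NEO disclosure counts in the Russell 3000, 2021 vs 2025,
      # as reported by The Conference Board (figures quoted above)
      counts = {
          "CTO": (155, 249),
          "CHRO": (148, 230),
          "Business unit leader": (1_734, 1_475),
      }

      for role, (start, end) in counts.items():
          change = (end - start) / start * 100
          print(f"{role}: {start} -> {end} ({change:+.0f}%)")

      # CTO: 155 -> 249 (+61%)
      # CHRO: 148 -> 230 (+55%)
      # Business unit leader: 1734 -> 1475 (-15%)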

    What this means for the talent market

    The compensation data carries implications well beyond boardroom politics. When CTOs and CHROs move into the highest-paid tier, it reshapes the talent pipeline for those roles. More ambitious executives will target those paths. Boards will demand different skill sets — not just technical competence for CTOs, but strategic vision for how AI and digital infrastructure create competitive advantage. Not just process management for CHROs, but the ability to navigate workforce transformation at scale.

    The Conference Board’s data also reveals an interesting gender dimension. Among S&P 500 NEOs, women CEOs earned 11% more than men in 2025. But male NEOs overall still earned 8% more in the S&P 500 and 12% more in the Russell 3000. The gap, as researcher Paul Hodgson noted, “largely reflects who holds which roles” — men remain more prevalent in higher-paid operational and commercial positions, with longer average tenure and concentration at larger firms.

    The COO plateau and what it signals

    One quieter finding deserves attention: COO representation rose just 6% over the period and has actually declined from a 2023 peak. The chief operating officer — once the natural second-in-command — appears to be losing ground to more specialised enterprise-wide roles. In an era where the critical operational questions are “how do we deploy AI safely” and “how do we retain the people who know how to do it,” a generalist operations mandate may no longer be enough to justify top-tier compensation.

    The board’s revealed preferences

    Compensation data has always been the most reliable indicator of what organisations actually value, as opposed to what they claim to value. Press releases can announce “people-first cultures” and “digital-first strategies” without consequence. Paying the CHRO and CTO as much as the head of your largest business unit is a commitment that shows up in proxy statements.

    The Conference Board’s findings confirm a pattern that has been building for several years: boards are redefining which risks are existential and which executives own them. The AI boom has made technology leadership a strategic imperative. The talent war — intensified by the very AI transformation companies are pursuing — has elevated workforce strategy from an administrative function to an enterprise risk.

    For consultants and practitioners, the practical implication is clear. The organisations they advise are restructuring their leadership hierarchies around technology and talent. Advisory work that once centred on operational efficiency and market strategy increasingly requires fluency in AI deployment, workforce transformation, and the regulatory landscape that governs both. The C-suite is telling you where the priorities are. The pay data just makes it impossible to ignore.

  • The Three-Tool Threshold: BCG Research Reveals Where AI Productivity Gains Turn Into Cognitive Overload

    For months, the evidence that AI tools are intensifying work rather than simplifying it has been accumulating. maddaisy has tracked this story from the UC Berkeley research showing employees absorbing more tasks under AI, through the organisational failures that leave workers unsupported, to the implementation problems that bake burnout into the system from day one. What was missing was a specific threshold — a number that tells enterprises where the gains end and the damage begins.

    Boston Consulting Group has now supplied one. In a study published in Harvard Business Review this month, researchers surveyed 1,488 full-time US workers and found a clean break point: employees using three or fewer AI tools reported genuine productivity gains. Those using four or more reported the opposite — declining productivity, increased mental fatigue, and higher error rates. BCG calls the phenomenon “AI brain fry.”

    The finding is not just academic. Among workers reporting brain fry, 34% expressed active intention to leave their employer, compared with 25% of those who did not. For a workforce already under pressure from rapid technology deployment, that nine-percentage-point gap represents a tangible retention risk.

    The cognitive cost no one budgeted for

    The BCG research puts numbers to something the UC Berkeley study identified in qualitative terms earlier this year. When AI tools require high levels of oversight — reading, interpreting, and verifying LLM-generated content rather than simply delegating administrative tasks — workers expend 14% more mental effort. They experience 12% greater mental fatigue and 19% more information overload.

    Many respondents described a “fog” or “buzzing” sensation that forced them to step away from their screens. Others reported an increase in small mistakes — exactly the kind of errors that compound in professional services, financial analysis, and other high-stakes environments.

    “People were using the tool and getting a lot more done, but also feeling like they were reaching the limits of their brain power,” Julie Bedard, the study’s lead author and a managing director at BCG, told Fortune. “Things were moving too fast, and they didn’t have the cognitive ability to process all the information and make all the decisions.”

    This aligns with what maddaisy has previously described as the task expansion pattern: when AI makes certain tasks faster, employees do not use the freed-up time for strategic thinking. They absorb more work. The BCG data now suggests the breaking point arrives sooner than most organisations assume — at the fourth tool, not the tenth.

    The macro picture is equally sobering

    The three-tool threshold sits against a broader backdrop of underwhelming AI productivity data at scale. A Goldman Sachs analysis published this month found “no meaningful relationship between productivity and AI adoption at the economy-wide level,” with measurable gains confined to just two domains: customer service and software development.

    Separately, a survey of 6,000 C-suite executives found that 90% saw no evidence of AI impacting productivity or employment in their workplaces over the past three years. Their median forecast: a 1.4% productivity increase over the next three years. That is hardly the transformation narrative that justified billions in enterprise AI spending.

    These findings do not mean AI is useless. The Federal Reserve Bank of St. Louis estimated a 33% hourly productivity boost for workers during the specific hours they use generative AI. The problem is that this micro-level gain does not scale linearly. Adding more tools, more prompts, and more AI-generated outputs does not multiply the benefit — it multiplies the cognitive overhead.

    What the threshold means for enterprises

    The practical implications are straightforward, even if they run against the instincts of most technology procurement processes.

    First, fewer tools, better deployed. The BCG data suggests that organisations would get better results from consolidating around two or three well-integrated AI tools than from giving every team access to every available platform. This runs counter to the current market dynamic, where vendors push specialised AI tools for every function — writing, coding, data analysis, scheduling, customer interaction — and enterprises buy them all to avoid falling behind.

    Second, oversight design matters as much as tool selection. The highest cognitive costs were associated with tasks requiring workers to interpret and verify AI output, not with AI performing autonomous background work. Enterprises that can shift more AI usage toward the latter — automated workflows, pre-verified data processing, agent-completed administrative tasks — will impose less cognitive strain on their people.

    Third, training needs to include when not to use AI. As maddaisy has previously noted, most organisations treat AI capability-building as a deployment event rather than a sustained practice. The BCG researchers found that when managers provided ongoing training and support, brain-fry symptoms decreased. The Berkeley team suggested batching AI-intensive work into specific time blocks rather than leaving it on all day — a scheduling discipline that few organisations currently enforce.

    The next chapter in a familiar story

    The AI-productivity narrative is following a pattern that technology historians will recognise. Early adopters see real gains. Organisations rush to scale. The gains plateau or reverse as implementation complexity outpaces human capacity to manage it. Eventually, a more measured approach emerges — not abandoning the technology, but deploying it with greater discipline.

    The BCG three-tool threshold may turn out to be an early data point rather than a universal law. But it offers something that has been missing from the AI-adoption conversation: a concrete starting point for right-sizing the technology stack to what human cognition can actually sustain.
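
    If the three-tool threshold survives replication, the first diagnostic is cheap: count the distinct AI tools each employee actually touches. A minimal sketch, assuming an export of something like SSO or licence logs as (user, tool) rows; the data shape, names, and cut-off here are illustrative assumptions, not a standard:

      from collections import defaultdict

      # Hypothetical export from SSO or licence logs: one row per (user, tool) event
      usage_rows = [
          ("alice", "copilot"), ("alice", "chat_assistant"),
          ("bob", "copilot"), ("bob", "chat_assistant"),
          ("bob", "meeting_summariser"), ("bob", "slide_generator"),
          ("bob", "code_reviewer"),
      ]

      THRESHOLD = 3  # BCG's reported break point: four or more tools

      tools_per_user = defaultdict(set)
      for user, tool in usage_rows:
          tools_per_user[user].add(tool)

      for user, tools in sorted(tools_per_user.items()):
          status = "over threshold" if len(tools) > THRESHOLD else "ok"
          print(f"{user}: {len(tools)} distinct tools ({status})")

    The value is less in the code than in the posture: treating tools-per-person as a monitored variable, the way licence spend already is.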

    For consultants advising on AI transformation, that is a message worth delivering — even when it runs counter to the vendor pitch deck.

  • The Consulting Pyramid Is Not Collapsing. It Is Being Quietly Redesigned.

    The consulting industry’s foundational staffing model — a small number of partners supported by large cohorts of junior analysts — is facing its most serious structural challenge in decades. But the narrative that AI is “dismantling” the pyramid overstates the pace and understates the complexity of what is actually happening.

    Nick Pye, managing partner at Mangrove Consulting, argued in Consultancy.uk this month that the traditional pyramid is “becoming increasingly difficult to justify.” His core thesis is straightforward: AI can now perform the analytical heavy lifting — research, financial modelling, scenario analysis — that once required rooms of graduates billing at premium rates. Clients are noticing. They want senior judgement, not junior analysis. And they increasingly have their own data capabilities, reducing dependence on external advisers for the work that used to fill the pyramid’s base.

    The argument is sound in principle. Where it risks overreach is in assuming the transition is further along than the evidence suggests.

    The data tells a more complicated story

    If AI were genuinely dismantling the consulting pyramid, one would expect to see mass reductions in headcount at the bottom. The picture is more mixed than that.

    As maddaisy reported in February, Capgemini ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. The company simultaneously announced €700 million in restructuring charges. It is not shrinking. It is reshuffling: eliminating some roles while creating others, primarily in AI engineering, data science, and agentic AI delivery.

    McKinsey, meanwhile, has begun testing new recruits on AI capabilities, and more than half of graduate roles now reportedly require AI skills. The Big Four have reduced student intakes, but the UK consulting market’s difficulties in 2025 — its worst year since lockdown — owe as much to broader demand softness as to structural AI disruption.

    And the technology itself is not yet delivering at the scale the hype implies. Eden McCallum research published this year found that while excitement for generative AI remains high, revenue impact remains minimal. Ninety-five per cent of AI pilots have failed to deliver returns, according to industry data cited by Consultancy.uk. That is not a technology that has already displaced the analyst class.

    What is genuinely changing

    None of this means the pyramid is safe. The direction of travel is clear, even if the pace is slower than the most breathless accounts suggest.

    Three shifts are converging. First, clients increasingly own and understand their own data. The analytical monopoly that consulting firms once held — gathering, processing, and synthesising information that clients could not access themselves — has eroded as organisations have built internal data teams and deployed their own AI tools.

    Second, the economics of the base are deteriorating. When an AI system can produce a comparable market analysis in minutes rather than weeks, it becomes progressively harder to justify billing rates for the same work performed by junior staff. This does not eliminate the need for human analysis, but it compresses the time and headcount required.

    Third, client expectations have shifted from deliverables to outcomes. As Pye puts it: clients want “decisions and performance,” not “decks and processes.” That shift favours experienced practitioners who can navigate organisational politics and drive implementation — not the analysts who assemble the slides.

    The diamond, the inverted pyramid, and the graduate question

    The replacement models being discussed are instructive. The most conservative is the “diamond” — wider in the middle, thinner at the base, with fewer entry-level analysts but more mid-level orchestration roles. It preserves hierarchy while acknowledging that the bottom of the pyramid has less to do.

    The more radical option is what Pye calls “flipping the pyramid”: small teams of senior and mid-level consultants tackling specific challenges, supported by AI systems rather than junior staff. Boutique consultancies have operated variations of this model for years. What AI changes is the scale at which it becomes viable.

    But neither model addresses the question that should concern the industry most: if junior roles contract, where do future senior consultants come from?

    The traditional pyramid functioned as a training pipeline. Graduates entered, learned the craft through years of analytical work, and developed into the experienced practitioners clients now prize. Close that entry point, and the industry faces a slow-motion skills crisis — a generation of senior consultants with no successors trained in the discipline.

    Pye’s answer is that consulting firms will increasingly recruit mid-career professionals who have already developed sector expertise elsewhere. The career path inverts: specialise first in an industry, then move into consulting.

    This is plausible but raises its own problems. The consulting skill set — structured problem solving, client management, the ability to diagnose organisational dysfunction — is not the same as industry expertise. A decade in financial services does not automatically produce someone who can run a transformation programme. The two capabilities overlap, but they are not identical.

    The real risk is not speed — it is the talent pipeline

    The firms that are moving fastest on AI are not necessarily the ones best positioned for the long term. Accenture is tracking AI logins for promotion decisions. OpenAI has formed alliances with McKinsey, BCG, Accenture, and Capgemini to deploy its enterprise AI platform. Capgemini’s CEO is counselling patience, arguing that deploying AI ahead of organisational readiness wastes both money and credibility.

    Each response reflects a different bet on how quickly the pyramid will change — and how to navigate the transition without breaking the firm’s ability to develop talent.

    The consulting industry is not being dismantled by AI. It is being redesigned, unevenly, firm by firm, with no consensus on the target operating model. The firms that get the balance right — reducing the base without severing the pipeline that produces tomorrow’s senior partners — will define what the industry looks like in a decade. The ones that treat this as a simple cost-cutting exercise will find, in five years, that they have cut too deep in exactly the wrong place.

  • Accenture Will Track AI Logins for Promotions. The Risk Is Measuring Compliance, Not Competence.

    Accenture has begun tracking how often senior employees log into its AI tools — and will factor that usage into promotion decisions. An internal email, reported by the Financial Times, put it plainly: “Use of our key tools will be a visible input to talent discussions.” For a firm that sells AI transformation to the world’s largest organisations, the message to its own workforce is unmistakable: adopt or stall.

    The policy is not without logic. Accenture has invested heavily in AI readiness — 550,000 employees trained in generative AI, up from just 30 in 2022, backed by $1 billion in annual learning and development spending. But training people and getting them to change how they work are two very different problems. The promotion-linked tracking is an attempt to close that gap by force.

    The credibility problem

    This move arrives at a moment when Accenture’s external positioning makes internal adoption a matter of commercial credibility. As maddaisy.com reported last week, Accenture is one of four firms named in OpenAI’s Frontier Alliance — tasked with building the data architecture, cloud infrastructure, and systems integration work needed to deploy AI agents at enterprise scale. It is difficult to sell that capability convincingly if your own senior managers are not using the tools.

    The policy applies specifically to senior managers and associate directors, with leadership roles now requiring what Accenture calls “regular adoption” of AI. The firm is tracking weekly logins to its AI platforms for certain senior staff, though employees in 12 European countries and those on US federal contracts are excluded — a pragmatic nod to varying data protection regimes and security requirements.

    The resistance is the interesting part

    What makes this story more than a policy announcement is the reaction. Some senior employees have questioned the value of the tools outright, with one describing them as “broken slop generators.” Another told the Financial Times they would “quit immediately” if the tracking applied to them.

    That resistance is worth taking seriously, not dismissing. It maps directly onto a pattern maddaisy.com has been tracking. Research published earlier this month found that only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly” — a figure that drops to 12% among non-senior staff. When people do not understand why they are being asked to use a tool, mandating its use tends to produce compliance rather than competence.

    Harvard Business Review research, cited in that same analysis, identified three psychological needs that determine whether employees embrace or resist AI: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues). A policy that monitors logins and ties them to career progression addresses none of these. It measures activity. It says nothing about whether that activity is useful.

    Logins are not outcomes

    This is the core tension. Accenture’s leadership knows that senior adoption is a bottleneck — industry observers note that older managers are often “less comfortable with technology and more wedded to established working methods.” CEO Julie Sweet has framed AI adoption as existential, telling analysts that the company is “exiting employees” in areas where reskilling is not possible. The 11,000 layoffs announced in September reinforced the point.

    But tracking logins conflates presence with productivity. A senior manager who logs in weekly to check a dashboard is counted the same as one who has genuinely integrated AI into client delivery. The metric captures the floor, not the ceiling.

    This echoes a broader concern maddaisy.com has documented. UC Berkeley researchers found that employees using AI tools worked faster but not necessarily better — absorbing more tasks, blurring work-life boundaries, and entering a cycle of acceleration that resembled productivity but often was not. If Accenture’s policy drives more tool usage without more thoughtful tool usage, it risks producing exactly this outcome at scale.

    What this tells the rest of the industry

    Accenture is not alone in struggling with senior AI adoption. The challenge is structural across professional services. Deloitte’s 2026 CSO Survey found that while 95% of chief strategy officers expect AI to reshape their priorities, only 28% co-lead their organisation’s AI decisions. The people with the authority to mandate change are often the furthest from understanding it.

    Accenture’s approach is at least direct. Rather than hoping adoption trickles up from junior staff — who typically adopt new tools faster — it is applying pressure from the top. And the numbers suggest some urgency: with 750,000 employees and $70 billion in revenue, Accenture has grown enormously from its 275,000-person, $29 billion base in 2013. Maintaining that trajectory while its competitors embed AI into delivery models requires its own workforce to be fluent, not just trained.

    The risk, though, is that the policy optimises for the wrong signal. Organisations that have navigated AI adoption most effectively — and maddaisy.com has covered several — tend to share a common trait: they measure what AI enables people to do differently, not how often people open the application. Accenture’s policy would be considerably more compelling if it tracked client outcomes improved through AI-assisted work, or time freed for higher-value tasks, rather than weekly platform logins.
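
    To make that distinction concrete, here is a deliberately simplified sketch of the two metric designs side by side. Every field name is hypothetical; nothing here reflects Accenture’s actual systems. But it shows how little a login count says compared with even a crude outcome measure:

      from dataclasses import dataclass

      # Illustrative records; the fields are assumptions, not Accenture's schema
      @dataclass
      class WeekRecord:
          employee: str
          ai_logins: int                 # what a login-tracking policy measures
          ai_assisted_deliverables: int  # a crude outcome proxy
          hours_freed: float             # estimated time freed for higher-value work

      records = [
          WeekRecord("manager_a", ai_logins=5, ai_assisted_deliverables=0, hours_freed=0.0),
          WeekRecord("manager_b", ai_logins=1, ai_assisted_deliverables=3, hours_freed=6.5),
      ]

      # The login metric ranks manager_a first; either outcome proxy ranks manager_b first
      print(max(records, key=lambda r: r.ai_logins).employee)                  # manager_a
      print(max(records, key=lambda r: r.ai_assisted_deliverables).employee)  # manager_b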

    The precedent matters more than the policy

    Whatever one makes of the specifics, Accenture has done something that most large organisations have avoided: it has made AI adoption an explicit, measurable condition of career advancement for senior leaders. That is a significant signal. It tells clients that Accenture is serious about practising what it sells. It tells employees that AI fluency is no longer optional at the leadership level.

    Whether it works depends on what happens next. If the tracking evolves toward measuring genuine integration — how AI changes the quality of work, not just the frequency of logins — Accenture could set a useful template for the industry. If it remains a blunt instrument that rewards compliance over competence, it will likely produce exactly the kind of performative adoption that gives AI transformation programmes a bad name.

    For consultants and enterprise leaders watching from outside, the lesson is practical: mandating AI adoption is easy; mandating it well is the hard part. The metric you choose to track will shape the behaviour you get.

  • Capgemini Added 82,300 Offshore Workers Last Year. So Much for AI Replacing Everyone.

    If artificial intelligence is about to make IT services firms obsolete, someone forgot to tell Capgemini. The French consultancy ended 2025 with 423,400 employees — up 24% year-on-year — after adding 82,300 offshore workers in a single year. Its offshore workforce now stands at 279,200, a 42% increase, representing two-thirds of total headcount.

    At the same time, the company is planning €700 million in restructuring charges to reshape its workforce for AI. It is positioning itself as, in CEO Aiman Ezzat’s words, “the catalyst for enterprise-wide AI adoption.”

    The contradiction is only apparent. What Capgemini’s numbers actually reveal is the gap between the market’s AI narrative and the operational reality of large-scale enterprise transformation.

    The WNS factor

    The headline headcount surge requires context. The bulk of those 82,300 new offshore employees came from Capgemini’s acquisition of WNS, the India-headquartered business process services firm, completed in late 2025. Cloud4C, another acquisition, contributed further. This was not organic hiring — it was a deliberate strategic bet on scaling offshore delivery capacity at precisely the moment investors were questioning whether such capacity has a future.

    The acquisition arithmetic is revealing. Capgemini’s 2026 revenue growth guidance of 6.5% to 8.5% includes 4.5 to 5 percentage points from acquisitions, primarily WNS. Strip out the acquired growth, and organic expansion looks more like 2% to 3.5%. The company needed WNS to hit its targets.
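
    The arithmetic is simple enough to verify. A quick sketch, using only figures quoted in this piece:

      total_2025 = 423_400      # year-end headcount, up 24% year-on-year
      offshore_2025 = 279_200   # offshore headcount after the WNS and Cloud4C deals
      offshore_added = 82_300   # offshore workers added during the year

      print(f"{offshore_2025 / total_2025:.0%}")  # 66%, the "two-thirds of total headcount"
      print(f"{total_2025 - offshore_2025:,}")    # 144,200 onshore, the figure cited later
      print(f"{offshore_2025 / (offshore_2025 - offshore_added) - 1:.0%}")  # 42% offshore growth

      # Strip the acquisition contribution out of the 2026 growth guidance
      organic_low, organic_high = 6.5 - 4.5, 8.5 - 5.0
      print(f"organic: {organic_low}% to {organic_high}%")  # 2.0% to 3.5%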

    WNS brings something specific: expertise in AI-powered business process services, particularly in financial services, insurance, and healthcare. The company had already identified around 100 cross-selling opportunities and signed an intelligent operations contract worth more than €600 million. This is not a firm buying bodies for the sake of scale. It is buying the delivery infrastructure needed to operationalise AI at enterprise level.

    The restructuring counterweight

    The other side of the equation is the €700 million restructuring programme, most of which will land in 2026. Capgemini describes this as adapting “workforce and skills” to align with demand for AI-driven services. In plainer terms: some roles are being eliminated or relocated, while others — particularly those requiring AI engineering, data science, and agentic AI expertise — are being created or upskilled.

    As maddaisy noted when examining Capgemini’s full-year results last week, generative and agentic AI accounted for more than 10% of group bookings in Q4, up from around 5% earlier in the year. The company has trained 310,000 employees on generative AI and 194,000 on agentic AI. The restructuring is not a contradiction of the hiring — it is the other half of the same workforce transformation.

    The pattern is expand first, optimise second. Acquire the delivery capacity, then reshape it. It is a playbook that makes more commercial sense than the market’s preferred narrative of AI simply deleting headcount.

    What the market gets wrong

    Capgemini’s share price has fallen roughly 26% in 2026, driven largely by investor anxiety that AI will cannibalise the IT services business model. The logic runs: if AI can automate code generation, testing, and business process management, why would enterprises pay consultancies to do it?

    Ezzat addressed this directly in the post-earnings call. “I don’t think clients are thinking this way about reduction,” he told MarketWatch. “Clients are looking at how critical it is for them to adopt AI and where it can have an impact.”

    The distinction matters. Enterprises are not replacing their consulting relationships with AI tools. They are asking their consultants to help them implement AI — which, for now, requires more people, not fewer. The skills are different. The delivery models are changing. But the demand for hands-on expertise in making AI work within complex organisational environments is, if anything, increasing.

    This aligns with what Ezzat told Fortune separately: “AI is a business. It is not a technology. It cannot just be used to keep the house running.” His argument is that CEOs who treat AI as an efficiency tool for individual departments are missing the larger opportunity — and the larger threat — of enterprise-wide transformation.

    The offshore model evolves, but does not disappear

    The shift to 66% offshore headcount is not a return to the labour arbitrage model of the 2000s. Capgemini’s onshore workforce held steady at 144,200. What changed is the nature of offshore work. WNS’s strength is in intelligent operations — business process services enhanced by AI, automation, and analytics. These are not call centres or basic coding shops. They are delivery centres where AI tools augment human workers on complex processes.

    This is broadly consistent with what the wider consulting industry is signalling. McKinsey’s deployment of 25,000 AI agents across its workforce, as maddaisy reported this week, points in the same direction: AI as workforce augmentation, not replacement. The difference is that McKinsey is building internally, while Capgemini is acquiring externally. Both are betting that the future of professional services involves more AI and more people, deployed differently.

    What practitioners should watch

    For consultants and enterprise leaders, Capgemini’s workforce data offers a useful reality check against the AI replacement narrative. Three signals are worth tracking.

    First, the restructuring outcomes. If the €700 million programme results in meaningful upskilling rather than simple headcount reduction, it will validate the “expand and reshape” model. If it quietly becomes a cost-cutting exercise, the AI transformation story weakens.

    Second, the organic growth rate. With acquisitions contributing nearly five percentage points of 2026 growth, the underlying business needs to demonstrate it can grow on its own merits. The Q4 acceleration to over 10% AI bookings was promising, but one quarter does not make a trend.

    Third, the onshore-offshore ratio over time. If AI genuinely transforms delivery models, the 66% offshore share should eventually stabilise or even decline as automation reduces the need for large delivery teams. If it keeps rising, the industry is still in the scale-up phase, and the productivity gains from AI remain further away than the marketing suggests.

    Capgemini’s apparent paradox — adding 82,300 people while betting its future on AI — is not a paradox at all. It is the messy, expensive reality of what enterprise AI transformation actually looks like on the ground: more complexity before less, more people before fewer, more investment before returns. The firms that navigate this transition phase successfully will define the next era of professional services. The market just needs to be patient enough to let them.

  • Deploy First, Fix Later: How Poor AI Rollouts Are Engineering Burnout Into the System

    Over the past week, maddaisy has examined two dimensions of the AI-burnout nexus: the work intensification mechanisms that make employees busier rather than better, and the organisational frameworks that might address the people side of the equation. But there is a third dimension that sits upstream of both: the technology deployment itself.

    Before burnout becomes a management problem, it is often an implementation problem. And a growing body of evidence suggests that the way organisations roll out AI tools — rushed timelines, inadequate capability-building, and a persistent belief that go-live equals readiness — is baking unsustainable working conditions into the system from day one.

    The capability gap that no one budgets for

    A detailed analysis published by CIO.com this month draws on workforce upskilling data from large-scale enterprise rollouts to make a blunt assessment: systems rarely fail because the technology does not work. They fail because the organisation has not built the infrastructure to support the people using them.

    The numbers are sobering. Organisations routinely experience productivity drops of 30 to 40% within the first 90 days of a major technology go-live when workforce capability has not been adequately addressed. Support tickets triple. Workaround behaviours — offline spreadsheets, manual reconciliations, shadow systems — proliferate as employees revert to what feels safe and controllable. Post-go-live support costs run 40 to 60% over budget. ROI timelines slip by six to 12 months.
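
    To see why those percentages matter financially, a toy calculation helps. The headcount and cost figures below are invented for illustration; only the 30 to 40% dip and the 90-day window come from the analysis above:

      # Purely illustrative inputs, not from the CIO.com analysis
      affected_staff = 1_000
      loaded_daily_cost = 400.0  # assumed fully loaded cost per person per day
      dip_days = 90              # the first-90-days window cited above

      for dip in (0.30, 0.40):   # the reported 30-40% productivity drop
          lost_output = affected_staff * loaded_daily_cost * dip_days * dip
          print(f"{dip:.0%} dip over {dip_days} days: ~${lost_output:,.0f} in lost output")

      # 30% dip over 90 days: ~$10,800,000 in lost output
      # 40% dip over 90 days: ~$14,400,000 in lost output

    Whatever the real inputs, the shape of the calculation explains why workforce capability deserves a budget line closer to integration testing than to a one-off training event.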

    None of this is caused by employee resistance or technological failure. It is the predictable consequence of treating capability-building as a training event rather than a systemic requirement.

    The 10% problem

    Perhaps the most striking data point concerns training transfer. Research on technology implementation training suggests that only 10 to 20% of skills learned in formal programmes translate into sustained on-the-job performance. The issue is not poor course design. It is the assumption that a three-day training session two weeks before go-live can substitute for the repeated practice, pattern recognition, and feedback loops that genuine capability requires.

    This matters directly for the burnout conversation. When employees lack the capability to operate new systems confidently, every minor issue becomes a support ticket. Every exception becomes an escalation. The cognitive load of navigating unfamiliar tools while maintaining output expectations creates exactly the kind of sustained pressure that Harvard Business Review’s recent research identified as driving work intensification. The tools are not the problem — the deployment is.

    Where AI deployments differ from traditional IT rollouts

    Enterprise technology deployments have always carried capability risks. But AI introduces specific complications that amplify them.

    First, AI tools change the scope of work, not just the process. As maddaisy’s earlier analysis noted, employees using AI do not simply do the same tasks faster — they absorb new responsibilities, blur role boundaries, and take on work that previously justified additional headcount. A traditional ERP deployment changes how someone does their job. An AI deployment can change what their job is. Upskilling programmes designed around process training cannot address a shift that is fundamentally about role redesign.

    Second, AI output is probabilistic, not deterministic. An ERP system produces the same result given the same inputs. An AI tool might produce different outputs each time, requiring users to exercise judgement about quality, accuracy, and appropriateness. That judgement cannot be trained in a classroom — it is built through experience, and it demands a kind of cognitive engagement that is qualitatively different from following a process manual.

    Third, AI raises the visibility of individual output. When an AI tool enables someone to produce a first draft in minutes rather than hours, the speed becomes the new baseline expectation. Employees who are still building capability with the tool face pressure to match output norms set by early adopters or by the tool’s theoretical capacity. The result is the self-reinforcing acceleration cycle that researchers have now documented repeatedly.

    Governance as deployment infrastructure

    The CIO.com analysis proposes treating upskilling as a governance concern rather than a training administration task — embedding capability-building into the transformation workstream with the same rigour as data migration or integration testing. This means defining capability in behavioural terms tied to business processes, starting practice in sandbox environments as soon as process designs stabilise, and measuring performance data rather than training completion rates.

    One practical recommendation stands out: establishing a “performance council” distinct from the training team. This group — composed of process owners, frontline managers, and high-performing end users — meets weekly to review whether people are performing reliably in live operations. They examine error rates, support ticket patterns, workaround behaviours, and time-to-competency. Critically, they have the authority to pause rollouts when the data shows capability is not sticking.
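
    What that weekly review might reduce to, mechanically, is a handful of signals and a pre-agreed pause rule. A minimal sketch; the metrics, thresholds, and names are assumptions for illustration, not the CIO.com recommendation verbatim:

      from dataclasses import dataclass

      @dataclass
      class CapabilitySnapshot:
          error_rate: float         # errors per 100 transactions in the new system
          ticket_trend: float       # week-over-week support ticket ratio (1.0 = flat)
          workaround_reports: int   # shadow spreadsheets, manual reconciliations observed
          pct_at_competency: float  # share of users meeting the behavioural benchmark

      def should_pause(s: CapabilitySnapshot) -> bool:
          """Pause the next rollout wave if capability is not sticking."""
          return (
              s.error_rate > 5.0
              or s.ticket_trend > 1.5          # tickets growing 50%+ week over week
              or s.workaround_reports > 10
              or s.pct_at_competency < 0.70
          )

      week_6 = CapabilitySnapshot(error_rate=3.2, ticket_trend=1.8,
                                  workaround_reports=14, pct_at_competency=0.61)
      print(should_pause(week_6))  # True: three of four signals point the wrong way

    The specifics are invented, but the mechanism (a small set of performance signals reviewed weekly, with authority to halt) is the point.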

    This is not a radical proposal. It is the kind of operational discipline that well-run technology programmes have always applied to infrastructure and integration. The fact that it needs to be articulated separately for workforce capability suggests how often organisations skip the people side of deployment planning.

    Connecting the implementation layer to the burnout evidence

    The burnout data that has emerged over recent weeks tells a consistent story. DHR Global reports that 83% of workers experience some degree of burnout, with engagement dropping 24 percentage points in a single year. Only 34% of employees say their organisation has communicated AI’s workplace impact clearly. Among entry-level staff, that figure falls to 12%.

    These are not just management failures. They are deployment failures. When organisations ship AI tools without building the capability infrastructure to support them — without adequate training transfer, without performance monitoring, without role redesign — they are not just creating a change management problem. They are creating a technical debt of human capability that compounds over time, manifesting as burnout, disengagement, and quiet resistance.

    As Deloitte’s 2026 AI report made clear, the gap between strategic confidence and operational readiness remains the defining feature of enterprise AI. The implementation evidence suggests that closing this gap requires treating workforce capability not as a soft skill or an HR deliverable, but as core deployment infrastructure — as essential as the data pipeline and as measurable as system uptime.

    What this means for the next phase

    The organisations that avoid engineering burnout into their AI deployments will share a common trait: they will treat go-live as the beginning of capability-building, not the end. They will budget for the 90-day productivity dip and plan for it rather than being surprised by it. They will measure whether people can perform reliably at scale under real business pressure, not whether they completed a training module.

    Most importantly, they will recognise that the fastest way to undermine an AI investment is not a technical failure — it is deploying capable technology to an unprepared workforce and then measuring success by adoption rates alone. The tools work. The question, as it has always been with technology transformations, is whether the organisation around them does too.

  • The Missing Half of AI Adoption: Why Organisations Are Failing the People Side of the Equation

    The evidence that AI tools can intensify work rather than lighten it has been building steadily. As maddaisy examined earlier this week, UC Berkeley researchers found that employees using AI worked faster but not better — absorbing more tasks, blurring work-life boundaries, and entering a self-reinforcing cycle of acceleration. The diagnosis is now well-documented. The question that remains largely unanswered is what organisations should actually do about it.

    New data suggests that most are doing very little — and that the gap between leadership confidence and frontline reality is wider than many boards realise.

    The engagement crisis hiding behind adoption metrics

    A 2026 workforce trends report from DHR Global paints a stark picture. Some 83% of workers report experiencing at least some degree of burnout, with the technology sector among the worst affected at 58%. That figure has held roughly steady since 2025 — but what has changed is burnout’s impact on engagement. More than half of employees (52%) now say burnout actively reduces their engagement at work, up from 34% just a year earlier.

    Meanwhile, overall engagement has dropped sharply. Only 64% of workers describe themselves as very or extremely engaged, down from 88% in 2025. That 24-percentage-point decline should concern any organisation that has been measuring AI success purely by deployment velocity.

    The report also reveals a telling asymmetry in how AI communication is handled. Only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly.” Among entry-level staff, that figure falls to just 12%. Among C-suite leaders, it rises to 69%. The people making decisions about AI adoption and the people living with its consequences occupy different informational worlds.

    Resistance is not the problem — silence is

    A common framing in boardrooms is that employees resist AI because they fear change. The reality, according to recent research published in Harvard Business Review, is more nuanced. Workers’ responses to AI map onto three core psychological needs: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues).

    When AI deployment satisfies these needs — by removing drudgery and enabling more skilled work — adoption tends to follow. When it frustrates them — by making expertise feel disposable, reducing worker agency, or replacing human collaboration with human-machine interaction — the response is not always visible protest. It is quiet disengagement. Some 31% of knowledge workers admit to actively working against their company’s AI initiatives. Among Gen Z workers, that figure rises to 41%.

    Perhaps more telling: 54% of workers say they would use unapproved AI tools, and 32% hide their AI use from employers entirely. The adoption numbers that leadership teams celebrate may obscure a more complicated picture of how AI is actually being used — and resisted — on the ground.

    From diagnosis to framework

    If the problem is well-documented, the response toolkit is still emerging. Two frameworks have gained traction in the consulting and HR strategy space this year, and both share a common starting point: organisations need to treat AI adoption as a change management challenge, not a technology deployment exercise.

    The first, proposed by Ranganathan and Ye in their HBR study on work intensification, is the concept of “AI practice” — deliberate organisational norms around how AI is used. This includes structured pauses before major decisions (to counteract the speed bias that AI creates), sequencing work to reduce context-switching, and protecting time for human connection. The researchers argue that without intentional guardrails, the default trajectory is escalation: faster output, rising expectations, and eventual burnout.

    The second is the AWARE framework, developed by researchers studying psychological resistance to AI. It stands for: Acknowledge employee concerns proactively; Watch for both adaptive and maladaptive coping behaviours; Align support systems with the psychological needs that AI may be disrupting; Redesign workflows around human-AI complementarities rather than simple substitution; and Empower workers through transparency and genuine participation in implementation decisions.

    Neither framework is complicated. Both amount to structured versions of what good management has always looked like: listen to your people, involve them in decisions that affect their work, and pay attention to unintended consequences. The fact that these principles need to be formalised into acronyms and published in academic journals suggests how far many organisations have drifted from applying them during AI rollouts.

    The consulting opportunity — and obligation

    For consultancies advising on AI transformation, this data represents both a market opportunity and a professional obligation. The firms that positioned AI adoption as primarily a technical challenge — choose the right model, integrate the right data pipeline, deploy at scale — are leaving the harder, more valuable work on the table.

    The harder work is organisational. It involves job redesign, not just process automation. It requires mapping which tasks benefit from AI augmentation and which need to remain human — not as a one-off exercise, but as an ongoing practice. It means building managerial capability in areas that technology deployments rarely prioritise: emotional intelligence, coaching, and the ability to recognise cognitive overload before it becomes a retention problem.

    Only 44% of business leaders currently involve workers in AI implementation decisions. That figure alone explains much of the resistance and burnout data. People who have no say in how their work changes will inevitably feel that change is happening to them rather than with them — regardless of whether the technology itself is genuinely useful.

    What comes next

    The most thoughtful organisations are beginning to reframe AI not as a productivity multiplier but as a capacity management tool. The question shifts from “how can AI help people do more?” to “how can AI help people think more clearly, recover more effectively, and focus on what actually matters?” It is a subtle but significant distinction — one that moves the conversation from output volume to work quality and sustainability.

    As maddaisy noted in its analysis of Deloitte’s 2026 AI report, the gap between strategic confidence and operational readiness remains one of the defining features of enterprise AI. The burnout and engagement data suggests that this gap is not just a technology integration problem. It is, at its core, a people management problem — and one that will not be solved by deploying more tools faster.

    The organisations that get this right will not be the ones with the most AI models in production. They will be the ones that treated adoption as a human process from the beginning.

  • The Productivity Trap: Why AI Tools Are Making Employees Busier, Not Better

    The promise of AI in the workplace has always rested on a simple equation: automate the routine, free up humans for higher-value work. It is the pitch that has launched a thousand consulting engagements and underpinned billions in enterprise software investment. But new research suggests the equation may be running in reverse.

    A study published in Harvard Business Review this month, based on eight months of ethnographic research at a US technology company with roughly 200 employees, found that AI tools did not reduce workload. They intensified it. Employees worked faster, took on more tasks, and felt busier than before — despite the efficiency gains that AI was supposed to deliver.

    The researchers, Aruna Ranganathan and Xingqi Maggie Ye from UC Berkeley’s Haas School of Business, identified three distinct mechanisms through which AI was quietly ratcheting up the pressure.

    Task expansion: doing more with less becomes doing more with the same

    The first mechanism was task expansion. When AI tools made certain tasks faster, employees did not use the freed-up time for strategic thinking or creative work. Instead, they absorbed responsibilities that had previously belonged to other roles. Product managers and designers started writing code. Researchers took on engineering tasks. Workers assumed duties that, in the researchers’ words, “might previously have justified additional help.”

    This is a pattern that will be familiar to anyone who has watched organisations respond to efficiency gains over the past two decades. The spreadsheet did not eliminate accounting departments — it gave accountants more to do. Email did not reduce communication overhead — it multiplied it. AI appears to be following the same trajectory, with one critical difference: the speed at which task expansion occurs is considerably faster.

    The study also identified a secondary burden. Engineers found themselves spending additional time reviewing their colleagues’ AI-assisted work, a form of quality assurance that had not existed before because the work itself had not existed before.

    The vanishing boundary between work and rest

    The second mechanism was the blurring of work-life boundaries. Employees began incorporating work into what had previously been downtime — lunch breaks, gaps between meetings, even waiting for files to load. Because interacting with an AI tool felt, as one participant put it, “closer to chatting than to undertaking a formal task,” the psychological barrier to picking up work during off moments effectively disappeared.

    This is a subtler form of intensification, and arguably a more dangerous one. The physical cues that traditionally separated work from rest — closing a document, leaving a desk, switching off a screen — lose their power when work can be initiated through a conversational prompt on any device. The distinction between being productive and being available collapses.

    The multitasking illusion

    The third mechanism was increased multitasking. With AI handling parts of each task, employees managed multiple active threads simultaneously, creating what the researchers described as “continual switching of attention.” Worse, because AI made fast output visible to colleagues, it raised speed expectations across teams. Employees felt pressure not just to work with AI, but to keep pace with the output norms that AI made possible.

    The result was a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed, which made workers more reliant on AI, which widened the scope of what they attempted. As one engineer told the researchers, they felt “busier than before” despite the supposed time savings.

    Not new, but newly documented

    It is worth noting that the HBR findings are not entirely without precedent. Research from the University of Chicago and the University of Copenhagen published last year found that AI chatbots saved workers only about an hour per week — and that the tools created enough new tasks to largely nullify even that modest gain. What Ranganathan and Ye have added is the ethnographic depth: eight months of direct observation, 40 interviews, and a detailed account of the mechanisms through which intensification occurs.

    For consulting practitioners, this distinction matters. The earlier studies quantified the problem. This one explains the pathways. And that makes it actionable.

    Connecting the dots: from strategy gap to people gap

    The timing of this research is significant. As maddaisy noted last week, Deloitte’s 2026 State of AI in the Enterprise report revealed a widening gap between strategic confidence and operational readiness. Forty-two per cent of companies rate their AI strategy as highly prepared, yet fewer feel equipped to execute it.

    The burnout research suggests one reason that gap persists: organisations are measuring AI adoption by deployment metrics — tools rolled out, processes automated, tasks per hour — while ignoring the human cost of that adoption. The operational readiness problem is not just about data pipelines and integration architecture. It is about whether the people using these tools can sustain the pace that the tools enable.

    This also has implications for the AI governance frameworks now coming into force across Europe and beyond. Most governance discussions focus on algorithmic bias, data privacy, and transparency. Workforce wellbeing — whether AI deployment is creating sustainable working conditions — barely features. As enforcement mechanisms sharpen, that gap may become harder to defend.

    What practitioners should watch

    The HBR researchers recommend what they call “AI practice” — a set of organisational disciplines designed to counteract intensification. These include intentional pauses (structured breaks for assessment), sequencing (deliberate pacing rather than continuous output), and human grounding (protected time for dialogue and connection).

    These are not revolutionary ideas. They are, in essence, good management practices adapted for an AI-augmented workplace. But the fact that they need to be articulated at all tells a story about how many organisations are deploying AI tools without thinking through the second-order effects on their people.

    For consultancies advising on AI transformation, this research is a prompt to broaden the conversation. Deployment is not the finish line. If the tools make employees faster but not better — busier but not more effective — then the productivity gains that justified the investment may prove temporary, eroded by turnover, cognitive fatigue, and declining work quality.

    The question, as the researchers put it, is not whether AI will change work, but whether organisations will actively shape that change — or let it quietly shape them.