Author: Connie Botelli

  • The Non-AI Agenda: What CIOs Are Actually Prioritising Beyond Artificial Intelligence in 2026

    Global IT spending will hit $6.15 trillion in 2026, a 10.8 per cent rise on the previous year. But dig beneath that headline and the distribution is strikingly lopsided. Gartner’s latest forecast projects AI spending to surge 80.8 per cent and data centre outlays to climb 31.7 per cent, while communications services and device budgets limp along at low single digits. The message is clear: AI is swallowing the budget. The question is what happens to everything else.

    That question matters because, as maddaisy has documented over recent weeks, the AI investment thesis is under strain. PwC’s 2026 CEO Survey found that 56 per cent of executives cannot point to measurable revenue gains from their AI programmes, and governance frameworks are lagging well behind deployment ambitions. If the technology absorbing most of the budget has yet to prove its return, the priorities being squeezed to fund it deserve closer scrutiny.

    The squeeze is real — and deliberate

    John-David Lovelock, a vice president analyst at Gartner, describes a dynamic in which runaway AI spending is forcing CIOs to find savings elsewhere — and IT services providers are bearing the brunt. The logic is straightforward: buyers expect their vendors to be using AI internally, and they want those efficiency gains reflected in lower fees.

    “CIOs need to find somewhere that they have control of their budget, and they can pick on the services companies because they’re using AI,” Lovelock told CIO.com.

    But not every non-AI line item can be trimmed without consequences. Several priorities are growing more urgent precisely because of AI adoption, not despite it.

    Cybersecurity: AI’s expanding attack surface

    The most immediate non-AI priority is, paradoxically, driven by AI itself. Dmitry Nazarevich, CTO at software firm Innowise, notes that his company’s security spending increase “is directly related to the increase in exposure and risk to data associated with the increased attack surface resulting from the introduction of generative AI.”

    This is not a theoretical concern. Every new AI model integrated into enterprise workflows introduces new vectors — data exfiltration through prompt injection, model poisoning through compromised training data, and the simple reality that agentic systems with write access to business processes can do real damage if they malfunction or are manipulated. As enterprises move from experimental AI pilots to production deployments, security spending is not optional — it is the prerequisite.
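
    To make that concrete, here is a minimal sketch of one such boundary — a default-deny allow-list gating what an agent may do autonomously. Every name in it (the action sets, the `authorise` helper) is illustrative rather than any particular product's API.

    ```python
    # Minimal sketch: constraining an agent's write access with an explicit
    # action allow-list. All names are illustrative, not a real product API.

    from dataclasses import dataclass

    # Actions the agent may perform autonomously; everything else is denied.
    ALLOWED_ACTIONS = {"read_record", "draft_reply", "create_ticket"}

    # Write actions that exist in the business system but need human sign-off.
    ESCALATE_ACTIONS = {"update_record", "issue_refund"}

    @dataclass
    class AgentAction:
        name: str
        payload: dict

    def authorise(action: AgentAction) -> str:
        """Gate every agent-initiated action before it touches a live system."""
        if action.name in ALLOWED_ACTIONS:
            return "allow"
        if action.name in ESCALATE_ACTIONS:
            return "escalate"   # route to a human approver
        return "deny"           # default-deny limits the blast radius of a
                                # prompt-injected instruction

    # A prompt-injected "delete all customer records" fails closed, because
    # "delete_records" appears in neither set.
    print(authorise(AgentAction("issue_refund", {"amount": 120})))  # escalate
    print(authorise(AgentAction("delete_records", {})))             # deny
    ```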

    Data foundations: the unsexy precondition

    Salesforce CIO Dan Shmitt offered a telling anecdote about an AI agent on the company’s help site that surfaced two conflicting answers to the same question. “Our first reaction was to assume the model was wrong,” he said. “The truth was that our data and content needed more consistency.”

    This pattern — blaming the model when the real problem is the data — is remarkably common. Capgemini’s TechnoVision 2026 framework places “thriving on data” as one of its nine foundational technology domains, emphasising data sharing, AI-driven insights, and sustainable data practices. The message is consistent across vendors and analysts: AI systems are only as reliable as the data infrastructure beneath them.
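
    The Salesforce anecdote hints at how mechanical the underlying fix can be. A minimal sketch, on invented data, of a consistency check that surfaces conflicting answers before a model ever sees them:

    ```python
    # Minimal sketch of a content-consistency check: group knowledge-base
    # entries by the question they answer and flag questions that carry
    # conflicting answers. Data is illustrative.

    from collections import defaultdict

    kb = [
        {"question": "How do I reset my password?",
         "answer": "Use the self-service portal."},
        {"question": "how do i reset my password",
         "answer": "Contact the help desk."},
        {"question": "What is the refund window?",
         "answer": "30 days."},
    ]

    by_question = defaultdict(set)
    for entry in kb:
        # Normalise lightly so near-duplicate phrasings collide.
        key = entry["question"].strip().lower().rstrip("?")
        by_question[key].add(entry["answer"])

    for question, answers in by_question.items():
        if len(answers) > 1:
            print(f"CONFLICT: '{question}' has {len(answers)} distinct answers")
    ```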

    For CIOs who have spent two years funding AI pilots, investing in data quality, master data management, and integration architecture may feel like a step backwards. It is not. It is the work that determines whether those pilots ever graduate to production.

    Technical debt and modernisation

    Legacy systems are not merely an annoyance in an AI-first world — they are a bottleneck. Nazarevich highlights that increased modernisation spending at Innowise is “partly a result of the fact that legacy or outdated systems limit the effectiveness of AI technology and delay deployment schedules.”

    This creates a compounding problem. Organisations that deferred modernisation to fund AI experiments now find that their AI initiatives are underperforming because the underlying platforms cannot support them. The CIO.com analysis of strategic imperatives for 2026 makes the point explicitly: if eight out of 10 strategic priorities relate to AI, “you’re likely missing some critical emerging technologies and trends.”

    Edge computing, digital twins, and platform modernisation may not generate the boardroom excitement of a generative AI demo, but they are the infrastructure on which AI capabilities depend.

    FinOps: managing the unpredictable bill

    AI workloads have introduced a new kind of cost unpredictability into IT budgets. Unlike traditional cloud computing, where usage patterns are relatively stable and forecastable, AI inference costs can spike with demand in ways that are difficult to model in advance. Innowise reports increased FinOps spending specifically because of “the unpredictability of computing bills created by AI workloads.”

    FinOps — the practice of bringing financial accountability to cloud spending — is no longer a niche discipline for cloud-native firms. For any organisation running AI at scale, it has become an essential management capability. Without it, the 80 per cent surge in AI spending that Gartner forecasts could easily overshoot, consuming budget earmarked for the very modernisation and security initiatives that AI requires to succeed.
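
    At its simplest, that capability is per-request cost attribution checked against a budget. The sketch below assumes hypothetical token prices and an invented `InferenceBudget` helper; it illustrates the discipline, not a tool recommendation.

    ```python
    # Minimal FinOps sketch: attribute inference cost per request and alert
    # before a team's monthly budget is exhausted. Prices are hypothetical.

    PRICE_PER_1K_TOKENS = {"input": 0.003, "output": 0.015}  # assumed USD rates

    class InferenceBudget:
        def __init__(self, monthly_budget_usd: float, alert_threshold: float = 0.8):
            self.budget = monthly_budget_usd
            self.alert_threshold = alert_threshold
            self.spent = 0.0

        def record(self, input_tokens: int, output_tokens: int) -> float:
            """Attribute one request's cost and warn as the budget depletes."""
            cost = (
                (input_tokens / 1000) * PRICE_PER_1K_TOKENS["input"]
                + (output_tokens / 1000) * PRICE_PER_1K_TOKENS["output"]
            )
            self.spent += cost
            if self.spent >= self.budget * self.alert_threshold:
                print(f"ALERT: {self.spent / self.budget:.0%} of monthly budget consumed")
            return cost

    budget = InferenceBudget(monthly_budget_usd=10_000)
    budget.record(input_tokens=2_000, output_tokens=800)  # one tracked request
    ```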

    Workforce fluency: the persistent gap

    Technology investment alone solves nothing if the people using it are not equipped to do so effectively. Rebecca Gasser, global CIO at FGS Global, frames this as a literacy challenge: building digital and AI fluency across the organisation so that workers “can be more agile and adaptable to the ongoing changes.”

    Pat Lawicki, CIO of TruStage, puts it more directly: “We’re committed to balancing innovation with humanity: leveraging digital tools where they add real value while preserving the human connection that defines trust and empathy.”

    This is consistent with the pattern maddaisy has tracked in recent coverage of AI-driven burnout and the organisational failures behind poor AI rollouts. The technology works best when employees understand it, trust it, and can exercise judgement about when to rely on it and when to intervene. That requires sustained investment in training, change management, and communication — none of which appear in an AI spending forecast but all of which determine whether the forecast delivers value.

    The convergence argument

    The most sophisticated framing of the CIO’s 2026 agenda comes not from treating AI and non-AI priorities as competitors for budget, but from recognising their interdependence. Capgemini’s TechnoVision 2026 describes a shift toward “synchronicity at scale” — the idea that boundaries between digital, physical, and biological innovation are dissolving, and that the CIO’s job is to orchestrate across all of them simultaneously.

    In practice, this means cybersecurity investment protects AI deployments. Data foundation work makes AI outputs reliable. Modernisation enables AI to reach production. FinOps keeps AI costs sustainable. Workforce fluency ensures AI adoption sticks.

    The CIOs who treat 2026 as an AI-only year will likely find themselves explaining, 12 months from now, why their AI investments still are not delivering returns. The ones who invest in the full stack — the security, the data, the infrastructure, the people — are building the conditions under which AI can actually work.

    That is not a story about choosing between AI and everything else. It is a story about understanding that AI does not operate in isolation, and neither should the budget that funds it.

  • SAP’s “Break Glass” Cloud Plan Exposes the Limits of European Digital Sovereignty

    SAP, Microsoft, Capgemini, and Orange have announced a joint contingency plan for European cloud services — a “break glass” option in case US hyperscalers are legally blocked from operating in Europe. The partnership, routed through SAP’s German subsidiary Delos Cloud and the French entity Bleu (co-owned by Capgemini and Orange), promises business continuity in crisis scenarios ranging from sanctions to military conflict.

    It is a notable development, and it connects directly to the sovereignty narrative maddaisy.com has been tracking. But before treating it as a solution, it is worth examining what the plan actually offers — and what analysts say it cannot.

    The deal in context

    When maddaisy examined Capgemini’s sovereignty strategy earlier this month, the picture was clear: European digital sovereignty is converging on a pragmatic middle ground. Rather than building independent infrastructure from scratch, European firms are positioning themselves as trusted operators running workloads on American hyperscaler platforms — sovereign in governance and operations, reliant on US technology underneath.

    The SAP-Microsoft-Capgemini-Orange agreement is the logical extension of that approach. SAP’s announcement describes a mutual assistance framework where Delos Cloud and Bleu would cooperate on cross-border crisis response, including “early detection, analysis, defence, and remediation of cyber incidents.” Separately, Delos Cloud and Microsoft signed a business continuity agreement allowing Delos to access source code and maintain operations if sanctions restrict Microsoft’s European services.

    In other words: if the worst happens, European operators would run a local copy of Azure, disconnected from Microsoft’s global network.

    The wildcard is Washington, not Brussels

    Analysts are broadly aligned on one point: the EU itself is highly unlikely to block American cloud providers. Some 75% of the European cloud market sits with US hyperscalers, according to Forrester senior analyst Dario Maisto. Cutting off that access would amount to economic self-harm on a significant scale.

    The real concern is the reverse scenario — the US government using its leverage over hyperscalers to pressure European governments. As Maisto put it to CIO: “What if the US administration pulls the kill switch? It would be the weaponisation of IT, because the US knows about this dependency.”

    Danilo Kirschner, managing director at European cloud consulting firm Zoi, was blunter: “There have been non-logical, nonsensical decisions in the past year. From a European perspective, we need to prepare for anything.”

    The likelihood of such a scenario remains low. But the fact that SAP and Microsoft are publicly planning for it signals that enterprise customers are asking uncomfortable questions — and expect answers.

    A lifeboat, not a luxury liner

    The technical reality is where the plan runs into difficulty. Running a severed version of Azure in a European data centre sounds feasible in a press release. In practice, as Kirschner explained, Azure is millions of lines of code updated daily. Disconnected from Microsoft’s global security intelligence, engineering updates, and optimisation pipelines, a local copy would degrade rapidly.

    “This is a lifeboat, not a luxury liner,” Kirschner said. “Your disaster recovery plans must account for the fact that a sovereign cloud in crisis mode will likely be a static, maintenance-only environment.”

    The hardware question compounds the problem. Azure runs on proprietary, custom-designed server infrastructure. If geopolitical tensions are severe enough to block software access, sourcing replacement hardware under the same sanctions regime becomes equally difficult. And if a crisis lasts months rather than weeks, the global Azure platform will have evolved while the European fork remains frozen — creating what Kirschner described as “a technological dead end that requires a total rebuild to reconnect.”

    Even the legal framework is untested. “This agreement will have to be tested in court once the problem happens, when it could be too late,” Maisto noted. “This is not compliance as much as risk management.”

    The sovereignty paradox deepens

    There is an irony at the heart of this deal that Kirschner identified clearly: by offering a break-glass option for European sovereignty, Microsoft has paradoxically strengthened its own position. The single biggest political risk to using American hyperscalers in the European public sector — the theoretical possibility of a forced disconnection — has been partially neutralised. European governments and enterprises can now point to a contingency plan, however imperfect, and continue building on US infrastructure.

    As maddaisy’s earlier analysis of Capgemini’s sovereignty framework noted, Capgemini CEO Aiman Ezzat has been candid that “there is no such thing as absolute sovereignty” because no entity controls the entire value chain. The SAP deal underscores that position. Europe is not building an alternative to American cloud infrastructure. It is building contingency plans that assume American cloud infrastructure remains the default.

    For hardliners in France and elsewhere who want European-built alternatives at the highest sovereign classification levels, this approach will be unsatisfying. But the practical question — what is the alternative? — remains unanswered. The European Cybersecurity Certification Scheme continues to evolve, yet the gap between regulatory ambition and infrastructure reality shows no sign of closing.

    What practitioners should take from this

    For enterprise architects and CIOs managing European workloads, the SAP-Microsoft-Capgemini deal changes the conversation without changing the underlying calculus. It provides a political answer to a political risk — a contingency plan that reassures procurement committees and satisfies sovereignty checkboxes. It does not, however, solve the fundamental dependency.

    The practical takeaway is threefold. First, organisations should treat this as risk management, not a guarantee — the plan’s viability in a real crisis remains unproven and potentially short-lived. Second, workload portability and multi-cloud strategies become more important, not less, in a world where even the contingency plans assume degraded service. Third, the sovereignty requirements that Capgemini estimated would feature in over 50% of European service contracts by 2029 are becoming structurally embedded in how deals are structured — and this agreement is part of that shift.

    Europe’s cloud sovereignty story is not moving toward independence. It is moving toward managed dependency, with increasingly elaborate safety nets. Whether those nets would hold under real stress is a question no one can answer yet — and the honest participants in this deal are not pretending otherwise.

  • McKinsey’s 25,000 AI Agents: First-Mover Advantage or the Industry’s Biggest Experiment?

    McKinsey now counts 25,000 AI agents among its workforce — roughly one for every 1.6 human employees. That ratio, disclosed by CEO Bob Sternfels at the Consumer Electronics Show and confirmed by the firm, makes the consultancy’s internal agentic build-out one of the most aggressive in professional services.

    The numbers have moved quickly. Eighteen months ago, McKinsey operated a few thousand agents. Today, AI-related work — driven through the firm’s AI arm, QuantumBlack — accounts for 40% of McKinsey’s output. The agents have saved an estimated 1.5 million hours on search and synthesis tasks. Non-client-facing headcount has fallen 25%, yet output from those teams has risen 10%.

    Sternfels’s stated ambition is to pair every one of McKinsey’s 40,000 employees with at least one AI agent within the next 18 months.

    Scale versus substance

    The scale is eye-catching. Whether it is meaningful depends on what you count as an agent and how you measure the return.

    McKinsey’s rivals are openly sceptical. EY’s global engineering chief has argued that “a handful of agents do the heavy lifting” and that value should be tracked through efficiency KPIs, not headcount. PwC’s chief AI officer has called agent count “probably the wrong measure”, advocating instead for quality and workflow optimisation. Their counterargument is clear: a smaller fleet of high-performing agents, rigorously measured, may deliver more than a vast deployment still being calibrated.

    The critique lands on familiar ground. As maddaisy examined earlier today, PwC’s 2026 Global CEO Survey found that 56% of chief executives still cannot point to revenue gains from their AI investments. The deployment-versus-outcomes gap is the central tension in enterprise AI right now, and McKinsey’s bet raises the question of whether the firm is racing ahead of the same problem — or solving it.

    From advisory to infrastructure

    The more consequential shift may be in McKinsey’s business model. Sternfels described a move away from the firm’s traditional fee-for-service approach toward a model where McKinsey works with clients to identify joint business cases and then helps underwrite the outcomes.

    This is a significant departure for a firm built on advisory fees and billable hours. It positions McKinsey less as a strategic counsellor and more as an infrastructure partner — one that brings its own AI workforce to bear on client problems and shares in the measurable results.

    QuantumBlack, with 1,700 people, now drives all of McKinsey’s AI initiatives. Alex Singla, the senior partner who co-leads the unit, has described the firm’s evolving recruitment profile: candidates who can move fluidly between traditional consulting and engineering, and who can work alongside AI rather than simply directing it.

    Boston Consulting Group is pursuing a similar direction, deploying “forward-deployed consultants” who build AI tools directly on client projects. But McKinsey’s scale of internal adoption — 25,000 agents embedded across the firm — gives it a data advantage that is harder to replicate. Every internal deployment generates operational insight into what works, what fails, and how agentic systems behave at enterprise scale.

    The governance question maddaisy has been tracking

    The timing of McKinsey’s announcement is worth noting against the backdrop of the agentic AI governance gap maddaisy covered earlier this week. Deloitte’s data showed that only 21% of companies have mature governance models for agentic AI, even as three-quarters plan to deploy it within two years. And a broader pattern has emerged across maddaisy’s recent coverage: enterprises are strategically confident about AI but operationally underprepared.

    McKinsey, as both a deployer and an adviser, sits at the intersection of this tension. If the firm can demonstrate that 25,000 agents operate reliably at scale — with governance, measurement, and accountability frameworks to match — it will have built the most persuasive case study in the industry. If the agents outrun oversight, the reputational exposure is equally significant. When an AI agent produces an analysis and the recommendation proves wrong, the liability question is not academic.

    What practitioners should watch

    For consulting professionals and enterprise leaders watching this play out, three things matter more than the headline number.

    First, the metric that matters is not agent count but outcome attribution. McKinsey’s 1.5 million hours saved is a process metric. The firm’s shift to underwriting client outcomes suggests it understands the need to move beyond efficiency and toward measurable business impact — the same gap that PwC’s CEO Survey identified industry-wide.

    Second, the talent model is changing faster than many firms acknowledge. McKinsey’s search for hybrid consultant-engineers, and BCG’s forward-deployed model, signal that the traditional consulting skill set is being augmented, not just supported, by AI fluency. Firms that treat AI as a productivity tool rather than a workforce design challenge will fall behind.

    Third, scale creates its own governance requirements. As McKinsey’s own Carolyn Dewar argued in Fortune, the real risk is not the technology but how leaders manage the fear and trust dynamics that surround it. Deploying 25,000 agents without the organisational infrastructure to govern them would validate every concern the firm’s rivals have raised.

    McKinsey’s wager is that first-mover scale in agentic AI creates a compounding advantage — more data, better workflows, stronger client proof points. The industry is about to find out whether volume leads to value, or whether a smaller, sharper approach gets there first.

  • The Governance Gap No One Is Closing: Why Agentic AI Is Outrunning Enterprise Oversight

    Three-quarters of enterprises plan to deploy agentic AI within two years. Only one in five has a mature governance model for it. That arithmetic should concern anyone responsible for enterprise technology strategy.

    The figures come from Deloitte’s 2026 State of AI in the Enterprise report, and they represent something more specific than the familiar story of AI adoption outpacing regulation. The challenge with agentic AI is not that rules do not exist — as maddaisy recently examined, the EU AI Act, Colorado’s AI Act, and a growing patchwork of global regulations are creating real enforcement deadlines. The challenge is that agentic systems demand a fundamentally different kind of oversight, and most organisations have not built the operational machinery to provide it.

    What makes agentic AI different

    Conventional AI systems — including the generative AI tools that have dominated enterprise adoption over the past two years — operate in an advisory mode. They suggest, summarise, draft, and classify. A human reviews the output and decides what to do with it. Governance for these systems, while imperfect, fits within existing frameworks: you audit the model, monitor outputs, and maintain a human in the decision loop.

    Agentic AI breaks that model. These systems are designed to plan, execute, and adjust autonomously — booking flights, approving procurement decisions, triaging customer complaints, or managing collections workflows without waiting for human sign-off. Oracle’s new agentic banking platform, launched in February 2026, illustrates the trajectory: domain-specific agents handle loan originations, credit decisioning, and compliance checks, with human oversight positioned as a “human-in-the-loop” role rather than a gatekeeping one.

    The distinction matters because it changes where governance must operate. With advisory AI, oversight happens after the model produces an output and before a human acts on it. With agentic AI, the system is the actor. Governance must be embedded in real time — monitoring agent behaviour as it happens, enforcing boundaries on what an agent can and cannot do, and maintaining audit trails that capture not just decisions but the full chain of reasoning and actions that led to them.

    The 21% problem

    Deloitte’s finding that only 21% of companies have a mature agentic AI governance model is striking, but the detail beneath it is more revealing. In Singapore, where deployment ambitions are among the highest globally — 72% of businesses plan to deploy agentic AI across multiple operational areas within two years, up from 15% today — the mature governance figure drops to just 14%.

    As maddaisy noted in its analysis of the broader Deloitte report, this fits a wider pattern: organisations are increasingly confident in their AI strategy but declining in readiness on the operational foundations needed to execute it. The agentic governance gap is perhaps the sharpest expression of this paradox — a technology that is advancing from pilot to production while the controls needed to run it safely remain in early stages.

    Half of Singapore respondents reported using a patchwork of public and internal proprietary frameworks to assess agent risk and performance. That is not a governance model — it is improvisation.

    Why existing frameworks fall short

    The AI Trends Report 2026, published by statworx and AI Hub Frankfurt, identifies three operational disciplines that are becoming foundational for reliable agentic AI: AI governance, DataOps, and what the report terms AgentOps — the operational layer for managing autonomous AI agents in production.

    AgentOps is a useful concept because it captures what most enterprise governance frameworks currently lack. Traditional AI governance focuses on model development: training data quality, bias testing, documentation, and approval workflows before deployment. That is necessary but insufficient for systems that learn, adapt, and take actions in production environments.

    Agentic systems require runtime governance: clear boundaries on agent autonomy (what decisions can the agent make independently, and which require escalation?), real-time monitoring of agent behaviour against expected parameters, kill switches for when agents drift outside acceptable bounds, and comprehensive audit trails that regulators can inspect after the fact.
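
    As a rough illustration of what runtime governance means in engineering terms, the sketch below wires an autonomy boundary and a kill switch into the path of every agent action. The thresholds, class, and field names are invented for the example.

    ```python
    # Minimal sketch of runtime governance: every agent action is checked
    # against expected parameters, and a kill switch halts the agent when it
    # drifts outside acceptable bounds. All thresholds are illustrative.

    class AgentRuntimeMonitor:
        def __init__(self, max_actions_per_minute: int = 30,
                     max_spend_per_decision: float = 500.0):
            self.max_rate = max_actions_per_minute
            self.max_spend = max_spend_per_decision
            self.actions_this_minute = 0
            self.halted = False

        def check(self, action: dict) -> bool:
            """Return True if the action may proceed; halt the agent otherwise."""
            if self.halted:
                return False
            self.actions_this_minute += 1
            if self.actions_this_minute > self.max_rate:
                self.kill("action rate exceeded expected parameters")
                return False
            if action.get("spend", 0.0) > self.max_spend:
                self.kill(f"spend {action['spend']} above autonomy boundary")
                return False
            return True

        def kill(self, reason: str) -> None:
            self.halted = True
            print(f"KILL SWITCH: agent halted ({reason}) — escalating to a human")

    monitor = AgentRuntimeMonitor()
    monitor.check({"name": "approve_purchase", "spend": 120.0})    # proceeds
    monitor.check({"name": "approve_purchase", "spend": 9_000.0})  # halted
    ```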

    The EU AI Act’s requirements for high-risk systems — documented risk management, technical logging, human oversight mechanisms, and conformity assessments — implicitly assume this kind of operational infrastructure. But most organisations have not yet translated those requirements into engineering reality.

    Deployment is not waiting for governance

    The uncomfortable truth is that agentic AI is entering production regardless of whether governance is ready. Oracle is shipping banking agents now. Organisations such as AMD and Heathrow Airport are deploying autonomous agents in customer experience roles. Gartner predicts agentic systems will autonomously resolve 80% of customer service issues by 2028.

    Constellation Research offers a useful counterweight to the hype, arguing that agentic AI is “more of a feature than a revolution” and that the real measure of value is decision velocity — how quickly smaller decision trees and processes can be automated at scale. This framing is helpful because it reduces the abstraction. An AI agent rebooking a flight is not a paradigm shift; it is a process automation with a more sophisticated reasoning layer. But that reasoning layer is precisely what makes governance harder. The agent is not following a static script — it is making contextual judgements, and those judgements need oversight.

    What the governance gap actually costs

    The business case for closing the governance gap is not primarily about regulatory fines, though those are real. It is about operational risk. When an agentic system autonomously commits to a procurement decision, misprices a financial product, or gives a customer incorrect information with real-world consequences, the liability question is immediate and the reputational exposure is direct.

    It is also about scaling. Deloitte’s data shows that companies with stronger governance foundations are deploying agentic AI more successfully — they start with lower-risk use cases, build governance capabilities alongside deployment, and scale deliberately. Organisations that skip the governance step find themselves either slowing down when something goes wrong or, worse, not knowing that something has gone wrong until a regulator or customer tells them.

    What needs to happen

    The gap between agentic AI deployment and agentic AI governance is not going to close on its own. Three practical steps can narrow it.

    Define agent autonomy boundaries explicitly. For every agentic AI deployment, organisations need a clear specification of what the agent can do independently, what requires human approval, and what is prohibited. These boundaries should be codified in the system, not just written in a policy document. The Oracle banking platform’s “human-in-the-loop” architecture is one model, but even that needs specificity about when and how the loop engages.

    Invest in runtime monitoring, not just pre-deployment testing. The governance challenge with agentic AI is that it operates continuously and adapts to context. Pre-deployment audits are necessary but not sufficient. Organisations need real-time monitoring that tracks agent decisions against expected parameters and flags anomalies before they compound.

    Build audit trails as engineering infrastructure. When a regulator asks how an agent arrived at a specific decision — and under the EU AI Act, they will — the organisation needs to produce a complete chain of the agent’s reasoning, data inputs, and actions. This is not a reporting challenge; it is an engineering one that needs to be designed into the system from the start, not retrofitted after deployment.
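
    In engineering terms, that means emitting a structured record at decision time, not reconstructing one afterwards. A minimal sketch, with hypothetical field names and an ordinary JSON-lines file standing in for an append-only store:

    ```python
    # Minimal sketch of an audit trail built as infrastructure: each record
    # captures the chain the article describes — inputs, reasoning, action,
    # outcome — the moment the agent acts. Field names are illustrative.

    import json
    import time
    import uuid

    def audit_record(agent_id: str, inputs: dict, reasoning: str,
                     action: str, outcome: str) -> dict:
        return {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "agent_id": agent_id,
            "inputs": inputs,        # the data the agent saw
            "reasoning": reasoning,  # the chain that led to the action
            "action": action,
            "outcome": outcome,
        }

    # In production this would go to a tamper-evident, append-only store;
    # a local file stands in for one here.
    with open("agent_audit.jsonl", "a") as log:
        record = audit_record(
            agent_id="loan-triage-01",
            inputs={"application_id": "A-1043", "credit_score": 712},
            reasoning="Score above threshold; income verified; no flags raised.",
            action="approve_preliminary",
            outcome="routed_to_human_review",
        )
        log.write(json.dumps(record) + "\n")
    ```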

    The agentic AI governance gap is not a future problem. It is a present one, widening with every new deployment. The organisations that treat governance as a technical discipline — building it into the engineering of their agentic systems rather than bolting it on as a compliance afterthought — will have a structural advantage as the technology matures. Those that do not will discover, as many enterprises have with earlier waves of technology adoption, that the cost of retrofitting oversight always exceeds the cost of building it in.

  • The AI ROI Crisis: Why 56% of CEOs Still Cannot Prove Value From Their AI Investments

    More than half of the world’s chief executives cannot point to revenue gains from their AI investments. That is not a fringe finding from an alarmist report — it is the central conclusion of PwC’s 2026 Global CEO Survey, which polled 4,454 business leaders and was released at Davos in January.

    The figure — 56% reporting no measurable revenue uplift from AI — lands at a moment when enterprise AI spending continues to accelerate. Budgets are growing, headcounts in AI-adjacent roles are expanding, and the consulting industry’s order books are thick with transformation mandates. Yet the returns remain stubbornly elusive for the majority. Only 12% of CEOs surveyed reported achieving both revenue growth and cost reduction from their AI programmes.

    This is not, as some commentary has framed it, evidence that AI does not work. It is evidence that most organisations have not yet figured out how to make it work — a distinction that matters enormously for practitioners and consultants navigating the current landscape.

    The pattern maddaisy has been tracking

    PwC’s data confirms a dynamic that maddaisy has examined from several angles over the past fortnight. Deloitte’s 2026 State of AI report revealed that enterprises feel more strategically confident about AI than ever, yet less operationally ready — a paradox the report’s authors attributed to weak data infrastructure, insufficient talent, and immature governance. Research published in Harvard Business Review, which maddaisy covered last week, found that AI tools were making employees busier rather than more productive, with efficiency gains absorbed by task expansion rather than redirected toward higher-value work.

    The ROI gap is what happens when these operational failures compound. Organisations adopt AI tools without redesigning workflows. They measure deployment (how many teams have access) rather than outcomes (what changed as a result). They invest in the technology layer while underinvesting in the organisational layer — the process changes, role redesigns, and measurement frameworks that turn a pilot into a production capability.

    A measurement problem masquerading as a technology problem

    One of the more revealing aspects of the PwC data is what it implies about how enterprises are tracking AI value. CEO revenue confidence sits at a five-year low of 30%, and this pessimism correlates with the inability to demonstrate AI returns. But the question is whether the returns genuinely are not there, or whether organisations simply lack the instrumentation to detect them.

    The answer, for many enterprises, is likely both. Some AI deployments are genuinely failing to deliver — deployed in the wrong processes, aimed at the wrong problems, or undermined by poor data quality. But others may be generating real value that never surfaces in the metrics that CEOs review. Time saved in middle-office processes, reduced error rates in document handling, faster iteration cycles in product development — these are real gains, but they rarely appear on a revenue line unless someone has built the measurement architecture to capture them.
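
    The instrumentation involved need not be elaborate. A minimal sketch, using invented baseline figures, of how diffuse gains become reportable once before-and-after measurements exist:

    ```python
    # Minimal sketch of the measurement architecture the paragraph describes:
    # compare a baseline metric with its AI-assisted equivalent so diffuse
    # gains become reportable. All figures are illustrative.

    def improvement(baseline: float, assisted: float) -> float:
        """Relative improvement; positive means the assisted process did better."""
        return (baseline - assisted) / baseline

    # Hypothetical middle-office measurements gathered over a quarter,
    # as (baseline, AI-assisted) pairs.
    metrics = {
        "document_handling_minutes": (42.0, 28.0),
        "error_rate_per_1k_docs":    (11.0, 6.5),
        "iteration_cycle_days":      (9.0, 7.0),
    }

    for name, (before, after) in metrics.items():
        print(f"{name}: {improvement(before, after):+.0%}")
    # document_handling_minutes: +33%
    # error_rate_per_1k_docs: +41%
    # iteration_cycle_days: +22%
    ```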

    This is a familiar pattern in enterprise technology adoption. The early years of cloud computing saw similar complaints: organisations spent heavily on migration but struggled to quantify the business impact beyond infrastructure cost reduction. The value was real — in agility, speed to market, and developer productivity — but it took years for finance teams to develop frameworks that could track it. AI is following the same trajectory, with the added complication that its benefits are often diffuse, spread across many small improvements rather than concentrated in a single, measurable outcome.

    What the 12% are doing differently

    PwC’s survey found that the minority of organisations achieving both revenue and cost benefits from AI share common characteristics. They have invested in data foundations — not just data lakes and pipelines, but governance structures that ensure data quality, accessibility, and appropriate use. They have moved beyond isolated pilots to embed AI into core business processes. And they treat AI adoption as an organisational change programme, not a technology deployment.

    None of this is conceptually new. Consultancies have been advising clients on change management, data governance, and process redesign for decades. What is new is the speed at which the gap between leaders and laggards is widening. The 12% who have cracked the ROI equation are pulling ahead, using AI-generated insights to inform strategy, AI-automated processes to reduce costs, and AI-enhanced products to capture new revenue. The 56% who have not are still running pilots, still debating governance frameworks, and still struggling to answer the board’s most basic question: what are we getting for this money?

    The consulting industry’s uncomfortable position

    For the consulting sector, PwC’s findings create an awkward tension. The firms advising enterprises on AI strategy are, in many cases, the same firms whose clients cannot demonstrate returns. This is not necessarily a reflection of poor advice — the operational barriers to AI value are genuine and deep — but it does raise questions about what consulting engagements are actually delivering.

    As maddaisy noted when examining Capgemini’s recent results, CEO Aiman Ezzat explicitly framed the company’s direction as a shift “from AI hype to AI realism.” Generative AI bookings exceeded 8% of Capgemini’s total for the year, but the company’s 2026 revenue guidance fell slightly below analyst expectations — a reminder that even firms positioning AI at the centre of their strategy face questions about whether the investment is translating into proportional growth.

    The risk for consultancies is that the ROI gap erodes client confidence in AI-related engagements. If more than half of CEOs see no revenue benefit, the appetite for further AI spending — and the advisory services that accompany it — may tighten. The counter-argument, which PwC’s own data supports, is that the solution to poor AI returns is not less AI but better AI implementation. That is a consulting engagement waiting to happen, provided firms can credibly demonstrate that they know how to close the gap.

    What practitioners should watch

    Three developments will shape whether the AI ROI picture improves or deteriorates over the coming quarters.

    First, the regulatory environment is tightening. As maddaisy recently examined, the EU AI Act’s high-risk system requirements become enforceable in August 2026, and state-level legislation in the US is creating a fragmented compliance landscape. Compliance costs will add to the total cost of AI ownership, making the ROI equation harder to balance for organisations that have not already built governance into their deployment model.

    Second, the rise of agentic AI — systems that plan and execute tasks with minimal human oversight — will test whether organisations can capture value from more autonomous AI without losing control. Deloitte’s data showed a nearly fivefold increase in planned agentic AI deployments over the next two years, but only one in five companies has a mature governance model for autonomous agents. The ROI potential is significant; so is the risk of expensive failures.

    Third, and perhaps most importantly, watch for a shift in how organisations measure AI value. The enterprises that move beyond “did revenue go up?” to more granular metrics — cycle time reduction, error rate improvement, employee capacity freed for strategic work — will be better positioned to demonstrate returns and justify continued investment. The measurement framework may matter as much as the technology itself.

    PwC’s 56% figure is striking, but it is a snapshot of a transition, not a verdict on AI’s potential. The technology is not the bottleneck. Execution is. And for consultants and practitioners, that distinction is where the real work — and the real opportunity — lies.

  • From Stablecoins to Smart Settlement: How Distributed Ledger Technology Is Quietly Rebuilding Financial Plumbing

    For years, distributed ledger technology sat in the awkward space between genuine innovation and inflated expectations. Blockchain was going to revolutionise everything from supply chains to voting systems, or so the pitch decks claimed. Most of those promises went nowhere. But while the hype cycle burned itself out, something quieter and more consequential was happening: major financial institutions started rebuilding their core infrastructure around the technology, and regulators began writing the rules to let them do it at scale.

    The result, now visible across multiple industry reports and institutional moves in early 2026, is that distributed ledger technology has become financial plumbing — not a speculative asset class, not a retail phenomenon, but the infrastructure layer that processes, settles, and records transactions between institutions. The shift is less dramatic than the headlines of 2021 suggested, but arguably more significant.

    The Regulatory Unlock

    The single biggest catalyst has been regulatory clarity. In the United States, the GENIUS Act, signed into law in July 2025, established the first comprehensive framework for stablecoins. It requires permitted payment stablecoin issuers to maintain 100% reserve backing with liquid assets, submit to federal or state supervision, and implement anti-money laundering programmes equivalent to those of traditional banks.

    The effect was immediate. According to BDO’s 2026 fintech predictions, stablecoin transaction volumes surged from $6 billion in February 2025 to $10 billion by August, a direct consequence of regulatory certainty reducing institutional hesitancy. In Europe, the Markets in Crypto-Assets regulation (MiCA) added a parallel framework, creating a passportable licensing regime for crypto-asset service providers across the EU.

    This matters because the absence of clear rules was the primary barrier to institutional adoption. Banks and asset managers were not sceptical of the technology itself — they were sceptical of deploying it without knowing which regulations would apply. That uncertainty is now largely resolved in the two largest financial markets.

    Institutional Moves, Not Startup Promises

    The institutions acting on this clarity are not fintech startups chasing venture capital. They are some of the largest names in global finance.

    JPMorgan Chase launched its JPMD deposit token on Coinbase’s Base blockchain in 2025, representing US dollar deposits on a distributed ledger — a significant step toward tokenised banking. Visa piloted stablecoin payouts through Visa Direct, enabling businesses to send payments directly to stablecoin wallets. Mastercard has been investing in crypto transaction processing infrastructure, including its reported interest in acquiring Zero Hash.

    These are not experiments. They are strategic commitments backed by significant capital. When a bank the size of JPMorgan puts deposit tokens on a public blockchain, it signals that the technology has passed internal risk assessments, compliance reviews, and board-level scrutiny. The pilot phase, for these institutions at least, is over.

    What the Technology Actually Changes

    The practical impact centres on three areas: settlement speed, asset accessibility, and cross-border payments.

    Settlement is the most consequential. Traditional securities settlement operates on a T+1 or T+2 cycle — transactions take one to two business days to finalise. Distributed ledger technology enables near-instant settlement, reducing counterparty risk and freeing up capital that would otherwise be locked during the settlement window. For large institutions managing billions in daily transactions, even marginal improvements in settlement efficiency translate to meaningful cost savings.
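
    A rough worked example, using assumed figures, shows why the settlement window matters to the balance sheet:

    ```python
    # Rough worked example (assumed figures): capital locked during the
    # settlement window, and the funding cost of that lock, under T+2.

    daily_volume_usd = 5e9      # assumed daily settled volume
    funding_rate_annual = 0.05  # assumed cost of capital
    settlement_days = 2         # T+2 cycle

    capital_locked = daily_volume_usd * settlement_days
    annual_funding_cost = capital_locked * funding_rate_annual

    print(f"Capital locked under T+2: ${capital_locked / 1e9:.1f}bn")   # $10.0bn
    print(f"Annual funding cost: ${annual_funding_cost / 1e6:.0f}m")    # $500m

    # Near-instant settlement pushes settlement_days toward zero, releasing
    # the locked capital and eliminating most of that funding cost.
    ```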

    Tokenisation is extending access to asset classes that were previously illiquid or inaccessible to smaller investors. Real estate, private equity, commodities, and corporate bonds are being represented as digital tokens on distributed ledgers, enabling fractional ownership. BDO notes that analysts estimate more than $30 billion of assets are now tokenised globally, with Standard Chartered projecting a multi-trillion-dollar market as the technology matures. The practical effect is that an investor can now purchase a fraction of a commercial property or a private equity fund for as little as $1,000, something that was structurally impossible under traditional ownership models.
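
    The mechanics of fractional ownership are simple to sketch: represent the asset as fungible tokens, and ownership becomes proportional to tokens held. The figures below are illustrative, not drawn from any live offering.

    ```python
    # Minimal sketch of fractional ownership via tokenisation: an asset is
    # represented as N fungible tokens, so a small ticket buys a proportional
    # share. All figures and names are illustrative.

    asset_value_usd = 25_000_000                  # a commercial property, say
    total_tokens = 25_000                         # issued against the asset
    token_price = asset_value_usd / total_tokens  # $1,000 per token

    holdings: dict[str, int] = {}

    def buy(investor: str, usd: float) -> None:
        """Credit the investor with whole tokens their ticket can cover."""
        tokens = int(usd // token_price)
        holdings[investor] = holdings.get(investor, 0) + tokens

    buy("investor_a", 1_000)    # the $1,000 entry ticket from the article
    buy("investor_b", 250_000)

    for who, tokens in holdings.items():
        share = tokens / total_tokens
        print(f"{who}: {tokens} tokens = {share:.4%} of the asset")
    # investor_a: 1 tokens = 0.0040% of the asset
    # investor_b: 250 tokens = 1.0000% of the asset
    ```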

    Cross-border payments stand to gain the most from stablecoin infrastructure. Transfers that previously took days and incurred significant fees through correspondent banking networks can now settle in seconds at a fraction of the cost. For businesses operating across multiple jurisdictions, particularly in emerging markets with currency volatility, this is a material operational improvement.

    The Barriers That Remain

    None of this means the transition is complete or without friction. Several significant challenges persist.

    Accenture’s 2026 banking trends report estimates that $13 trillion in transaction value could shift to alternative payment methods by 2030, putting approximately $13 billion in payment fees at risk for traditional banks. That creates a powerful incentive for incumbents to adopt the technology, but also a defensive posture that can slow genuine transformation. Some 76% of financial institutions surveyed reported they still have work to do to enable smart money capabilities.

    Regulatory fragmentation is another concern. While the US and EU have made progress, global alignment is far from complete. Different jurisdictions impose different AML and KYC requirements, creating compliance complexity for institutions operating across borders. BDO’s analysis highlights that sponsor banks are now demanding detailed AML compliance specifications from fintech partners before deals can proceed — a sign that the industry is maturing, but also that the compliance burden is substantial.

    Cybersecurity risk is also evolving alongside the technology. As more value moves onto distributed ledgers, the attack surface expands. Financial services already accounts for 33% of all AI-powered cyberattacks, and blockchain systems introduce additional vectors around consensus protocols and smart contract vulnerabilities. The security infrastructure needs to mature at the same pace as the financial infrastructure.

    What Practitioners Should Watch

    For consultants and technology leaders advising financial services clients, the key shift is one of framing. Distributed ledger technology is no longer a conversation about cryptocurrency adoption. It is a conversation about infrastructure modernisation — how institutions settle trades, manage assets, process payments, and comply with regulation.

    The practical questions for 2026 are specific: Should a bank position itself as a stablecoin issuer, custodian, or facilitator? How does tokenisation change the custody and compliance requirements for an asset management firm? What does near-instant settlement mean for treasury operations and capital allocation?

    These are not speculative questions. They are operational ones, driven by technology that is already deployed and regulation that is already in force. The organisations that treat distributed ledger technology as infrastructure — rather than innovation theatre — will be the ones that capture the efficiency gains and competitive advantages it offers. The rest will find themselves paying someone else’s transaction fees.

  • Deploy First, Fix Later: How Poor AI Rollouts Are Engineering Burnout Into the System

    Over the past week, maddaisy has examined two dimensions of the AI-burnout nexus: the work intensification mechanisms that make employees busier rather than better, and the organisational frameworks that might address the people side of the equation. But there is a third dimension that sits upstream of both: the technology deployment itself.

    Before burnout becomes a management problem, it is often an implementation problem. And a growing body of evidence suggests that the way organisations roll out AI tools — rushed timelines, inadequate capability-building, and a persistent belief that go-live equals readiness — is baking unsustainable working conditions into the system from day one.

    The capability gap that no one budgets for

    A detailed analysis published by CIO.com this month draws on workforce upskilling data from large-scale enterprise rollouts to make a blunt assessment: systems rarely fail because the technology does not work. They fail because the organisation has not built the infrastructure to support the people using them.

    The numbers are sobering. Organisations routinely experience productivity drops of 30 to 40% within the first 90 days of a major technology go-live when workforce capability has not been adequately addressed. Support tickets triple. Workaround behaviours — offline spreadsheets, manual reconciliations, shadow systems — proliferate as employees revert to what feels safe and controllable. Post-go-live support costs run 40 to 60% over budget. ROI timelines slip by six to 12 months.

    None of this is caused by employee resistance or technological failure. It is the predictable consequence of treating capability-building as a training event rather than a systemic requirement.

    The 10% problem

    Perhaps the most striking data point concerns training transfer. Research on technology implementation training suggests that only 10 to 20% of skills learned in formal programmes translate into sustained on-the-job performance. The issue is not poor course design. It is the assumption that a three-day training session two weeks before go-live can substitute for the repeated practice, pattern recognition, and feedback loops that genuine capability requires.

    This matters directly for the burnout conversation. When employees lack the capability to operate new systems confidently, every minor issue becomes a support ticket. Every exception becomes an escalation. The cognitive load of navigating unfamiliar tools while maintaining output expectations creates exactly the kind of sustained pressure that Harvard Business Review’s recent research identified as driving work intensification. The tools are not the problem — the deployment is.

    Where AI deployments differ from traditional IT rollouts

    Enterprise technology deployments have always carried capability risks. But AI introduces specific complications that amplify them.

    First, AI tools change the scope of work, not just the process. As maddaisy’s earlier analysis noted, employees using AI do not simply do the same tasks faster — they absorb new responsibilities, blur role boundaries, and take on work that previously justified additional headcount. A traditional ERP deployment changes how someone does their job. An AI deployment can change what their job is. Upskilling programmes designed around process training cannot address a shift that is fundamentally about role redesign.

    Second, AI output is probabilistic, not deterministic. An ERP system produces the same result given the same inputs. An AI tool might produce different outputs each time, requiring users to exercise judgement about quality, accuracy, and appropriateness. That judgement cannot be trained in a classroom — it is built through experience, and it demands a kind of cognitive engagement that is qualitatively different from following a process manual.

    Third, AI raises the visibility of individual output. When an AI tool enables someone to produce a first draft in minutes rather than hours, the speed becomes the new baseline expectation. Employees who are still building capability with the tool face pressure to match output norms set by early adopters or by the tool’s theoretical capacity. The result is the self-reinforcing acceleration cycle that researchers have now documented repeatedly.

    Governance as deployment infrastructure

    The CIO.com analysis proposes treating upskilling as a governance concern rather than a training administration task — embedding capability-building into the transformation workstream with the same rigour as data migration or integration testing. This means defining capability in behavioural terms tied to business processes, starting practice in sandbox environments as soon as process designs stabilise, and measuring performance data rather than training completion rates.

    One practical recommendation stands out: establishing a “performance council” distinct from the training team. This group — composed of process owners, frontline managers, and high-performing end users — meets weekly to review whether people are performing reliably in live operations. They examine error rates, support ticket patterns, workaround behaviours, and time-to-competency. Critically, they have the authority to pause rollouts when the data shows capability is not sticking.
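
    That pause authority only works if it is tied to explicit thresholds. A minimal sketch of such a data-driven gate, with hypothetical numbers, of the kind a performance council could apply:

    ```python
    # Minimal sketch of a rollout gate: the next deployment wave proceeds
    # only when live capability metrics clear thresholds. All thresholds
    # and figures are hypothetical.

    THRESHOLDS = {
        "error_rate": 0.05,                # max share of transactions with errors
        "tickets_per_user_per_week": 1.5,  # max support load
        "time_to_competency_days": 45,     # max observed median
    }

    def next_wave_approved(metrics: dict) -> bool:
        """Pause the rollout if any live metric breaches its threshold."""
        breaches = [k for k, limit in THRESHOLDS.items() if metrics[k] > limit]
        if breaches:
            print(f"PAUSE rollout — capability not sticking: {breaches}")
            return False
        print("Proceed to next rollout wave")
        return True

    # Week-12 figures from live operations (illustrative).
    next_wave_approved({
        "error_rate": 0.08,
        "tickets_per_user_per_week": 2.3,
        "time_to_competency_days": 60,
    })
    ```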

    This is not a radical proposal. It is the kind of operational discipline that well-run technology programmes have always applied to infrastructure and integration. The fact that it needs to be articulated separately for workforce capability suggests how often organisations skip the people side of deployment planning.

    Connecting the implementation layer to the burnout evidence

    The burnout data that has emerged over recent weeks tells a consistent story. DHR Global reports that 83% of workers experience some degree of burnout, with engagement dropping 24 percentage points in a single year. Only 34% of employees say their organisation has communicated AI’s workplace impact clearly. Among entry-level staff, that figure falls to 12%.

    These are not just management failures. They are deployment failures. When organisations ship AI tools without building the capability infrastructure to support them — without adequate training transfer, without performance monitoring, without role redesign — they are not just creating a change management problem. They are creating a technical debt of human capability that compounds over time, manifesting as burnout, disengagement, and quiet resistance.

    As Deloitte’s 2026 AI report made clear, the gap between strategic confidence and operational readiness remains the defining feature of enterprise AI. The implementation evidence suggests that closing this gap requires treating workforce capability not as a soft skill or an HR deliverable, but as core deployment infrastructure — as essential as the data pipeline and as measurable as system uptime.

    What this means for the next phase

    The organisations that avoid engineering burnout into their AI deployments will share a common trait: they will treat go-live as the beginning of capability-building, not the end. They will budget for the 90-day productivity dip and plan for it rather than being surprised by it. They will measure whether people can perform reliably at scale under real business pressure, not whether they completed a training module.

    Most importantly, they will recognise that the fastest way to undermine an AI investment is not a technical failure — it is deploying capable technology to an unprepared workforce and then measuring success by adoption rates alone. The tools work. The question, as it has always been with technology transformations, is whether the organisation around them does too.

  • The Missing Half of AI Adoption: Why Organisations Are Failing the People Side of the Equation

    The evidence that AI tools can intensify work rather than lighten it has been building steadily. As maddaisy examined earlier this week, UC Berkeley researchers found that employees using AI worked faster but not better — absorbing more tasks, blurring work-life boundaries, and entering a self-reinforcing cycle of acceleration. The diagnosis is now well-documented. The question that remains largely unanswered is what organisations should actually do about it.

    New data suggests that most are doing very little — and that the gap between leadership confidence and frontline reality is wider than many boards realise.

    The engagement crisis hiding behind adoption metrics

    A 2026 workforce trends report from DHR Global paints a stark picture. Some 83% of workers report experiencing at least some degree of burnout, with the technology sector among the worst affected at 58%. That figure has held roughly steady since 2025 — but what has changed is burnout’s impact on engagement. More than half of employees (52%) now say burnout actively reduces their engagement at work, up from 34% just a year earlier.

    Meanwhile, overall engagement has dropped sharply. Only 64% of workers describe themselves as very or extremely engaged, down from 88% in 2025. That 24-percentage-point decline should concern any organisation that has been measuring AI success purely by deployment velocity.

    The report also reveals a telling asymmetry in how AI communication is handled. Only 34% of employees say their organisation has communicated AI’s workplace impact “very clearly.” Among entry-level staff, that figure falls to just 12%. Among C-suite leaders, it rises to 69%. The people making decisions about AI adoption and the people living with its consequences occupy different informational worlds.

    Resistance is not the problem — silence is

    A common framing in boardrooms is that employees resist AI because they fear change. The reality, according to recent research published in Harvard Business Review, is more nuanced. Workers’ responses to AI map onto three core psychological needs: competence (feeling effective), autonomy (feeling in control), and relatedness (maintaining meaningful connections with colleagues).

    When AI deployment satisfies these needs — by removing drudgery and enabling more skilled work — adoption tends to follow. When it frustrates them — by making expertise feel disposable, reducing worker agency, or replacing human collaboration with human-machine interaction — the response is not always visible protest. It is quiet disengagement. Some 31% of knowledge workers admit to actively working against their company’s AI initiatives. Among Gen Z workers, that figure rises to 41%.

    Perhaps more telling: 54% of workers say they would use unapproved AI tools, and 32% hide their AI use from employers entirely. The adoption numbers that leadership teams celebrate may obscure a more complicated picture of how AI is actually being used — and resisted — on the ground.

    From diagnosis to framework

    If the problem is well-documented, the response toolkit is still emerging. Two frameworks have gained traction in the consulting and HR strategy space this year, and both share a common starting point: organisations need to treat AI adoption as a change management challenge, not a technology deployment exercise.

    The first, proposed by Ranganathan and Ye in their HBR study on work intensification, is the concept of “AI practice” — deliberate organisational norms around how AI is used. This includes structured pauses before major decisions (to counteract the speed bias that AI creates), sequencing work to reduce context-switching, and protecting time for human connection. The researchers argue that without intentional guardrails, the default trajectory is escalation: faster output, rising expectations, and eventual burnout.

    The second is the AWARE framework, developed by researchers studying psychological resistance to AI. It stands for: Acknowledge employee concerns proactively; Watch for both adaptive and maladaptive coping behaviours; Align support systems with the psychological needs that AI may be disrupting; Redesign workflows around human-AI complementarities rather than simple substitution; and Empower workers through transparency and genuine participation in implementation decisions.

    Neither framework is complicated. Both amount to structured versions of what good management has always looked like: listen to your people, involve them in decisions that affect their work, and pay attention to unintended consequences. The fact that these principles need to be formalised into acronyms and published in academic journals suggests how far many organisations have drifted from applying them during AI rollouts.

    The consulting opportunity — and obligation

    For consultancies advising on AI transformation, this data represents both a market opportunity and a professional obligation. The firms that positioned AI adoption as primarily a technical challenge — choose the right model, integrate the right data pipeline, deploy at scale — are leaving the harder, more valuable work on the table.

    The harder work is organisational. It involves job redesign, not just process automation. It requires mapping which tasks benefit from AI augmentation and which need to remain human — not as a one-off exercise, but as an ongoing practice. It means building managerial capability in areas that technology deployments rarely prioritise: emotional intelligence, coaching, and the ability to recognise cognitive overload before it becomes a retention problem.

    Only 44% of business leaders currently involve workers in AI implementation decisions. That figure alone explains much of the resistance and burnout data. People who have no say in how their work changes will inevitably feel that change is happening to them rather than with them — regardless of whether the technology itself is genuinely useful.

    What comes next

    The most thoughtful organisations are beginning to reframe AI not as a productivity multiplier but as a capacity management tool. The question shifts from “how can AI help people do more?” to “how can AI help people think more clearly, recover more effectively, and focus on what actually matters?” It is a subtle but significant distinction — one that moves the conversation from output volume to work quality and sustainability.

    As maddaisy noted in its analysis of Deloitte’s 2026 AI report, the gap between strategic confidence and operational readiness remains one of the defining features of enterprise AI. The burnout and engagement data suggests that this gap is not just a technology integration problem. It is, at its core, a people management problem — and one that will not be solved by deploying more tools faster.

    The organisations that get this right will not be the ones with the most AI models in production. They will be the ones that treated adoption as a human process from the beginning.

  • The Productivity Trap: Why AI Tools Are Making Employees Busier, Not Better

    The promise of AI in the workplace has always rested on a simple equation: automate the routine, free up humans for higher-value work. It is the pitch that has launched a thousand consulting engagements and underpinned billions in enterprise software investment. But new research suggests the equation may be running in reverse.

    A study published in Harvard Business Review this month, based on eight months of ethnographic research at a US technology company with roughly 200 employees, found that AI tools did not reduce workload. They intensified it. Employees worked faster, took on more tasks, and felt busier than before — despite the efficiency gains that AI was supposed to deliver.

    The researchers, Aruna Ranganathan and Xingqi Maggie Ye from UC Berkeley’s Haas School of Business, identified three distinct mechanisms through which AI was quietly ratcheting up the pressure.

    Task expansion: doing more with less becomes doing more with the same

    The first mechanism was task expansion. When AI tools made certain tasks faster, employees did not use the freed-up time for strategic thinking or creative work. Instead, they absorbed responsibilities that had previously belonged to other roles. Product managers and designers started writing code. Researchers took on engineering tasks. Workers assumed duties that, in the researchers’ words, “might previously have justified additional help.”

    This is a pattern that will be familiar to anyone who has watched organisations respond to efficiency gains over the past two decades. The spreadsheet did not eliminate accounting departments — it gave accountants more to do. Email did not reduce communication overhead — it multiplied it. AI appears to be following the same trajectory, with one critical difference: the speed at which task expansion occurs is considerably faster.

    The study also identified a secondary burden. Engineers found themselves spending additional time reviewing their colleagues’ AI-assisted work, a form of quality assurance that had not existed before because the work itself had not existed before.

    The vanishing boundary between work and rest

    The second mechanism was the blurring of work-life boundaries. Employees began incorporating work into what had previously been downtime — lunch breaks, gaps between meetings, even waiting for files to load. Because interacting with an AI tool felt, as one participant put it, “closer to chatting than to undertaking a formal task,” the psychological barrier to picking up work during off moments effectively disappeared.

    This is a subtler form of intensification, and arguably a more dangerous one. The physical cues that traditionally separated work from rest — closing a document, leaving a desk, switching off a screen — lose their power when work can be initiated through a conversational prompt on any device. The distinction between being productive and being available collapses.

    The multitasking illusion

    The third mechanism was increased multitasking. With AI handling parts of each task, employees managed multiple active threads simultaneously, creating what the researchers described as “continual switching of attention.” Worse, because AI made fast output visible to colleagues, it raised speed expectations across teams. Employees felt pressure not just to work with AI, but to keep pace with the output norms that AI made possible.

    The result was a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed, which made workers more reliant on AI, which widened the scope of what they attempted. As one engineer told the researchers, they felt “busier than before” despite the supposed time savings.

    Not new, but newly documented

    It is worth noting that the HBR findings are not entirely without precedent. Research from the University of Chicago and the University of Copenhagen published last year found that AI chatbots saved workers only about an hour per week — and that the tools created enough new tasks to largely nullify even that modest gain. What Ranganathan and Ye have added is the ethnographic depth: eight months of direct observation, 40 interviews, and a detailed account of the mechanisms through which intensification occurs.

    For consulting practitioners, this distinction matters. The earlier studies quantified the problem. This one explains the pathways. And that makes it actionable.

    Connecting the dots: from strategy gap to people gap

    The timing of this research is significant. As maddaisy noted last week, Deloitte’s 2026 State of AI in the Enterprise report revealed a widening gap between strategic confidence and operational readiness. Forty-two per cent of companies rate their AI strategy as highly prepared, yet fewer feel equipped to execute it.

    The burnout research suggests one reason that gap persists: organisations are measuring AI adoption by deployment metrics — tools rolled out, processes automated, tasks per hour — while ignoring the human cost of that adoption. The operational readiness problem is not just about data pipelines and integration architecture. It is about whether the people using these tools can sustain the pace that the tools enable.

    This also has implications for the AI governance frameworks now coming into force across Europe and beyond. Most governance discussions focus on algorithmic bias, data privacy, and transparency. Workforce wellbeing — whether AI deployment is creating sustainable working conditions — barely features. As enforcement mechanisms sharpen, that gap may become harder to defend.

    What practitioners should watch

    The HBR researchers recommend what they call “AI practice” — a set of organisational disciplines designed to counteract intensification. These include intentional pauses (structured breaks for assessment), sequencing (deliberate pacing rather than continuous output), and human grounding (protected time for dialogue and connection).

    These are not revolutionary ideas. They are, in essence, good management practices adapted for an AI-augmented workplace. But the fact that they need to be articulated at all tells a story about how many organisations are deploying AI tools without thinking through the second-order effects on their people.

    For consultancies advising on AI transformation, this research is a prompt to broaden the conversation. Deployment is not the finish line. If the tools make employees faster but not better — busier but not more effective — then the productivity gains that justified the investment may prove temporary, eroded by turnover, cognitive fatigue, and declining work quality.

    The question, as the researchers put it, is not whether AI will change work, but whether organisations will actively shape that change — or let it quietly shape them.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. At the Act’s top penalty tier, fines reach €35 million or 7% of global annual turnover, whichever is higher. For context, that comfortably exceeds the GDPR maximum of €20 million or 4% of global turnover; a short sketch of how the cap scales appears after this list.

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.
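
    The arithmetic behind that headline figure is worth making concrete: the top-tier fine is simply the higher of a flat amount and a share of turnover. The Python sketch below is illustrative only; the turnover figures are invented for the example.

    ```python
    # Illustrative sketch of the EU AI Act's top penalty tier: the higher of
    # EUR 35 million or 7% of global annual turnover. Turnover figures below
    # are hypothetical examples, not real companies.
    FLAT_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07

    def top_tier_fine(global_turnover_eur: float) -> float:
        """Maximum top-tier fine for a given global annual turnover."""
        return max(FLAT_CAP_EUR, TURNOVER_SHARE * global_turnover_eur)

    for turnover in (100e6, 500e6, 2e9):  # EUR 100m, 500m, 2bn
        print(f"EUR {turnover:>14,.0f} turnover -> max fine EUR "
              f"{top_tier_fine(turnover):,.0f}")
    ```

    The crossover sits at €500 million in annual turnover: below that, the flat cap binds; above it, the percentage dominates, which is why the exposure for large multinationals dwarfs anything GDPR produced.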

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like; far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.
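
    The Act does not prescribe a log schema, so the following is a hypothetical sketch of the kind of decision record an auditor might expect to see. Every field name here is an illustrative assumption, not a regulatory requirement.

    ```python
    # Hypothetical decision-audit record. The EU AI Act mandates technical
    # logging for high-risk systems but does not dictate this schema; the
    # fields below are illustrative assumptions.
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class DecisionRecord:
        system_id: str                 # key into the organisation's AI inventory
        model_version: str             # exact model build that produced the output
        input_hash: str                # hash of the inputs, so the decision can be reconstructed
        output_summary: str            # what the system decided or recommended
        human_reviewer: Optional[str]  # who exercised oversight, if anyone
        timestamp_utc: str

    def log_decision(system_id: str, model_version: str, raw_input: str,
                     output_summary: str,
                     human_reviewer: Optional[str] = None) -> str:
        record = DecisionRecord(
            system_id=system_id,
            model_version=model_version,
            input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
            output_summary=output_summary,
            human_reviewer=human_reviewer,
            timestamp_utc=datetime.now(timezone.utc).isoformat(),
        )
        return json.dumps(asdict(record))  # append to write-once audit storage

    # Example: an employment-screening recommendation with a named human in the loop.
    print(log_decision("hr-screen-01", "ranker-v3.2",
                       "candidate profile text", "advanced to interview",
                       human_reviewer="j.doe"))
    ```

    The point is less the schema than the discipline: if a regulator asks why a particular candidate was screened out, the answer has to be reconstructible from records like these, not from a policy document.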

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI summit to take place in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in its coverage of Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed. A sketch of what a single inventory record might capture follows after these recommendations.

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.
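
    On the inventory point above, here is a minimal sketch of what one record might hold. The fields and the gap check are assumptions chosen to mirror the questions in these recommendations, not any statutory template.

    ```python
    # Hypothetical AI inventory record; field names are illustrative, chosen
    # to map each system onto the questions regulators are starting to ask.
    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        system_id: str
        owner: str                    # accountable business owner
        purpose: str                  # what the system decides or influences
        jurisdictions: list[str]      # where its outputs affect people
        high_risk_eu: bool            # falls within the EU AI Act's high-risk tier?
        impact_assessment_done: bool  # e.g. a Colorado-style assessment on file
        audit_logging_enabled: bool   # technical logs available for audit?

    inventory = [
        AISystemRecord("hr-screen-01", "People Ops", "ranks job applicants",
                       ["EU", "US-CO"], high_risk_eu=True,
                       impact_assessment_done=False, audit_logging_enabled=True),
    ]

    # Surface high-risk systems that could not yet withstand an audit.
    gaps = [r.system_id for r in inventory
            if r.high_risk_eu and not (r.impact_assessment_done
                                       and r.audit_logging_enabled)]
    print(gaps)  # ['hr-screen-01']
    ```

    The value is not in the code but in forcing a yes-or-no answer per system; the gaps that survive a first pass like this are the ones regulators will find.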

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.