The governance playbook for agentic AI is starting to take shape — and it looks nothing like the frameworks most enterprises currently rely on.
Over the past month, a cluster of regulatory bodies, law firms, and industry coalitions have published guidance specifically addressing agentic AI systems. The UK Information Commissioner’s Office released its first Tech Futures report on agentic AI in January. Mayer Brown published a comprehensive governance framework in February. The Partnership on AI named agentic governance first among its six priorities for 2026. And FINRA’s 2026 oversight report now includes a dedicated section on AI agents as an emerging threat to financial markets.
Taken individually, none of these publications is remarkable. Taken together, they represent something maddaisy has been tracking since mid-February: the governance conversation is finally catching up to the deployment reality.
The problem these frameworks are trying to solve
When maddaisy examined the agentic AI governance gap in February, the numbers were stark: three-quarters of enterprises planning to deploy agentic AI, but only one in five with a mature governance model. Subsequent analysis of agentic drift — where systems degrade gradually without triggering alarms — and shadow AI adoption revealed that the risks are not hypothetical. They are already materialising in production environments.
The core issue, as these new frameworks make explicit, is that agentic AI does not fit within existing oversight models. Traditional AI governance assumes a human reviews the output before acting on it. Agentic systems are the actor. They plan, execute, and adapt autonomously — booking appointments, approving procurement, triaging complaints, managing collections. Governance cannot happen after the fact. It must be embedded in real time.
What the emerging frameworks have in common
Despite coming from different jurisdictions and institutions, the recent guidance converges on several principles that practitioners should note.
Least privilege by default. Mayer Brown’s framework emphasises restricting what an agent can access — not just what it can do. Agents should not have standing access to sensitive databases, trade secrets, or systems beyond their immediate task scope. This mirrors the zero-trust approach that cybersecurity teams have adopted over the past decade, now applied to autonomous software.
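In code, that restriction can be as simple as a deny-by-default tool policy scoped to the task at hand. The sketch below is illustrative only; the class names, scopes, and tools are assumptions, not drawn from Mayer Brown's framework.

```python
# A minimal least-privilege sketch: the agent receives only the tools
# its current task requires, and every other capability is denied by
# default. Names, scopes, and tools here are illustrative assumptions.

class PermissionDenied(Exception):
    pass

class ToolPolicy:
    """Allow-list of tools per task scope; everything else is denied."""

    def __init__(self, allowed_tools_by_scope):
        self._allowed = allowed_tools_by_scope

    def check(self, scope, tool_name):
        if tool_name not in self._allowed.get(scope, set()):
            raise PermissionDenied(
                f"Tool '{tool_name}' is outside the '{scope}' task scope"
            )

# Example: a complaints-triage agent never gets standing access to
# procurement or customer-database tools.
policy = ToolPolicy({
    "triage_complaints": {"read_ticket", "draft_reply"},
    "approve_procurement": {"read_invoice", "approve_payment"},
})

policy.check("triage_complaints", "read_ticket")  # allowed, no standing extras
try:
    policy.check("triage_complaints", "approve_payment")
except PermissionDenied as err:
    print(err)  # denied: outside the agent's immediate task scope
```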
Human checkpoints at decision boundaries, not everywhere. The emerging consensus rejects both extremes: fully autonomous operation with no oversight, and human approval for every action (which would negate the point of agentic AI). Instead, the frameworks advocate for defined boundaries — moments where human approval is required before the agent proceeds. These include irreversible actions, decisions in regulated domains such as healthcare or financial services, and any step that falls outside the agent’s defined scope.
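A boundary gate of this kind is straightforward to express in code. The following sketch is one way to read the consensus, not any framework's reference implementation; the action names, domains, and predicates are hypothetical.

```python
# Sketch of a decision-boundary gate: the agent runs autonomously until
# an action crosses a defined boundary, at which point a human must
# approve before execution continues. All predicates are illustrative.

IRREVERSIBLE_ACTIONS = {"send_payment", "delete_record", "sign_contract"}
REGULATED_DOMAINS = {"healthcare", "financial_services"}

def requires_human_approval(action, domain, in_scope):
    """True only at decision boundaries, not for every action."""
    return (
        action in IRREVERSIBLE_ACTIONS
        or domain in REGULATED_DOMAINS
        or not in_scope
    )

def execute(action, domain, in_scope, approve):
    if requires_human_approval(action, domain, in_scope):
        if not approve(action):
            return f"{action}: held for human review"
    return f"{action}: executed"

# Routine, in-scope work proceeds without interruption...
print(execute("draft_reply", "customer_service", True, approve=lambda a: False))
# ...but an irreversible action stops at the boundary.
print(execute("send_payment", "customer_service", True, approve=lambda a: False))
```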
Real-time monitoring, not periodic audits. The ICO report and Mayer Brown both stress continuous behavioural monitoring after deployment. This addresses the drift problem directly: an agent that passed all review gates at launch may behave differently three months later after prompt adjustments, model updates, and tool changes. Logging the full chain of reasoning and actions — not just inputs and outputs — is becoming a baseline expectation.
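What logging the full chain means in practice is easiest to see in miniature. The record structure below is an assumed schema for illustration, not a published standard:

```python
# Sketch of full-chain execution logging: every step records the
# agent's reasoning, the tool invoked, and the result, not just the
# user's input and the final output. The schema is illustrative.

import json
import time

class ExecutionTrace:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.steps = []

    def log_step(self, reasoning, tool, arguments, result):
        self.steps.append({
            "timestamp": time.time(),
            "reasoning": reasoning,   # why the agent chose this step
            "tool": tool,             # what it invoked
            "arguments": arguments,   # with which inputs
            "result": result,         # and what came back
        })

    def dump(self):
        return json.dumps(
            {"agent_id": self.agent_id, "steps": self.steps}, indent=2
        )

trace = ExecutionTrace("collections-agent-07")
trace.log_step(
    reasoning="Invoice 30 days overdue; policy says send first reminder",
    tool="send_email",
    arguments={"template": "reminder_1", "account": "ACME"},
    result="queued",
)
print(trace.dump())
```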
Transparency to users. Multiple frameworks now explicitly require that organisations disclose when a customer is interacting with an AI agent rather than a human. The ICO notes that agentic AI “magnifies existing risks from generative AI” because these systems can rapidly automate complex tasks and generate new personal information at scale. Users need to know what they are dealing with — and they need a route to a human at any point.
Value-chain accountability. The Partnership on AI flags a gap that most enterprise governance programmes have not addressed: who is responsible when something goes wrong in a multi-agent system? If an agent calls another agent, which calls a third-party tool, which accesses a database — and the outcome is harmful — the liability chain is unclear. Their recommendation: establish an agreed taxonomy for the AI value chain before deployment, not after an incident.
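An agreed taxonomy can start as something as modest as tagging each hop in the call chain with an accountable party, so that after an incident the record answers who owned which link. The structure below is a hypothetical sketch of that idea, not the Partnership on AI's taxonomy:

```python
# Sketch of value-chain accountability: each hop in a multi-agent call
# chain is tagged with the party accountable for it, so liability can
# be traced after an incident. Parties and hops are illustrative.

from dataclasses import dataclass

@dataclass
class ChainLink:
    component: str          # agent, tool, or data source
    role: str               # its place in the value chain
    accountable_party: str  # who answers for this link

call_chain = [
    ChainLink("orchestrator_agent", "planner", "Deploying enterprise"),
    ChainLink("procurement_agent", "sub-agent", "Deploying enterprise"),
    ChainLink("payments_api", "third-party tool", "Tool vendor"),
    ChainLink("customer_db", "data source", "Data controller"),
]

# After an incident, the trace shows every link and its owner.
for link in call_chain:
    print(f"{link.component:20s} {link.role:18s} -> {link.accountable_party}")
```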
Where the frameworks fall short
For all the convergence, there are notable gaps. None of the published guidance adequately addresses the measurement problem. Adobe’s 2026 AI and Digital Trends Report found that only 31% of organisations have implemented a measurement framework for agentic AI. Without clear metrics for what “good governance” looks like in practice, the frameworks risk becoming compliance theatre — policies that exist on paper but do not change how agents actually operate.
There is also limited guidance on cross-border deployment. The Partnership on AI calls for international coordination, but the practical reality — as maddaisy’s analysis of America’s state-by-state regulatory fragmentation highlighted — is that even within a single country, compliance requirements vary dramatically. An agent deployed in New York now faces transparency requirements under the RAISE Act that do not apply in Texas, where the Responsible AI Governance Act takes a different approach. For multinational enterprises, the compliance surface is formidable.
What this means for practitioners
The practical takeaway is straightforward, if demanding. Organisations deploying or planning to deploy agentic AI should be doing four things now.
First, audit existing governance frameworks against the agentic-specific requirements these publications outline. Most enterprises have AI policies designed for advisory systems. Those policies almost certainly do not cover autonomous execution, multi-agent coordination, or real-time behavioural monitoring.
Second, define decision boundaries before deployment. Which actions require human approval? What constitutes an irreversible decision in your context? Where are the regulatory tripwires? These questions are easier to answer before an agent is in production than after it has been running for six months.
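The output of that exercise can be a concrete, reviewable artefact that ships with the agent and is checked before deployment. The format below is an assumption, sketched to show the shape of the exercise; the field names and actions are illustrative:

```python
# Sketch of a pre-deployment boundary definition: the answers to
# "what needs approval, what is irreversible, where are the tripwires"
# captured as a reviewable artefact. Field names are illustrative.

DECISION_BOUNDARIES = {
    "agent": "collections-agent-07",
    "requires_human_approval": [
        "write_off_debt",
        "initiate_legal_action",
    ],
    "irreversible_actions": [
        "send_payment",
        "delete_customer_record",
    ],
    "regulatory_tripwires": {
        "financial_services": "escalate to compliance before acting",
    },
    "out_of_scope": "halt and escalate to the agent owner",
}

def validate_boundaries(config):
    """Refuse deployment if any boundary question is unanswered."""
    required = ("requires_human_approval", "irreversible_actions",
                "regulatory_tripwires", "out_of_scope")
    missing = [key for key in required if not config.get(key)]
    if missing:
        raise ValueError(f"Undefined decision boundaries: {missing}")
    return True

validate_boundaries(DECISION_BOUNDARIES)
```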
Third, invest in observability infrastructure. As maddaisy noted in the agentic drift analysis, the systems that fail most dangerously are the ones that appear to be working. Full execution logging, behavioural baselines, and anomaly detection are not optional extras — they are the minimum viable governance stack for agentic systems.
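The behavioural-baseline piece can start very simply: record what normal looks like during the review period, then flag runs that deviate from it. A minimal statistical sketch, with a metric and threshold chosen purely for illustration:

```python
# Minimal behavioural-baseline sketch: record what "normal" agent
# behaviour looks like, then flag runs that drift from it. The metric
# and threshold are illustrative, not a recommended standard.

from statistics import mean, stdev

# Baseline: tool calls per task observed during the review period.
baseline_calls_per_task = [4, 5, 4, 6, 5, 4, 5, 5, 6, 4]

mu = mean(baseline_calls_per_task)
sigma = stdev(baseline_calls_per_task)

def is_anomalous(observed, threshold=3.0):
    """Flag behaviour more than `threshold` standard deviations from
    the baseline; drift often shows up here before it shows up in
    outputs."""
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(5))   # False: consistent with baseline
print(is_anomalous(14))  # True: the agent is suddenly doing far more
```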
Fourth, assign clear ownership. Mayer Brown’s framework identifies four distinct governance roles: decision-makers who set policy, product teams who implement it, cybersecurity teams who integrate agents into security procedures, and frontline employees who can identify and escalate issues. Most organisations have not mapped these responsibilities for their agentic deployments.
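Mapping those responsibilities need not be elaborate; even a per-deployment ownership table makes the gaps visible. A hypothetical sketch using the four roles Mayer Brown identifies, with the names and assignments invented for illustration:

```python
# Sketch of an ownership map for one agentic deployment, using the
# four governance roles from Mayer Brown's framework. The names and
# assignments are hypothetical.

OWNERSHIP = {
    "collections-agent-07": {
        "policy_owner": "Head of AI Governance",      # sets policy
        "product_owner": "Collections product team",  # implements it
        "security_owner": "SecOps on-call rotation",  # security procedures
        "frontline_escalation": "Collections supervisors",  # spot and escalate
    },
}

def unowned_roles(agent_id):
    """Return any governance role left unassigned for an agent."""
    roles = OWNERSHIP.get(agent_id, {})
    required = ("policy_owner", "product_owner",
                "security_owner", "frontline_escalation")
    return [r for r in required if not roles.get(r)]

print(unowned_roles("collections-agent-07"))  # [] means fully mapped
```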
The governance race is just starting
The gap between agentic AI deployment and agentic AI governance remains wide. But the direction of travel is now clear: regulators, industry bodies, and legal advisors are converging on a set of principles that will become the baseline expectation. Organisations that build these capabilities now — real-time monitoring, defined decision boundaries, value-chain accountability, and clear ownership — will be better positioned than those scrambling to retrofit governance after their first incident.
The frameworks are not perfect. They will evolve. But the era of governing agentic AI with policies designed for chatbots is ending. For consultants and technology leaders advising enterprises on AI deployment, that shift should be shaping every engagement.