Tag: eu-ai-act

  • Brussels Blinked: The EU AI Act’s High-Risk Deadline Just Moved, but the Compliance Clock Has Not Stopped

    When maddaisy examined the shift from AI principles to penalties in February, the EU AI Act’s August 2026 deadline for high-risk AI systems sat at the centre of the analysis. That date — 2 August 2026 — was the moment when compliance stopped being theoretical and started carrying fines of up to seven per cent of global turnover.

    Four weeks later, Brussels blinked.

    On 13 March, the EU Council agreed its position on the Digital Omnibus package, pushing back the application of high-risk AI rules to December 2027 for standalone systems and August 2028 for those embedded in products. The proposal still requires negotiation with the European Parliament, but the direction is clear: the EU’s own regulatory infrastructure was not ready for its own deadline.

    What Actually Changed

    The delay is narrower than the headlines suggest. The EU AI Act’s prohibited practices — social scoring, manipulative AI targeting vulnerable groups, unauthorised real-time biometric surveillance — have been in force since February 2025 and remain untouched. Obligations for general-purpose AI model providers, including transparency and copyright requirements, still apply from August 2025. The Code of Practice requiring machine-readable detection techniques for AI-generated content is already published.

    What shifted is specifically the high-risk classification regime: the rules governing AI systems used in employment decisions, credit scoring, healthcare, education, law enforcement, and critical infrastructure. These are the provisions that demand conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. They are also the provisions that most enterprises have been scrambling to prepare for.

    The Council’s rationale is pragmatic rather than political. The European Commission missed its own February 2026 deadline for publishing the guidance and harmonised standards that enterprises need to demonstrate compliance. Without those standards, companies were being asked to hit a target that the regulator had not yet fully defined. As the Cypriot presidency put it, the goal is “greater legal certainty” and “more proportionate” implementation — diplomatic language for acknowledging that the implementation machinery was not keeping pace with the legislative ambition.

    The Compliance Paradox

    For enterprises that have spent the past 18 months building AI governance programmes, risk inventories, and compliance frameworks, the delay creates an awkward question: should they slow down?

    The short answer is no — and the reasoning matters more than the conclusion.

    First, the delay is conditional. The Council’s position sets fixed dates — December 2027 and August 2028 — but the Commission retains the ability to confirm earlier application if standards become available sooner. Organisations that pause their compliance programmes risk finding themselves back under pressure with less runway than they had before.

    Second, the regulatory landscape extends well beyond Brussels. As maddaisy has previously examined, the United States is building its own patchwork of state-level AI laws. Colorado’s AI Act takes effect in June 2026. California’s transparency requirements are already live. The EU delay does not change these timelines. An enterprise operating across both markets still faces near-term obligations.

    Third, and perhaps most importantly, the governance work itself has value beyond regulatory compliance. Organisations that have inventoried their AI systems, established accountability structures, and implemented monitoring processes are better positioned to manage operational risk, regardless of when a specific regulation takes effect. As the Ethyca governance framework notes, the shift from policy documentation to continuous operational evidence is happening independently of any single regulatory deadline.

    What the Delay Reveals

    The Digital Omnibus is not just a timeline adjustment. It is a signal about the structural challenge of regulating AI at the pace at which the technology is evolving.

    The EU built the world’s most comprehensive AI regulation. It classified systems by risk tier, defined obligations for providers and deployers, established penalties that exceed GDPR maximums, and applied the rules extraterritorially. What it did not build quickly enough was the operational layer: the harmonised standards, the conformity assessment procedures, the guidance documents that translate legal text into practical compliance steps.

    This mirrors a pattern maddaisy has observed across multiple regulatory domains. Europe’s cloud sovereignty push encountered similar friction — ambitious policy goals meeting incomplete implementation frameworks. The gap between legislative intent and operational readiness is becoming a recurring theme in European technology regulation.

    The Council’s position does include substantive additions alongside the delay. A new prohibition on AI-generated non-consensual intimate content and child sexual abuse material was introduced. Regulatory exemptions previously limited to SMEs were extended to small mid-cap companies. The AI Office’s enforcement powers were reinforced. These are not trivial changes — they show that the regulation is still being actively shaped even as its core provisions await full application.

    The ISO 42001 Factor

    One development running parallel to the regulatory delay is the accelerating adoption of ISO/IEC 42001, the international standard for AI management systems. Enterprise buyers are increasingly adding it to vendor procurement requirements, and AI liability insurers are beginning to factor governance certifications into risk assessments.

    For organisations uncertain about how to structure their compliance programmes during the delay, ISO 42001 offers a practical framework. It maps to the EU AI Act’s requirements without being dependent on them, meaning that compliance work done under the standard retains its value regardless of how regulatory timelines shift. Pega’s recent certification is one example of vendors using the standard to demonstrate governance readiness to enterprise clients.

    What Practitioners Should Do Now

    The EU AI Act delay changes timelines, not trajectories. The practical recommendations remain consistent with what maddaisy outlined in February, with one important addition:

    • Continue AI system inventories. Understanding what AI is deployed, where, and at what risk level is foundational work that no regulatory timeline change invalidates.
    • Monitor the Parliament negotiations. The Council position must be reconciled with the European Parliament before becoming final. The dates could shift again — in either direction.
    • Use the extra time for standards alignment. With harmonised standards still being developed, organisations now have an opportunity to align with ISO 42001 or the NIST AI Risk Management Framework before mandatory compliance begins.
    • Do not treat the delay as permission to deprioritise. Colorado, California, and other US state deadlines remain unchanged. Enterprise clients and procurement teams are not waiting for regulators — they are setting their own governance expectations now.

    The EU built the most ambitious AI regulation in the world, then discovered that ambition requires infrastructure. The delay is a concession to reality, not a retreat from intent. For enterprises, the message is straightforward: the destination has not changed, only the speed limit on the road that leads there.

  • The Rules for Public Data Are Quietly Reshaping Who Gets to Build AI

    The question of who gets to train AI on publicly available data is quietly becoming one of the most consequential regulatory battles in technology. A new report from the Information Technology and Innovation Foundation (ITIF), published this week, lays out the stakes clearly: the jurisdictions that permit responsible access to public web data will lead in AI development, while those that restrict it risk falling permanently behind.

    This is not abstract. In 2025, US-based organisations produced 40 notable foundation models. China produced 15. The European Union managed three. The gap is driven by many factors — investment, talent, compute infrastructure — but the rules governing access to training data are an increasingly significant one.

    The Transatlantic Divide

    The US and EU have taken fundamentally different approaches to how publicly available data can be used for AI training.

    The US operates what the ITIF describes as a “gates up” framework. Publicly accessible web data is generally available for automated collection unless a site owner implements technical barriers — robots.txt files, authentication walls, or rate-limiting mechanisms. This permissive posture has given American AI labs broad access to the digital commons as training material.

    The EU, by contrast, applies GDPR protections to personal data regardless of whether it appears on a public website. Even a name and job title scraped from a company’s “About Us” page may require a lawful processing basis under European law. The EU AI Act adds a further layer: Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, and rights holders can opt out of their content being used. The European Commission’s November 2025 Digital Omnibus proposal aims to simplify some of this regulatory burden, but the fundamental constraints on data use remain.

    The result is that AI development gravitates toward more permissive jurisdictions. This is not a theoretical concern — it is visible in where companies locate their model training infrastructure and where they hire.

    The US Is Not Unified Either

    As maddaisy examined in February, the United States has its own regulatory fragmentation problem. California’s AB 2013, which took effect on 1 January 2026, requires developers of publicly available generative AI systems to disclose detailed information about their training data — including the sources, whether the data contains copyrighted material, whether it includes personal information, and when it was collected. That transparency obligation applies retrospectively, meaning developers must document historical training practices.

    Colorado’s AI Act addresses the deployment side, with impact assessments and discrimination safeguards for high-risk systems due to take effect in June 2026. Illinois, New York City, and Texas each have their own targeted requirements.

    The federal government wants to consolidate this into a single framework, but as maddaisy noted when AI governance entered its enforcement era, the White House’s December 2025 executive order is a statement of intent, not a statute. State laws remain in force, and the compliance burden is cumulative.

    Technical Governance Is Filling the Gap

    Where regulation is fragmented or slow, technical standards are emerging to manage access to public data for AI training. The ITIF report identifies several mechanisms that are gaining traction:

    Machine-readable opt-out signals extend beyond the familiar robots.txt protocol. New standards like llms.txt allow website operators to provide curated, machine-readable summaries of their content specifically for AI systems — a more nuanced approach than a binary allow/block decision.
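    To make the baseline concrete, here is a minimal sketch of how a well-behaved crawler checks a robots.txt opt-out using Python's standard library. The user-agent string and URLs are placeholders, not identifiers any real crawler uses.

    ```python
    from urllib import robotparser

    # Fetch and parse the site's robots.txt, then ask whether a given
    # crawler user-agent may retrieve a specific page.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    if rp.can_fetch("ExampleAIBot", "https://example.com/articles/latest.html"):
        print("No opt-out signal; crawling permitted for this user-agent")
    else:
        print("Site has opted out for this user-agent; skip the URL")
    ```

    Honouring the signal remains voluntary, which is precisely why the ITIF's safe-harbour proposal ties legal protection to this kind of compliance.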

    Cryptographic bot authentication using HTTP message signatures allows site operators to verify the identity of AI crawlers and grant or restrict access based on who is asking, not just what they are requesting.
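    A heavily simplified sketch of the verification side, assuming the third-party cryptography package and an Ed25519 key the crawler has published. Real HTTP message signatures (RFC 9421) cover more request components and carry their parameters in dedicated Signature-Input and Signature headers; the function name and signature-base format here are illustrative.

    ```python
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    def crawler_is_authentic(public_key_bytes: bytes, method: str,
                             target_uri: str, signature: bytes) -> bool:
        # Rebuild the "signature base" the crawler is expected to have
        # signed. This sketch covers only two components; the real spec
        # lets the signer declare which components are covered.
        signature_base = f'"@method": {method}\n"@target-uri": {target_uri}'
        try:
            key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
            key.verify(signature, signature_base.encode("utf-8"))
            return True
        except InvalidSignature:
            return False
    ```

    The operational point is the access decision: a verified identity lets the site apply per-crawler policy rather than the blunt allow/block of robots.txt.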

    Automated licensing frameworks are experimenting with HTTP 402 (“Payment Required”) signals, creating the technical infrastructure for content owners to set terms for AI training use — including compensation.
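    What that might look like at the protocol level, as a toy sketch using only the standard library. The crawler list is illustrative, and no standard header for machine-readable licence terms has been agreed yet.

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    AI_CRAWLERS = ("GPTBot", "ClaudeBot")  # illustrative, not exhaustive

    class LicensedContentHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            user_agent = self.headers.get("User-Agent", "")
            if any(bot in user_agent for bot in AI_CRAWLERS):
                # Signal that AI training access is available on licence
                # terms rather than simply blocking the request.
                self.send_response(402)  # Payment Required
                self.send_header("Link", '<https://example.com/ai-licence>; rel="license"')
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"Regular content for human visitors\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), LicensedContentHandler).serve_forever()
    ```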

    PII filtering tools such as Microsoft’s open-source Presidio project allow developers to detect and remove sensitive personal information during data preparation, addressing privacy concerns at the technical rather than legal level.
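    Following Presidio's documented quickstart pattern, the example below shows how this fits into a data-preparation pipeline; it assumes the presidio-analyzer and presidio-anonymizer packages (plus a spaCy language model) are installed.

    ```python
    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine

    text = "Contact Jane Doe at jane.doe@example.com before the renewal call."

    # Detect PII entities (names, email addresses, phone numbers, ...).
    analyzer = AnalyzerEngine()
    findings = analyzer.analyze(text=text, language="en")

    # Replace each detected entity with a typed placeholder before the
    # text enters a training corpus.
    anonymizer = AnonymizerEngine()
    cleaned = anonymizer.anonymize(text=text, analyzer_results=findings)

    print(cleaned.text)
    # Typical output: "Contact <PERSON> at <EMAIL_ADDRESS> before the renewal call."
    ```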

    These mechanisms are not yet standardised or universally adopted. But they point toward a model where access to public data is governed by a combination of technical protocols and market-based agreements, rather than solely by regulation.

    The Agentic Wrinkle

    The data access question becomes more complex as AI systems shift from static model training to live, agentic operations. When maddaisy examined the governance challenges for AI agents earlier this week, the focus was on operational controls — monitoring, auditing, and accountability chains. The ITIF report adds a further dimension: data that is technically accessible (visible through a browser or available via API) is not necessarily intended for AI consumption.

    Consider an AI agent authorised to access a company’s customer relationship management system. The data it encounters is not public, but it is available to the agent through delegated credentials. Current regulatory frameworks are largely silent on this category of “private-but-available” data, and the risks compound when agents combine information from multiple sources to surface connections that no individual source intended to reveal.
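    One way to operationalise the distinction is to filter what the agent sees by task, not by what its credentials can reach. A minimal sketch, with hypothetical CRM field names and task labels:

    ```python
    from typing import Any

    # Hypothetical policy: which CRM fields each agent task may see.
    FIELD_ALLOWLIST: dict[str, set[str]] = {
        "draft_renewal_email": {"account_name", "renewal_date", "plan_tier"},
        "summarise_account": {"account_name", "plan_tier", "open_tickets"},
    }

    def filter_for_agent(task: str, record: dict[str, Any]) -> dict[str, Any]:
        # Delegated credentials make the whole record *available*; this
        # guard passes through only the fields the task is *intended* to see.
        allowed = FIELD_ALLOWLIST.get(task, set())
        return {k: v for k, v in record.items() if k in allowed}

    crm_record = {
        "account_name": "Acme Ltd",
        "renewal_date": "2026-09-01",
        "plan_tier": "enterprise",
        "contact_email": "cfo@acme.example",        # reachable, but not needed
        "credit_notes": "payment dispute in 2024",  # reachable, but sensitive
    }
    print(filter_for_agent("draft_renewal_email", crm_record))
    ```

    Task-scoped filtering also limits the cross-source aggregation risk: an agent cannot combine fields it never received.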

    What Practitioners Should Watch

    The ITIF report recommends that policymakers focus on three priorities: regulating AI outputs rather than training inputs, encouraging transparency norms for AI agents, and creating safe harbour protections for developers who respect machine-readable opt-out signals and filter sensitive data.

    For consultants and practitioners advising organisations on AI strategy, the practical implications are more immediate. Enterprises deploying AI — whether training proprietary models, fine-tuning foundation models, or deploying agentic systems — need to map their data supply chain with the same rigour they apply to physical procurement. That means understanding where training data originates, what rights framework governs its use, whether it contains personal information subject to GDPR or state privacy laws, and whether the technical mechanisms exist to honour opt-out requests.
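    In practice, that mapping reduces to a structured record per data source. A sketch of what such a record might capture; the field names are illustrative and not drawn from any standard:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class TrainingDataSource:
        name: str                     # e.g. "news-archive-2024"
        origin: str                   # URL, vendor, or internal system
        rights_basis: str             # "licensed", "public domain", "TDM exception", ...
        contains_personal_data: bool  # triggers GDPR / state privacy law review
        collected_on: str             # ISO 8601 date of collection
        opt_outs_honoured: bool       # robots.txt / llms.txt signals respected?
        notes: list[str] = field(default_factory=list)

    supply_chain = [
        TrainingDataSource(
            name="news-archive-2024",
            origin="https://example.com/archive",
            rights_basis="licensed",
            contains_personal_data=True,
            collected_on="2024-11-02",
            opt_outs_honoured=True,
            notes=["licence expires 2027-01-01"],
        ),
    ]
    ```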

    The organisations that treat training data governance as a compliance afterthought will find themselves exposed — not just to regulatory penalties, but to reputational risk and potential litigation. Those that build responsible data practices into their AI development lifecycle will have a genuine competitive advantage, particularly as transparency requirements tighten across jurisdictions.

    The rules for public data are not a peripheral regulatory detail. They are becoming one of the defining factors in who builds the next generation of AI systems, and where.

  • From Principles to Penalties: AI Governance Enters Its Enforcement Era

    For the better part of a decade, AI governance lived in the realm of principles. Organisations published ethics charters, governments convened expert panels, and everyone agreed that responsible AI mattered — without agreeing on what, precisely, that meant in practice.

    That era is ending. In 2026, AI governance is shifting from aspiration to obligation, from white papers to enforcement deadlines with real financial consequences. The transition is not sudden — it has been building for years — but the concentration of regulatory milestones in the coming months marks a genuine inflection point for any organisation deploying AI at scale.

    The regulatory calendar thickens

    Three developments are converging to make 2026 the year that AI compliance moves from a planning exercise to an operational requirement.

    First, the EU AI Act reaches its most consequential milestone on 2 August 2026, when requirements for high-risk AI systems become fully enforceable. These cover AI used in employment decisions, credit scoring, education, and law enforcement — areas where automated decisions directly affect people’s lives. Non-compliance carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher. For context, the GDPR tops out at €20 million or 4% of global annual turnover.

    Second, the United States is developing its own patchwork of state-level AI laws. Colorado’s AI Act, taking effect in June 2026, requires deployers of high-risk AI systems to exercise reasonable care against algorithmic discrimination, conduct impact assessments, and provide consumer transparency. California’s SB 53 mandates that frontier AI developers publish safety frameworks and implement incident response measures. Illinois, New York City, Utah, and Texas have all enacted targeted AI requirements of their own.

    Third, the federal government has entered the fray — not to simplify matters, but to complicate them. President Trump’s December 2025 Executive Order signals an intent to consolidate AI oversight at the federal level and potentially pre-empt state regulations deemed “onerous.” A newly established AI Litigation Task Force has been directed to identify state laws for possible legal challenge. But the executive order itself does not create federal AI standards, nor does it suspend existing state laws. As legal analysts at Gunderson Dettmer have noted, it is “a statement of principles and set of tools,” not an amnesty or moratorium.

    The practical result is a compliance environment that is fragmented, fast-moving, and increasingly consequential.

    The operational gap maddaisy has been tracking

    This regulatory acceleration arrives at an awkward moment for most enterprises. As maddaisy recently examined through Deloitte’s 2026 State of AI report, organisations report growing confidence in their AI strategy but declining readiness on the operational foundations — infrastructure, data quality, risk management, and talent — needed to execute it. The governance gap is a specific instance of this broader pattern: many enterprises can articulate what responsible AI should look like; far fewer have built the internal machinery to demonstrate compliance under real regulatory scrutiny.

    The EU AI Act, for example, does not simply require that organisations have a policy. It mandates documented risk management systems, high-quality training data practices, technical logging, transparency to users, human oversight mechanisms, and conformity assessments — all subject to audit. For companies that have treated AI governance as a communications exercise rather than an engineering discipline, the compliance gap is substantial.

    A global picture with local friction

    The regulatory picture extends well beyond the EU and US. ISACA’s recent comparison of the EU AI Act and China’s AI Governance Framework 2.0 highlights how the two largest regulatory regimes take fundamentally different approaches. The EU classifies risk through a technology-centric, tiered system. China uses a dynamic, multidimensional model based on application scenario, intelligence level, and scale — and requires pre-launch government approval for any public-facing AI system.

    South Korea and Vietnam have implemented dedicated AI laws in 2026. India is hosting its AI Impact Summit this week, the first major global AI summit held in the Global South, signalling the country’s intent to shape governance norms rather than simply receive them.

    For multinational organisations, this creates a familiar but intensifying challenge: complying with the strictest applicable standard while operating across jurisdictions with diverging requirements. The sovereignty dimensions that maddaisy explored in Capgemini’s approach to European digital sovereignty are directly relevant here. Data residency, model provenance, and supply chain transparency are no longer optional governance aspirations — they are becoming regulatory requirements with enforcement teeth.

    What practitioners should be doing now

    The shift from principles to enforcement carries specific implications for consultants and technology leaders.

    Inventory first, comply second. Before addressing any specific regulation, organisations need a clear map of where AI systems make or influence consequential decisions. Many companies still lack a comprehensive AI inventory — they cannot tell regulators what AI they are running, let alone demonstrate how it is governed.
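    As a sketch of what a minimal inventory entry might hold; the field names and tier labels are illustrative, and classifying a real system is a legal judgement, not a code decision:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        # Loosely mirrors the EU AI Act's tiers; illustrative only.
        PROHIBITED = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystemRecord:
        system_id: str
        purpose: str            # e.g. "CV screening for engineering roles"
        decision_impact: str    # the consequential decision it makes or influences
        owner: str              # the accountable team or individual
        risk_tier: RiskTier
        deployed_in: list[str]  # jurisdictions, e.g. ["EU", "US-CO"]

    inventory = [
        AISystemRecord(
            system_id="hr-screening-01",
            purpose="CV screening for engineering roles",
            decision_impact="shortlisting of job applicants",
            owner="People Analytics",
            risk_tier=RiskTier.HIGH,  # employment use cases sit in the Act's high-risk list
            deployed_in=["EU", "US-CO"],
        ),
    ]
    ```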

    Build for the strictest standard. The temptation to wait for regulatory clarity — particularly in the US, where federal pre-emption remains uncertain — is understandable but risky. Companies operating across state lines or internationally will likely need to meet the EU AI Act’s requirements regardless, given its extraterritorial reach. Building governance infrastructure to that standard provides a defensible baseline.

    Treat governance as engineering, not policy. The new regulations demand technical evidence: logging, bias audits, explainability frameworks, documented data provenance. These are engineering problems that require engineering solutions. A well-written ethics policy will not satisfy a regulator asking for an audit trail of model decisions.
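    What an audit trail of model decisions can mean at its simplest: an append-only, structured log written at inference time. A minimal sketch with illustrative field names:

    ```python
    import json
    import time
    import uuid

    def log_model_decision(log_path: str, model_id: str, model_version: str,
                           inputs: dict, output: dict, overseer: str | None) -> None:
        # One JSON Lines record per decision: what the model saw, what it
        # decided, which version decided it, and whether a human signed off.
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "model_id": model_id,
            "model_version": model_version,
            "inputs": inputs,            # hash or redact personal data here
            "output": output,
            "human_overseer": overseer,  # None means fully automated
        }
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_model_decision(
        "decisions.jsonl",
        model_id="credit-scoring",
        model_version="2026.02.1",
        inputs={"applicant_id": "a-123", "features_digest": "sha256:9f2c"},
        output={"score": 640, "decision": "refer to human review"},
        overseer="analyst-42",
    )
    ```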

    Watch the US federal-state tension closely. The December 2025 executive order has created genuine uncertainty about whether state AI laws like Colorado’s will survive federal challenge. But as compliance analysts have noted, state attorneys general retain broad enforcement authority under existing consumer protection and anti-discrimination statutes — even if AI-specific laws are curtailed. The enforcement risk does not disappear; it simply shifts shape.

    The year governance becomes a line item

    None of this is unexpected. The trajectory from voluntary principles to binding regulation follows the same path that data protection, financial reporting, and environmental standards have all taken. What is notable about 2026 is the speed and breadth of the convergence: multiple jurisdictions, multiple enforcement mechanisms, and penalties calibrated to be genuinely material, all arriving within the same 12-month window.

    For organisations that have invested in governance infrastructure, this is the moment that investment begins to pay off — not as a cost centre, but as a competitive advantage. For those that have not, the runway is shortening. The question is no longer whether AI governance matters. It is whether the operational machinery exists to prove it.