Tag: us-ai-policy

  • The Rules for Public Data Are Quietly Reshaping Who Gets to Build AI

    The question of who gets to train AI on publicly available data is quietly becoming one of the most consequential regulatory battles in technology. A new report from the Information Technology and Innovation Foundation (ITIF), published this week, lays out the stakes clearly: the jurisdictions that permit responsible access to public web data will lead in AI development, while those that restrict it risk falling permanently behind.

    This is not abstract. In 2025, US-based organisations produced 40 notable foundation models. China produced 15. The European Union managed three. The gap is driven by many factors — investment, talent, compute infrastructure — but the rules governing access to training data are an increasingly significant one.

    The Transatlantic Divide

    The US and EU have taken fundamentally different approaches to how publicly available data can be used for AI training.

    The US operates what the ITIF describes as a “gates up” framework. Publicly accessible web data is generally available for automated collection unless a site owner implements technical barriers — robots.txt files, authentication walls, or rate-limiting mechanisms. This permissive posture has given American AI labs broad access to the digital commons as training material.
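    In practice, honouring that “gates up” arrangement means a crawler checks a site’s robots.txt before collecting anything. Below is a minimal sketch of that check using Python’s standard library; the crawler name and URLs are illustrative, not real operators or sites.

    ```python
    from urllib import robotparser

    # Illustrative crawler identity and target URL; a real operator would use
    # its published user-agent string here.
    USER_AGENT = "ExampleAIBot"
    TARGET_URL = "https://example.com/articles/public-post"

    parser = robotparser.RobotFileParser()
    parser.set_url("https://example.com/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt

    if parser.can_fetch(USER_AGENT, TARGET_URL):
        print("Gate is up: this path may be collected.")
    else:
        print("Gate is down: the site owner has opted this path out of automated collection.")
    ```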

    The EU, by contrast, applies GDPR protections to personal data regardless of whether it appears on a public website. Even a name and job title scraped from a company’s “About Us” page may require a lawful processing basis under European law. The EU AI Act adds a further layer: Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, and rights holders can opt out of their content being used. The European Commission’s November 2025 Digital Omnibus proposal aims to simplify some of this regulatory burden, but the fundamental constraints on data use remain.

    The result is that AI development gravitates toward more permissive jurisdictions. This is not a theoretical concern — it is visible in where companies locate their model training infrastructure and where they hire.

    The US Is Not Unified Either

    As maddaisy examined in February, the United States has its own regulatory fragmentation problem. California’s AB 2013, which took effect on 1 January 2026, requires developers of publicly available generative AI systems to disclose detailed information about their training data — including the sources, whether the data contains copyrighted material, whether it includes personal information, and when it was collected. That transparency obligation applies retrospectively, meaning developers must document historical training practices.

    Colorado’s AI Act addresses the deployment side, with impact assessments and discrimination safeguards for high-risk systems due to take effect in June 2026. Illinois, New York City, and Texas each have their own targeted requirements.

    The federal government wants to consolidate this into a single framework, but as maddaisy noted when AI governance entered its enforcement era, the White House’s December 2025 executive order is a statement of intent, not a statute. State laws remain in force, and the compliance burden is cumulative.

    Technical Governance Is Filling the Gap

    Where regulation is fragmented or slow, technical standards are emerging to manage access to public data for AI training. The ITIF report identifies several mechanisms that are gaining traction:

    Machine-readable opt-out signals extend beyond the familiar robots.txt protocol. New standards like LLMs.txt allow website operators to provide curated, machine-readable summaries of their content specifically for AI systems — a more nuanced approach than a binary allow/block decision.
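    To make that concrete, here is what the site-operator side of these signals can look like. GPTBot and Google-Extended are among the user-agent tokens AI crawler operators have published, but the snippet is illustrative rather than a recommended policy; an llms.txt file would sit alongside it as a curated guide to the site’s content.

    ```
    # robots.txt: allow general indexing, opt out of named AI training crawlers.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: *
    Allow: /
    ```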

    Cryptographic bot authentication using HTTP message signatures allows site operators to verify the identity of AI crawlers and grant or restrict access based on who is asking, not just what they are requesting.
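    The underlying idea is straightforward: the crawler signs each request with a private key, and the site verifies the signature against a key the operator has published. The sketch below compresses the RFC 9421 signature-base construction into a single string and uses the cryptography package’s Ed25519 primitives, so treat it as an illustration of the flow rather than a conforming implementation.

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Crawler side: the operator holds the private key and publishes the public key
    # somewhere the site can discover it (key distribution is out of scope here).
    crawler_key = ed25519.Ed25519PrivateKey.generate()
    published_public_key = crawler_key.public_key()

    # Simplified "signature base" covering the request; RFC 9421 defines the real
    # derivation from the method, path, and selected headers.
    signature_base = b'"@method": GET\n"@path": /articles/public-post\n"user-agent": ExampleAIBot'
    signature = crawler_key.sign(signature_base)

    # Site side: verify the signature, then apply whatever access policy the
    # operator has set for this identified crawler.
    try:
        published_public_key.verify(signature, signature_base)
        print("Crawler identity verified: apply the operator's policy for this bot.")
    except InvalidSignature:
        print("Signature invalid: treat the request as an unidentified bot.")
    ```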

    Automated licensing frameworks are experimenting with HTTP 402 (“Payment Required”) signals, creating the technical infrastructure for content owners to set terms for AI training use — including compensation.
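    A toy version of that pattern fits in a few lines of Python’s standard library. The licence header name and token below are invented for illustration; the schemes currently being piloted define their own token formats and payment negotiation.

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical header a licensed AI crawler would present (illustrative only).
    LICENCE_HEADER = "X-AI-Licence-Token"
    KNOWN_TOKENS = {"demo-licence-token"}

    class LicensedContentHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get(LICENCE_HEADER) in KNOWN_TOKENS:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.end_headers()
                self.wfile.write(b"Full article text, licensed for AI training use.\n")
            else:
                # 402 Payment Required: terms (and possibly payment) apply before use.
                self.send_response(402)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.end_headers()
                self.wfile.write(b"AI training use requires a licence; see /ai-licensing for terms.\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8402), LicensedContentHandler).serve_forever()
    ```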

    PII filtering tools such as Microsoft’s open-source Presidio project allow developers to detect and remove sensitive personal information during data preparation, addressing privacy concerns at the technical rather than legal level.
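    Presidio’s documented quickstart shows the shape of this kind of filtering: an analyzer detects personal data, and an anonymizer replaces it before the text enters a training corpus. A minimal sketch follows; Presidio also requires a spaCy language model to be installed, and the sample text is invented.

    ```python
    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine

    # A scraped snippet of the kind that can carry personal data even though it is public.
    text = "Contact Jane Doe, Head of Operations, on +44 20 7946 0958 or jane.doe@example.com."

    analyzer = AnalyzerEngine()
    anonymizer = AnonymizerEngine()

    # Detect personal information (names, phone numbers, email addresses, and so on).
    results = analyzer.analyze(text=text, language="en")

    # Replace each detected entity with a placeholder before ingestion.
    cleaned = anonymizer.anonymize(text=text, analyzer_results=results)
    print(cleaned.text)
    ```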

    These mechanisms are not yet standardised or universally adopted. But they point toward a model where access to public data is governed by a combination of technical protocols and market-based agreements, rather than solely by regulation.

    The Agentic Wrinkle

    The data access question becomes more complex as AI systems shift from static model training to live, agentic operations. When maddaisy examined the governance challenges for AI agents earlier this week, the focus was on operational controls — monitoring, auditing, and accountability chains. The ITIF report adds a further dimension: data that is technically accessible (visible through a browser or available via API) is not necessarily intended for AI consumption.

    Consider an AI agent authorised to access a company’s customer relationship management system. The data it encounters is not public, but it is available to the agent through delegated credentials. Current regulatory frameworks are largely silent on this category of “private-but-available” data, and the risks compound when agents combine information from multiple sources to surface connections that no individual source intended to reveal.

    What Practitioners Should Watch

    The ITIF report recommends that policymakers focus on three priorities: regulating AI outputs rather than training inputs, encouraging transparency norms for AI agents, and creating safe harbour protections for developers who respect machine-readable opt-out signals and filter sensitive data.

    For consultants and practitioners advising organisations on AI strategy, the practical implications are more immediate. Enterprises adopting AI — whether training proprietary models, fine-tuning foundation models, or deploying agentic systems — need to map their data supply chain with the same rigour they apply to physical procurement. That means understanding where training data originates, what rights framework governs its use, whether it contains personal information subject to GDPR or state privacy laws, and whether the technical mechanisms exist to honour opt-out requests.
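    One way to make that mapping concrete is to record, for every source feeding a model, the fields regulators are beginning to ask about. The schema below is an illustrative sketch, not a form prescribed by any statute, and the example values are invented.

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TrainingDataSource:
        """Illustrative record for one source in an AI training-data supply chain."""
        name: str                        # internal corpus name or crawl label
        origin: str                      # where the data comes from (URL, vendor, internal system)
        rights_basis: str                # licence, contract, public domain, or claimed exception
        contains_personal_data: bool     # triggers GDPR / state privacy law analysis
        contains_copyrighted_material: bool
        collected_from: date
        collected_to: date
        opt_out_mechanism_honoured: str = "robots.txt"  # how opt-out signals were respected
        notes: list[str] = field(default_factory=list)

    source = TrainingDataSource(
        name="public-web-crawl-2025Q4",
        origin="Open web crawl of publicly accessible pages",
        rights_basis="Claimed text-and-data-mining allowance; under legal review",
        contains_personal_data=True,
        contains_copyrighted_material=True,
        collected_from=date(2025, 10, 1),
        collected_to=date(2025, 12, 31),
        notes=["PII filtered during data preparation"],
    )
    ```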

    The organisations that treat training data governance as a compliance afterthought will find themselves exposed — not just to regulatory penalties, but to reputational risk and potential litigation. Those that build responsible data practices into their AI development lifecycle will have a genuine competitive advantage, particularly as transparency requirements tighten across jurisdictions.

    The rules for public data are not a peripheral regulatory detail. They are becoming one of the defining factors in who builds the next generation of AI systems, and where.

  • Anthropic’s Pentagon Walkaway and the Price of AI Principles

    When maddaisy examined the Pentagon-Anthropic standoff earlier this month, the focus was on a new category of vendor risk – the possibility that political pressure could turn a trusted AI supplier into a regulatory liability overnight. That analysis ended with a warning: organisations in the defence supply chain needed to start modelling political risk alongside technical capability.

    Nine days later, the story has taken a turn that few risk models would have predicted. The company the US government tried to destroy is winning the consumer market.

    The market’s unexpected verdict

    Claude, Anthropic’s AI assistant, surged to the number one spot on Apple’s App Store in the days following the ban. The #QuitGPT movement saw 2.5 million users cancel ChatGPT subscriptions or publicly pledge to boycott OpenAI. Protesters gathered outside OpenAI’s San Francisco headquarters. Reddit and X filled with migration guides.

    The paradox is striking. Anthropic refused to remove ethical guardrails from its Pentagon contract – specifically, restrictions on mass surveillance and fully autonomous weapons. The government designated it a supply-chain risk, a label previously reserved for foreign adversaries. And the market responded by rewarding Anthropic with the most effective brand-building campaign in AI history – one the company never planned and could not have bought.

    What OpenAI’s ‘win’ actually cost

    OpenAI moved quickly to fill the gap. Sam Altman announced a deal to deploy models on the Pentagon’s classified networks, claiming the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human oversight requirements. The critical difference, as maddaisy noted at the time, is that OpenAI permits “all lawful uses” and relies on existing Pentagon policies rather than contractual restrictions.

    Altman defended the decision in Fortune, arguing that “the absence of responsible AI companies in the military space” would be worse than engagement on imperfect terms. It is a reasonable argument in isolation.

    But the commercial consequences arrived faster than the contracts. OpenAI’s head of robotics resigned over ethical concerns related to the Pentagon deal. Eleven OpenAI employees signed an open letter protesting the government’s treatment of Anthropic – even as their employer stood to benefit from it. OpenAI rushed out GPT-5.4 within 48 hours of GPT-5.3 Instant, in what looked less like a planned release and more like a company trying to change the subject.

    The talent cost may prove more significant than the user exodus. In an industry where the top researchers can choose where they work, being seen as the company that took the contract Anthropic refused is not a recruitment advantage.

    The strategic calculus of principles

    For decades, ethical positioning in technology was treated as a cost centre – a brand exercise that might prevent negative press but would never drive revenue. The Anthropic-Pentagon episode is the first major test of whether that assumption holds in the AI era.

    The early data suggests it does not. Anthropic is being rewarded by consumers and punished by government. OpenAI is being rewarded by government and punished by consumers. Both companies are learning, in real time, that the AI market has bifurcated into constituencies with fundamentally different values – and that serving one may come at the expense of the other.

    This is new territory for the technology industry. Previous ethical standoffs – Apple versus the FBI over iPhone encryption in 2016, Google’s Project Maven withdrawal in 2018 – were significant but did not produce the same kind of mass consumer response. The difference is that AI tools are personal. People interact with ChatGPT and Claude daily, often for sensitive tasks. When those tools become entangled with military use and surveillance, the reaction is visceral in a way that an encryption debate never was.

    The $60 billion question

    None of this means Anthropic’s position is comfortable. The company has received more than $60 billion in total investment from over 200 venture capital funds. If the supply-chain risk designation holds, enterprise clients in the defence ecosystem will be contractually barred from using Anthropic’s products. Anthropic has announced it will challenge the designation in court, but legal processes move slowly and government procurement decisions move fast.

    The Pro-Human Declaration, a framework released by a bipartisan coalition of researchers and former officials, was finalised before the standoff but published in its aftermath. Its signatories include Steve Bannon and Susan Rice – a pairing that underlines just how broadly the anxiety about unchecked AI development has spread. Among its provisions: mandatory pre-deployment testing, prohibitions on fully autonomous weapons, and an outright moratorium on superintelligence development until scientific consensus on safety is established.

    Whether such frameworks gain legislative traction remains an open question. As maddaisy has previously noted, more than 1,000 AI-related bills were introduced across all 50 US states in 2025 alone, while federal action has been conspicuously absent. The Pentagon-Anthropic episode may be the catalyst that changes that – or it may simply add another chapter to the regulatory vacuum.

    What this means for practitioners

    For organisations evaluating AI vendors, the lesson is not that principles are good and pragmatism is bad. It is that ethical positioning has become a material factor in AI vendor strategy – one that affects talent retention, consumer trust, regulatory exposure, and government access in ways that are increasingly difficult to hedge.

    The vendor assessment frameworks that most enterprises use were not designed for this. They evaluate uptime, security certifications, data residency, and pricing. They do not evaluate whether a vendor’s ethical commitments might make it a target of government action, or whether a vendor’s willingness to accept government contracts without restrictions might trigger a consumer backlash that affects product development velocity.

    Both of those scenarios played out in the space of a single week. Any vendor strategy that does not account for them is incomplete.

    The AI industry is discovering what the defence, pharmaceutical, and energy industries learned long ago: when your products touch questions of public safety and national security, your commercial strategy and your ethical positioning become the same thing. The companies that figure this out first – not just as a brand exercise, but as a genuine constraint on what they will and will not do – will be the ones that retain the trust of all their constituencies, not just the most powerful one.

    Anthropic’s stand cost it a $200 million contract. It may also have bought something more durable: a market position built on trust at a time when trust in AI companies is in short supply.

  • The Pentagon-Anthropic Standoff Exposes a New Category of AI Vendor Risk

    When maddaisy examined America’s fragmented AI regulation landscape last week, the focus was on states pulling in different directions while the federal government tried to impose order from above. That piece ended with the observation that organisations face a compliance labyrinth with no clear exit. Five days later, the labyrinth got a new wing — and this one has armed guards.

    On 27 February, President Trump ordered all federal agencies to immediately cease using Anthropic’s AI systems. Hours later, Defence Secretary Pete Hegseth moved to designate the company a “supply-chain risk to national security” — a label previously reserved for foreign adversaries. The same evening, OpenAI CEO Sam Altman announced that his company had struck a deal to deploy its models on the Pentagon’s classified networks.

    The sequence of events was not subtle. An American AI company refused to remove ethical guardrails from its military contract. The government threatened to destroy it. A rival stepped in to take the work. For anyone who manages AI vendor relationships — and that now includes most enterprise technology leaders — the implications are significant and immediate.

    What actually happened

    The conflict had been building for months. Anthropic holds a contract worth up to $200 million with the Pentagon, and through its partnership with Palantir, Claude was one of only two frontier AI models cleared for use on classified defence networks. The arrangement worked — until the Pentagon insisted on access to Claude for “all lawful purposes,” without the ethical restrictions Anthropic had built into its acceptable use policy.

    Anthropic’s red lines were specific: no mass domestic surveillance and no fully autonomous weapons without human oversight. CEO Dario Amodei argued that current AI systems “are simply not reliable enough to power fully autonomous weapons” and that “using these systems for mass domestic surveillance is incompatible with democratic values.”

    The Pentagon disagreed, and the situation escalated rapidly. Defence Secretary Hegseth gave Anthropic an ultimatum: agree to the government’s terms by 5:01 p.m. on 27 February or face designation as a supply-chain risk and potential invocation of the Cold War-era Defence Production Act. Anthropic refused. Trump’s ban and Hegseth’s designation followed within hours.

    What makes this more than a contract dispute is the weapon the government chose. As former Trump AI policy adviser Dean Ball wrote, the supply-chain risk designation would cut Anthropic off from hardware and hosting partners — “effectively destroying the company.” Ball, hardly an opponent of the administration, called it “attempted corporate murder.”

    OpenAI steps in — with familiar language and different terms

    OpenAI’s Pentagon deal arrived with careful framing. Altman claimed the agreement included the same safety principles Anthropic had sought: prohibitions on mass surveillance and human responsibility for the use of force. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote.

    The critical difference is what “agreement” means in practice. Anthropic sought contractual guarantees — binding restrictions written into the terms of service. OpenAI’s approach permits “all lawful uses” and relies on the Pentagon’s existing policies and legal frameworks rather than company-imposed limitations. As TechCrunch noted, it remains unclear how — or whether — the safety measures in OpenAI’s deal differ substantively from the terms Anthropic rejected.

    When maddaisy covered OpenAI’s Frontier Alliance with McKinsey, BCG, Accenture, and Capgemini last week, the story was about a vendor that needed consulting partners to scale its enterprise business. The Pentagon deal adds a wholly different dimension to OpenAI’s ambitions — and a wholly different category of risk.

    The institutional gap

    The deeper issue is structural. For decades, Pentagon technology contracts were dominated by slow-moving, heavily regulated defence contractors — Raytheon, Lockheed Martin, Northrop Grumman. These companies built institutional muscle for navigating political transitions, managing classified programmes, and absorbing the long-term volatility of government work. They were not exciting. They were durable.

    AI startups are neither slow-moving nor heavily regulated. They operate on venture capital timelines, consumer brand logic, and talent markets where a single ethical controversy can trigger an employee exodus. OpenAI has already seen 11 of its own employees sign an open letter protesting the government’s treatment of Anthropic — even as their employer benefits from it.

    This is the mismatch that matters for practitioners. The AI companies building the tools that enterprises depend on are now also becoming national security infrastructure — but they have none of the institutional frameworks that role demands. They lack the political risk management, the bipartisan relationship-building, and the organisational resilience to weather what comes next.

    The vendor risk no one modelled

    For organisations that have built their AI strategies around Anthropic or OpenAI, the past week introduced a category of risk that does not appear in most vendor assessment frameworks: political risk.

    Anthropic’s designation as a supply-chain risk, if upheld, would prevent any military contractor or supplier from doing business with the company. Given how deeply Anthropic’s Claude is integrated with Palantir’s systems — which are themselves critical Pentagon infrastructure — the practical implications cascade well beyond the original dispute. The CIO analysis of the situation compared it to the FBI-Apple standoff over iPhone encryption in 2016, but noted that the current administration “seems less willing to be patient.”

    For enterprises in the defence supply chain, the risk is direct: continued use of Anthropic’s technology may become a contractual liability. For everyone else, the risk is precedential. If the federal government can threaten to destroy an American company for negotiating contract terms, the calculus changes for every AI vendor evaluating government work — and for every enterprise evaluating those vendors.

    What consultants and technology leaders should watch

    Three things matter in the weeks ahead.

    First, the legal challenge. Anthropic has stated it will contest the supply-chain designation in court. Legal analysts suggest the designation is unlikely to survive judicial scrutiny — it was designed for foreign adversaries, not domestic companies in active contract negotiations. But legal proceedings take time, and the commercial damage from even a temporary designation could be severe.

    Second, the talent signal. AI companies compete fiercely for a small pool of researchers and engineers. The Pentagon standoff has made the ethical positioning of AI labs a hiring issue, not just a branding one. OpenAI’s internal tension — benefiting commercially from the deal while employees publicly protest the government’s tactics — is a dynamic that affects product roadmaps, not just press coverage.

    Third, the precedent for vendor governance. As the Council on Foreign Relations observed, the Anthropic standoff raises a fundamental question about AI sovereignty: can a private firm constrain the government’s use of a decisive military technology, and should it? Whichever way that question resolves, it will reshape the terms on which AI vendors provide their products — to governments and to enterprises alike.

    The irony is hard to miss. Just days after maddaisy reported on America’s fragmented state-level AI regulation, the federal government demonstrated that its own approach to AI governance is no less chaotic — merely higher-stakes. For organisations navigating vendor relationships in this environment, the lesson is uncomfortable: the AI companies they depend on are now players in a political contest they did not design and cannot control.

  • America’s Fragmented State-by-State AI Regulation Is Creating a Compliance Labyrinth

    When maddaisy examined the shift from AI principles to penalties earlier this month, the United States’ growing patchwork of state-level AI laws featured as one of three converging forces. That brief mention understated the scale of what is now unfolding. With more than 1,000 AI-related bills introduced across all 50 states in 2025 alone, the US is rapidly building the most fragmented AI regulatory landscape of any major economy — while the federal government that wants to stop it may lack the tools to do so.

    The federal-state standoff

    The tension is now explicit. In December 2025, the White House released a framework titled “Ensuring a National Policy Framework for Artificial Intelligence”, arguing that diverging state AI rules risk creating a harmful compliance patchwork that undermines American competitiveness. An accompanying fact sheet went further, suggesting the administration may consider litigation and funding mechanisms to counter state AI regimes it considers obstructive.

    This follows Executive Order 14179, issued in January 2025, which rescinded the Biden administration’s 2023 AI order and established a competitiveness-first federal posture. The July 2025 AI Action Plan outlined more than 90 measures to support AI infrastructure and innovation. The message from Washington is consistent: the federal government wants to set the rules, and it wants states to fall in line.

    The states, however, are not listening.

    Four approaches, one country

    What makes the US situation distinctive is not simply that states are regulating AI — it is that they are doing so in fundamentally different ways. Four models are emerging, each with different implications for organisations operating across state lines.

    Colorado: the comprehensive framework. Colorado’s AI Act (SB 24-205) remains the most ambitious state-level attempt at broad AI governance. It requires deployers of high-risk AI systems to conduct impact assessments, provide consumer transparency, and exercise reasonable care against algorithmic discrimination. Although enforcement was delayed to June 2026 through SB25B-004, the statutory framework is intact. Legal analysts note that the delay is strategic, not a retreat — Colorado is buying time to refine its approach while preserving regulatory leverage.

    California: regulation through existing powers. Rather than passing a single comprehensive AI act, California is weaving AI governance into its existing regulatory apparatus. The California Privacy Protection Agency finalised new regulations in September 2025 covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), with phased effective dates running through January 2027. The ADMT obligations are particularly significant: they convert what many organisations treat as aspirational governance practices — impact assessments, decision transparency, audit trails — into enforceable system design requirements.

    Washington: the multi-bill expansion. Washington has taken a fragmented-by-design approach, advancing separate bills for high-risk AI (HB 2157), transparency (HB 1168), training data regulation (HB 2503), and consumer protection in automated systems (SB 6284). The logic is pragmatic: AI risk does not fit into a single statutory box, so Washington is addressing it through multiple, targeted legislative vehicles. For compliance teams, this means tracking not one law but several, each with its own scope and requirements.

    Texas: government-first regulation. The Responsible AI Governance Act (HB 149), effective since January 2026, governs AI use within state government, prohibits certain high-risk applications, and establishes an advisory council. It is narrower than Colorado or California’s approaches but signals that even politically conservative states see a role for AI regulation — just a different one.

    The preemption problem

    The White House clearly wants federal preemption — a single national framework that would override state laws. But there are reasons to doubt this will materialise quickly, if at all.

    First, there is no comprehensive federal AI legislation on the table. The administration’s tools are executive orders and agency guidance, which carry less legal weight than statute and are vulnerable to court challenges. Second, the pattern from privacy law is instructive: the US still lacks a federal privacy law, and state laws like the California Consumer Privacy Act have effectively set national standards by default. AI regulation appears to be following the same trajectory.

    Third, states have institutional momentum. According to the National Conference of State Legislatures, all 50 states, Washington DC, and US territories introduced AI legislation in 2025. That level of legislative activity does not simply stop because the federal government asks it to. As one legal analysis from The Beckage Firm observes, states are not ignoring the federal signal — they are interpreting it differently, rolling out requirements more slowly or in smaller pieces, but maintaining control rather than ceding it.

    What this means for organisations

    For any company deploying AI across multiple US states — which is to say, most enterprises of any scale — the practical implications are significant and immediate.

    Multi-jurisdictional compliance is now unavoidable. Organisations cannot build a single AI governance programme around one state’s requirements and assume it covers the rest. Colorado’s impact assessment obligations differ from California’s ADMT rules, which differ from New York City’s bias audit requirements for automated employment tools. Each demands different documentation, different processes, and in some cases, different technical controls.

    Effective dates are stacking up. Colorado’s enforcement begins June 2026. California’s ADMT obligations take full effect January 2027. New York City’s AEDT rules are already being actively enforced. Texas’s government-focused requirements are live now. Organisations treating these as isolated events rather than overlapping compliance waves risk being caught unprepared.
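    A sketch of what treating them as a single overlapping calendar might look like; the dates mirror those cited above, and the structure is illustrative rather than a compliance tool.

    ```python
    from datetime import date

    # Entries reflect the effective dates cited in this article; field names are illustrative.
    ai_compliance_calendar = [
        (date(2026, 1, 1), "Texas",      "HB 149 requirements for state-government AI use in force"),
        (date(2026, 6, 1), "Colorado",   "AI Act enforcement begins: impact assessments, consumer transparency"),
        (date(2027, 1, 1), "California", "CPPA ADMT obligations take full effect"),
    ]

    # New York City's AEDT bias-audit rules are already enforced, so they sit outside this forward calendar.
    for effective, jurisdiction, obligation in sorted(ai_compliance_calendar):
        print(f"{effective.isoformat()}  {jurisdiction:<12} {obligation}")
    ```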

    The privacy playbook applies. Companies that built adaptable privacy programmes — mapping data flows, documenting processing purposes, conducting impact assessments — when GDPR and the CCPA emerged are better positioned than those that treated each regulation as a separate exercise. The same principle holds for AI governance. The organisations that will manage this landscape most effectively are those building flexible frameworks capable of mapping risk assessments, accountability structures, and documentation across multiple jurisdictions simultaneously.

    The consulting opportunity — and obligation

    For consultants and practitioners, the US regulatory fragmentation creates both demand and responsibility. Demand, because multi-state AI compliance is genuinely complex and most organisations lack in-house expertise to navigate it. Responsibility, because the temptation to oversimplify — to sell a single “AI compliance solution” that claims to cover all jurisdictions — is real and should be resisted. The details matter, and they vary state by state.

    As maddaisy has noted in covering shadow AI governance challenges and the agentic AI governance gap, the operational machinery for AI oversight is still immature in most enterprises. Adding a fragmented regulatory landscape on top of that immaturity does not just increase cost — it increases the likelihood that organisations will get something materially wrong.

    The US may eventually get a federal AI framework. But the states are not waiting, and neither should the organisations that operate in them. The compliance labyrinth is already being built, one statehouse at a time.